Oct 11 03:00:15 localhost kernel: Linux version 5.14.0-621.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025
Oct 11 03:00:15 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 11 03:00:15 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 11 03:00:15 localhost kernel: BIOS-provided physical RAM map:
Oct 11 03:00:15 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 11 03:00:15 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 11 03:00:15 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 11 03:00:15 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct 11 03:00:15 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct 11 03:00:15 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 11 03:00:15 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 11 03:00:15 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct 11 03:00:15 localhost kernel: NX (Execute Disable) protection: active
Oct 11 03:00:15 localhost kernel: APIC: Static calls initialized
Oct 11 03:00:15 localhost kernel: SMBIOS 2.8 present.
Oct 11 03:00:15 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 11 03:00:15 localhost kernel: Hypervisor detected: KVM
Oct 11 03:00:15 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 11 03:00:15 localhost kernel: kvm-clock: using sched offset of 13196383779 cycles
Oct 11 03:00:15 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 11 03:00:15 localhost kernel: tsc: Detected 2800.000 MHz processor
Oct 11 03:00:15 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 11 03:00:15 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 11 03:00:15 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct 11 03:00:15 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 11 03:00:15 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct 11 03:00:15 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct 11 03:00:15 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct 11 03:00:15 localhost kernel: Using GB pages for direct mapping
Oct 11 03:00:15 localhost kernel: RAMDISK: [mem 0x2d858000-0x32c23fff]
Oct 11 03:00:15 localhost kernel: ACPI: Early table checksum verification disabled
Oct 11 03:00:15 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 11 03:00:15 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 11 03:00:15 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 11 03:00:15 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 11 03:00:15 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct 11 03:00:15 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 11 03:00:15 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 11 03:00:15 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct 11 03:00:15 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct 11 03:00:15 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct 11 03:00:15 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct 11 03:00:15 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct 11 03:00:15 localhost kernel: No NUMA configuration found
Oct 11 03:00:15 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct 11 03:00:15 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Oct 11 03:00:15 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct 11 03:00:15 localhost kernel: Zone ranges:
Oct 11 03:00:15 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 11 03:00:15 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct 11 03:00:15 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct 11 03:00:15 localhost kernel:   Device   empty
Oct 11 03:00:15 localhost kernel: Movable zone start for each node
Oct 11 03:00:15 localhost kernel: Early memory node ranges
Oct 11 03:00:15 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 11 03:00:15 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct 11 03:00:15 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct 11 03:00:15 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct 11 03:00:15 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 11 03:00:15 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 11 03:00:15 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 11 03:00:15 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Oct 11 03:00:15 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 11 03:00:15 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 11 03:00:15 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 11 03:00:15 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 11 03:00:15 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 11 03:00:15 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 11 03:00:15 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 11 03:00:15 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 11 03:00:15 localhost kernel: TSC deadline timer available
Oct 11 03:00:15 localhost kernel: CPU topo: Max. logical packages:   8
Oct 11 03:00:15 localhost kernel: CPU topo: Max. logical dies:       8
Oct 11 03:00:15 localhost kernel: CPU topo: Max. dies per package:   1
Oct 11 03:00:15 localhost kernel: CPU topo: Max. threads per core:   1
Oct 11 03:00:15 localhost kernel: CPU topo: Num. cores per package:     1
Oct 11 03:00:15 localhost kernel: CPU topo: Num. threads per package:   1
Oct 11 03:00:15 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct 11 03:00:15 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 11 03:00:15 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 11 03:00:15 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 11 03:00:15 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 11 03:00:15 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 11 03:00:15 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct 11 03:00:15 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct 11 03:00:15 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 11 03:00:15 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 11 03:00:15 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 11 03:00:15 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct 11 03:00:15 localhost kernel: Booting paravirtualized kernel on KVM
Oct 11 03:00:15 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 11 03:00:15 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct 11 03:00:15 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct 11 03:00:15 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Oct 11 03:00:15 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Oct 11 03:00:15 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 11 03:00:15 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 11 03:00:15 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64", will be passed to user space.
Oct 11 03:00:15 localhost kernel: random: crng init done
Oct 11 03:00:15 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 11 03:00:15 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 11 03:00:15 localhost kernel: Fallback order for Node 0: 0 
Oct 11 03:00:15 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct 11 03:00:15 localhost kernel: Policy zone: Normal
Oct 11 03:00:15 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 11 03:00:15 localhost kernel: software IO TLB: area num 8.
Oct 11 03:00:15 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct 11 03:00:15 localhost kernel: ftrace: allocating 49162 entries in 193 pages
Oct 11 03:00:15 localhost kernel: ftrace: allocated 193 pages with 3 groups
Oct 11 03:00:15 localhost kernel: Dynamic Preempt: voluntary
Oct 11 03:00:15 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 11 03:00:15 localhost kernel: rcu:         RCU event tracing is enabled.
Oct 11 03:00:15 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct 11 03:00:15 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Oct 11 03:00:15 localhost kernel:         Rude variant of Tasks RCU enabled.
Oct 11 03:00:15 localhost kernel:         Tracing variant of Tasks RCU enabled.
Oct 11 03:00:15 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 11 03:00:15 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct 11 03:00:15 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 11 03:00:15 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 11 03:00:15 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 11 03:00:15 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct 11 03:00:15 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 11 03:00:15 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 11 03:00:15 localhost kernel: Console: colour VGA+ 80x25
Oct 11 03:00:15 localhost kernel: printk: console [ttyS0] enabled
Oct 11 03:00:15 localhost kernel: ACPI: Core revision 20230331
Oct 11 03:00:15 localhost kernel: APIC: Switch to symmetric I/O mode setup
Oct 11 03:00:15 localhost kernel: x2apic enabled
Oct 11 03:00:15 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Oct 11 03:00:15 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 11 03:00:15 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Oct 11 03:00:15 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 11 03:00:15 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 11 03:00:15 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 11 03:00:15 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 11 03:00:15 localhost kernel: Spectre V2 : Mitigation: Retpolines
Oct 11 03:00:15 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 11 03:00:15 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 11 03:00:15 localhost kernel: RETBleed: Mitigation: untrained return thunk
Oct 11 03:00:15 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 11 03:00:15 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 11 03:00:15 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 11 03:00:15 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 11 03:00:15 localhost kernel: x86/bugs: return thunk changed
Oct 11 03:00:15 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 11 03:00:15 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 11 03:00:15 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 11 03:00:15 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 11 03:00:15 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 11 03:00:15 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 11 03:00:15 localhost kernel: Freeing SMP alternatives memory: 40K
Oct 11 03:00:15 localhost kernel: pid_max: default: 32768 minimum: 301
Oct 11 03:00:15 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct 11 03:00:15 localhost kernel: landlock: Up and running.
Oct 11 03:00:15 localhost kernel: Yama: becoming mindful.
Oct 11 03:00:15 localhost kernel: SELinux:  Initializing.
Oct 11 03:00:15 localhost kernel: LSM support for eBPF active
Oct 11 03:00:15 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 11 03:00:15 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 11 03:00:15 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 11 03:00:15 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 11 03:00:15 localhost kernel: ... version:                0
Oct 11 03:00:15 localhost kernel: ... bit width:              48
Oct 11 03:00:15 localhost kernel: ... generic registers:      6
Oct 11 03:00:15 localhost kernel: ... value mask:             0000ffffffffffff
Oct 11 03:00:15 localhost kernel: ... max period:             00007fffffffffff
Oct 11 03:00:15 localhost kernel: ... fixed-purpose events:   0
Oct 11 03:00:15 localhost kernel: ... event mask:             000000000000003f
Oct 11 03:00:15 localhost kernel: signal: max sigframe size: 1776
Oct 11 03:00:15 localhost kernel: rcu: Hierarchical SRCU implementation.
Oct 11 03:00:15 localhost kernel: rcu:         Max phase no-delay instances is 400.
Oct 11 03:00:15 localhost kernel: smp: Bringing up secondary CPUs ...
Oct 11 03:00:15 localhost kernel: smpboot: x86: Booting SMP configuration:
Oct 11 03:00:15 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct 11 03:00:15 localhost kernel: smp: Brought up 1 node, 8 CPUs
Oct 11 03:00:15 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Oct 11 03:00:15 localhost kernel: node 0 deferred pages initialised in 12ms
Oct 11 03:00:15 localhost kernel: Memory: 7765928K/8388068K available (16384K kernel code, 5784K rwdata, 13864K rodata, 4188K init, 7196K bss, 616216K reserved, 0K cma-reserved)
Oct 11 03:00:15 localhost kernel: devtmpfs: initialized
Oct 11 03:00:15 localhost kernel: x86/mm: Memory block size: 128MB
Oct 11 03:00:15 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 11 03:00:15 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct 11 03:00:15 localhost kernel: pinctrl core: initialized pinctrl subsystem
Oct 11 03:00:15 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 11 03:00:15 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct 11 03:00:15 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 11 03:00:15 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 11 03:00:15 localhost kernel: audit: initializing netlink subsys (disabled)
Oct 11 03:00:15 localhost kernel: audit: type=2000 audit(1760151614.585:1): state=initialized audit_enabled=0 res=1
Oct 11 03:00:15 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 11 03:00:15 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 11 03:00:15 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 11 03:00:15 localhost kernel: cpuidle: using governor menu
Oct 11 03:00:15 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 11 03:00:15 localhost kernel: PCI: Using configuration type 1 for base access
Oct 11 03:00:15 localhost kernel: PCI: Using configuration type 1 for extended access
Oct 11 03:00:15 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 11 03:00:15 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 11 03:00:15 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 11 03:00:15 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 11 03:00:15 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 11 03:00:15 localhost kernel: Demotion targets for Node 0: null
Oct 11 03:00:15 localhost kernel: cryptd: max_cpu_qlen set to 1000
Oct 11 03:00:15 localhost kernel: ACPI: Added _OSI(Module Device)
Oct 11 03:00:15 localhost kernel: ACPI: Added _OSI(Processor Device)
Oct 11 03:00:15 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 11 03:00:15 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 11 03:00:15 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 11 03:00:15 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 11 03:00:15 localhost kernel: ACPI: Interpreter enabled
Oct 11 03:00:15 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct 11 03:00:15 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Oct 11 03:00:15 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 11 03:00:15 localhost kernel: PCI: Using E820 reservations for host bridge windows
Oct 11 03:00:15 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 11 03:00:15 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 11 03:00:15 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 11 03:00:15 localhost kernel: acpiphp: Slot [3] registered
Oct 11 03:00:15 localhost kernel: acpiphp: Slot [4] registered
Oct 11 03:00:15 localhost kernel: acpiphp: Slot [5] registered
Oct 11 03:00:15 localhost kernel: acpiphp: Slot [6] registered
Oct 11 03:00:15 localhost kernel: acpiphp: Slot [7] registered
Oct 11 03:00:15 localhost kernel: acpiphp: Slot [8] registered
Oct 11 03:00:15 localhost kernel: acpiphp: Slot [9] registered
Oct 11 03:00:15 localhost kernel: acpiphp: Slot [10] registered
Oct 11 03:00:15 localhost kernel: acpiphp: Slot [11] registered
Oct 11 03:00:15 localhost kernel: acpiphp: Slot [12] registered
Oct 11 03:00:15 localhost kernel: acpiphp: Slot [13] registered
Oct 11 03:00:15 localhost kernel: acpiphp: Slot [14] registered
Oct 11 03:00:15 localhost kernel: acpiphp: Slot [15] registered
Oct 11 03:00:15 localhost kernel: acpiphp: Slot [16] registered
Oct 11 03:00:15 localhost kernel: acpiphp: Slot [17] registered
Oct 11 03:00:15 localhost kernel: acpiphp: Slot [18] registered
Oct 11 03:00:15 localhost kernel: acpiphp: Slot [19] registered
Oct 11 03:00:15 localhost kernel: acpiphp: Slot [20] registered
Oct 11 03:00:15 localhost kernel: acpiphp: Slot [21] registered
Oct 11 03:00:15 localhost kernel: acpiphp: Slot [22] registered
Oct 11 03:00:15 localhost kernel: acpiphp: Slot [23] registered
Oct 11 03:00:15 localhost kernel: acpiphp: Slot [24] registered
Oct 11 03:00:15 localhost kernel: acpiphp: Slot [25] registered
Oct 11 03:00:15 localhost kernel: acpiphp: Slot [26] registered
Oct 11 03:00:15 localhost kernel: acpiphp: Slot [27] registered
Oct 11 03:00:15 localhost kernel: acpiphp: Slot [28] registered
Oct 11 03:00:15 localhost kernel: acpiphp: Slot [29] registered
Oct 11 03:00:15 localhost kernel: acpiphp: Slot [30] registered
Oct 11 03:00:15 localhost kernel: acpiphp: Slot [31] registered
Oct 11 03:00:15 localhost kernel: PCI host bridge to bus 0000:00
Oct 11 03:00:15 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct 11 03:00:15 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct 11 03:00:15 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 11 03:00:15 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 11 03:00:15 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct 11 03:00:15 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 11 03:00:15 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct 11 03:00:15 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct 11 03:00:15 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct 11 03:00:15 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct 11 03:00:15 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct 11 03:00:15 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct 11 03:00:15 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct 11 03:00:15 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct 11 03:00:15 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 11 03:00:15 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct 11 03:00:15 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 11 03:00:15 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct 11 03:00:15 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct 11 03:00:15 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 11 03:00:15 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct 11 03:00:15 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 11 03:00:15 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct 11 03:00:15 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct 11 03:00:15 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 11 03:00:15 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 11 03:00:15 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct 11 03:00:15 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct 11 03:00:15 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 11 03:00:15 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct 11 03:00:15 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 11 03:00:15 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct 11 03:00:15 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct 11 03:00:15 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 11 03:00:15 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct 11 03:00:15 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct 11 03:00:15 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 11 03:00:15 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 11 03:00:15 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct 11 03:00:15 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 11 03:00:15 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 11 03:00:15 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 11 03:00:15 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 11 03:00:15 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 11 03:00:15 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 11 03:00:15 localhost kernel: iommu: Default domain type: Translated
Oct 11 03:00:15 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 11 03:00:15 localhost kernel: SCSI subsystem initialized
Oct 11 03:00:15 localhost kernel: ACPI: bus type USB registered
Oct 11 03:00:15 localhost kernel: usbcore: registered new interface driver usbfs
Oct 11 03:00:15 localhost kernel: usbcore: registered new interface driver hub
Oct 11 03:00:15 localhost kernel: usbcore: registered new device driver usb
Oct 11 03:00:15 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 11 03:00:15 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct 11 03:00:15 localhost kernel: PTP clock support registered
Oct 11 03:00:15 localhost kernel: EDAC MC: Ver: 3.0.0
Oct 11 03:00:15 localhost kernel: NetLabel: Initializing
Oct 11 03:00:15 localhost kernel: NetLabel:  domain hash size = 128
Oct 11 03:00:15 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct 11 03:00:15 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Oct 11 03:00:15 localhost kernel: PCI: Using ACPI for IRQ routing
Oct 11 03:00:15 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 11 03:00:15 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 11 03:00:15 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Oct 11 03:00:15 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 11 03:00:15 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 11 03:00:15 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 11 03:00:15 localhost kernel: vgaarb: loaded
Oct 11 03:00:15 localhost kernel: clocksource: Switched to clocksource kvm-clock
Oct 11 03:00:15 localhost kernel: VFS: Disk quotas dquot_6.6.0
Oct 11 03:00:15 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 11 03:00:15 localhost kernel: pnp: PnP ACPI init
Oct 11 03:00:15 localhost kernel: pnp 00:03: [dma 2]
Oct 11 03:00:15 localhost kernel: pnp: PnP ACPI: found 5 devices
Oct 11 03:00:15 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 11 03:00:15 localhost kernel: NET: Registered PF_INET protocol family
Oct 11 03:00:15 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 11 03:00:15 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 11 03:00:15 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 11 03:00:15 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 11 03:00:15 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 11 03:00:15 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 11 03:00:15 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct 11 03:00:15 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 11 03:00:15 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 11 03:00:15 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 11 03:00:15 localhost kernel: NET: Registered PF_XDP protocol family
Oct 11 03:00:15 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct 11 03:00:15 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct 11 03:00:15 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 11 03:00:15 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct 11 03:00:15 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct 11 03:00:15 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 11 03:00:15 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 11 03:00:15 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 11 03:00:15 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 89100 usecs
Oct 11 03:00:15 localhost kernel: PCI: CLS 0 bytes, default 64
Oct 11 03:00:15 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 11 03:00:15 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct 11 03:00:15 localhost kernel: ACPI: bus type thunderbolt registered
Oct 11 03:00:15 localhost kernel: Trying to unpack rootfs image as initramfs...
Oct 11 03:00:15 localhost kernel: Initialise system trusted keyrings
Oct 11 03:00:15 localhost kernel: Key type blacklist registered
Oct 11 03:00:15 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct 11 03:00:15 localhost kernel: zbud: loaded
Oct 11 03:00:15 localhost kernel: integrity: Platform Keyring initialized
Oct 11 03:00:15 localhost kernel: integrity: Machine keyring initialized
Oct 11 03:00:15 localhost kernel: Freeing initrd memory: 85808K
Oct 11 03:00:15 localhost kernel: NET: Registered PF_ALG protocol family
Oct 11 03:00:15 localhost kernel: xor: automatically using best checksumming function   avx       
Oct 11 03:00:15 localhost kernel: Key type asymmetric registered
Oct 11 03:00:15 localhost kernel: Asymmetric key parser 'x509' registered
Oct 11 03:00:15 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 11 03:00:15 localhost kernel: io scheduler mq-deadline registered
Oct 11 03:00:15 localhost kernel: io scheduler kyber registered
Oct 11 03:00:15 localhost kernel: io scheduler bfq registered
Oct 11 03:00:15 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 11 03:00:15 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 11 03:00:15 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 11 03:00:15 localhost kernel: ACPI: button: Power Button [PWRF]
Oct 11 03:00:15 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 11 03:00:15 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 11 03:00:15 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 11 03:00:15 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 11 03:00:15 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 11 03:00:15 localhost kernel: Non-volatile memory driver v1.3
Oct 11 03:00:15 localhost kernel: rdac: device handler registered
Oct 11 03:00:15 localhost kernel: hp_sw: device handler registered
Oct 11 03:00:15 localhost kernel: emc: device handler registered
Oct 11 03:00:15 localhost kernel: alua: device handler registered
Oct 11 03:00:15 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 11 03:00:15 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 11 03:00:15 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 11 03:00:15 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct 11 03:00:15 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 11 03:00:15 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 11 03:00:15 localhost kernel: usb usb1: Product: UHCI Host Controller
Oct 11 03:00:15 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-621.el9.x86_64 uhci_hcd
Oct 11 03:00:15 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct 11 03:00:15 localhost kernel: hub 1-0:1.0: USB hub found
Oct 11 03:00:15 localhost kernel: hub 1-0:1.0: 2 ports detected
Oct 11 03:00:15 localhost kernel: usbcore: registered new interface driver usbserial_generic
Oct 11 03:00:15 localhost kernel: usbserial: USB Serial support registered for generic
Oct 11 03:00:15 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 11 03:00:15 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 11 03:00:15 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 11 03:00:15 localhost kernel: mousedev: PS/2 mouse device common for all mice
Oct 11 03:00:15 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 11 03:00:15 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 11 03:00:15 localhost kernel: rtc_cmos 00:04: registered as rtc0
Oct 11 03:00:15 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-10-11T03:00:14 UTC (1760151614)
Oct 11 03:00:15 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 11 03:00:15 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 11 03:00:15 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 11 03:00:15 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 11 03:00:15 localhost kernel: usbcore: registered new interface driver usbhid
Oct 11 03:00:15 localhost kernel: usbhid: USB HID core driver
Oct 11 03:00:15 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 11 03:00:15 localhost kernel: drop_monitor: Initializing network drop monitor service
Oct 11 03:00:15 localhost kernel: Initializing XFRM netlink socket
Oct 11 03:00:15 localhost kernel: NET: Registered PF_INET6 protocol family
Oct 11 03:00:15 localhost kernel: Segment Routing with IPv6
Oct 11 03:00:15 localhost kernel: NET: Registered PF_PACKET protocol family
Oct 11 03:00:15 localhost kernel: mpls_gso: MPLS GSO support
Oct 11 03:00:15 localhost kernel: IPI shorthand broadcast: enabled
Oct 11 03:00:15 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Oct 11 03:00:15 localhost kernel: AES CTR mode by8 optimization enabled
Oct 11 03:00:15 localhost kernel: sched_clock: Marking stable (1204022179, 146023920)->(1476010469, -125964370)
Oct 11 03:00:15 localhost kernel: registered taskstats version 1
Oct 11 03:00:15 localhost kernel: Loading compiled-in X.509 certificates
Oct 11 03:00:15 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 72f99a463516b0dfb027e50caab189f607ef1bc9'
Oct 11 03:00:15 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 11 03:00:15 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 11 03:00:15 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct 11 03:00:15 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct 11 03:00:15 localhost kernel: Demotion targets for Node 0: null
Oct 11 03:00:15 localhost kernel: page_owner is disabled
Oct 11 03:00:15 localhost kernel: Key type .fscrypt registered
Oct 11 03:00:15 localhost kernel: Key type fscrypt-provisioning registered
Oct 11 03:00:15 localhost kernel: Key type big_key registered
Oct 11 03:00:15 localhost kernel: Key type encrypted registered
Oct 11 03:00:15 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 11 03:00:15 localhost kernel: Loading compiled-in module X.509 certificates
Oct 11 03:00:15 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 72f99a463516b0dfb027e50caab189f607ef1bc9'
Oct 11 03:00:15 localhost kernel: ima: Allocated hash algorithm: sha256
Oct 11 03:00:15 localhost kernel: ima: No architecture policies found
Oct 11 03:00:15 localhost kernel: evm: Initialising EVM extended attributes:
Oct 11 03:00:15 localhost kernel: evm: security.selinux
Oct 11 03:00:15 localhost kernel: evm: security.SMACK64 (disabled)
Oct 11 03:00:15 localhost kernel: evm: security.SMACK64EXEC (disabled)
Oct 11 03:00:15 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 11 03:00:15 localhost kernel: evm: security.SMACK64MMAP (disabled)
Oct 11 03:00:15 localhost kernel: evm: security.apparmor (disabled)
Oct 11 03:00:15 localhost kernel: evm: security.ima
Oct 11 03:00:15 localhost kernel: evm: security.capability
Oct 11 03:00:15 localhost kernel: evm: HMAC attrs: 0x1
Oct 11 03:00:15 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 11 03:00:15 localhost kernel: Running certificate verification RSA selftest
Oct 11 03:00:15 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 11 03:00:15 localhost kernel: Running certificate verification ECDSA selftest
Oct 11 03:00:15 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct 11 03:00:15 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 11 03:00:15 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 11 03:00:15 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Oct 11 03:00:15 localhost kernel: usb 1-1: Manufacturer: QEMU
Oct 11 03:00:15 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct 11 03:00:15 localhost kernel: clk: Disabling unused clocks
Oct 11 03:00:15 localhost kernel: Freeing unused decrypted memory: 2028K
Oct 11 03:00:15 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 11 03:00:15 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct 11 03:00:15 localhost kernel: Freeing unused kernel image (initmem) memory: 4188K
Oct 11 03:00:15 localhost kernel: Write protecting the kernel read-only data: 30720k
Oct 11 03:00:15 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 472K
Oct 11 03:00:15 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 11 03:00:15 localhost kernel: Run /init as init process
Oct 11 03:00:15 localhost kernel:   with arguments:
Oct 11 03:00:15 localhost kernel:     /init
Oct 11 03:00:15 localhost kernel:   with environment:
Oct 11 03:00:15 localhost kernel:     HOME=/
Oct 11 03:00:15 localhost kernel:     TERM=linux
Oct 11 03:00:15 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64
Oct 11 03:00:15 localhost systemd[1]: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 11 03:00:15 localhost systemd[1]: Detected virtualization kvm.
Oct 11 03:00:15 localhost systemd[1]: Detected architecture x86-64.
Oct 11 03:00:15 localhost systemd[1]: Running in initrd.
Oct 11 03:00:15 localhost systemd[1]: No hostname configured, using default hostname.
Oct 11 03:00:15 localhost systemd[1]: Hostname set to <localhost>.
Oct 11 03:00:15 localhost systemd[1]: Initializing machine ID from VM UUID.
Oct 11 03:00:15 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Oct 11 03:00:15 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 11 03:00:15 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 11 03:00:15 localhost systemd[1]: Reached target Initrd /usr File System.
Oct 11 03:00:15 localhost systemd[1]: Reached target Local File Systems.
Oct 11 03:00:15 localhost systemd[1]: Reached target Path Units.
Oct 11 03:00:15 localhost systemd[1]: Reached target Slice Units.
Oct 11 03:00:15 localhost systemd[1]: Reached target Swaps.
Oct 11 03:00:15 localhost systemd[1]: Reached target Timer Units.
Oct 11 03:00:15 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 11 03:00:15 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Oct 11 03:00:15 localhost systemd[1]: Listening on Journal Socket.
Oct 11 03:00:15 localhost systemd[1]: Listening on udev Control Socket.
Oct 11 03:00:15 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 11 03:00:15 localhost systemd[1]: Reached target Socket Units.
Oct 11 03:00:15 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 11 03:00:15 localhost systemd[1]: Starting Journal Service...
Oct 11 03:00:15 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 11 03:00:15 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 11 03:00:15 localhost systemd[1]: Starting Create System Users...
Oct 11 03:00:15 localhost systemd[1]: Starting Setup Virtual Console...
Oct 11 03:00:15 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 11 03:00:15 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 11 03:00:15 localhost systemd[1]: Finished Create System Users.
Oct 11 03:00:15 localhost systemd-journald[304]: Journal started
Oct 11 03:00:15 localhost systemd-journald[304]: Runtime Journal (/run/log/journal/e4b2deedff064afba523b61a9dddb9cc) is 8.0M, max 153.6M, 145.6M free.
Oct 11 03:00:15 localhost systemd-sysusers[309]: Creating group 'users' with GID 100.
Oct 11 03:00:15 localhost systemd-sysusers[309]: Creating group 'dbus' with GID 81.
Oct 11 03:00:15 localhost systemd-sysusers[309]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 11 03:00:15 localhost systemd[1]: Started Journal Service.
Oct 11 03:00:15 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 11 03:00:15 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 11 03:00:15 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 11 03:00:15 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 11 03:00:15 localhost systemd[1]: Finished Setup Virtual Console.
Oct 11 03:00:15 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 11 03:00:15 localhost systemd[1]: Starting dracut cmdline hook...
Oct 11 03:00:15 localhost dracut-cmdline[324]: dracut-9 dracut-057-102.git20250818.el9
Oct 11 03:00:15 localhost dracut-cmdline[324]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 11 03:00:15 localhost systemd[1]: Finished dracut cmdline hook.
Oct 11 03:00:15 localhost systemd[1]: Starting dracut pre-udev hook...
Oct 11 03:00:15 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 11 03:00:15 localhost kernel: device-mapper: uevent: version 1.0.3
Oct 11 03:00:15 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct 11 03:00:15 localhost kernel: RPC: Registered named UNIX socket transport module.
Oct 11 03:00:15 localhost kernel: RPC: Registered udp transport module.
Oct 11 03:00:15 localhost kernel: RPC: Registered tcp transport module.
Oct 11 03:00:15 localhost kernel: RPC: Registered tcp-with-tls transport module.
Oct 11 03:00:15 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 11 03:00:15 localhost rpc.statd[442]: Version 2.5.4 starting
Oct 11 03:00:15 localhost rpc.statd[442]: Initializing NSM state
Oct 11 03:00:15 localhost rpc.idmapd[447]: Setting log level to 0
Oct 11 03:00:15 localhost systemd[1]: Finished dracut pre-udev hook.
Oct 11 03:00:15 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 11 03:00:15 localhost systemd-udevd[460]: Using default interface naming scheme 'rhel-9.0'.
Oct 11 03:00:15 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 11 03:00:15 localhost systemd[1]: Starting dracut pre-trigger hook...
Oct 11 03:00:15 localhost systemd[1]: Finished dracut pre-trigger hook.
Oct 11 03:00:16 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 11 03:00:16 localhost systemd[1]: Created slice Slice /system/modprobe.
Oct 11 03:00:16 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 11 03:00:16 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 11 03:00:16 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 11 03:00:16 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 11 03:00:16 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 11 03:00:16 localhost systemd[1]: Reached target Network.
Oct 11 03:00:16 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 11 03:00:16 localhost systemd[1]: Starting dracut initqueue hook...
Oct 11 03:00:16 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct 11 03:00:16 localhost kernel: libata version 3.00 loaded.
Oct 11 03:00:16 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Oct 11 03:00:16 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct 11 03:00:16 localhost kernel: scsi host0: ata_piix
Oct 11 03:00:16 localhost kernel: scsi host1: ata_piix
Oct 11 03:00:16 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct 11 03:00:16 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct 11 03:00:16 localhost kernel:  vda: vda1
Oct 11 03:00:16 localhost systemd[1]: Mounting Kernel Configuration File System...
Oct 11 03:00:16 localhost systemd[1]: Mounted Kernel Configuration File System.
Oct 11 03:00:16 localhost systemd[1]: Reached target System Initialization.
Oct 11 03:00:16 localhost systemd[1]: Reached target Basic System.
Oct 11 03:00:16 localhost kernel: ata1: found unknown device (class 0)
Oct 11 03:00:16 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 11 03:00:16 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct 11 03:00:16 localhost systemd-udevd[491]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 03:00:16 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 11 03:00:16 localhost systemd[1]: Found device /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3.
Oct 11 03:00:16 localhost systemd[1]: Reached target Initrd Root Device.
Oct 11 03:00:16 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 11 03:00:16 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 11 03:00:16 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Oct 11 03:00:16 localhost systemd[1]: Finished dracut initqueue hook.
Oct 11 03:00:16 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Oct 11 03:00:16 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Oct 11 03:00:16 localhost systemd[1]: Reached target Remote File Systems.
Oct 11 03:00:16 localhost systemd[1]: Starting dracut pre-mount hook...
Oct 11 03:00:16 localhost systemd[1]: Finished dracut pre-mount hook.
Oct 11 03:00:16 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3...
Oct 11 03:00:16 localhost systemd-fsck[555]: /usr/sbin/fsck.xfs: XFS file system.
Oct 11 03:00:16 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3.
Oct 11 03:00:16 localhost systemd[1]: Mounting /sysroot...
Oct 11 03:00:17 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 11 03:00:17 localhost kernel: XFS (vda1): Mounting V5 Filesystem 9839e2e1-98a2-4594-b609-79d514deb0a3
Oct 11 03:00:17 localhost kernel: XFS (vda1): Ending clean mount
Oct 11 03:00:17 localhost systemd[1]: Mounted /sysroot.
Oct 11 03:00:17 localhost systemd[1]: Reached target Initrd Root File System.
Oct 11 03:00:17 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 11 03:00:17 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 11 03:00:17 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 11 03:00:17 localhost systemd[1]: Reached target Initrd File Systems.
Oct 11 03:00:17 localhost systemd[1]: Reached target Initrd Default Target.
Oct 11 03:00:17 localhost systemd[1]: Starting dracut mount hook...
Oct 11 03:00:17 localhost systemd[1]: Finished dracut mount hook.
Oct 11 03:00:17 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 11 03:00:17 localhost rpc.idmapd[447]: exiting on signal 15
Oct 11 03:00:17 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 11 03:00:17 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 11 03:00:17 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 11 03:00:17 localhost systemd[1]: Stopped target Network.
Oct 11 03:00:17 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 11 03:00:17 localhost systemd[1]: Stopped target Timer Units.
Oct 11 03:00:17 localhost systemd[1]: dbus.socket: Deactivated successfully.
Oct 11 03:00:17 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 11 03:00:17 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 11 03:00:17 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 11 03:00:17 localhost systemd[1]: Stopped target Initrd Default Target.
Oct 11 03:00:17 localhost systemd[1]: Stopped target Basic System.
Oct 11 03:00:17 localhost systemd[1]: Stopped target Initrd Root Device.
Oct 11 03:00:17 localhost systemd[1]: Stopped target Initrd /usr File System.
Oct 11 03:00:17 localhost systemd[1]: Stopped target Path Units.
Oct 11 03:00:17 localhost systemd[1]: Stopped target Remote File Systems.
Oct 11 03:00:17 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 11 03:00:17 localhost systemd[1]: Stopped target Slice Units.
Oct 11 03:00:17 localhost systemd[1]: Stopped target Socket Units.
Oct 11 03:00:17 localhost systemd[1]: Stopped target System Initialization.
Oct 11 03:00:17 localhost systemd[1]: Stopped target Local File Systems.
Oct 11 03:00:17 localhost systemd[1]: Stopped target Swaps.
Oct 11 03:00:17 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 11 03:00:17 localhost systemd[1]: Stopped dracut mount hook.
Oct 11 03:00:17 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 11 03:00:17 localhost systemd[1]: Stopped dracut pre-mount hook.
Oct 11 03:00:17 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Oct 11 03:00:17 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 11 03:00:17 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 11 03:00:17 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 11 03:00:17 localhost systemd[1]: Stopped dracut initqueue hook.
Oct 11 03:00:17 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 11 03:00:17 localhost systemd[1]: Stopped Apply Kernel Variables.
Oct 11 03:00:17 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 11 03:00:17 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Oct 11 03:00:17 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 11 03:00:17 localhost systemd[1]: Stopped Coldplug All udev Devices.
Oct 11 03:00:17 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 11 03:00:17 localhost systemd[1]: Stopped dracut pre-trigger hook.
Oct 11 03:00:17 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 11 03:00:17 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 11 03:00:17 localhost systemd[1]: Stopped Setup Virtual Console.
Oct 11 03:00:17 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 11 03:00:17 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 11 03:00:17 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 11 03:00:17 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 11 03:00:17 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 11 03:00:17 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 11 03:00:17 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 11 03:00:17 localhost systemd[1]: Closed udev Control Socket.
Oct 11 03:00:17 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 11 03:00:17 localhost systemd[1]: Closed udev Kernel Socket.
Oct 11 03:00:17 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 11 03:00:17 localhost systemd[1]: Stopped dracut pre-udev hook.
Oct 11 03:00:17 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 11 03:00:17 localhost systemd[1]: Stopped dracut cmdline hook.
Oct 11 03:00:17 localhost systemd[1]: Starting Cleanup udev Database...
Oct 11 03:00:17 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 11 03:00:17 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 11 03:00:17 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 11 03:00:17 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Oct 11 03:00:17 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 11 03:00:17 localhost systemd[1]: Stopped Create System Users.
Oct 11 03:00:17 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 11 03:00:17 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct 11 03:00:17 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 11 03:00:17 localhost systemd[1]: Finished Cleanup udev Database.
Oct 11 03:00:17 localhost systemd[1]: Reached target Switch Root.
Oct 11 03:00:17 localhost systemd[1]: Starting Switch Root...
Oct 11 03:00:17 localhost systemd[1]: Switching root.
Oct 11 03:00:17 localhost systemd-journald[304]: Journal stopped
Oct 11 03:00:18 localhost systemd-journald[304]: Received SIGTERM from PID 1 (systemd).
Oct 11 03:00:18 localhost kernel: audit: type=1404 audit(1760151617.797:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct 11 03:00:18 localhost kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 03:00:18 localhost kernel: SELinux:  policy capability open_perms=1
Oct 11 03:00:18 localhost kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 03:00:18 localhost kernel: SELinux:  policy capability always_check_network=0
Oct 11 03:00:18 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 03:00:18 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 03:00:18 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 03:00:18 localhost kernel: audit: type=1403 audit(1760151617.932:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 11 03:00:18 localhost systemd[1]: Successfully loaded SELinux policy in 138.579ms.
Oct 11 03:00:18 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 29.431ms.
Oct 11 03:00:18 localhost systemd[1]: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 11 03:00:18 localhost systemd[1]: Detected virtualization kvm.
Oct 11 03:00:18 localhost systemd[1]: Detected architecture x86-64.
Oct 11 03:00:18 localhost systemd-rc-local-generator[635]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:00:18 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 11 03:00:18 localhost systemd[1]: Stopped Switch Root.
Oct 11 03:00:18 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 11 03:00:18 localhost systemd[1]: Created slice Slice /system/getty.
Oct 11 03:00:18 localhost systemd[1]: Created slice Slice /system/serial-getty.
Oct 11 03:00:18 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Oct 11 03:00:18 localhost systemd[1]: Created slice User and Session Slice.
Oct 11 03:00:18 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 11 03:00:18 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Oct 11 03:00:18 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct 11 03:00:18 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 11 03:00:18 localhost systemd[1]: Stopped target Switch Root.
Oct 11 03:00:18 localhost systemd[1]: Stopped target Initrd File Systems.
Oct 11 03:00:18 localhost systemd[1]: Stopped target Initrd Root File System.
Oct 11 03:00:18 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Oct 11 03:00:18 localhost systemd[1]: Reached target Path Units.
Oct 11 03:00:18 localhost systemd[1]: Reached target rpc_pipefs.target.
Oct 11 03:00:18 localhost systemd[1]: Reached target Slice Units.
Oct 11 03:00:18 localhost systemd[1]: Reached target Swaps.
Oct 11 03:00:18 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Oct 11 03:00:18 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Oct 11 03:00:18 localhost systemd[1]: Reached target RPC Port Mapper.
Oct 11 03:00:18 localhost systemd[1]: Listening on Process Core Dump Socket.
Oct 11 03:00:18 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Oct 11 03:00:18 localhost systemd[1]: Listening on udev Control Socket.
Oct 11 03:00:18 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 11 03:00:18 localhost systemd[1]: Mounting Huge Pages File System...
Oct 11 03:00:18 localhost systemd[1]: Mounting POSIX Message Queue File System...
Oct 11 03:00:18 localhost systemd[1]: Mounting Kernel Debug File System...
Oct 11 03:00:18 localhost systemd[1]: Mounting Kernel Trace File System...
Oct 11 03:00:18 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 11 03:00:18 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 11 03:00:18 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 11 03:00:18 localhost systemd[1]: Starting Load Kernel Module drm...
Oct 11 03:00:18 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Oct 11 03:00:18 localhost systemd[1]: Starting Load Kernel Module fuse...
Oct 11 03:00:18 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct 11 03:00:18 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 11 03:00:18 localhost systemd[1]: Stopped File System Check on Root Device.
Oct 11 03:00:18 localhost systemd[1]: Stopped Journal Service.
Oct 11 03:00:18 localhost systemd[1]: Starting Journal Service...
Oct 11 03:00:18 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 11 03:00:18 localhost systemd[1]: Starting Generate network units from Kernel command line...
Oct 11 03:00:18 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 11 03:00:18 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Oct 11 03:00:18 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 11 03:00:18 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 11 03:00:18 localhost kernel: fuse: init (API version 7.37)
Oct 11 03:00:18 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 11 03:00:18 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct 11 03:00:18 localhost systemd[1]: Mounted Huge Pages File System.
Oct 11 03:00:18 localhost systemd[1]: Mounted POSIX Message Queue File System.
Oct 11 03:00:18 localhost systemd[1]: Mounted Kernel Debug File System.
Oct 11 03:00:18 localhost systemd[1]: Mounted Kernel Trace File System.
Oct 11 03:00:18 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 11 03:00:18 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 11 03:00:18 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 11 03:00:18 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 11 03:00:18 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Oct 11 03:00:18 localhost systemd-journald[676]: Journal started
Oct 11 03:00:18 localhost systemd-journald[676]: Runtime Journal (/run/log/journal/a1727ec20198bc6caf436a6e13c4ff5e) is 8.0M, max 153.6M, 145.6M free.
Oct 11 03:00:18 localhost systemd[1]: Queued start job for default target Multi-User System.
Oct 11 03:00:18 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 11 03:00:18 localhost systemd[1]: Started Journal Service.
Oct 11 03:00:18 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 11 03:00:18 localhost systemd[1]: Finished Load Kernel Module fuse.
Oct 11 03:00:18 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct 11 03:00:18 localhost systemd[1]: Finished Generate network units from Kernel command line.
Oct 11 03:00:18 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Oct 11 03:00:18 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 11 03:00:18 localhost kernel: ACPI: bus type drm_connector registered
Oct 11 03:00:18 localhost systemd[1]: Mounting FUSE Control File System...
Oct 11 03:00:18 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 11 03:00:18 localhost systemd[1]: Starting Rebuild Hardware Database...
Oct 11 03:00:18 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Oct 11 03:00:18 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 11 03:00:18 localhost systemd[1]: Starting Load/Save OS Random Seed...
Oct 11 03:00:18 localhost systemd[1]: Starting Create System Users...
Oct 11 03:00:18 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 11 03:00:18 localhost systemd[1]: Finished Load Kernel Module drm.
Oct 11 03:00:18 localhost systemd[1]: Mounted FUSE Control File System.
Oct 11 03:00:18 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 11 03:00:18 localhost systemd-journald[676]: Runtime Journal (/run/log/journal/a1727ec20198bc6caf436a6e13c4ff5e) is 8.0M, max 153.6M, 145.6M free.
Oct 11 03:00:18 localhost systemd-journald[676]: Received client request to flush runtime journal.
Oct 11 03:00:18 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Oct 11 03:00:18 localhost systemd[1]: Finished Load/Save OS Random Seed.
Oct 11 03:00:18 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 11 03:00:18 localhost systemd[1]: Finished Create System Users.
Oct 11 03:00:19 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 11 03:00:19 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 11 03:00:19 localhost systemd[1]: Reached target Preparation for Local File Systems.
Oct 11 03:00:19 localhost systemd[1]: Reached target Local File Systems.
Oct 11 03:00:19 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct 11 03:00:19 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct 11 03:00:19 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 11 03:00:19 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct 11 03:00:19 localhost systemd[1]: Starting Automatic Boot Loader Update...
Oct 11 03:00:19 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct 11 03:00:19 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 11 03:00:19 localhost bootctl[694]: Couldn't find EFI system partition, skipping.
Oct 11 03:00:19 localhost systemd[1]: Finished Automatic Boot Loader Update.
Oct 11 03:00:19 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 11 03:00:19 localhost systemd[1]: Starting Security Auditing Service...
Oct 11 03:00:19 localhost systemd[1]: Starting RPC Bind...
Oct 11 03:00:19 localhost systemd[1]: Starting Rebuild Journal Catalog...
Oct 11 03:00:19 localhost systemd[1]: Finished Rebuild Journal Catalog.
Oct 11 03:00:19 localhost auditd[700]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct 11 03:00:19 localhost auditd[700]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct 11 03:00:19 localhost systemd[1]: Started RPC Bind.
Oct 11 03:00:19 localhost augenrules[705]: /sbin/augenrules: No change
Oct 11 03:00:19 localhost augenrules[720]: No rules
Oct 11 03:00:19 localhost augenrules[720]: enabled 1
Oct 11 03:00:19 localhost augenrules[720]: failure 1
Oct 11 03:00:19 localhost augenrules[720]: pid 700
Oct 11 03:00:19 localhost augenrules[720]: rate_limit 0
Oct 11 03:00:19 localhost augenrules[720]: backlog_limit 8192
Oct 11 03:00:19 localhost augenrules[720]: lost 0
Oct 11 03:00:19 localhost augenrules[720]: backlog 0
Oct 11 03:00:19 localhost augenrules[720]: backlog_wait_time 60000
Oct 11 03:00:19 localhost augenrules[720]: backlog_wait_time_actual 0
Oct 11 03:00:19 localhost augenrules[720]: enabled 1
Oct 11 03:00:19 localhost augenrules[720]: failure 1
Oct 11 03:00:19 localhost augenrules[720]: pid 700
Oct 11 03:00:19 localhost augenrules[720]: rate_limit 0
Oct 11 03:00:19 localhost augenrules[720]: backlog_limit 8192
Oct 11 03:00:19 localhost augenrules[720]: lost 0
Oct 11 03:00:19 localhost augenrules[720]: backlog 0
Oct 11 03:00:19 localhost augenrules[720]: backlog_wait_time 60000
Oct 11 03:00:19 localhost augenrules[720]: backlog_wait_time_actual 0
Oct 11 03:00:19 localhost augenrules[720]: enabled 1
Oct 11 03:00:19 localhost augenrules[720]: failure 1
Oct 11 03:00:19 localhost augenrules[720]: pid 700
Oct 11 03:00:19 localhost augenrules[720]: rate_limit 0
Oct 11 03:00:19 localhost augenrules[720]: backlog_limit 8192
Oct 11 03:00:19 localhost augenrules[720]: lost 0
Oct 11 03:00:19 localhost augenrules[720]: backlog 4
Oct 11 03:00:19 localhost augenrules[720]: backlog_wait_time 60000
Oct 11 03:00:19 localhost augenrules[720]: backlog_wait_time_actual 0
Oct 11 03:00:19 localhost systemd[1]: Started Security Auditing Service.
Oct 11 03:00:19 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct 11 03:00:19 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct 11 03:00:19 localhost systemd[1]: Finished Rebuild Hardware Database.
Oct 11 03:00:19 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 11 03:00:19 localhost systemd-udevd[728]: Using default interface naming scheme 'rhel-9.0'.
Oct 11 03:00:19 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 11 03:00:19 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 11 03:00:19 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct 11 03:00:19 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 11 03:00:19 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 11 03:00:20 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct 11 03:00:20 localhost systemd-udevd[735]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 03:00:20 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 11 03:00:20 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 11 03:00:20 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 11 03:00:20 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 11 03:00:20 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 11 03:00:20 localhost kernel: Console: switching to colour dummy device 80x25
Oct 11 03:00:20 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 11 03:00:20 localhost kernel: [drm] features: -context_init
Oct 11 03:00:20 localhost kernel: [drm] number of scanouts: 1
Oct 11 03:00:20 localhost kernel: [drm] number of cap sets: 0
Oct 11 03:00:20 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct 11 03:00:20 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 11 03:00:20 localhost kernel: Console: switching to colour frame buffer device 128x48
Oct 11 03:00:20 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 11 03:00:20 localhost kernel: kvm_amd: TSC scaling supported
Oct 11 03:00:20 localhost kernel: kvm_amd: Nested Virtualization enabled
Oct 11 03:00:20 localhost kernel: kvm_amd: Nested Paging enabled
Oct 11 03:00:20 localhost kernel: kvm_amd: LBR virtualization supported
Oct 11 03:00:20 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct 11 03:00:20 localhost systemd[1]: Starting Update is Completed...
Oct 11 03:00:20 localhost systemd[1]: Finished Update is Completed.
Oct 11 03:00:20 localhost systemd[1]: Reached target System Initialization.
Oct 11 03:00:20 localhost systemd[1]: Started dnf makecache --timer.
Oct 11 03:00:20 localhost systemd[1]: Started Daily rotation of log files.
Oct 11 03:00:20 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct 11 03:00:20 localhost systemd[1]: Reached target Timer Units.
Oct 11 03:00:20 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 11 03:00:20 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct 11 03:00:20 localhost systemd[1]: Reached target Socket Units.
Oct 11 03:00:20 localhost systemd[1]: Starting D-Bus System Message Bus...
Oct 11 03:00:20 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 11 03:00:21 localhost systemd[1]: Started D-Bus System Message Bus.
Oct 11 03:00:21 localhost systemd[1]: Reached target Basic System.
Oct 11 03:00:21 localhost dbus-broker-lau[809]: Ready
Oct 11 03:00:21 localhost systemd[1]: Starting NTP client/server...
Oct 11 03:00:21 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct 11 03:00:21 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct 11 03:00:21 localhost systemd[1]: Starting IPv4 firewall with iptables...
Oct 11 03:00:21 localhost systemd[1]: Started irqbalance daemon.
Oct 11 03:00:21 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct 11 03:00:21 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 11 03:00:21 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 11 03:00:21 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 11 03:00:21 localhost systemd[1]: Reached target sshd-keygen.target.
Oct 11 03:00:21 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct 11 03:00:21 localhost systemd[1]: Reached target User and Group Name Lookups.
Oct 11 03:00:21 localhost systemd[1]: Starting User Login Management...
Oct 11 03:00:21 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct 11 03:00:21 localhost chronyd[828]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 11 03:00:21 localhost chronyd[828]: Loaded 0 symmetric keys
Oct 11 03:00:21 localhost chronyd[828]: Using right/UTC timezone to obtain leap second data
Oct 11 03:00:21 localhost chronyd[828]: Loaded seccomp filter (level 2)
Oct 11 03:00:21 localhost systemd-logind[820]: New seat seat0.
Oct 11 03:00:21 localhost systemd-logind[820]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 11 03:00:21 localhost systemd-logind[820]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 11 03:00:21 localhost systemd[1]: Started NTP client/server.
Oct 11 03:00:21 localhost systemd[1]: Started User Login Management.
Oct 11 03:00:21 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct 11 03:00:21 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct 11 03:00:21 localhost iptables.init[814]: iptables: Applying firewall rules: [  OK  ]
Oct 11 03:00:21 localhost systemd[1]: Finished IPv4 firewall with iptables.
Oct 11 03:00:23 localhost cloud-init[837]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 11 Oct 2025 03:00:23 +0000. Up 10.56 seconds.
Oct 11 03:00:24 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Oct 11 03:00:24 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Oct 11 03:00:24 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpj0rmoz2w.mount: Deactivated successfully.
Oct 11 03:00:24 localhost systemd[1]: Starting Hostname Service...
Oct 11 03:00:24 localhost systemd[1]: Started Hostname Service.
Oct 11 03:00:24 np0005480847.novalocal systemd-hostnamed[853]: Hostname set to <np0005480847.novalocal> (static)
Oct 11 03:00:24 np0005480847.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct 11 03:00:24 np0005480847.novalocal systemd[1]: Reached target Preparation for Network.
Oct 11 03:00:24 np0005480847.novalocal systemd[1]: Starting Network Manager...
Oct 11 03:00:24 np0005480847.novalocal NetworkManager[857]: <info>  [1760151624.8634] NetworkManager (version 1.54.1-1.el9) is starting... (boot:b8518b17-5d11-4cee-aee6-0266db1747b3)
Oct 11 03:00:24 np0005480847.novalocal NetworkManager[857]: <info>  [1760151624.8639] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 11 03:00:24 np0005480847.novalocal NetworkManager[857]: <info>  [1760151624.9036] manager[0x55abc772f080]: monitoring kernel firmware directory '/lib/firmware'.
Oct 11 03:00:24 np0005480847.novalocal NetworkManager[857]: <info>  [1760151624.9099] hostname: hostname: using hostnamed
Oct 11 03:00:24 np0005480847.novalocal NetworkManager[857]: <info>  [1760151624.9100] hostname: static hostname changed from (none) to "np0005480847.novalocal"
Oct 11 03:00:24 np0005480847.novalocal NetworkManager[857]: <info>  [1760151624.9105] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 11 03:00:24 np0005480847.novalocal NetworkManager[857]: <info>  [1760151624.9524] manager[0x55abc772f080]: rfkill: Wi-Fi hardware radio set enabled
Oct 11 03:00:24 np0005480847.novalocal NetworkManager[857]: <info>  [1760151624.9524] manager[0x55abc772f080]: rfkill: WWAN hardware radio set enabled
Oct 11 03:00:24 np0005480847.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct 11 03:00:24 np0005480847.novalocal NetworkManager[857]: <info>  [1760151624.9782] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 11 03:00:24 np0005480847.novalocal NetworkManager[857]: <info>  [1760151624.9783] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 11 03:00:24 np0005480847.novalocal NetworkManager[857]: <info>  [1760151624.9783] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 11 03:00:24 np0005480847.novalocal NetworkManager[857]: <info>  [1760151624.9784] manager: Networking is enabled by state file
Oct 11 03:00:24 np0005480847.novalocal NetworkManager[857]: <info>  [1760151624.9787] settings: Loaded settings plugin: keyfile (internal)
Oct 11 03:00:24 np0005480847.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 11 03:00:24 np0005480847.novalocal NetworkManager[857]: <info>  [1760151624.9961] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 11 03:00:24 np0005480847.novalocal NetworkManager[857]: <info>  [1760151624.9995] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0116] dhcp: init: Using DHCP client 'internal'
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0119] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0133] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0169] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0178] device (lo): Activation: starting connection 'lo' (346f8ef0-a09d-4c38-b58f-91fb90ce9381)
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0189] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0194] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 03:00:25 np0005480847.novalocal systemd[1]: Started Network Manager.
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0231] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0235] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0237] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0239] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0241] device (eth0): carrier: link connected
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0244] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0251] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0261] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 11 03:00:25 np0005480847.novalocal systemd[1]: Reached target Network.
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0265] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0266] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0268] manager: NetworkManager state is now CONNECTING
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0270] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0282] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0284] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0341] dhcp4 (eth0): state changed new lease, address=38.102.83.234
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0351] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0376] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0402] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0404] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0410] device (lo): Activation: successful, device activated.
Oct 11 03:00:25 np0005480847.novalocal systemd[1]: Starting Network Manager Wait Online...
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0424] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0427] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0431] manager: NetworkManager state is now CONNECTED_SITE
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0435] device (eth0): Activation: successful, device activated.
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0442] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 11 03:00:25 np0005480847.novalocal NetworkManager[857]: <info>  [1760151625.0447] manager: startup complete
Oct 11 03:00:25 np0005480847.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Oct 11 03:00:25 np0005480847.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 11 03:00:25 np0005480847.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Oct 11 03:00:25 np0005480847.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 11 03:00:25 np0005480847.novalocal systemd[1]: Reached target NFS client services.
Oct 11 03:00:25 np0005480847.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Oct 11 03:00:25 np0005480847.novalocal systemd[1]: Reached target Remote File Systems.
Oct 11 03:00:25 np0005480847.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 11 03:00:25 np0005480847.novalocal systemd[1]: Finished Network Manager Wait Online.
Oct 11 03:00:25 np0005480847.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Oct 11 03:00:25 np0005480847.novalocal cloud-init[922]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 11 Oct 2025 03:00:25 +0000. Up 12.13 seconds.
Oct 11 03:00:25 np0005480847.novalocal cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct 11 03:00:25 np0005480847.novalocal cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 11 03:00:25 np0005480847.novalocal cloud-init[922]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Oct 11 03:00:25 np0005480847.novalocal cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 11 03:00:25 np0005480847.novalocal cloud-init[922]: ci-info: |  eth0  | True |        38.102.83.234         | 255.255.255.0 | global | fa:16:3e:70:98:74 |
Oct 11 03:00:25 np0005480847.novalocal cloud-init[922]: ci-info: |  eth0  | True | fe80::f816:3eff:fe70:9874/64 |       .       |  link  | fa:16:3e:70:98:74 |
Oct 11 03:00:25 np0005480847.novalocal cloud-init[922]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Oct 11 03:00:25 np0005480847.novalocal cloud-init[922]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Oct 11 03:00:25 np0005480847.novalocal cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 11 03:00:25 np0005480847.novalocal cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct 11 03:00:25 np0005480847.novalocal cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 11 03:00:25 np0005480847.novalocal cloud-init[922]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Oct 11 03:00:25 np0005480847.novalocal cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 11 03:00:25 np0005480847.novalocal cloud-init[922]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Oct 11 03:00:25 np0005480847.novalocal cloud-init[922]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct 11 03:00:25 np0005480847.novalocal cloud-init[922]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Oct 11 03:00:25 np0005480847.novalocal cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 11 03:00:25 np0005480847.novalocal cloud-init[922]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct 11 03:00:25 np0005480847.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 11 03:00:25 np0005480847.novalocal cloud-init[922]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct 11 03:00:25 np0005480847.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 11 03:00:25 np0005480847.novalocal cloud-init[922]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Oct 11 03:00:25 np0005480847.novalocal cloud-init[922]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Oct 11 03:00:25 np0005480847.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 11 03:00:30 np0005480847.novalocal chronyd[828]: Selected source 142.4.192.253 (2.centos.pool.ntp.org)
Oct 11 03:00:30 np0005480847.novalocal chronyd[828]: System clock wrong by 1.572154 seconds
Oct 11 03:00:31 np0005480847.novalocal chronyd[828]: System clock was stepped by 1.572154 seconds
Oct 11 03:00:31 np0005480847.novalocal chronyd[828]: System clock TAI offset set to 37 seconds
Oct 11 03:00:32 np0005480847.novalocal irqbalance[817]: Cannot change IRQ 25 affinity: Operation not permitted
Oct 11 03:00:32 np0005480847.novalocal irqbalance[817]: IRQ 25 affinity is now unmanaged
Oct 11 03:00:32 np0005480847.novalocal irqbalance[817]: Cannot change IRQ 31 affinity: Operation not permitted
Oct 11 03:00:32 np0005480847.novalocal irqbalance[817]: IRQ 31 affinity is now unmanaged
Oct 11 03:00:32 np0005480847.novalocal irqbalance[817]: Cannot change IRQ 28 affinity: Operation not permitted
Oct 11 03:00:32 np0005480847.novalocal irqbalance[817]: IRQ 28 affinity is now unmanaged
Oct 11 03:00:32 np0005480847.novalocal irqbalance[817]: Cannot change IRQ 32 affinity: Operation not permitted
Oct 11 03:00:32 np0005480847.novalocal irqbalance[817]: IRQ 32 affinity is now unmanaged
Oct 11 03:00:32 np0005480847.novalocal irqbalance[817]: Cannot change IRQ 30 affinity: Operation not permitted
Oct 11 03:00:32 np0005480847.novalocal irqbalance[817]: IRQ 30 affinity is now unmanaged
Oct 11 03:00:32 np0005480847.novalocal irqbalance[817]: Cannot change IRQ 29 affinity: Operation not permitted
Oct 11 03:00:32 np0005480847.novalocal irqbalance[817]: IRQ 29 affinity is now unmanaged
Oct 11 03:00:33 np0005480847.novalocal useradd[988]: new group: name=cloud-user, GID=1001
Oct 11 03:00:33 np0005480847.novalocal useradd[988]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Oct 11 03:00:33 np0005480847.novalocal useradd[988]: add 'cloud-user' to group 'adm'
Oct 11 03:00:33 np0005480847.novalocal useradd[988]: add 'cloud-user' to group 'systemd-journal'
Oct 11 03:00:33 np0005480847.novalocal useradd[988]: add 'cloud-user' to shadow group 'adm'
Oct 11 03:00:33 np0005480847.novalocal useradd[988]: add 'cloud-user' to shadow group 'systemd-journal'
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: Generating public/private rsa key pair.
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: The key fingerprint is:
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: SHA256:VnRQo1+Tb8kA28aHP43zyJE55T2O5WUVbTr0V0K0Aj8 root@np0005480847.novalocal
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: The key's randomart image is:
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: +---[RSA 3072]----+
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: |          +++oo..|
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: |         . ==.+o=|
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: |          o.EB=*+|
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: |         . ..+BOB|
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: |        S   . BOX|
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: |       .     .=O+|
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: |             .oo.|
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: |                 |
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: |                 |
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: +----[SHA256]-----+
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: Generating public/private ecdsa key pair.
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: The key fingerprint is:
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: SHA256:PNzFtXCwYkhcZsV8oqR73ataq09MjS7/+wKjNjh7Ktg root@np0005480847.novalocal
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: The key's randomart image is:
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: +---[ECDSA 256]---+
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: |       ...+++.o  |
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: |       ..+..+=.. |
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: |        .oo.+o.  |
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: |       o.o.o o   |
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: |        S...o..  |
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: |        ...++ .  |
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: |     o   o..=o . |
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: |    . E o == .o  |
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: |       .o*o==oo+.|
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: +----[SHA256]-----+
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: Generating public/private ed25519 key pair.
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: The key fingerprint is:
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: SHA256:SU9Wq/HsugdoWF0vs8HHy5FyItBD2AWUmJlT6xYxi4w root@np0005480847.novalocal
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: The key's randomart image is:
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: +--[ED25519 256]--+
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: |         %B+o    |
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: |       oO.=*..   |
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: |      E ++B+.o . |
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: |       ..*o*B B  |
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: |       oS.=.o@ o |
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: |      . o.... o  |
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: |       .   ..    |
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: |           ..    |
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: |          oo     |
Oct 11 03:00:34 np0005480847.novalocal cloud-init[922]: +----[SHA256]-----+
Oct 11 03:00:34 np0005480847.novalocal sm-notify[1004]: Version 2.5.4 starting
Oct 11 03:00:34 np0005480847.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Oct 11 03:00:34 np0005480847.novalocal systemd[1]: Reached target Cloud-config availability.
Oct 11 03:00:34 np0005480847.novalocal systemd[1]: Reached target Network is Online.
Oct 11 03:00:34 np0005480847.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Oct 11 03:00:34 np0005480847.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Oct 11 03:00:34 np0005480847.novalocal systemd[1]: Starting System Logging Service...
Oct 11 03:00:34 np0005480847.novalocal systemd[1]: Starting OpenSSH server daemon...
Oct 11 03:00:34 np0005480847.novalocal systemd[1]: Starting Permit User Sessions...
Oct 11 03:00:34 np0005480847.novalocal systemd[1]: Started Notify NFS peers of a restart.
Oct 11 03:00:34 np0005480847.novalocal sshd[1006]: Server listening on 0.0.0.0 port 22.
Oct 11 03:00:34 np0005480847.novalocal sshd[1006]: Server listening on :: port 22.
Oct 11 03:00:34 np0005480847.novalocal systemd[1]: Started OpenSSH server daemon.
Oct 11 03:00:34 np0005480847.novalocal systemd[1]: Finished Permit User Sessions.
Oct 11 03:00:35 np0005480847.novalocal systemd[1]: Started Command Scheduler.
Oct 11 03:00:35 np0005480847.novalocal systemd[1]: Started Getty on tty1.
Oct 11 03:00:35 np0005480847.novalocal systemd[1]: Started Serial Getty on ttyS0.
Oct 11 03:00:35 np0005480847.novalocal systemd[1]: Reached target Login Prompts.
Oct 11 03:00:35 np0005480847.novalocal crond[1008]: (CRON) STARTUP (1.5.7)
Oct 11 03:00:35 np0005480847.novalocal crond[1008]: (CRON) INFO (Syslog will be used instead of sendmail.)
Oct 11 03:00:35 np0005480847.novalocal crond[1008]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 70% if used.)
Oct 11 03:00:35 np0005480847.novalocal crond[1008]: (CRON) INFO (running with inotify support)
Oct 11 03:00:35 np0005480847.novalocal rsyslogd[1005]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1005" x-info="https://www.rsyslog.com"] start
Oct 11 03:00:35 np0005480847.novalocal systemd[1]: Started System Logging Service.
Oct 11 03:00:35 np0005480847.novalocal rsyslogd[1005]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct 11 03:00:35 np0005480847.novalocal systemd[1]: Reached target Multi-User System.
Oct 11 03:00:35 np0005480847.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Oct 11 03:00:35 np0005480847.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 11 03:00:35 np0005480847.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Oct 11 03:00:35 np0005480847.novalocal rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 03:00:35 np0005480847.novalocal cloud-init[1018]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 11 Oct 2025 03:00:35 +0000. Up 20.30 seconds.
Oct 11 03:00:35 np0005480847.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Oct 11 03:00:35 np0005480847.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Oct 11 03:00:35 np0005480847.novalocal sshd-session[1022]: Unable to negotiate with 38.102.83.114 port 54404: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Oct 11 03:00:35 np0005480847.novalocal sshd-session[1026]: Unable to negotiate with 38.102.83.114 port 54434: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Oct 11 03:00:35 np0005480847.novalocal sshd-session[1020]: Connection closed by 38.102.83.114 port 54396 [preauth]
Oct 11 03:00:35 np0005480847.novalocal sshd-session[1028]: Unable to negotiate with 38.102.83.114 port 54448: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Oct 11 03:00:35 np0005480847.novalocal sshd-session[1024]: Connection closed by 38.102.83.114 port 54418 [preauth]
Oct 11 03:00:35 np0005480847.novalocal sshd-session[1032]: Connection reset by 38.102.83.114 port 54462 [preauth]
Oct 11 03:00:35 np0005480847.novalocal sshd-session[1036]: Unable to negotiate with 38.102.83.114 port 54474: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Oct 11 03:00:35 np0005480847.novalocal sshd-session[1038]: Unable to negotiate with 38.102.83.114 port 54490: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Oct 11 03:00:35 np0005480847.novalocal cloud-init[1039]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 11 Oct 2025 03:00:35 +0000. Up 20.73 seconds.
Oct 11 03:00:35 np0005480847.novalocal sshd-session[1030]: Connection closed by 38.102.83.114 port 54450 [preauth]
Oct 11 03:00:35 np0005480847.novalocal cloud-init[1042]: #############################################################
Oct 11 03:00:35 np0005480847.novalocal cloud-init[1043]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct 11 03:00:35 np0005480847.novalocal cloud-init[1045]: 256 SHA256:PNzFtXCwYkhcZsV8oqR73ataq09MjS7/+wKjNjh7Ktg root@np0005480847.novalocal (ECDSA)
Oct 11 03:00:35 np0005480847.novalocal cloud-init[1047]: 256 SHA256:SU9Wq/HsugdoWF0vs8HHy5FyItBD2AWUmJlT6xYxi4w root@np0005480847.novalocal (ED25519)
Oct 11 03:00:35 np0005480847.novalocal cloud-init[1049]: 3072 SHA256:VnRQo1+Tb8kA28aHP43zyJE55T2O5WUVbTr0V0K0Aj8 root@np0005480847.novalocal (RSA)
Oct 11 03:00:35 np0005480847.novalocal cloud-init[1050]: -----END SSH HOST KEY FINGERPRINTS-----
Oct 11 03:00:35 np0005480847.novalocal cloud-init[1051]: #############################################################
Oct 11 03:00:36 np0005480847.novalocal cloud-init[1039]: Cloud-init v. 24.4-7.el9 finished at Sat, 11 Oct 2025 03:00:36 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 21.09 seconds
Oct 11 03:00:36 np0005480847.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Oct 11 03:00:36 np0005480847.novalocal systemd[1]: Reached target Cloud-init target.
Oct 11 03:00:36 np0005480847.novalocal systemd[1]: Startup finished in 1.661s (kernel) + 2.785s (initrd) + 16.729s (userspace) = 21.176s.
Oct 11 03:00:36 np0005480847.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 11 03:00:56 np0005480847.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 11 03:01:01 np0005480847.novalocal CROND[1059]: (root) CMD (run-parts /etc/cron.hourly)
Oct 11 03:01:01 np0005480847.novalocal run-parts[1062]: (/etc/cron.hourly) starting 0anacron
Oct 11 03:01:01 np0005480847.novalocal anacron[1070]: Anacron started on 2025-10-10
Oct 11 03:01:01 np0005480847.novalocal run-parts[1072]: (/etc/cron.hourly) finished 0anacron
Oct 11 03:01:01 np0005480847.novalocal CROND[1058]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 11 03:01:01 np0005480847.novalocal anacron[1070]: Will run job `cron.daily' in 29 min.
Oct 11 03:01:01 np0005480847.novalocal anacron[1070]: Will run job `cron.weekly' in 49 min.
Oct 11 03:01:01 np0005480847.novalocal anacron[1070]: Will run job `cron.monthly' in 69 min.
Oct 11 03:01:01 np0005480847.novalocal anacron[1070]: Jobs will be executed sequentially
Oct 11 03:02:09 np0005480847.novalocal sshd-session[1073]: Accepted publickey for zuul from 38.102.83.114 port 47140 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Oct 11 03:02:09 np0005480847.novalocal systemd-logind[820]: New session 1 of user zuul.
Oct 11 03:02:09 np0005480847.novalocal systemd[1]: Created slice User Slice of UID 1000.
Oct 11 03:02:09 np0005480847.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 11 03:02:09 np0005480847.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 11 03:02:09 np0005480847.novalocal systemd[1]: Starting User Manager for UID 1000...
Oct 11 03:02:09 np0005480847.novalocal systemd[1077]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:02:09 np0005480847.novalocal systemd[1077]: Queued start job for default target Main User Target.
Oct 11 03:02:09 np0005480847.novalocal systemd[1077]: Created slice User Application Slice.
Oct 11 03:02:09 np0005480847.novalocal systemd[1077]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 11 03:02:09 np0005480847.novalocal systemd[1077]: Started Daily Cleanup of User's Temporary Directories.
Oct 11 03:02:09 np0005480847.novalocal systemd[1077]: Reached target Paths.
Oct 11 03:02:09 np0005480847.novalocal systemd[1077]: Reached target Timers.
Oct 11 03:02:09 np0005480847.novalocal systemd[1077]: Starting D-Bus User Message Bus Socket...
Oct 11 03:02:09 np0005480847.novalocal systemd[1077]: Starting Create User's Volatile Files and Directories...
Oct 11 03:02:09 np0005480847.novalocal systemd[1077]: Finished Create User's Volatile Files and Directories.
Oct 11 03:02:09 np0005480847.novalocal systemd[1077]: Listening on D-Bus User Message Bus Socket.
Oct 11 03:02:09 np0005480847.novalocal systemd[1077]: Reached target Sockets.
Oct 11 03:02:09 np0005480847.novalocal systemd[1077]: Reached target Basic System.
Oct 11 03:02:09 np0005480847.novalocal systemd[1077]: Reached target Main User Target.
Oct 11 03:02:09 np0005480847.novalocal systemd[1077]: Startup finished in 179ms.
Oct 11 03:02:09 np0005480847.novalocal systemd[1]: Started User Manager for UID 1000.
Oct 11 03:02:09 np0005480847.novalocal systemd[1]: Started Session 1 of User zuul.
Oct 11 03:02:09 np0005480847.novalocal sshd-session[1073]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:02:10 np0005480847.novalocal python3[1159]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:02:12 np0005480847.novalocal python3[1187]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:02:18 np0005480847.novalocal python3[1245]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:02:19 np0005480847.novalocal python3[1285]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct 11 03:02:21 np0005480847.novalocal python3[1311]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6kXoCaitSXs5LRjFdMnG92tz3e1z2PYZtO6JKnZwwLRIJkLNPvuetuB9wXsQZoSu/aD9QllddiePaUky+GLJDSHA3/SwpDoLUj7exGWqjYTebwA/DetQp8I2Lv2ctg4id7c2nXvxZhWCZSm2JQ9Hihx2KOdGNKRGm9QZKo3Iq5cqscF4WczJMJwoB+SouY+ebd0bxRxXCOYCJ7qiPKj+LoAjTn5Cz2cudVYaTPIp/xnTaG+IyAe9GWiYCXW1FY45vN72S0r5uQ7vIuWcHW4JuzCwXK3GhQYTSu2h29lKk/9uaT0lVQ02GDdE/FpatCpuDlVC7zvJZ+L9rgcuFP/7MbExViB36/Li/EXwVIZiS+L+rYDTF4Pb+B4++Kor0Q0rVFM5YsTDejRp2Eac0V7hwhHNRbm+2JfPgOoGpgKYhxhE1BzZih/hT+fUFWGydM4oAmE0dVnq94i27rO6LE9s64zRayq3IKl05NtEeI0a1ZJAOFOq1h2hxcOqL/d7ATLE= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:02:22 np0005480847.novalocal python3[1335]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:02:22 np0005480847.novalocal python3[1434]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:02:23 np0005480847.novalocal python3[1505]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760151742.4656982-207-200726043076858/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=162e7ea8f0e4462b92d727a0bd288530_id_rsa follow=False checksum=23d52f3b05a7ead5f3f2c4984d3ae5c6dffaa7a4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:02:23 np0005480847.novalocal python3[1628]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:02:24 np0005480847.novalocal python3[1699]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760151743.519542-240-181917286755346/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=162e7ea8f0e4462b92d727a0bd288530_id_rsa.pub follow=False checksum=860ef785f3c67dbfe84389fcf9ee7d02aa83c41e backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:02:25 np0005480847.novalocal python3[1747]: ansible-ping Invoked with data=pong
Oct 11 03:02:26 np0005480847.novalocal python3[1771]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:02:28 np0005480847.novalocal python3[1829]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct 11 03:02:29 np0005480847.novalocal python3[1861]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:02:30 np0005480847.novalocal python3[1885]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:02:30 np0005480847.novalocal python3[1909]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:02:30 np0005480847.novalocal python3[1933]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:02:31 np0005480847.novalocal python3[1957]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:02:31 np0005480847.novalocal python3[1981]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:02:32 np0005480847.novalocal sudo[2005]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-balyrzjhwpirarwcxdwetiyhpuqoxnew ; /usr/bin/python3'
Oct 11 03:02:32 np0005480847.novalocal sudo[2005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:02:32 np0005480847.novalocal python3[2007]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:02:32 np0005480847.novalocal sudo[2005]: pam_unix(sudo:session): session closed for user root
Oct 11 03:02:33 np0005480847.novalocal sudo[2083]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcxszqvztnbqfajmxqjzvctsdykjigqw ; /usr/bin/python3'
Oct 11 03:02:33 np0005480847.novalocal sudo[2083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:02:33 np0005480847.novalocal python3[2085]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:02:33 np0005480847.novalocal sudo[2083]: pam_unix(sudo:session): session closed for user root
Oct 11 03:02:33 np0005480847.novalocal sudo[2156]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tizivknofizujwcuhrmroycwkaiaxoyy ; /usr/bin/python3'
Oct 11 03:02:33 np0005480847.novalocal sudo[2156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:02:34 np0005480847.novalocal python3[2158]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760151753.1201072-21-68406458369969/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:02:34 np0005480847.novalocal sudo[2156]: pam_unix(sudo:session): session closed for user root
Oct 11 03:02:34 np0005480847.novalocal python3[2206]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:02:35 np0005480847.novalocal python3[2230]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:02:35 np0005480847.novalocal python3[2254]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:02:35 np0005480847.novalocal python3[2278]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:02:35 np0005480847.novalocal python3[2302]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:02:36 np0005480847.novalocal python3[2326]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:02:36 np0005480847.novalocal python3[2350]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:02:36 np0005480847.novalocal python3[2374]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:02:37 np0005480847.novalocal python3[2398]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:02:37 np0005480847.novalocal python3[2422]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:02:37 np0005480847.novalocal python3[2446]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:02:38 np0005480847.novalocal python3[2470]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:02:38 np0005480847.novalocal python3[2494]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:02:38 np0005480847.novalocal python3[2518]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:02:38 np0005480847.novalocal python3[2542]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:02:39 np0005480847.novalocal python3[2566]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:02:39 np0005480847.novalocal python3[2590]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:02:39 np0005480847.novalocal python3[2614]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:02:40 np0005480847.novalocal python3[2638]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:02:40 np0005480847.novalocal python3[2662]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:02:40 np0005480847.novalocal python3[2686]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:02:40 np0005480847.novalocal python3[2710]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:02:41 np0005480847.novalocal python3[2734]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:02:41 np0005480847.novalocal python3[2758]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:02:41 np0005480847.novalocal python3[2782]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:02:42 np0005480847.novalocal python3[2806]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:02:44 np0005480847.novalocal sudo[2830]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fenhuttmzhiwdmzbfodniixzvfbcjmok ; /usr/bin/python3'
Oct 11 03:02:44 np0005480847.novalocal sudo[2830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:02:44 np0005480847.novalocal python3[2832]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 11 03:02:44 np0005480847.novalocal systemd[1]: Starting Time & Date Service...
Oct 11 03:02:44 np0005480847.novalocal systemd[1]: Started Time & Date Service.
Oct 11 03:02:44 np0005480847.novalocal systemd-timedated[2834]: Changed time zone to 'UTC' (UTC).
Oct 11 03:02:44 np0005480847.novalocal sudo[2830]: pam_unix(sudo:session): session closed for user root
Oct 11 03:02:45 np0005480847.novalocal sudo[2861]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykzpoqdhvwbzceumkmymwaxgthcgtyza ; /usr/bin/python3'
Oct 11 03:02:45 np0005480847.novalocal sudo[2861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:02:45 np0005480847.novalocal python3[2863]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:02:45 np0005480847.novalocal sudo[2861]: pam_unix(sudo:session): session closed for user root
Oct 11 03:02:45 np0005480847.novalocal python3[2939]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:02:46 np0005480847.novalocal python3[3010]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1760151765.4376638-153-216729409987718/source _original_basename=tmpefko4jt_ follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:02:46 np0005480847.novalocal python3[3110]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:02:46 np0005480847.novalocal python3[3181]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760151766.2527084-183-96368014529436/source _original_basename=tmpyhh2b5qh follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:02:47 np0005480847.novalocal sudo[3281]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyyvermeacrrsgvgungydvfrsslvgqlh ; /usr/bin/python3'
Oct 11 03:02:47 np0005480847.novalocal sudo[3281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:02:47 np0005480847.novalocal python3[3283]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:02:47 np0005480847.novalocal sudo[3281]: pam_unix(sudo:session): session closed for user root
Oct 11 03:02:47 np0005480847.novalocal sudo[3354]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgjudhanltlytyticsrjtlugwcnmhtzs ; /usr/bin/python3'
Oct 11 03:02:47 np0005480847.novalocal sudo[3354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:02:48 np0005480847.novalocal python3[3356]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760151767.3322964-231-224733931650317/source _original_basename=tmpo43z92c3 follow=False checksum=9aa420946138b91e611361a1f3fc02e7d91b7140 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:02:48 np0005480847.novalocal sudo[3354]: pam_unix(sudo:session): session closed for user root
Oct 11 03:02:48 np0005480847.novalocal python3[3404]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:02:48 np0005480847.novalocal python3[3430]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:02:49 np0005480847.novalocal sudo[3508]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vitzwykxwlmpyrfbycqztisepptzutns ; /usr/bin/python3'
Oct 11 03:02:49 np0005480847.novalocal sudo[3508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:02:49 np0005480847.novalocal python3[3510]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:02:49 np0005480847.novalocal sudo[3508]: pam_unix(sudo:session): session closed for user root
Oct 11 03:02:49 np0005480847.novalocal sudo[3581]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryprcenjihhlxzhlnuvvkztvultdgbzg ; /usr/bin/python3'
Oct 11 03:02:49 np0005480847.novalocal sudo[3581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:02:49 np0005480847.novalocal python3[3583]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1760151769.1622617-273-126710924483339/source _original_basename=tmpdyx8ydfd follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:02:49 np0005480847.novalocal sudo[3581]: pam_unix(sudo:session): session closed for user root
Oct 11 03:02:50 np0005480847.novalocal sudo[3632]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtiuxdpqqazjxslkisxkilptcxdaoowl ; /usr/bin/python3'
Oct 11 03:02:50 np0005480847.novalocal sudo[3632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:02:50 np0005480847.novalocal python3[3634]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-663a-3639-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:02:50 np0005480847.novalocal sudo[3632]: pam_unix(sudo:session): session closed for user root
Oct 11 03:02:51 np0005480847.novalocal python3[3662]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-663a-3639-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct 11 03:02:52 np0005480847.novalocal python3[3690]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:03:09 np0005480847.novalocal sudo[3714]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhadkxvoknizpxtharokjolnrowvpzyg ; /usr/bin/python3'
Oct 11 03:03:09 np0005480847.novalocal sudo[3714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:03:09 np0005480847.novalocal python3[3716]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:03:09 np0005480847.novalocal sudo[3714]: pam_unix(sudo:session): session closed for user root
Oct 11 03:03:14 np0005480847.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 11 03:03:42 np0005480847.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 11 03:03:42 np0005480847.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Oct 11 03:03:42 np0005480847.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct 11 03:03:42 np0005480847.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct 11 03:03:42 np0005480847.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Oct 11 03:03:42 np0005480847.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Oct 11 03:03:42 np0005480847.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Oct 11 03:03:42 np0005480847.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Oct 11 03:03:42 np0005480847.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Oct 11 03:03:42 np0005480847.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Oct 11 03:03:42 np0005480847.novalocal NetworkManager[857]: <info>  [1760151822.8151] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 11 03:03:42 np0005480847.novalocal systemd-udevd[3719]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 03:03:42 np0005480847.novalocal NetworkManager[857]: <info>  [1760151822.8384] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 03:03:42 np0005480847.novalocal NetworkManager[857]: <info>  [1760151822.8444] settings: (eth1): created default wired connection 'Wired connection 1'
Oct 11 03:03:42 np0005480847.novalocal NetworkManager[857]: <info>  [1760151822.8453] device (eth1): carrier: link connected
Oct 11 03:03:42 np0005480847.novalocal NetworkManager[857]: <info>  [1760151822.8457] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 11 03:03:42 np0005480847.novalocal NetworkManager[857]: <info>  [1760151822.8468] policy: auto-activating connection 'Wired connection 1' (4acfe135-17b2-37ce-bcca-a8a1c8735455)
Oct 11 03:03:42 np0005480847.novalocal NetworkManager[857]: <info>  [1760151822.8475] device (eth1): Activation: starting connection 'Wired connection 1' (4acfe135-17b2-37ce-bcca-a8a1c8735455)
Oct 11 03:03:42 np0005480847.novalocal NetworkManager[857]: <info>  [1760151822.8476] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 03:03:42 np0005480847.novalocal NetworkManager[857]: <info>  [1760151822.8482] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 03:03:42 np0005480847.novalocal NetworkManager[857]: <info>  [1760151822.8488] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 03:03:42 np0005480847.novalocal NetworkManager[857]: <info>  [1760151822.8497] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 11 03:03:43 np0005480847.novalocal python3[3746]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-2598-e5da-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:03:53 np0005480847.novalocal sudo[3824]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piedldtkabzcqtpezcecjkslhituwxkq ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 11 03:03:53 np0005480847.novalocal sudo[3824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:03:53 np0005480847.novalocal python3[3826]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:03:53 np0005480847.novalocal sudo[3824]: pam_unix(sudo:session): session closed for user root
Oct 11 03:03:54 np0005480847.novalocal sudo[3897]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fevbeeaevuhalawzfyzpbjvwqgpreriq ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 11 03:03:54 np0005480847.novalocal sudo[3897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:03:54 np0005480847.novalocal python3[3899]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760151833.5687506-102-246000744058096/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=192a5249bacfac82e846e812ea581baa3a9274b1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:03:54 np0005480847.novalocal sudo[3897]: pam_unix(sudo:session): session closed for user root
Oct 11 03:03:55 np0005480847.novalocal sudo[3947]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioltkcgblrhizialgjziqvohkieregay ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 11 03:03:55 np0005480847.novalocal sudo[3947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:03:55 np0005480847.novalocal python3[3949]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 03:03:55 np0005480847.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 11 03:03:55 np0005480847.novalocal systemd[1]: Stopped Network Manager Wait Online.
Oct 11 03:03:55 np0005480847.novalocal systemd[1]: Stopping Network Manager Wait Online...
Oct 11 03:03:55 np0005480847.novalocal systemd[1]: Stopping Network Manager...
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[857]: <info>  [1760151835.3326] caught SIGTERM, shutting down normally.
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[857]: <info>  [1760151835.3343] dhcp4 (eth0): canceled DHCP transaction
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[857]: <info>  [1760151835.3343] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[857]: <info>  [1760151835.3344] dhcp4 (eth0): state changed no lease
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[857]: <info>  [1760151835.3348] manager: NetworkManager state is now CONNECTING
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[857]: <info>  [1760151835.3492] dhcp4 (eth1): canceled DHCP transaction
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[857]: <info>  [1760151835.3493] dhcp4 (eth1): state changed no lease
Oct 11 03:03:55 np0005480847.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[857]: <info>  [1760151835.3550] exiting (success)
Oct 11 03:03:55 np0005480847.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 11 03:03:55 np0005480847.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 11 03:03:55 np0005480847.novalocal systemd[1]: Stopped Network Manager.
Oct 11 03:03:55 np0005480847.novalocal systemd[1]: NetworkManager.service: Consumed 1.310s CPU time, 10.0M memory peak.
Oct 11 03:03:55 np0005480847.novalocal systemd[1]: Starting Network Manager...
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.4328] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:b8518b17-5d11-4cee-aee6-0266db1747b3)
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.4330] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.4396] manager[0x5605a16da070]: monitoring kernel firmware directory '/lib/firmware'.
Oct 11 03:03:55 np0005480847.novalocal systemd[1]: Starting Hostname Service...
Oct 11 03:03:55 np0005480847.novalocal systemd[1]: Started Hostname Service.
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5552] hostname: hostname: using hostnamed
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5553] hostname: static hostname changed from (none) to "np0005480847.novalocal"
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5562] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5570] manager[0x5605a16da070]: rfkill: Wi-Fi hardware radio set enabled
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5571] manager[0x5605a16da070]: rfkill: WWAN hardware radio set enabled
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5623] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5623] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5624] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5625] manager: Networking is enabled by state file
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5629] settings: Loaded settings plugin: keyfile (internal)
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5636] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5685] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5706] dhcp: init: Using DHCP client 'internal'
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5711] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5719] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5728] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5742] device (lo): Activation: starting connection 'lo' (346f8ef0-a09d-4c38-b58f-91fb90ce9381)
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5754] device (eth0): carrier: link connected
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5761] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5768] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5769] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5778] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5788] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5797] device (eth1): carrier: link connected
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5804] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5812] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (4acfe135-17b2-37ce-bcca-a8a1c8735455) (indicated)
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5814] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5822] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5833] device (eth1): Activation: starting connection 'Wired connection 1' (4acfe135-17b2-37ce-bcca-a8a1c8735455)
Oct 11 03:03:55 np0005480847.novalocal systemd[1]: Started Network Manager.
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5842] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5869] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5881] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5888] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5898] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5912] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5922] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5932] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5940] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 11 03:03:55 np0005480847.novalocal systemd[1]: Starting Network Manager Wait Online...
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5964] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5974] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.5995] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.6003] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.6040] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.6053] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.6071] device (lo): Activation: successful, device activated.
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.6098] dhcp4 (eth0): state changed new lease, address=38.102.83.234
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.6127] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.6285] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 11 03:03:55 np0005480847.novalocal sudo[3947]: pam_unix(sudo:session): session closed for user root
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.6356] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.6359] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.6362] manager: NetworkManager state is now CONNECTED_SITE
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.6366] device (eth0): Activation: successful, device activated.
Oct 11 03:03:55 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151835.6373] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 11 03:03:55 np0005480847.novalocal python3[4033]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-2598-e5da-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:04:05 np0005480847.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 11 03:04:25 np0005480847.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 11 03:04:40 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151880.9115] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 11 03:04:40 np0005480847.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 11 03:04:40 np0005480847.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 11 03:04:40 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151880.9480] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 11 03:04:40 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151880.9485] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 11 03:04:40 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151880.9510] device (eth1): Activation: successful, device activated.
Oct 11 03:04:40 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151880.9524] manager: startup complete
Oct 11 03:04:40 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151880.9532] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct 11 03:04:40 np0005480847.novalocal NetworkManager[3961]: <warn>  [1760151880.9555] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct 11 03:04:40 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151880.9574] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct 11 03:04:40 np0005480847.novalocal systemd[1]: Finished Network Manager Wait Online.
Oct 11 03:04:40 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151880.9692] dhcp4 (eth1): canceled DHCP transaction
Oct 11 03:04:40 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151880.9692] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 11 03:04:40 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151880.9693] dhcp4 (eth1): state changed no lease
Oct 11 03:04:40 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151880.9720] policy: auto-activating connection 'ci-private-network' (50f82f9b-7ab1-5a17-a628-b0771fc67283)
Oct 11 03:04:40 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151880.9728] device (eth1): Activation: starting connection 'ci-private-network' (50f82f9b-7ab1-5a17-a628-b0771fc67283)
Oct 11 03:04:40 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151880.9731] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 03:04:40 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151880.9740] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 03:04:40 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151880.9753] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 03:04:40 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151880.9768] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 03:04:40 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151880.9838] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 03:04:40 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151880.9842] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 03:04:40 np0005480847.novalocal NetworkManager[3961]: <info>  [1760151880.9864] device (eth1): Activation: successful, device activated.
Oct 11 03:04:46 np0005480847.novalocal systemd[1077]: Starting Mark boot as successful...
Oct 11 03:04:46 np0005480847.novalocal systemd[1077]: Finished Mark boot as successful.
Oct 11 03:04:51 np0005480847.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 11 03:04:54 np0005480847.novalocal sudo[4137]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyohkhpxemefktxjpjkmyswcqhfjovrk ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 11 03:04:54 np0005480847.novalocal sudo[4137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:04:54 np0005480847.novalocal python3[4139]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:04:55 np0005480847.novalocal sudo[4137]: pam_unix(sudo:session): session closed for user root
Oct 11 03:04:55 np0005480847.novalocal sudo[4210]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abchpnwhsqocznopeiqbqjjoushphcbl ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 11 03:04:55 np0005480847.novalocal sudo[4210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:04:55 np0005480847.novalocal python3[4212]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760151894.636173-267-118095016134023/source _original_basename=tmp06pnqx4q follow=False checksum=d20ce195f7a743ac0bf30a512c68796457231cf2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:04:55 np0005480847.novalocal sudo[4210]: pam_unix(sudo:session): session closed for user root
Oct 11 03:05:22 np0005480847.novalocal sshd-session[4237]: Received disconnect from 193.46.255.103 port 58452:11:  [preauth]
Oct 11 03:05:22 np0005480847.novalocal sshd-session[4237]: Disconnected from authenticating user root 193.46.255.103 port 58452 [preauth]
Oct 11 03:05:55 np0005480847.novalocal sshd-session[1086]: Received disconnect from 38.102.83.114 port 47140:11: disconnected by user
Oct 11 03:05:55 np0005480847.novalocal sshd-session[1086]: Disconnected from user zuul 38.102.83.114 port 47140
Oct 11 03:05:55 np0005480847.novalocal sshd-session[1073]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:05:55 np0005480847.novalocal systemd-logind[820]: Session 1 logged out. Waiting for processes to exit.
Oct 11 03:06:34 np0005480847.novalocal sshd-session[4240]: Connection closed by 167.99.78.165 port 59743
Oct 11 03:07:46 np0005480847.novalocal systemd[1077]: Created slice User Background Tasks Slice.
Oct 11 03:07:46 np0005480847.novalocal systemd[1077]: Starting Cleanup of User's Temporary Files and Directories...
Oct 11 03:07:46 np0005480847.novalocal systemd[1077]: Finished Cleanup of User's Temporary Files and Directories.
Oct 11 03:09:12 np0005480847.novalocal chronyd[828]: Selected source 138.197.135.239 (2.centos.pool.ntp.org)
Oct 11 03:10:03 np0005480847.novalocal sshd-session[4244]: Accepted publickey for zuul from 38.102.83.114 port 60086 ssh2: RSA SHA256:kxWsFSq8COsYLodRw7mhPmCkhu5z7pyatmccmmT74Lc
Oct 11 03:10:03 np0005480847.novalocal systemd-logind[820]: New session 3 of user zuul.
Oct 11 03:10:03 np0005480847.novalocal systemd[1]: Started Session 3 of User zuul.
Oct 11 03:10:03 np0005480847.novalocal sshd-session[4244]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:10:03 np0005480847.novalocal sudo[4271]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqomsghbczspudyjhclatdipafuvzytw ; /usr/bin/python3'
Oct 11 03:10:03 np0005480847.novalocal sudo[4271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:10:03 np0005480847.novalocal python3[4273]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-c7ce-b60a-000000001ce6-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:10:03 np0005480847.novalocal sudo[4271]: pam_unix(sudo:session): session closed for user root
Oct 11 03:10:03 np0005480847.novalocal sudo[4300]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwkgyalvdthmofjyymgbftkkrnecgycy ; /usr/bin/python3'
Oct 11 03:10:03 np0005480847.novalocal sudo[4300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:10:03 np0005480847.novalocal python3[4302]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:10:03 np0005480847.novalocal sudo[4300]: pam_unix(sudo:session): session closed for user root
Oct 11 03:10:04 np0005480847.novalocal sudo[4326]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waagmgueigbcnhoogbfufdpdrlzwfyuo ; /usr/bin/python3'
Oct 11 03:10:04 np0005480847.novalocal sudo[4326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:10:04 np0005480847.novalocal python3[4328]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:10:04 np0005480847.novalocal sudo[4326]: pam_unix(sudo:session): session closed for user root
Oct 11 03:10:04 np0005480847.novalocal sudo[4352]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyenbcuxoisibbvglgifmwixhexuexbu ; /usr/bin/python3'
Oct 11 03:10:04 np0005480847.novalocal sudo[4352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:10:04 np0005480847.novalocal python3[4354]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:10:04 np0005480847.novalocal sudo[4352]: pam_unix(sudo:session): session closed for user root
Oct 11 03:10:04 np0005480847.novalocal sudo[4378]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frzctquzwsgtndgphjstjipkukqaywwx ; /usr/bin/python3'
Oct 11 03:10:04 np0005480847.novalocal sudo[4378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:10:04 np0005480847.novalocal python3[4380]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:10:04 np0005480847.novalocal sudo[4378]: pam_unix(sudo:session): session closed for user root
Oct 11 03:10:05 np0005480847.novalocal sudo[4404]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhsbmscpqikaiharrbfsgurjqpahbtqb ; /usr/bin/python3'
Oct 11 03:10:05 np0005480847.novalocal sudo[4404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:10:05 np0005480847.novalocal python3[4406]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:10:05 np0005480847.novalocal python3[4406]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct 11 03:10:05 np0005480847.novalocal sudo[4404]: pam_unix(sudo:session): session closed for user root
Oct 11 03:10:05 np0005480847.novalocal sudo[4430]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okrldtihmybmfbbhygfhmhcunsyoatqm ; /usr/bin/python3'
Oct 11 03:10:05 np0005480847.novalocal sudo[4430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:10:06 np0005480847.novalocal python3[4432]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 03:10:06 np0005480847.novalocal systemd[1]: Reloading.
Oct 11 03:10:06 np0005480847.novalocal systemd-rc-local-generator[4454]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:10:06 np0005480847.novalocal sudo[4430]: pam_unix(sudo:session): session closed for user root
Oct 11 03:10:07 np0005480847.novalocal sudo[4485]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvgufdxkplegkgyymsepwpklmtprnpsz ; /usr/bin/python3'
Oct 11 03:10:07 np0005480847.novalocal sudo[4485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:10:07 np0005480847.novalocal python3[4487]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct 11 03:10:07 np0005480847.novalocal sudo[4485]: pam_unix(sudo:session): session closed for user root
Oct 11 03:10:08 np0005480847.novalocal sudo[4511]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kowndokouspnwruerhnwzadylmxhzpei ; /usr/bin/python3'
Oct 11 03:10:08 np0005480847.novalocal sudo[4511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:10:08 np0005480847.novalocal python3[4513]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:10:08 np0005480847.novalocal sudo[4511]: pam_unix(sudo:session): session closed for user root
Oct 11 03:10:08 np0005480847.novalocal sudo[4539]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqdaiokgspajemehrcalaskmxfqgxsfg ; /usr/bin/python3'
Oct 11 03:10:08 np0005480847.novalocal sudo[4539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:10:08 np0005480847.novalocal python3[4541]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:10:08 np0005480847.novalocal sudo[4539]: pam_unix(sudo:session): session closed for user root
Oct 11 03:10:08 np0005480847.novalocal sudo[4567]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsyluuykjlmfkpslkooiecbcwumcxjoz ; /usr/bin/python3'
Oct 11 03:10:08 np0005480847.novalocal sudo[4567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:10:08 np0005480847.novalocal python3[4569]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:10:08 np0005480847.novalocal sudo[4567]: pam_unix(sudo:session): session closed for user root
Oct 11 03:10:08 np0005480847.novalocal sudo[4595]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpocccaqdmlhwbewqfczisfagihrmycr ; /usr/bin/python3'
Oct 11 03:10:08 np0005480847.novalocal sudo[4595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:10:09 np0005480847.novalocal python3[4597]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:10:09 np0005480847.novalocal sudo[4595]: pam_unix(sudo:session): session closed for user root
Oct 11 03:10:09 np0005480847.novalocal python3[4624]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-c7ce-b60a-000000001cec-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:10:10 np0005480847.novalocal python3[4654]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:10:11 np0005480847.novalocal sshd-session[4247]: Connection closed by 38.102.83.114 port 60086
Oct 11 03:10:11 np0005480847.novalocal sshd-session[4244]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:10:11 np0005480847.novalocal systemd-logind[820]: Session 3 logged out. Waiting for processes to exit.
Oct 11 03:10:11 np0005480847.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Oct 11 03:10:11 np0005480847.novalocal systemd[1]: session-3.scope: Consumed 3.886s CPU time.
Oct 11 03:10:11 np0005480847.novalocal systemd-logind[820]: Removed session 3.
Oct 11 03:10:12 np0005480847.novalocal irqbalance[817]: Cannot change IRQ 26 affinity: Operation not permitted
Oct 11 03:10:12 np0005480847.novalocal irqbalance[817]: IRQ 26 affinity is now unmanaged
Oct 11 03:10:13 np0005480847.novalocal sshd-session[4661]: Accepted publickey for zuul from 38.102.83.114 port 44734 ssh2: RSA SHA256:kxWsFSq8COsYLodRw7mhPmCkhu5z7pyatmccmmT74Lc
Oct 11 03:10:13 np0005480847.novalocal systemd-logind[820]: New session 4 of user zuul.
Oct 11 03:10:13 np0005480847.novalocal systemd[1]: Started Session 4 of User zuul.
Oct 11 03:10:13 np0005480847.novalocal sshd-session[4661]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:10:13 np0005480847.novalocal sudo[4688]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dijovmjbvubroqzjorafvffbfuoqkqmr ; /usr/bin/python3'
Oct 11 03:10:13 np0005480847.novalocal sudo[4688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:10:13 np0005480847.novalocal python3[4690]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 11 03:10:22 np0005480847.novalocal irqbalance[817]: Cannot change IRQ 27 affinity: Operation not permitted
Oct 11 03:10:22 np0005480847.novalocal irqbalance[817]: IRQ 27 affinity is now unmanaged
Oct 11 03:10:31 np0005480847.novalocal kernel: SELinux:  Converting 364 SID table entries...
Oct 11 03:10:31 np0005480847.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 03:10:31 np0005480847.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 11 03:10:31 np0005480847.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 03:10:31 np0005480847.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 11 03:10:31 np0005480847.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 03:10:31 np0005480847.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 03:10:31 np0005480847.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 03:10:40 np0005480847.novalocal kernel: SELinux:  Converting 364 SID table entries...
Oct 11 03:10:40 np0005480847.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 03:10:40 np0005480847.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 11 03:10:40 np0005480847.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 03:10:40 np0005480847.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 11 03:10:40 np0005480847.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 03:10:40 np0005480847.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 03:10:40 np0005480847.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 03:10:49 np0005480847.novalocal kernel: SELinux:  Converting 364 SID table entries...
Oct 11 03:10:49 np0005480847.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 03:10:49 np0005480847.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 11 03:10:49 np0005480847.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 03:10:49 np0005480847.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 11 03:10:49 np0005480847.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 03:10:49 np0005480847.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 03:10:49 np0005480847.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 03:10:50 np0005480847.novalocal setsebool[4757]: The virt_use_nfs policy boolean was changed to 1 by root
Oct 11 03:10:50 np0005480847.novalocal setsebool[4757]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Oct 11 03:11:01 np0005480847.novalocal kernel: SELinux:  Converting 367 SID table entries...
Oct 11 03:11:01 np0005480847.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 03:11:01 np0005480847.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 11 03:11:01 np0005480847.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 03:11:01 np0005480847.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 11 03:11:01 np0005480847.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 03:11:01 np0005480847.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 03:11:01 np0005480847.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 03:11:20 np0005480847.novalocal dbus-broker-launch[810]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 11 03:11:20 np0005480847.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 11 03:11:20 np0005480847.novalocal systemd[1]: Starting man-db-cache-update.service...
Oct 11 03:11:20 np0005480847.novalocal systemd[1]: Reloading.
Oct 11 03:11:20 np0005480847.novalocal systemd-rc-local-generator[5511]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:11:20 np0005480847.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Oct 11 03:11:21 np0005480847.novalocal systemd[1]: Starting PackageKit Daemon...
Oct 11 03:11:21 np0005480847.novalocal PackageKit[6174]: daemon start
Oct 11 03:11:21 np0005480847.novalocal systemd[1]: Starting Authorization Manager...
Oct 11 03:11:21 np0005480847.novalocal polkitd[6259]: Started polkitd version 0.117
Oct 11 03:11:21 np0005480847.novalocal polkitd[6259]: Loading rules from directory /etc/polkit-1/rules.d
Oct 11 03:11:21 np0005480847.novalocal polkitd[6259]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 11 03:11:21 np0005480847.novalocal polkitd[6259]: Finished loading, compiling and executing 3 rules
Oct 11 03:11:21 np0005480847.novalocal systemd[1]: Started Authorization Manager.
Oct 11 03:11:21 np0005480847.novalocal polkitd[6259]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Oct 11 03:11:21 np0005480847.novalocal systemd[1]: Started PackageKit Daemon.
Oct 11 03:11:21 np0005480847.novalocal sudo[4688]: pam_unix(sudo:session): session closed for user root
Oct 11 03:11:22 np0005480847.novalocal python3[7010]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-c946-c07f-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:11:23 np0005480847.novalocal kernel: evm: overlay not supported
Oct 11 03:11:23 np0005480847.novalocal systemd[1077]: Starting D-Bus User Message Bus...
Oct 11 03:11:23 np0005480847.novalocal dbus-broker-launch[7998]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct 11 03:11:23 np0005480847.novalocal dbus-broker-launch[7998]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct 11 03:11:23 np0005480847.novalocal systemd[1077]: Started D-Bus User Message Bus.
Oct 11 03:11:23 np0005480847.novalocal dbus-broker-lau[7998]: Ready
Oct 11 03:11:23 np0005480847.novalocal systemd[1077]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 11 03:11:23 np0005480847.novalocal systemd[1077]: Created slice Slice /user.
Oct 11 03:11:23 np0005480847.novalocal systemd[1077]: podman-7836.scope: unit configures an IP firewall, but not running as root.
Oct 11 03:11:23 np0005480847.novalocal systemd[1077]: (This warning is only shown for the first unit using IP firewalling.)
Oct 11 03:11:23 np0005480847.novalocal systemd[1077]: Started podman-7836.scope.
Oct 11 03:11:23 np0005480847.novalocal systemd[1077]: Started podman-pause-97939b4f.scope.
Oct 11 03:11:24 np0005480847.novalocal sudo[8584]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftzabxpifpvspkfvlvelldbzonvpqdud ; /usr/bin/python3'
Oct 11 03:11:24 np0005480847.novalocal sudo[8584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:11:24 np0005480847.novalocal python3[8608]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                      location = "38.102.83.59:5001"
                                                      insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                      location = "38.102.83.59:5001"
                                                      insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:11:24 np0005480847.novalocal sudo[8584]: pam_unix(sudo:session): session closed for user root
Oct 11 03:11:24 np0005480847.novalocal sshd-session[4664]: Connection closed by 38.102.83.114 port 44734
Oct 11 03:11:24 np0005480847.novalocal sshd-session[4661]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:11:24 np0005480847.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Oct 11 03:11:24 np0005480847.novalocal systemd[1]: session-4.scope: Consumed 1min 2.260s CPU time.
Oct 11 03:11:24 np0005480847.novalocal systemd-logind[820]: Session 4 logged out. Waiting for processes to exit.
Oct 11 03:11:24 np0005480847.novalocal systemd-logind[820]: Removed session 4.
Oct 11 03:11:43 np0005480847.novalocal sshd-session[16117]: Connection closed by 38.102.83.159 port 33196 [preauth]
Oct 11 03:11:43 np0005480847.novalocal sshd-session[16118]: Unable to negotiate with 38.102.83.159 port 33218: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Oct 11 03:11:43 np0005480847.novalocal sshd-session[16121]: Unable to negotiate with 38.102.83.159 port 33230: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Oct 11 03:11:43 np0005480847.novalocal sshd-session[16125]: Connection closed by 38.102.83.159 port 33206 [preauth]
Oct 11 03:11:43 np0005480847.novalocal sshd-session[16124]: Unable to negotiate with 38.102.83.159 port 33232: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Oct 11 03:11:48 np0005480847.novalocal sshd-session[17684]: Accepted publickey for zuul from 38.102.83.114 port 33988 ssh2: RSA SHA256:kxWsFSq8COsYLodRw7mhPmCkhu5z7pyatmccmmT74Lc
Oct 11 03:11:48 np0005480847.novalocal systemd-logind[820]: New session 5 of user zuul.
Oct 11 03:11:48 np0005480847.novalocal systemd[1]: Started Session 5 of User zuul.
Oct 11 03:11:48 np0005480847.novalocal sshd-session[17684]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:11:48 np0005480847.novalocal python3[17784]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG/v5Efx/208uJJ0BMeNdb1wDtzZNK7CQ8R8SkJufkuV5eGu8KVMQoC0VDM5RxamhTE2oAQfb2QMS24LMIMYW/g= zuul@np0005480846.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:11:48 np0005480847.novalocal sudo[17953]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilvqzxhynmtgrlanfifjppndimvbkefc ; /usr/bin/python3'
Oct 11 03:11:48 np0005480847.novalocal sudo[17953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:11:48 np0005480847.novalocal python3[17971]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG/v5Efx/208uJJ0BMeNdb1wDtzZNK7CQ8R8SkJufkuV5eGu8KVMQoC0VDM5RxamhTE2oAQfb2QMS24LMIMYW/g= zuul@np0005480846.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:11:48 np0005480847.novalocal sudo[17953]: pam_unix(sudo:session): session closed for user root
Oct 11 03:11:49 np0005480847.novalocal sudo[18262]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csanopeeritjhfqhaihixlevvftukwzj ; /usr/bin/python3'
Oct 11 03:11:49 np0005480847.novalocal sudo[18262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:11:49 np0005480847.novalocal python3[18271]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005480847.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct 11 03:11:49 np0005480847.novalocal useradd[18331]: new group: name=cloud-admin, GID=1002
Oct 11 03:11:49 np0005480847.novalocal useradd[18331]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Oct 11 03:11:49 np0005480847.novalocal sudo[18262]: pam_unix(sudo:session): session closed for user root
Oct 11 03:11:50 np0005480847.novalocal sudo[18459]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bapvtrymeppnkmxwtkwvgupwhqnsjocs ; /usr/bin/python3'
Oct 11 03:11:50 np0005480847.novalocal sudo[18459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:11:50 np0005480847.novalocal python3[18469]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG/v5Efx/208uJJ0BMeNdb1wDtzZNK7CQ8R8SkJufkuV5eGu8KVMQoC0VDM5RxamhTE2oAQfb2QMS24LMIMYW/g= zuul@np0005480846.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:11:50 np0005480847.novalocal sudo[18459]: pam_unix(sudo:session): session closed for user root
Oct 11 03:11:50 np0005480847.novalocal sudo[18696]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxixmaqczfegmsjjheafifvchddueqrm ; /usr/bin/python3'
Oct 11 03:11:50 np0005480847.novalocal sudo[18696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:11:50 np0005480847.novalocal python3[18703]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:11:50 np0005480847.novalocal sudo[18696]: pam_unix(sudo:session): session closed for user root
Oct 11 03:11:51 np0005480847.novalocal sudo[18922]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdaeyxmhywxolkvizvtrrhrbcyoxzgih ; /usr/bin/python3'
Oct 11 03:11:51 np0005480847.novalocal sudo[18922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:11:51 np0005480847.novalocal python3[18934]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760152310.3683689-135-219471063536510/source _original_basename=tmp0suc0ruf follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:11:51 np0005480847.novalocal sudo[18922]: pam_unix(sudo:session): session closed for user root
Oct 11 03:11:51 np0005480847.novalocal sudo[19197]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztemhgoaztmpptzvvbukdibimaxzhlan ; /usr/bin/python3'
Oct 11 03:11:51 np0005480847.novalocal sudo[19197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:11:52 np0005480847.novalocal python3[19205]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Oct 11 03:11:52 np0005480847.novalocal systemd[1]: Starting Hostname Service...
Oct 11 03:11:52 np0005480847.novalocal systemd[1]: Started Hostname Service.
Oct 11 03:11:52 np0005480847.novalocal systemd-hostnamed[19290]: Changed pretty hostname to 'compute-0'
Oct 11 03:11:52 compute-0 systemd-hostnamed[19290]: Hostname set to <compute-0> (static)
Oct 11 03:11:52 compute-0 NetworkManager[3961]: <info>  [1760152312.2057] hostname: static hostname changed from "np0005480847.novalocal" to "compute-0"
Oct 11 03:11:52 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 11 03:11:52 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 11 03:11:52 compute-0 sudo[19197]: pam_unix(sudo:session): session closed for user root
Oct 11 03:11:52 compute-0 sshd-session[17731]: Connection closed by 38.102.83.114 port 33988
Oct 11 03:11:52 compute-0 sshd-session[17684]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:11:52 compute-0 systemd[1]: session-5.scope: Deactivated successfully.
Oct 11 03:11:52 compute-0 systemd[1]: session-5.scope: Consumed 2.654s CPU time.
Oct 11 03:11:52 compute-0 systemd-logind[820]: Session 5 logged out. Waiting for processes to exit.
Oct 11 03:11:52 compute-0 systemd-logind[820]: Removed session 5.
Oct 11 03:12:02 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 11 03:12:15 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 11 03:12:15 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 11 03:12:15 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1min 7.684s CPU time.
Oct 11 03:12:15 compute-0 systemd[1]: run-rd4e3878328e248e49c46dfc3f7696f5c.service: Deactivated successfully.
Oct 11 03:12:22 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 11 03:15:17 compute-0 sshd-session[26558]: Received disconnect from 193.46.255.159 port 18768:11:  [preauth]
Oct 11 03:15:17 compute-0 sshd-session[26558]: Disconnected from authenticating user root 193.46.255.159 port 18768 [preauth]
Oct 11 03:15:21 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Oct 11 03:15:21 compute-0 sshd-session[26560]: Accepted publickey for zuul from 38.102.83.159 port 33372 ssh2: RSA SHA256:kxWsFSq8COsYLodRw7mhPmCkhu5z7pyatmccmmT74Lc
Oct 11 03:15:21 compute-0 systemd-logind[820]: New session 6 of user zuul.
Oct 11 03:15:21 compute-0 systemd[1]: Started Session 6 of User zuul.
Oct 11 03:15:21 compute-0 sshd-session[26560]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:15:21 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct 11 03:15:21 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Oct 11 03:15:21 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct 11 03:15:22 compute-0 python3[26638]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:15:25 compute-0 sudo[26752]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozudedbbvwltevcrabkgdbazyolrzmtv ; /usr/bin/python3'
Oct 11 03:15:25 compute-0 sudo[26752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:15:25 compute-0 python3[26754]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:15:25 compute-0 sudo[26752]: pam_unix(sudo:session): session closed for user root
Oct 11 03:15:25 compute-0 sudo[26825]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qellqgepuulhgoncptcitcvmadkllqzw ; /usr/bin/python3'
Oct 11 03:15:25 compute-0 sudo[26825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:15:25 compute-0 python3[26827]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760152524.8630986-30199-4189611752396/source mode=0755 _original_basename=delorean.repo follow=False checksum=f3fabc627b4c59ab3d10213193ffdeeed080e354 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:15:25 compute-0 sudo[26825]: pam_unix(sudo:session): session closed for user root
Oct 11 03:15:25 compute-0 sudo[26851]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvnxstdxvoqciaxkdhscnbxhytwmhftu ; /usr/bin/python3'
Oct 11 03:15:25 compute-0 sudo[26851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:15:26 compute-0 python3[26853]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:15:26 compute-0 sudo[26851]: pam_unix(sudo:session): session closed for user root
Oct 11 03:15:26 compute-0 sudo[26924]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czdjhdclkxfuinakdxgtyzovzixvlvub ; /usr/bin/python3'
Oct 11 03:15:26 compute-0 sudo[26924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:15:26 compute-0 python3[26926]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760152524.8630986-30199-4189611752396/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:15:26 compute-0 sudo[26924]: pam_unix(sudo:session): session closed for user root
Oct 11 03:15:26 compute-0 sudo[26950]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkhkudmsnawfufuqdpnshpjdxkqkhdad ; /usr/bin/python3'
Oct 11 03:15:26 compute-0 sudo[26950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:15:26 compute-0 python3[26952]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:15:26 compute-0 sudo[26950]: pam_unix(sudo:session): session closed for user root
Oct 11 03:15:27 compute-0 sudo[27023]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyjecgrcnrrbkbuctthupnvifgqtntjb ; /usr/bin/python3'
Oct 11 03:15:27 compute-0 sudo[27023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:15:27 compute-0 python3[27025]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760152524.8630986-30199-4189611752396/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:15:27 compute-0 sudo[27023]: pam_unix(sudo:session): session closed for user root
Oct 11 03:15:27 compute-0 sudo[27049]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sflxqfofeslrttnmneghoyrswynjrshb ; /usr/bin/python3'
Oct 11 03:15:27 compute-0 sudo[27049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:15:27 compute-0 python3[27051]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:15:27 compute-0 sudo[27049]: pam_unix(sudo:session): session closed for user root
Oct 11 03:15:27 compute-0 sudo[27122]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwnpllkytmaypfkxffjxytseqlqjsltf ; /usr/bin/python3'
Oct 11 03:15:27 compute-0 sudo[27122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:15:27 compute-0 python3[27124]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760152524.8630986-30199-4189611752396/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:15:27 compute-0 sudo[27122]: pam_unix(sudo:session): session closed for user root
Oct 11 03:15:28 compute-0 sudo[27148]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxrqxhtdlvygockfgjrkdixizuoladda ; /usr/bin/python3'
Oct 11 03:15:28 compute-0 sudo[27148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:15:28 compute-0 python3[27150]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:15:28 compute-0 sudo[27148]: pam_unix(sudo:session): session closed for user root
Oct 11 03:15:28 compute-0 sudo[27221]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxclibtymrwkkinlropifcjatlyuigmq ; /usr/bin/python3'
Oct 11 03:15:28 compute-0 sudo[27221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:15:28 compute-0 python3[27223]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760152524.8630986-30199-4189611752396/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:15:28 compute-0 sudo[27221]: pam_unix(sudo:session): session closed for user root
Oct 11 03:15:28 compute-0 sudo[27247]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlnumnezfdkclcnlejoxizevtaldeutv ; /usr/bin/python3'
Oct 11 03:15:28 compute-0 sudo[27247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:15:28 compute-0 python3[27249]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:15:28 compute-0 sudo[27247]: pam_unix(sudo:session): session closed for user root
Oct 11 03:15:29 compute-0 sudo[27320]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivirtfejlersdxhhioebrfqkwetxbcsy ; /usr/bin/python3'
Oct 11 03:15:29 compute-0 sudo[27320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:15:29 compute-0 python3[27322]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760152524.8630986-30199-4189611752396/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:15:29 compute-0 sudo[27320]: pam_unix(sudo:session): session closed for user root
Oct 11 03:15:29 compute-0 sudo[27346]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jitzkxymabrncymmypfedvdqphjlgfwt ; /usr/bin/python3'
Oct 11 03:15:29 compute-0 sudo[27346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:15:29 compute-0 python3[27348]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:15:29 compute-0 sudo[27346]: pam_unix(sudo:session): session closed for user root
Oct 11 03:15:29 compute-0 sudo[27419]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjbpdgasqjypwbsxioxktstqwgdljdnh ; /usr/bin/python3'
Oct 11 03:15:29 compute-0 sudo[27419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:15:30 compute-0 python3[27421]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760152524.8630986-30199-4189611752396/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=5e44558a2b46929660a6b5bfc8824fb4521580a4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:15:30 compute-0 sudo[27419]: pam_unix(sudo:session): session closed for user root
Oct 11 03:15:32 compute-0 sshd-session[27446]: Connection closed by 192.168.122.11 port 47494 [preauth]
Oct 11 03:15:32 compute-0 sshd-session[27447]: Connection closed by 192.168.122.11 port 47502 [preauth]
Oct 11 03:15:32 compute-0 sshd-session[27448]: Unable to negotiate with 192.168.122.11 port 47518: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Oct 11 03:15:32 compute-0 sshd-session[27450]: Unable to negotiate with 192.168.122.11 port 47522: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Oct 11 03:15:32 compute-0 sshd-session[27451]: Unable to negotiate with 192.168.122.11 port 47530: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Oct 11 03:15:42 compute-0 python3[27479]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:16:26 compute-0 PackageKit[6174]: daemon quit
Oct 11 03:16:26 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 11 03:16:41 compute-0 chronyd[828]: Selected source 142.4.192.253 (2.centos.pool.ntp.org)
Oct 11 03:20:41 compute-0 sshd-session[26565]: Received disconnect from 38.102.83.159 port 33372:11: disconnected by user
Oct 11 03:20:41 compute-0 sshd-session[26565]: Disconnected from user zuul 38.102.83.159 port 33372
Oct 11 03:20:41 compute-0 sshd-session[26560]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:20:41 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Oct 11 03:20:41 compute-0 systemd[1]: session-6.scope: Consumed 5.725s CPU time.
Oct 11 03:20:41 compute-0 systemd-logind[820]: Session 6 logged out. Waiting for processes to exit.
Oct 11 03:20:41 compute-0 systemd-logind[820]: Removed session 6.
Oct 11 03:24:17 compute-0 sshd-session[27485]: Received disconnect from 193.46.255.33 port 57868:11:  [preauth]
Oct 11 03:24:17 compute-0 sshd-session[27485]: Disconnected from authenticating user root 193.46.255.33 port 57868 [preauth]
Oct 11 03:26:10 compute-0 sshd-session[27487]: Accepted publickey for zuul from 192.168.122.30 port 43026 ssh2: ECDSA SHA256:qo9+RMabHfLAOt2q/80W97JXaZUdeUCREBuTRaqgxBY
Oct 11 03:26:10 compute-0 systemd-logind[820]: New session 7 of user zuul.
Oct 11 03:26:10 compute-0 systemd[1]: Started Session 7 of User zuul.
Oct 11 03:26:10 compute-0 sshd-session[27487]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:26:11 compute-0 python3.9[27640]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:26:13 compute-0 sudo[27819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpkolawhqdzpwhmndlefgfpecysnxcww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153172.6302938-32-160545535456883/AnsiballZ_command.py'
Oct 11 03:26:13 compute-0 sudo[27819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:26:13 compute-0 python3.9[27821]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:26:20 compute-0 sudo[27819]: pam_unix(sudo:session): session closed for user root
Oct 11 03:26:20 compute-0 sshd-session[27490]: Connection closed by 192.168.122.30 port 43026
Oct 11 03:26:20 compute-0 sshd-session[27487]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:26:20 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Oct 11 03:26:20 compute-0 systemd[1]: session-7.scope: Consumed 8.211s CPU time.
Oct 11 03:26:20 compute-0 systemd-logind[820]: Session 7 logged out. Waiting for processes to exit.
Oct 11 03:26:20 compute-0 systemd-logind[820]: Removed session 7.
Oct 11 03:26:36 compute-0 sshd-session[27879]: Accepted publickey for zuul from 192.168.122.30 port 42650 ssh2: ECDSA SHA256:qo9+RMabHfLAOt2q/80W97JXaZUdeUCREBuTRaqgxBY
Oct 11 03:26:36 compute-0 systemd-logind[820]: New session 8 of user zuul.
Oct 11 03:26:36 compute-0 systemd[1]: Started Session 8 of User zuul.
Oct 11 03:26:36 compute-0 sshd-session[27879]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:26:37 compute-0 python3.9[28032]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 11 03:26:38 compute-0 python3.9[28206]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:26:39 compute-0 sudo[28356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrqvjswasuzsopmfixezwlcccbquokbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153198.6873398-45-109770348889355/AnsiballZ_command.py'
Oct 11 03:26:39 compute-0 sudo[28356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:26:39 compute-0 python3.9[28358]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:26:39 compute-0 sudo[28356]: pam_unix(sudo:session): session closed for user root
Oct 11 03:26:40 compute-0 sudo[28509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fndoulpanveunlebjawqkrmvqfjfpjom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153199.7406795-57-183299920156430/AnsiballZ_stat.py'
Oct 11 03:26:40 compute-0 sudo[28509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:26:40 compute-0 python3.9[28511]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:26:40 compute-0 sudo[28509]: pam_unix(sudo:session): session closed for user root
Oct 11 03:26:41 compute-0 sudo[28661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yezhnwsxohhjyxvtacqlfdkcuaviskfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153200.7425761-65-61705892421815/AnsiballZ_file.py'
Oct 11 03:26:41 compute-0 sudo[28661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:26:41 compute-0 python3.9[28663]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:26:41 compute-0 sudo[28661]: pam_unix(sudo:session): session closed for user root
Oct 11 03:26:42 compute-0 sudo[28813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdrqqaafpsyoomtkhsufrdsdrsulufcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153201.6728725-73-33268645421808/AnsiballZ_stat.py'
Oct 11 03:26:42 compute-0 sudo[28813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:26:42 compute-0 python3.9[28815]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:26:42 compute-0 sudo[28813]: pam_unix(sudo:session): session closed for user root
Oct 11 03:26:42 compute-0 sudo[28936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slxraojksljtifzojjvhzwzgndvldcxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153201.6728725-73-33268645421808/AnsiballZ_copy.py'
Oct 11 03:26:42 compute-0 sudo[28936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:26:43 compute-0 python3.9[28938]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760153201.6728725-73-33268645421808/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:26:43 compute-0 sudo[28936]: pam_unix(sudo:session): session closed for user root
Oct 11 03:26:43 compute-0 sudo[29088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkegamypzygttrbfrdqvwwhhhnkrizdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153203.2759287-88-263961891436659/AnsiballZ_setup.py'
Oct 11 03:26:43 compute-0 sudo[29088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:26:43 compute-0 python3.9[29090]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:26:44 compute-0 sudo[29088]: pam_unix(sudo:session): session closed for user root
Oct 11 03:26:44 compute-0 sudo[29244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inrqxozyqwduswutmcvpayqrwpfrrjfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153204.2829576-96-165582920133823/AnsiballZ_file.py'
Oct 11 03:26:44 compute-0 sudo[29244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:26:44 compute-0 python3.9[29246]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:26:44 compute-0 sudo[29244]: pam_unix(sudo:session): session closed for user root
Oct 11 03:26:45 compute-0 python3.9[29396]: ansible-ansible.builtin.service_facts Invoked
Oct 11 03:26:50 compute-0 python3.9[29651]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:26:51 compute-0 python3.9[29801]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:26:52 compute-0 python3.9[29955]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:26:53 compute-0 sudo[30111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exgqybgxabjweytijarrvoiotvqhyrxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153212.8312473-144-17696058781322/AnsiballZ_setup.py'
Oct 11 03:26:53 compute-0 sudo[30111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:26:53 compute-0 python3.9[30113]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 03:26:53 compute-0 sudo[30111]: pam_unix(sudo:session): session closed for user root
Oct 11 03:26:54 compute-0 sudo[30195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afafvlshyyvbkkhoqnhdasunatosdypd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153212.8312473-144-17696058781322/AnsiballZ_dnf.py'
Oct 11 03:26:54 compute-0 sudo[30195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:26:54 compute-0 python3.9[30197]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 03:27:37 compute-0 systemd[1]: Reloading.
Oct 11 03:27:37 compute-0 systemd-rc-local-generator[30396]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:27:38 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct 11 03:27:38 compute-0 systemd[1]: Reloading.
Oct 11 03:27:38 compute-0 systemd-rc-local-generator[30434]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:27:38 compute-0 systemd[1]: Starting dnf makecache...
Oct 11 03:27:38 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct 11 03:27:38 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct 11 03:27:38 compute-0 systemd[1]: Reloading.
Oct 11 03:27:38 compute-0 systemd-rc-local-generator[30477]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:27:38 compute-0 dnf[30444]: Failed determining last makecache time.
Oct 11 03:27:38 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Oct 11 03:27:38 compute-0 dnf[30444]: delorean-openstack-barbican-42b4c41831408a8e323 125 kB/s | 3.0 kB     00:00
Oct 11 03:27:38 compute-0 dnf[30444]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 142 kB/s | 3.0 kB     00:00
Oct 11 03:27:38 compute-0 dnf[30444]: delorean-openstack-cinder-1c00d6490d88e436f26ef 163 kB/s | 3.0 kB     00:00
Oct 11 03:27:38 compute-0 dnf[30444]: delorean-python-stevedore-c4acc5639fd2329372142 184 kB/s | 3.0 kB     00:00
Oct 11 03:27:38 compute-0 dnf[30444]: delorean-python-observabilityclient-2f31846d73c 140 kB/s | 3.0 kB     00:00
Oct 11 03:27:38 compute-0 dnf[30444]: delorean-diskimage-builder-7d793e664cf892461c55 154 kB/s | 3.0 kB     00:00
Oct 11 03:27:39 compute-0 dnf[30444]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 128 kB/s | 3.0 kB     00:00
Oct 11 03:27:39 compute-0 dnf[30444]: delorean-python-designate-tests-tempest-347fdbc 162 kB/s | 3.0 kB     00:00
Oct 11 03:27:39 compute-0 dnf[30444]: delorean-openstack-glance-1fd12c29b339f30fe823e 162 kB/s | 3.0 kB     00:00
Oct 11 03:27:39 compute-0 dnf[30444]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 161 kB/s | 3.0 kB     00:00
Oct 11 03:27:39 compute-0 dnf[30444]: delorean-openstack-manila-3c01b7181572c95dac462 159 kB/s | 3.0 kB     00:00
Oct 11 03:27:39 compute-0 dnf[30444]: delorean-python-vmware-nsxlib-458234972d1428ac9 167 kB/s | 3.0 kB     00:00
Oct 11 03:27:39 compute-0 dnf[30444]: delorean-openstack-octavia-ba397f07a7331190208c 165 kB/s | 3.0 kB     00:00
Oct 11 03:27:39 compute-0 dbus-broker-launch[809]: Noticed file-system modification, trigger reload.
Oct 11 03:27:39 compute-0 dnf[30444]: delorean-openstack-watcher-c014f81a8647287f6dcc 149 kB/s | 3.0 kB     00:00
Oct 11 03:27:39 compute-0 dbus-broker-launch[809]: Noticed file-system modification, trigger reload.
Oct 11 03:27:39 compute-0 dnf[30444]: delorean-python-tcib-ff70d03bf5bc0bb6f3540a02d3 150 kB/s | 3.0 kB     00:00
Oct 11 03:27:39 compute-0 dnf[30444]: delorean-puppet-ceph-91ba84bc002c318a7f961d084e 129 kB/s | 3.0 kB     00:00
Oct 11 03:27:39 compute-0 dnf[30444]: delorean-openstack-swift-dc98a8463506ac520c469a 151 kB/s | 3.0 kB     00:00
Oct 11 03:27:39 compute-0 dnf[30444]: delorean-python-tempestconf-8515371b7cceebd4282 150 kB/s | 3.0 kB     00:00
Oct 11 03:27:39 compute-0 dnf[30444]: delorean-openstack-heat-ui-013accbfd179753bc3f0 139 kB/s | 3.0 kB     00:00
Oct 11 03:27:39 compute-0 dnf[30444]: CentOS Stream 9 - BaseOS                         70 kB/s | 6.7 kB     00:00
Oct 11 03:27:39 compute-0 dnf[30444]: CentOS Stream 9 - AppStream                      30 kB/s | 6.8 kB     00:00
Oct 11 03:27:40 compute-0 dnf[30444]: CentOS Stream 9 - CRB                            29 kB/s | 6.6 kB     00:00
Oct 11 03:27:40 compute-0 dnf[30444]: CentOS Stream 9 - Extras packages                34 kB/s | 8.0 kB     00:00
Oct 11 03:27:40 compute-0 dnf[30444]: dlrn-antelope-testing                           100 kB/s | 3.0 kB     00:00
Oct 11 03:27:40 compute-0 dnf[30444]: dlrn-antelope-build-deps                         99 kB/s | 3.0 kB     00:00
Oct 11 03:27:40 compute-0 dnf[30444]: centos9-rabbitmq                                 43 kB/s | 3.0 kB     00:00
Oct 11 03:27:40 compute-0 dnf[30444]: centos9-storage                                  36 kB/s | 3.0 kB     00:00
Oct 11 03:27:40 compute-0 dnf[30444]: centos9-opstools                                 30 kB/s | 3.0 kB     00:00
Oct 11 03:27:40 compute-0 dnf[30444]: NFV SIG OpenvSwitch                              74 kB/s | 3.0 kB     00:00
Oct 11 03:27:40 compute-0 dnf[30444]: repo-setup-centos-appstream                      98 kB/s | 4.4 kB     00:00
Oct 11 03:27:40 compute-0 dnf[30444]: repo-setup-centos-baseos                        103 kB/s | 3.9 kB     00:00
Oct 11 03:27:41 compute-0 dnf[30444]: repo-setup-centos-highavailability              119 kB/s | 3.9 kB     00:00
Oct 11 03:27:41 compute-0 dnf[30444]: repo-setup-centos-powertools                    111 kB/s | 4.3 kB     00:00
Oct 11 03:27:41 compute-0 dnf[30444]: Extra Packages for Enterprise Linux 9 - x86_64   25 kB/s | 8.9 kB     00:00
Oct 11 03:27:42 compute-0 dnf[30444]: Metadata cache created.
Oct 11 03:27:42 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 11 03:27:42 compute-0 systemd[1]: Finished dnf makecache.
Oct 11 03:27:42 compute-0 systemd[1]: dnf-makecache.service: Consumed 2.006s CPU time.
Oct 11 03:28:42 compute-0 kernel: SELinux:  Converting 2714 SID table entries...
Oct 11 03:28:42 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 03:28:42 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 11 03:28:42 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 03:28:42 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 11 03:28:42 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 03:28:42 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 03:28:42 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 03:28:42 compute-0 dbus-broker-launch[810]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct 11 03:28:42 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 11 03:28:42 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 11 03:28:42 compute-0 systemd[1]: Reloading.
Oct 11 03:28:42 compute-0 systemd-rc-local-generator[30837]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:28:42 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 11 03:28:43 compute-0 systemd[1]: Starting PackageKit Daemon...
Oct 11 03:28:43 compute-0 PackageKit[31002]: daemon start
Oct 11 03:28:43 compute-0 systemd[1]: Started PackageKit Daemon.
Oct 11 03:28:43 compute-0 sudo[30195]: pam_unix(sudo:session): session closed for user root
Oct 11 03:28:43 compute-0 sudo[31752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twlpgxfeoblcqskobxkvmcvrharxhqtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153323.5953805-156-201303422075705/AnsiballZ_command.py'
Oct 11 03:28:43 compute-0 sudo[31752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:28:43 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 11 03:28:43 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 11 03:28:43 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.460s CPU time.
Oct 11 03:28:43 compute-0 systemd[1]: run-r073009d1e3f549b99f92238f41a749cd.service: Deactivated successfully.
Oct 11 03:28:44 compute-0 python3.9[31754]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:28:45 compute-0 sudo[31752]: pam_unix(sudo:session): session closed for user root
Oct 11 03:28:46 compute-0 sudo[32034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slicrbcgfehzjegqoeqapsxlhaumcsvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153325.413239-164-84347280320830/AnsiballZ_selinux.py'
Oct 11 03:28:46 compute-0 sudo[32034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:28:46 compute-0 python3.9[32036]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 11 03:28:46 compute-0 sudo[32034]: pam_unix(sudo:session): session closed for user root
Oct 11 03:28:47 compute-0 sudo[32186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzxbkzcyhslrwhhkyorjibaekrlriyyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153326.7274604-175-242037178460058/AnsiballZ_command.py'
Oct 11 03:28:47 compute-0 sudo[32186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:28:47 compute-0 python3.9[32188]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 11 03:28:48 compute-0 sudo[32186]: pam_unix(sudo:session): session closed for user root
Oct 11 03:28:48 compute-0 sudo[32340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmwvbukvsdvcxaxsnnhlnrecwztjwnoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153328.5290627-183-15910841796561/AnsiballZ_file.py'
Oct 11 03:28:48 compute-0 sudo[32340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:28:49 compute-0 python3.9[32342]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:28:49 compute-0 sudo[32340]: pam_unix(sudo:session): session closed for user root
Oct 11 03:28:50 compute-0 sudo[32492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcnuvbbywkqpzmnumhhedopsbwlqpdhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153329.6161551-191-217941570449276/AnsiballZ_mount.py'
Oct 11 03:28:50 compute-0 sudo[32492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:28:50 compute-0 python3.9[32494]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 11 03:28:50 compute-0 sudo[32492]: pam_unix(sudo:session): session closed for user root
Oct 11 03:28:51 compute-0 sudo[32644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjsyolpoaspschaxumjcemgetoxgqfik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153330.9508142-219-266083748632615/AnsiballZ_file.py'
Oct 11 03:28:51 compute-0 sudo[32644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:28:51 compute-0 python3.9[32646]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:28:51 compute-0 sudo[32644]: pam_unix(sudo:session): session closed for user root
Oct 11 03:28:52 compute-0 sudo[32796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eolmewugpfcmvqhfqvcmbrnreeonxlfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153331.6857345-227-210103793232039/AnsiballZ_stat.py'
Oct 11 03:28:52 compute-0 sudo[32796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:28:52 compute-0 python3.9[32798]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:28:52 compute-0 sudo[32796]: pam_unix(sudo:session): session closed for user root
Oct 11 03:28:52 compute-0 sudo[32919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fddrivfdwqdomaljmbqabjqqxwmjwclg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153331.6857345-227-210103793232039/AnsiballZ_copy.py'
Oct 11 03:28:52 compute-0 sudo[32919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:28:52 compute-0 python3.9[32921]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153331.6857345-227-210103793232039/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8245a904210c3962a63879d763ded8fcd136bfb2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:28:52 compute-0 sudo[32919]: pam_unix(sudo:session): session closed for user root
Oct 11 03:28:53 compute-0 sudo[33071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajpxsdbyrtvrahmhsntbdhyvhbszhfmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153333.4352176-254-185656983690946/AnsiballZ_getent.py'
Oct 11 03:28:53 compute-0 sudo[33071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:28:56 compute-0 python3.9[33073]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 11 03:28:56 compute-0 sudo[33071]: pam_unix(sudo:session): session closed for user root
Oct 11 03:28:57 compute-0 sudo[33224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqxdpdjzihpzsxtsklbqsmajivuegwvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153336.982073-262-170186623867503/AnsiballZ_group.py'
Oct 11 03:28:57 compute-0 sudo[33224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:28:57 compute-0 python3.9[33226]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 11 03:28:57 compute-0 groupadd[33227]: group added to /etc/group: name=qemu, GID=107
Oct 11 03:28:57 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 03:28:57 compute-0 groupadd[33227]: group added to /etc/gshadow: name=qemu
Oct 11 03:28:57 compute-0 groupadd[33227]: new group: name=qemu, GID=107
Oct 11 03:28:57 compute-0 sudo[33224]: pam_unix(sudo:session): session closed for user root
Oct 11 03:28:58 compute-0 sudo[33383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atlzgzlyubxlwfmqnjllfhclumswuzda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153338.013151-270-98885429601691/AnsiballZ_user.py'
Oct 11 03:28:58 compute-0 sudo[33383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:28:58 compute-0 python3.9[33385]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 11 03:28:58 compute-0 useradd[33387]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Oct 11 03:28:58 compute-0 sudo[33383]: pam_unix(sudo:session): session closed for user root
Oct 11 03:28:59 compute-0 sudo[33543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctylfmbowojcenbmbxxeymhiuyljfypu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153339.0217736-278-159284082519222/AnsiballZ_getent.py'
Oct 11 03:28:59 compute-0 sudo[33543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:28:59 compute-0 python3.9[33545]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 11 03:28:59 compute-0 sudo[33543]: pam_unix(sudo:session): session closed for user root
Oct 11 03:29:00 compute-0 sudo[33696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cntbvmouemswklbviucqesqdhmvmvvge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153339.7671733-286-196190230161968/AnsiballZ_group.py'
Oct 11 03:29:00 compute-0 sudo[33696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:29:00 compute-0 python3.9[33698]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 11 03:29:00 compute-0 groupadd[33699]: group added to /etc/group: name=hugetlbfs, GID=42477
Oct 11 03:29:00 compute-0 groupadd[33699]: group added to /etc/gshadow: name=hugetlbfs
Oct 11 03:29:00 compute-0 groupadd[33699]: new group: name=hugetlbfs, GID=42477
Oct 11 03:29:00 compute-0 sudo[33696]: pam_unix(sudo:session): session closed for user root
Oct 11 03:29:00 compute-0 sudo[33854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wufpnkwbmjajkmyqamyzjlgyjnvtkxfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153340.603484-295-250267299368575/AnsiballZ_file.py'
Oct 11 03:29:00 compute-0 sudo[33854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:29:01 compute-0 python3.9[33856]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 11 03:29:01 compute-0 sudo[33854]: pam_unix(sudo:session): session closed for user root
Oct 11 03:29:01 compute-0 sudo[34006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtucfqgdrircbvvxtkeilzvubpfbbgxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153341.437731-306-133083389098551/AnsiballZ_dnf.py'
Oct 11 03:29:01 compute-0 sudo[34006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:29:02 compute-0 python3.9[34008]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 03:29:03 compute-0 sudo[34006]: pam_unix(sudo:session): session closed for user root
Oct 11 03:29:04 compute-0 sudo[34160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkkvuwsfwcowwreocptfmtogorgofucq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153343.7360642-314-153178399483880/AnsiballZ_file.py'
Oct 11 03:29:04 compute-0 sudo[34160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:29:04 compute-0 python3.9[34162]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:29:04 compute-0 sudo[34160]: pam_unix(sudo:session): session closed for user root
Oct 11 03:29:04 compute-0 sudo[34312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkzemngnadolocfyfdrsueyylahpnrzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153344.490811-322-160179732965930/AnsiballZ_stat.py'
Oct 11 03:29:04 compute-0 sudo[34312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:29:05 compute-0 python3.9[34314]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:29:05 compute-0 sudo[34312]: pam_unix(sudo:session): session closed for user root
Oct 11 03:29:05 compute-0 sudo[34435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hadqncuvjxbqdkmthwydcqjfkjzgklrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153344.490811-322-160179732965930/AnsiballZ_copy.py'
Oct 11 03:29:05 compute-0 sudo[34435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:29:05 compute-0 python3.9[34437]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760153344.490811-322-160179732965930/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:29:05 compute-0 sudo[34435]: pam_unix(sudo:session): session closed for user root
Oct 11 03:29:06 compute-0 sudo[34587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmeavcjbhlwumsejkjwcxjqaqztkdnqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153345.8991053-337-187791736071899/AnsiballZ_systemd.py'
Oct 11 03:29:06 compute-0 sudo[34587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:29:06 compute-0 python3.9[34589]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 03:29:07 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 11 03:29:08 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 11 03:29:08 compute-0 kernel: Bridge firewalling registered
Oct 11 03:29:08 compute-0 systemd-modules-load[34593]: Inserted module 'br_netfilter'
Oct 11 03:29:08 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 11 03:29:08 compute-0 sudo[34587]: pam_unix(sudo:session): session closed for user root
Oct 11 03:29:08 compute-0 sudo[34748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tauziqwqtaomzkrgrgzlhrxetyzuigqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153348.2498841-345-166652436703558/AnsiballZ_stat.py'
Oct 11 03:29:08 compute-0 sudo[34748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:29:08 compute-0 python3.9[34750]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:29:08 compute-0 sudo[34748]: pam_unix(sudo:session): session closed for user root
Oct 11 03:29:09 compute-0 sudo[34871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pydntemqyqlbaqybbmeejuheyhkmcvms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153348.2498841-345-166652436703558/AnsiballZ_copy.py'
Oct 11 03:29:09 compute-0 sudo[34871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:29:09 compute-0 python3.9[34873]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760153348.2498841-345-166652436703558/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:29:09 compute-0 sudo[34871]: pam_unix(sudo:session): session closed for user root
Oct 11 03:29:10 compute-0 sudo[35023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nejfjxscqjxjgszczvjxjwjqkbeejudo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153349.7425337-363-235498624750236/AnsiballZ_dnf.py'
Oct 11 03:29:10 compute-0 sudo[35023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:29:10 compute-0 python3.9[35025]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 03:29:13 compute-0 dbus-broker-launch[809]: Noticed file-system modification, trigger reload.
Oct 11 03:29:13 compute-0 dbus-broker-launch[809]: Noticed file-system modification, trigger reload.
Oct 11 03:29:14 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 11 03:29:14 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 11 03:29:14 compute-0 systemd[1]: Reloading.
Oct 11 03:29:14 compute-0 systemd-rc-local-generator[35090]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:29:14 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 11 03:29:14 compute-0 sudo[35023]: pam_unix(sudo:session): session closed for user root
Oct 11 03:29:15 compute-0 python3.9[36204]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:29:16 compute-0 python3.9[37167]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 11 03:29:16 compute-0 python3.9[37850]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:29:17 compute-0 sudo[38651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kesfjsvqqfftplimotibjujhtxihigxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153357.1701164-402-266290987476320/AnsiballZ_command.py'
Oct 11 03:29:17 compute-0 sudo[38651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:29:17 compute-0 python3.9[38674]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:29:17 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 11 03:29:18 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 11 03:29:18 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 11 03:29:18 compute-0 systemd[1]: man-db-cache-update.service: Consumed 5.377s CPU time.
Oct 11 03:29:18 compute-0 systemd[1]: run-r9f31e700f8b74a92bc45f98368fd271e.service: Deactivated successfully.
Oct 11 03:29:18 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 11 03:29:18 compute-0 sudo[38651]: pam_unix(sudo:session): session closed for user root
Oct 11 03:29:19 compute-0 sudo[39569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csqlbbtxopqfycuwfzclqgcspwufzuoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153358.7435536-411-183483455312678/AnsiballZ_systemd.py'
Oct 11 03:29:19 compute-0 sudo[39569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:29:19 compute-0 python3.9[39571]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:29:19 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 11 03:29:19 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Oct 11 03:29:19 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 11 03:29:19 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 11 03:29:19 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 11 03:29:19 compute-0 sudo[39569]: pam_unix(sudo:session): session closed for user root
Oct 11 03:29:20 compute-0 python3.9[39733]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 11 03:29:22 compute-0 sudo[39883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpdqtofpceyfybiyyoooawkyvnwyeedb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153362.032934-468-228882203544441/AnsiballZ_systemd.py'
Oct 11 03:29:22 compute-0 sudo[39883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:29:22 compute-0 python3.9[39885]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:29:22 compute-0 systemd[1]: Reloading.
Oct 11 03:29:22 compute-0 systemd-rc-local-generator[39916]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:29:23 compute-0 sudo[39883]: pam_unix(sudo:session): session closed for user root
Oct 11 03:29:23 compute-0 sudo[40073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efmxvldbfoaxohnobyxlawwoygoukoqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153363.1899388-468-248029458905764/AnsiballZ_systemd.py'
Oct 11 03:29:23 compute-0 sudo[40073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:29:23 compute-0 python3.9[40075]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:29:23 compute-0 systemd[1]: Reloading.
Oct 11 03:29:24 compute-0 systemd-rc-local-generator[40101]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:29:24 compute-0 sudo[40073]: pam_unix(sudo:session): session closed for user root
Oct 11 03:29:24 compute-0 sudo[40261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seyugvepheokwaddaojeeyfvbeqyuncs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153364.3801079-484-268545488886151/AnsiballZ_command.py'
Oct 11 03:29:24 compute-0 sudo[40261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:29:24 compute-0 python3.9[40263]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:29:24 compute-0 sudo[40261]: pam_unix(sudo:session): session closed for user root
Oct 11 03:29:25 compute-0 sudo[40414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsvzjydnnydymtmpmvrmljjvsefkrpka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153365.0697622-492-52360321344870/AnsiballZ_command.py'
Oct 11 03:29:25 compute-0 sudo[40414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:29:25 compute-0 python3.9[40416]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:29:25 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct 11 03:29:25 compute-0 sudo[40414]: pam_unix(sudo:session): session closed for user root
Oct 11 03:29:26 compute-0 sudo[40567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytakspquezeoprujieskliywvyesjjvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153365.8025653-500-81781401910268/AnsiballZ_command.py'
Oct 11 03:29:26 compute-0 sudo[40567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:29:26 compute-0 python3.9[40569]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:29:27 compute-0 sudo[40567]: pam_unix(sudo:session): session closed for user root
Oct 11 03:29:28 compute-0 sudo[40729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqkjdrskvpwrqeyyepbthwsisfwkpusj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153367.991677-508-135307814263210/AnsiballZ_command.py'
Oct 11 03:29:28 compute-0 sudo[40729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:29:28 compute-0 python3.9[40731]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:29:28 compute-0 sudo[40729]: pam_unix(sudo:session): session closed for user root
Oct 11 03:29:29 compute-0 sudo[40882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzcgwbmcmnalklumymtzjsxhbzybhtpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153368.7113686-516-3363781707632/AnsiballZ_systemd.py'
Oct 11 03:29:29 compute-0 sudo[40882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:29:29 compute-0 python3.9[40884]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 03:29:29 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 11 03:29:29 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Oct 11 03:29:29 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Oct 11 03:29:29 compute-0 systemd[1]: Starting Apply Kernel Variables...
Oct 11 03:29:29 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 11 03:29:29 compute-0 systemd[1]: Finished Apply Kernel Variables.
Oct 11 03:29:29 compute-0 sudo[40882]: pam_unix(sudo:session): session closed for user root
Oct 11 03:29:29 compute-0 sshd-session[27882]: Connection closed by 192.168.122.30 port 42650
Oct 11 03:29:29 compute-0 sshd-session[27879]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:29:29 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Oct 11 03:29:29 compute-0 systemd[1]: session-8.scope: Consumed 2min 19.555s CPU time.
Oct 11 03:29:29 compute-0 systemd-logind[820]: Session 8 logged out. Waiting for processes to exit.
Oct 11 03:29:29 compute-0 systemd-logind[820]: Removed session 8.
Oct 11 03:29:34 compute-0 sshd-session[40914]: Accepted publickey for zuul from 192.168.122.30 port 36538 ssh2: ECDSA SHA256:qo9+RMabHfLAOt2q/80W97JXaZUdeUCREBuTRaqgxBY
Oct 11 03:29:34 compute-0 systemd-logind[820]: New session 9 of user zuul.
Oct 11 03:29:34 compute-0 systemd[1]: Started Session 9 of User zuul.
Oct 11 03:29:34 compute-0 sshd-session[40914]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:29:35 compute-0 python3.9[41067]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:29:37 compute-0 sudo[41221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcuzphymbkiginusndydonytixtesxdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153376.5356364-36-151277050322718/AnsiballZ_getent.py'
Oct 11 03:29:37 compute-0 sudo[41221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:29:37 compute-0 python3.9[41223]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 11 03:29:37 compute-0 sudo[41221]: pam_unix(sudo:session): session closed for user root
Oct 11 03:29:37 compute-0 sudo[41374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvyzlbhddcwsfswwhmvspnymhxlfavmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153377.3811154-44-176220972079955/AnsiballZ_group.py'
Oct 11 03:29:37 compute-0 sudo[41374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:29:38 compute-0 python3.9[41376]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 11 03:29:38 compute-0 groupadd[41377]: group added to /etc/group: name=openvswitch, GID=42476
Oct 11 03:29:38 compute-0 groupadd[41377]: group added to /etc/gshadow: name=openvswitch
Oct 11 03:29:38 compute-0 groupadd[41377]: new group: name=openvswitch, GID=42476
Oct 11 03:29:38 compute-0 sudo[41374]: pam_unix(sudo:session): session closed for user root
Oct 11 03:29:38 compute-0 sudo[41532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypizdzwnzzfjiawwzmhjpboiquhbwfzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153378.323084-52-256660226603465/AnsiballZ_user.py'
Oct 11 03:29:38 compute-0 sudo[41532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:29:39 compute-0 python3.9[41534]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 11 03:29:39 compute-0 useradd[41536]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Oct 11 03:29:39 compute-0 useradd[41536]: add 'openvswitch' to group 'hugetlbfs'
Oct 11 03:29:39 compute-0 useradd[41536]: add 'openvswitch' to shadow group 'hugetlbfs'
Oct 11 03:29:39 compute-0 sudo[41532]: pam_unix(sudo:session): session closed for user root
Oct 11 03:29:39 compute-0 sudo[41692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzgfciyjhigbwkpvymvxnrdwucylebkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153379.5292592-62-164734913122265/AnsiballZ_setup.py'
Oct 11 03:29:39 compute-0 sudo[41692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:29:40 compute-0 python3.9[41694]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 03:29:40 compute-0 sudo[41692]: pam_unix(sudo:session): session closed for user root
Oct 11 03:29:40 compute-0 sudo[41776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiayhuvyihcljzcyrzmaioezdurebamt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153379.5292592-62-164734913122265/AnsiballZ_dnf.py'
Oct 11 03:29:40 compute-0 sudo[41776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:29:41 compute-0 python3.9[41778]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 11 03:29:43 compute-0 sudo[41776]: pam_unix(sudo:session): session closed for user root
Oct 11 03:29:43 compute-0 sudo[41939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbcmikdgmtwqdoyaunkzgyxygywzgjvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153383.4011638-76-197451125414541/AnsiballZ_dnf.py'
Oct 11 03:29:43 compute-0 sudo[41939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:29:44 compute-0 python3.9[41941]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 03:29:54 compute-0 kernel: SELinux:  Converting 2724 SID table entries...
Oct 11 03:29:54 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 03:29:54 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 11 03:29:54 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 03:29:54 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 11 03:29:54 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 03:29:54 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 03:29:54 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 03:29:54 compute-0 groupadd[41964]: group added to /etc/group: name=unbound, GID=993
Oct 11 03:29:54 compute-0 groupadd[41964]: group added to /etc/gshadow: name=unbound
Oct 11 03:29:54 compute-0 groupadd[41964]: new group: name=unbound, GID=993
Oct 11 03:29:54 compute-0 useradd[41971]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Oct 11 03:29:55 compute-0 dbus-broker-launch[810]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct 11 03:29:55 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct 11 03:29:56 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 11 03:29:56 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 11 03:29:56 compute-0 systemd[1]: Reloading.
Oct 11 03:29:56 compute-0 systemd-rc-local-generator[42468]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:29:56 compute-0 systemd-sysv-generator[42471]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:29:57 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 11 03:29:57 compute-0 sudo[41939]: pam_unix(sudo:session): session closed for user root
Oct 11 03:29:57 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 11 03:29:57 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 11 03:29:57 compute-0 systemd[1]: run-ra8a865b16e9842a09e7e33037a49cc41.service: Deactivated successfully.
Oct 11 03:29:58 compute-0 sudo[43041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvxctnjoxaythbjknbnsklzgicdfxfnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153397.7053082-84-153124018546977/AnsiballZ_systemd.py'
Oct 11 03:29:58 compute-0 sudo[43041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:29:58 compute-0 python3.9[43043]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 03:29:59 compute-0 systemd[1]: Reloading.
Oct 11 03:29:59 compute-0 systemd-rc-local-generator[43075]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:29:59 compute-0 systemd-sysv-generator[43080]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:30:00 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Oct 11 03:30:00 compute-0 chown[43086]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct 11 03:30:00 compute-0 ovs-ctl[43091]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct 11 03:30:00 compute-0 ovs-ctl[43091]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct 11 03:30:00 compute-0 ovs-ctl[43091]: Starting ovsdb-server [  OK  ]
Oct 11 03:30:00 compute-0 ovs-vsctl[43140]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct 11 03:30:00 compute-0 ovs-vsctl[43158]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"8a473e03-2208-47ae-afcd-05ad744a5969\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct 11 03:30:00 compute-0 ovs-ctl[43091]: Configuring Open vSwitch system IDs [  OK  ]
Oct 11 03:30:00 compute-0 ovs-vsctl[43166]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 11 03:30:00 compute-0 ovs-ctl[43091]: Enabling remote OVSDB managers [  OK  ]
Oct 11 03:30:00 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Oct 11 03:30:00 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct 11 03:30:00 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct 11 03:30:00 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct 11 03:30:00 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Oct 11 03:30:00 compute-0 ovs-ctl[43212]: Inserting openvswitch module [  OK  ]
Oct 11 03:30:00 compute-0 ovs-ctl[43181]: Starting ovs-vswitchd [  OK  ]
Oct 11 03:30:00 compute-0 ovs-vsctl[43229]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 11 03:30:00 compute-0 ovs-ctl[43181]: Enabling remote OVSDB managers [  OK  ]
Oct 11 03:30:00 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct 11 03:30:00 compute-0 systemd[1]: Starting Open vSwitch...
Oct 11 03:30:00 compute-0 systemd[1]: Finished Open vSwitch.
Oct 11 03:30:00 compute-0 sudo[43041]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:01 compute-0 anacron[1070]: Job `cron.daily' started
Oct 11 03:30:01 compute-0 anacron[1070]: Job `cron.daily' terminated
Oct 11 03:30:01 compute-0 python3.9[43383]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:30:02 compute-0 sudo[43533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soqpefrufhacukbsqlkdcxikvtvxgcsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153401.8502169-102-253382009410282/AnsiballZ_sefcontext.py'
Oct 11 03:30:02 compute-0 sudo[43533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:02 compute-0 python3.9[43535]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 11 03:30:03 compute-0 kernel: SELinux:  Converting 2739 SID table entries...
Oct 11 03:30:03 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 03:30:03 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 11 03:30:03 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 03:30:03 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 11 03:30:03 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 03:30:03 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 03:30:03 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 03:30:04 compute-0 sudo[43533]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:05 compute-0 python3.9[43690]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:30:05 compute-0 sudo[43846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abxajohsozniizusikfyrmggkofnywad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153405.4538732-120-28559834775624/AnsiballZ_dnf.py'
Oct 11 03:30:05 compute-0 dbus-broker-launch[810]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct 11 03:30:05 compute-0 sudo[43846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:06 compute-0 python3.9[43848]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 03:30:07 compute-0 sudo[43846]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:07 compute-0 sudo[43999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owkgbuvurnkpelwqdugphoehcdxeqxvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153407.3426282-128-145467817668142/AnsiballZ_command.py'
Oct 11 03:30:07 compute-0 sudo[43999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:08 compute-0 python3.9[44001]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:30:08 compute-0 sudo[43999]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:09 compute-0 sudo[44286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gztmnazrpmdfjoekdsyyzquzoklmznkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153408.9956841-136-218928469476330/AnsiballZ_file.py'
Oct 11 03:30:09 compute-0 sudo[44286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:09 compute-0 python3.9[44288]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 11 03:30:09 compute-0 sudo[44286]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:10 compute-0 python3.9[44438]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:30:10 compute-0 sudo[44590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slovlejjwewxoquvwhidmoxttukvlrzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153410.6665099-152-38516846294955/AnsiballZ_dnf.py'
Oct 11 03:30:10 compute-0 sudo[44590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:11 compute-0 python3.9[44592]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 03:30:13 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 11 03:30:13 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 11 03:30:13 compute-0 systemd[1]: Reloading.
Oct 11 03:30:13 compute-0 systemd-rc-local-generator[44633]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:30:13 compute-0 systemd-sysv-generator[44638]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:30:13 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 11 03:30:13 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 11 03:30:13 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 11 03:30:13 compute-0 systemd[1]: run-rcba09fe11fef452fb028df77391174b4.service: Deactivated successfully.
Oct 11 03:30:13 compute-0 sudo[44590]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:14 compute-0 sudo[44908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbjrlvyfdmsbzypjuigjrjuranzbomnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153413.9559107-160-114058948950336/AnsiballZ_systemd.py'
Oct 11 03:30:14 compute-0 sudo[44908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:14 compute-0 python3.9[44910]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 03:30:14 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 11 03:30:14 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Oct 11 03:30:14 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Oct 11 03:30:14 compute-0 systemd[1]: Stopping Network Manager...
Oct 11 03:30:14 compute-0 NetworkManager[3961]: <info>  [1760153414.7573] caught SIGTERM, shutting down normally.
Oct 11 03:30:14 compute-0 NetworkManager[3961]: <info>  [1760153414.7587] dhcp4 (eth0): canceled DHCP transaction
Oct 11 03:30:14 compute-0 NetworkManager[3961]: <info>  [1760153414.7587] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 11 03:30:14 compute-0 NetworkManager[3961]: <info>  [1760153414.7587] dhcp4 (eth0): state changed no lease
Oct 11 03:30:14 compute-0 NetworkManager[3961]: <info>  [1760153414.7590] manager: NetworkManager state is now CONNECTED_SITE
Oct 11 03:30:14 compute-0 NetworkManager[3961]: <info>  [1760153414.7657] exiting (success)
Oct 11 03:30:14 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 11 03:30:14 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 11 03:30:14 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 11 03:30:14 compute-0 systemd[1]: Stopped Network Manager.
Oct 11 03:30:14 compute-0 systemd[1]: NetworkManager.service: Consumed 9.576s CPU time, 4.1M memory peak, read 0B from disk, written 16.5K to disk.
Oct 11 03:30:14 compute-0 systemd[1]: Starting Network Manager...
Oct 11 03:30:14 compute-0 NetworkManager[44920]: <info>  [1760153414.8734] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:b8518b17-5d11-4cee-aee6-0266db1747b3)
Oct 11 03:30:14 compute-0 NetworkManager[44920]: <info>  [1760153414.8737] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 11 03:30:14 compute-0 NetworkManager[44920]: <info>  [1760153414.8812] manager[0x555ddbbe0090]: monitoring kernel firmware directory '/lib/firmware'.
Oct 11 03:30:14 compute-0 systemd[1]: Starting Hostname Service...
Oct 11 03:30:14 compute-0 systemd[1]: Started Hostname Service.
Oct 11 03:30:14 compute-0 NetworkManager[44920]: <info>  [1760153414.9936] hostname: hostname: using hostnamed
Oct 11 03:30:14 compute-0 NetworkManager[44920]: <info>  [1760153414.9937] hostname: static hostname changed from (none) to "compute-0"
Oct 11 03:30:14 compute-0 NetworkManager[44920]: <info>  [1760153414.9949] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 11 03:30:14 compute-0 NetworkManager[44920]: <info>  [1760153414.9958] manager[0x555ddbbe0090]: rfkill: Wi-Fi hardware radio set enabled
Oct 11 03:30:14 compute-0 NetworkManager[44920]: <info>  [1760153414.9959] manager[0x555ddbbe0090]: rfkill: WWAN hardware radio set enabled
Oct 11 03:30:14 compute-0 NetworkManager[44920]: <info>  [1760153414.9999] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0016] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0018] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0019] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0020] manager: Networking is enabled by state file
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0026] settings: Loaded settings plugin: keyfile (internal)
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0032] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0087] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0108] dhcp: init: Using DHCP client 'internal'
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0114] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0126] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0141] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0160] device (lo): Activation: starting connection 'lo' (346f8ef0-a09d-4c38-b58f-91fb90ce9381)
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0176] device (eth0): carrier: link connected
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0183] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0196] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0197] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0217] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0234] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0246] device (eth1): carrier: link connected
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0254] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0268] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (50f82f9b-7ab1-5a17-a628-b0771fc67283) (indicated)
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0269] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0279] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0292] device (eth1): Activation: starting connection 'ci-private-network' (50f82f9b-7ab1-5a17-a628-b0771fc67283)
Oct 11 03:30:15 compute-0 systemd[1]: Started Network Manager.
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0303] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0322] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0326] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0330] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0334] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0339] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0343] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0347] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0354] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0365] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0369] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0413] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0446] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0467] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0476] dhcp4 (eth0): state changed new lease, address=38.102.83.234
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0484] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0498] device (lo): Activation: successful, device activated.
Oct 11 03:30:15 compute-0 systemd[1]: Starting Network Manager Wait Online...
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0518] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0617] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0627] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0638] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0643] manager: NetworkManager state is now CONNECTED_LOCAL
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0648] device (eth1): Activation: successful, device activated.
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0665] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0667] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0676] manager: NetworkManager state is now CONNECTED_SITE
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0684] device (eth0): Activation: successful, device activated.
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0693] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 11 03:30:15 compute-0 NetworkManager[44920]: <info>  [1760153415.0750] manager: startup complete
Oct 11 03:30:15 compute-0 sudo[44908]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:15 compute-0 systemd[1]: Finished Network Manager Wait Online.
Oct 11 03:30:15 compute-0 sudo[45134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apfkfffzlmnlmxlwncezmiutltrxtcgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153415.2920353-168-26808772278023/AnsiballZ_dnf.py'
Oct 11 03:30:15 compute-0 sudo[45134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:15 compute-0 python3.9[45136]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 03:30:20 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 11 03:30:20 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 11 03:30:20 compute-0 systemd[1]: Reloading.
Oct 11 03:30:20 compute-0 systemd-sysv-generator[45191]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:30:20 compute-0 systemd-rc-local-generator[45187]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:30:20 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 11 03:30:21 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 11 03:30:21 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 11 03:30:21 compute-0 systemd[1]: run-r8bd061b826c34a16b2e0fb86f683bb1e.service: Deactivated successfully.
Oct 11 03:30:21 compute-0 sudo[45134]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:22 compute-0 sudo[45596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvbwfnjmghtinkoiojjihcxoqmetlkme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153422.0181339-180-203097996382125/AnsiballZ_stat.py'
Oct 11 03:30:22 compute-0 sudo[45596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:22 compute-0 python3.9[45598]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:30:22 compute-0 sudo[45596]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:23 compute-0 sudo[45748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qckwwpkcnxfsanwigovlsrnxpslhpeye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153422.7518108-189-42471071873747/AnsiballZ_ini_file.py'
Oct 11 03:30:23 compute-0 sudo[45748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:23 compute-0 python3.9[45750]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:30:23 compute-0 sudo[45748]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:24 compute-0 sudo[45902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkwijmeobaqlvtyzvqwflwrntrgtqerr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153423.8068867-199-176918749852196/AnsiballZ_ini_file.py'
Oct 11 03:30:24 compute-0 sudo[45902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:24 compute-0 python3.9[45904]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:30:24 compute-0 sudo[45902]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:24 compute-0 sudo[46054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgnftxbopshdkqahjmpqgdvxbwcwdbxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153424.6512074-199-9830559169554/AnsiballZ_ini_file.py'
Oct 11 03:30:25 compute-0 sudo[46054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:25 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 11 03:30:25 compute-0 python3.9[46056]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:30:25 compute-0 sudo[46054]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:25 compute-0 sudo[46206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihfaejtrqldfqnxjchwjiecyzklqmdix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153425.4471538-214-43947255937018/AnsiballZ_ini_file.py'
Oct 11 03:30:25 compute-0 sudo[46206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:26 compute-0 python3.9[46208]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:30:26 compute-0 sudo[46206]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:26 compute-0 sudo[46358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppkhcgrkdcgdaajllqqujysswsqbylyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153426.2334356-214-8932157730680/AnsiballZ_ini_file.py'
Oct 11 03:30:26 compute-0 sudo[46358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:26 compute-0 python3.9[46360]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:30:26 compute-0 sudo[46358]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:27 compute-0 sudo[46510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynchjycnrrvqeblhtetumlzybtgbwzzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153427.0109563-229-14102231621251/AnsiballZ_stat.py'
Oct 11 03:30:27 compute-0 sudo[46510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:27 compute-0 python3.9[46512]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:30:27 compute-0 sudo[46510]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:28 compute-0 sudo[46633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcogpsdbiqyoozerbdoxopsvoyxtiqhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153427.0109563-229-14102231621251/AnsiballZ_copy.py'
Oct 11 03:30:28 compute-0 sudo[46633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:28 compute-0 python3.9[46635]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760153427.0109563-229-14102231621251/.source _original_basename=.k7bwwexp follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:30:28 compute-0 sudo[46633]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:28 compute-0 sudo[46785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmplymmqzfmqearxefgfleyqxovpddcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153428.645166-244-78635167227305/AnsiballZ_file.py'
Oct 11 03:30:28 compute-0 sudo[46785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:29 compute-0 python3.9[46787]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:30:29 compute-0 sudo[46785]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:29 compute-0 sudo[46937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crkzijxhcvaxpniwdhtmuwuoylianave ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153429.3091214-252-184871247474072/AnsiballZ_edpm_os_net_config_mappings.py'
Oct 11 03:30:29 compute-0 sudo[46937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:30 compute-0 python3.9[46939]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct 11 03:30:30 compute-0 sudo[46937]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:30 compute-0 sudo[47089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afxmiiseqavehcjbksxxbeatlydjiqpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153430.3163378-261-206758367216399/AnsiballZ_file.py'
Oct 11 03:30:30 compute-0 sudo[47089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:30 compute-0 python3.9[47091]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:30:30 compute-0 sudo[47089]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:31 compute-0 sudo[47241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jztnpmbcfwxffnvyqboprscumivnbcni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153431.1807992-271-6510623350268/AnsiballZ_stat.py'
Oct 11 03:30:31 compute-0 sudo[47241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:31 compute-0 sudo[47241]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:32 compute-0 sudo[47364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjpifuedqawjwlrlrzozzdecdewzrkmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153431.1807992-271-6510623350268/AnsiballZ_copy.py'
Oct 11 03:30:32 compute-0 sudo[47364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:32 compute-0 sudo[47364]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:32 compute-0 sudo[47516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsecursrcsynqdisqhahiqbiamkmgtwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153432.4652731-286-214625920822460/AnsiballZ_slurp.py'
Oct 11 03:30:33 compute-0 sudo[47516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:33 compute-0 python3.9[47518]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct 11 03:30:33 compute-0 sudo[47516]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:34 compute-0 sudo[47691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnnldwmhdxnstzmjbqlmmvgfrumvcfhr ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153433.5132084-295-220094121604633/async_wrapper.py j76121612614 300 /home/zuul/.ansible/tmp/ansible-tmp-1760153433.5132084-295-220094121604633/AnsiballZ_edpm_os_net_config.py _'
Oct 11 03:30:34 compute-0 sudo[47691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:34 compute-0 ansible-async_wrapper.py[47693]: Invoked with j76121612614 300 /home/zuul/.ansible/tmp/ansible-tmp-1760153433.5132084-295-220094121604633/AnsiballZ_edpm_os_net_config.py _
Oct 11 03:30:34 compute-0 ansible-async_wrapper.py[47696]: Starting module and watcher
Oct 11 03:30:34 compute-0 ansible-async_wrapper.py[47696]: Start watching 47697 (300)
Oct 11 03:30:34 compute-0 ansible-async_wrapper.py[47697]: Start module (47697)
Oct 11 03:30:34 compute-0 ansible-async_wrapper.py[47693]: Return async_wrapper task started.
Oct 11 03:30:34 compute-0 sudo[47691]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:34 compute-0 python3.9[47698]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Oct 11 03:30:35 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct 11 03:30:35 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct 11 03:30:35 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct 11 03:30:35 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct 11 03:30:35 compute-0 kernel: cfg80211: failed to load regulatory.db
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.1836] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47699 uid=0 result="success"
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.1868] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47699 uid=0 result="success"
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.2733] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.2735] audit: op="connection-add" uuid="e9b8efb9-a9ba-4f48-a7b3-a638392a530a" name="br-ex-br" pid=47699 uid=0 result="success"
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.2761] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.2764] audit: op="connection-add" uuid="5f0a5dc1-5b26-432f-8990-332b307062a0" name="br-ex-port" pid=47699 uid=0 result="success"
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.2785] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.2788] audit: op="connection-add" uuid="0fae79f6-fd50-4eb2-a789-8922dda55cf7" name="eth1-port" pid=47699 uid=0 result="success"
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.2809] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.2813] audit: op="connection-add" uuid="25dddcee-709e-4ac4-82d0-b36a3cafd476" name="vlan20-port" pid=47699 uid=0 result="success"
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.2838] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.2842] audit: op="connection-add" uuid="6de21768-82ad-4508-895b-24c50d6d48c8" name="vlan21-port" pid=47699 uid=0 result="success"
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.2865] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.2870] audit: op="connection-add" uuid="7d1d299c-cca9-478f-b2c0-76dfaae6a534" name="vlan22-port" pid=47699 uid=0 result="success"
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.2892] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.2896] audit: op="connection-add" uuid="e12971a5-7572-4144-b967-8207fc7bb6c6" name="vlan23-port" pid=47699 uid=0 result="success"
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.2931] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,connection.timestamp,connection.autoconnect-priority,ipv6.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method" pid=47699 uid=0 result="success"
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.2962] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.2966] audit: op="connection-add" uuid="c9f468d2-dc43-458e-9089-3b3cac5057b3" name="br-ex-if" pid=47699 uid=0 result="success"
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3025] audit: op="connection-update" uuid="50f82f9b-7ab1-5a17-a628-b0771fc67283" name="ci-private-network" args="ipv4.dns,ipv4.never-default,ipv4.method,ipv4.addresses,ipv4.routing-rules,ipv4.routes,ipv6.dns,ipv6.addr-gen-mode,ipv6.method,ipv6.addresses,ipv6.routing-rules,ipv6.routes,connection.master,connection.port-type,connection.timestamp,connection.slave-type,connection.controller,ovs-interface.type,ovs-external-ids.data" pid=47699 uid=0 result="success"
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3048] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3052] audit: op="connection-add" uuid="8d9c73ce-9608-42f1-b559-210a483a0267" name="vlan20-if" pid=47699 uid=0 result="success"
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3077] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3080] audit: op="connection-add" uuid="94f02021-eba0-4e52-94ea-4bd27dbb47ef" name="vlan21-if" pid=47699 uid=0 result="success"
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3105] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3108] audit: op="connection-add" uuid="75c2243b-3912-4158-ba46-8cd673f6ab3b" name="vlan22-if" pid=47699 uid=0 result="success"
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3130] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3133] audit: op="connection-add" uuid="93d8f85e-4d01-4a41-82a8-943d54b4b0c0" name="vlan23-if" pid=47699 uid=0 result="success"
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3150] audit: op="connection-delete" uuid="4acfe135-17b2-37ce-bcca-a8a1c8735455" name="Wired connection 1" pid=47699 uid=0 result="success"
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3166] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3180] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3185] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (e9b8efb9-a9ba-4f48-a7b3-a638392a530a)
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3187] audit: op="connection-activate" uuid="e9b8efb9-a9ba-4f48-a7b3-a638392a530a" name="br-ex-br" pid=47699 uid=0 result="success"
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3189] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3197] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3202] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (5f0a5dc1-5b26-432f-8990-332b307062a0)
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3205] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3212] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3217] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (0fae79f6-fd50-4eb2-a789-8922dda55cf7)
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3220] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3228] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3233] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (25dddcee-709e-4ac4-82d0-b36a3cafd476)
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3236] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3244] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3249] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (6de21768-82ad-4508-895b-24c50d6d48c8)
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3252] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3260] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3266] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (7d1d299c-cca9-478f-b2c0-76dfaae6a534)
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3269] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3277] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3284] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (e12971a5-7572-4144-b967-8207fc7bb6c6)
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3286] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3289] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3292] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3299] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3306] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3312] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (c9f468d2-dc43-458e-9089-3b3cac5057b3)
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3314] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3318] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3322] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3324] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3326] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3340] device (eth1): disconnecting for new activation request.
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3341] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3345] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3348] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3350] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3356] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3363] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3368] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (8d9c73ce-9608-42f1-b559-210a483a0267)
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3370] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3375] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3379] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3382] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3386] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3392] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3397] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (94f02021-eba0-4e52-94ea-4bd27dbb47ef)
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3399] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3403] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3406] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3408] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3412] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3418] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3423] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (75c2243b-3912-4158-ba46-8cd673f6ab3b)
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3425] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3429] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3432] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3435] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3439] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3445] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3451] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (93d8f85e-4d01-4a41-82a8-943d54b4b0c0)
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3453] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3456] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3459] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3461] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3464] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3482] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.method" pid=47699 uid=0 result="success"
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3485] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3490] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3493] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3502] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3508] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3514] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3519] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3523] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3529] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 kernel: ovs-system: entered promiscuous mode
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3537] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3542] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3560] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 systemd-udevd[47704]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 03:30:37 compute-0 kernel: Timeout policy base is empty
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3578] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3582] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3586] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3587] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3594] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3600] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3606] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3609] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3614] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3617] dhcp4 (eth0): canceled DHCP transaction
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3618] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3618] dhcp4 (eth0): state changed no lease
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3619] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3631] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3635] audit: op="device-reapply" interface="eth1" ifindex=3 pid=47699 uid=0 result="fail" reason="Device is not activated"
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3673] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3677] dhcp4 (eth0): state changed new lease, address=38.102.83.234
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3717] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3727] device (eth1): disconnecting for new activation request.
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3727] audit: op="connection-activate" uuid="50f82f9b-7ab1-5a17-a628-b0771fc67283" name="ci-private-network" pid=47699 uid=0 result="success"
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3729] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3748] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct 11 03:30:37 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3768] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47699 uid=0 result="success"
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3785] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3915] device (eth1): Activation: starting connection 'ci-private-network' (50f82f9b-7ab1-5a17-a628-b0771fc67283)
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3932] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3938] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3949] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3952] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3954] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3957] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3959] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3960] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3962] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3968] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 kernel: br-ex: entered promiscuous mode
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3981] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3989] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.3998] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4005] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4011] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4018] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4025] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4032] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4038] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4045] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4051] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4058] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4064] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4072] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4080] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4098] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 kernel: vlan22: entered promiscuous mode
Oct 11 03:30:37 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct 11 03:30:37 compute-0 systemd-udevd[47705]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4163] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4165] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4171] device (eth1): Activation: successful, device activated.
Oct 11 03:30:37 compute-0 kernel: vlan21: entered promiscuous mode
Oct 11 03:30:37 compute-0 systemd-udevd[47703]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4291] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4299] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4320] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4337] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4356] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4368] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4372] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4378] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4383] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4396] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4397] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4398] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4406] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 11 03:30:37 compute-0 kernel: vlan20: entered promiscuous mode
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4412] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4416] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 11 03:30:37 compute-0 kernel: vlan23: entered promiscuous mode
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4591] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4602] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4656] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4658] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4663] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4706] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4716] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4749] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4751] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 03:30:37 compute-0 NetworkManager[44920]: <info>  [1760153437.4757] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 11 03:30:38 compute-0 sudo[48057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgurvskpbrdvcoqloayhvbusnbpbserb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153437.7115178-295-217238660517976/AnsiballZ_async_status.py'
Oct 11 03:30:38 compute-0 sudo[48057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:38 compute-0 python3.9[48059]: ansible-ansible.legacy.async_status Invoked with jid=j76121612614.47693 mode=status _async_dir=/root/.ansible_async
Oct 11 03:30:38 compute-0 sudo[48057]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:38 compute-0 NetworkManager[44920]: <info>  [1760153438.6240] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47699 uid=0 result="success"
Oct 11 03:30:38 compute-0 NetworkManager[44920]: <info>  [1760153438.8459] checkpoint[0x555ddbbb6950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct 11 03:30:38 compute-0 NetworkManager[44920]: <info>  [1760153438.8462] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47699 uid=0 result="success"
Oct 11 03:30:39 compute-0 NetworkManager[44920]: <info>  [1760153439.2667] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47699 uid=0 result="success"
Oct 11 03:30:39 compute-0 NetworkManager[44920]: <info>  [1760153439.2686] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47699 uid=0 result="success"
Oct 11 03:30:39 compute-0 ansible-async_wrapper.py[47696]: 47697 still running (300)
Oct 11 03:30:39 compute-0 NetworkManager[44920]: <info>  [1760153439.6037] audit: op="networking-control" arg="global-dns-configuration" pid=47699 uid=0 result="success"
Oct 11 03:30:39 compute-0 NetworkManager[44920]: <info>  [1760153439.6075] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Oct 11 03:30:39 compute-0 NetworkManager[44920]: <info>  [1760153439.6113] audit: op="networking-control" arg="global-dns-configuration" pid=47699 uid=0 result="success"
Oct 11 03:30:39 compute-0 NetworkManager[44920]: <info>  [1760153439.6144] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47699 uid=0 result="success"
Oct 11 03:30:39 compute-0 NetworkManager[44920]: <info>  [1760153439.8147] checkpoint[0x555ddbbb6a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct 11 03:30:39 compute-0 NetworkManager[44920]: <info>  [1760153439.8157] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47699 uid=0 result="success"
Oct 11 03:30:39 compute-0 ansible-async_wrapper.py[47697]: Module complete (47697)
Oct 11 03:30:41 compute-0 sudo[48163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgkvaxmwhcldbucfpdmgaebmpqpdcwif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153437.7115178-295-217238660517976/AnsiballZ_async_status.py'
Oct 11 03:30:41 compute-0 sudo[48163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:41 compute-0 python3.9[48165]: ansible-ansible.legacy.async_status Invoked with jid=j76121612614.47693 mode=status _async_dir=/root/.ansible_async
Oct 11 03:30:41 compute-0 sudo[48163]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:42 compute-0 sudo[48263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwhtanblgzkjokcgmiubamltreaiojln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153437.7115178-295-217238660517976/AnsiballZ_async_status.py'
Oct 11 03:30:42 compute-0 sudo[48263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:42 compute-0 python3.9[48265]: ansible-ansible.legacy.async_status Invoked with jid=j76121612614.47693 mode=cleanup _async_dir=/root/.ansible_async
Oct 11 03:30:42 compute-0 sudo[48263]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:43 compute-0 sudo[48415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnyducuvvbwkauahpytmfeepvdgxwios ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153442.7713017-322-67794206858672/AnsiballZ_stat.py'
Oct 11 03:30:43 compute-0 sudo[48415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:43 compute-0 python3.9[48417]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:30:43 compute-0 sudo[48415]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:43 compute-0 sudo[48538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zivzqwqvcoaiklhmafasqxnwxernjgem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153442.7713017-322-67794206858672/AnsiballZ_copy.py'
Oct 11 03:30:43 compute-0 sudo[48538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:43 compute-0 python3.9[48540]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760153442.7713017-322-67794206858672/.source.returncode _original_basename=.rbbocd69 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:30:43 compute-0 sudo[48538]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:44 compute-0 sudo[48690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftmfxwukamplljarcgmlxqldtbjvchhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153444.1863053-338-25973473883876/AnsiballZ_stat.py'
Oct 11 03:30:44 compute-0 sudo[48690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:44 compute-0 ansible-async_wrapper.py[47696]: Done in kid B.
Oct 11 03:30:44 compute-0 python3.9[48692]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:30:44 compute-0 sudo[48690]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:45 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 11 03:30:45 compute-0 sudo[48816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifbvsiviuolehoyhuvgtywttnqmqusgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153444.1863053-338-25973473883876/AnsiballZ_copy.py'
Oct 11 03:30:45 compute-0 sudo[48816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:45 compute-0 python3.9[48818]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760153444.1863053-338-25973473883876/.source.cfg _original_basename=.m9w4cbb6 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:30:45 compute-0 sudo[48816]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:45 compute-0 sudo[48968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpmmlcnninezqbxbouvaddjfilaonzte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153445.5892782-353-210134747259282/AnsiballZ_systemd.py'
Oct 11 03:30:45 compute-0 sudo[48968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:30:46 compute-0 python3.9[48970]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 03:30:46 compute-0 systemd[1]: Reloading Network Manager...
Oct 11 03:30:46 compute-0 NetworkManager[44920]: <info>  [1760153446.3249] audit: op="reload" arg="0" pid=48974 uid=0 result="success"
Oct 11 03:30:46 compute-0 NetworkManager[44920]: <info>  [1760153446.3261] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct 11 03:30:46 compute-0 systemd[1]: Reloaded Network Manager.
Oct 11 03:30:46 compute-0 sudo[48968]: pam_unix(sudo:session): session closed for user root
Oct 11 03:30:46 compute-0 sshd-session[40917]: Connection closed by 192.168.122.30 port 36538
Oct 11 03:30:46 compute-0 sshd-session[40914]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:30:46 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Oct 11 03:30:46 compute-0 systemd[1]: session-9.scope: Consumed 54.743s CPU time.
Oct 11 03:30:46 compute-0 systemd-logind[820]: Session 9 logged out. Waiting for processes to exit.
Oct 11 03:30:46 compute-0 systemd-logind[820]: Removed session 9.
Oct 11 03:30:51 compute-0 sshd-session[49004]: Accepted publickey for zuul from 192.168.122.30 port 38460 ssh2: ECDSA SHA256:qo9+RMabHfLAOt2q/80W97JXaZUdeUCREBuTRaqgxBY
Oct 11 03:30:51 compute-0 systemd-logind[820]: New session 10 of user zuul.
Oct 11 03:30:51 compute-0 systemd[1]: Started Session 10 of User zuul.
Oct 11 03:30:51 compute-0 sshd-session[49004]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:30:52 compute-0 python3.9[49158]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:30:53 compute-0 python3.9[49312]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 03:30:55 compute-0 python3.9[49506]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:30:55 compute-0 sshd-session[49008]: Connection closed by 192.168.122.30 port 38460
Oct 11 03:30:55 compute-0 sshd-session[49004]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:30:55 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Oct 11 03:30:55 compute-0 systemd[1]: session-10.scope: Consumed 2.777s CPU time.
Oct 11 03:30:55 compute-0 systemd-logind[820]: Session 10 logged out. Waiting for processes to exit.
Oct 11 03:30:55 compute-0 systemd-logind[820]: Removed session 10.
Oct 11 03:30:56 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 11 03:31:00 compute-0 sshd-session[49534]: Accepted publickey for zuul from 192.168.122.30 port 41610 ssh2: ECDSA SHA256:qo9+RMabHfLAOt2q/80W97JXaZUdeUCREBuTRaqgxBY
Oct 11 03:31:00 compute-0 systemd-logind[820]: New session 11 of user zuul.
Oct 11 03:31:00 compute-0 systemd[1]: Started Session 11 of User zuul.
Oct 11 03:31:00 compute-0 sshd-session[49534]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:31:01 compute-0 python3.9[49688]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:31:02 compute-0 python3.9[49842]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:31:03 compute-0 sudo[49996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqqavihziopbgmllxeautyerbvkbvfqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153463.4237227-40-89940735132854/AnsiballZ_setup.py'
Oct 11 03:31:03 compute-0 sudo[49996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:04 compute-0 python3.9[49998]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 03:31:04 compute-0 sudo[49996]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:04 compute-0 sudo[50081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcogjsutudnmtmhjzvekudjjtutkqcvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153463.4237227-40-89940735132854/AnsiballZ_dnf.py'
Oct 11 03:31:04 compute-0 sudo[50081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:05 compute-0 python3.9[50083]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 03:31:06 compute-0 sudo[50081]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:06 compute-0 sudo[50234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbzqkjvvqpwptkaxghkfejzvlorivzmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153466.465451-52-706914474467/AnsiballZ_setup.py'
Oct 11 03:31:06 compute-0 sudo[50234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:07 compute-0 python3.9[50236]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 03:31:07 compute-0 sudo[50234]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:08 compute-0 sudo[50430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsedwygvqvbdbufvszadonrbjlyvnfpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153467.8461494-63-124087626852970/AnsiballZ_file.py'
Oct 11 03:31:08 compute-0 sudo[50430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:08 compute-0 python3.9[50432]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:31:08 compute-0 sudo[50430]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:09 compute-0 sudo[50582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnvlbfsmrorkcebtpeyxfyanrlvbjnft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153468.8190377-71-176189501020297/AnsiballZ_command.py'
Oct 11 03:31:09 compute-0 sudo[50582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:09 compute-0 python3.9[50584]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:31:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat142830702-merged.mount: Deactivated successfully.
Oct 11 03:31:09 compute-0 podman[50585]: 2025-10-11 03:31:09.574492458 +0000 UTC m=+0.045377397 system refresh
Oct 11 03:31:09 compute-0 sudo[50582]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:10 compute-0 sudo[50745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-futlnlhxzdvrlprpbuvjowznmhzaytrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153469.8148637-79-118784712650786/AnsiballZ_stat.py'
Oct 11 03:31:10 compute-0 sudo[50745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:10 compute-0 python3.9[50747]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:31:10 compute-0 sudo[50745]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:10 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 03:31:11 compute-0 sudo[50868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxsnrwyyiqwexiqikxaylwersswkfdin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153469.8148637-79-118784712650786/AnsiballZ_copy.py'
Oct 11 03:31:11 compute-0 sudo[50868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:11 compute-0 python3.9[50870]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153469.8148637-79-118784712650786/.source.json follow=False _original_basename=podman_network_config.j2 checksum=ec744779908d5b9fd402a0e83d966e28fa95aae2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:31:11 compute-0 sudo[50868]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:11 compute-0 sudo[51020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heksrrcjouvcvyyytezrslvftzipafjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153471.4849343-94-97133342232259/AnsiballZ_stat.py'
Oct 11 03:31:11 compute-0 sudo[51020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:11 compute-0 python3.9[51022]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:31:12 compute-0 sudo[51020]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:12 compute-0 sudo[51143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpjetlafdypvuvjtjpntrdhrujwzslfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153471.4849343-94-97133342232259/AnsiballZ_copy.py'
Oct 11 03:31:12 compute-0 sudo[51143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:12 compute-0 python3.9[51145]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760153471.4849343-94-97133342232259/.source.conf follow=False _original_basename=registries.conf.j2 checksum=b0997da0dac7c72916bfa4feb1650346bde4dfbe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:31:12 compute-0 sudo[51143]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:13 compute-0 sudo[51295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpsjvubrgusoiszoqdknxdhajvwpfcmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153472.856002-110-258484078425945/AnsiballZ_ini_file.py'
Oct 11 03:31:13 compute-0 sudo[51295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:13 compute-0 python3.9[51297]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:31:13 compute-0 sudo[51295]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:14 compute-0 sudo[51447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeoponrixkofylyphonoypfndoqchqoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153473.785775-110-139504953525804/AnsiballZ_ini_file.py'
Oct 11 03:31:14 compute-0 sudo[51447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:14 compute-0 python3.9[51449]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:31:14 compute-0 sudo[51447]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:14 compute-0 sudo[51599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkbdfquzcmfionyxvpudeahktsqwmrcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153474.5060568-110-111564543380914/AnsiballZ_ini_file.py'
Oct 11 03:31:14 compute-0 sudo[51599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:15 compute-0 python3.9[51601]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:31:15 compute-0 sudo[51599]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:15 compute-0 sudo[51751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofauixhdchozzzfxgjnqvienzivihohb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153475.3391874-110-40621593622596/AnsiballZ_ini_file.py'
Oct 11 03:31:15 compute-0 sudo[51751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:15 compute-0 python3.9[51753]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:31:15 compute-0 sudo[51751]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:16 compute-0 sudo[51903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkmjeikpkwjjmttswadzhgqrdcfjeowe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153476.2294245-141-237776030263902/AnsiballZ_dnf.py'
Oct 11 03:31:16 compute-0 sudo[51903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:16 compute-0 python3.9[51905]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 03:31:17 compute-0 sudo[51903]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:18 compute-0 sudo[52056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arpgrcbocmefgzmbbdbwxylfduyhulmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153478.303915-152-278664302176026/AnsiballZ_setup.py'
Oct 11 03:31:18 compute-0 sudo[52056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:18 compute-0 python3.9[52058]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:31:19 compute-0 sudo[52056]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:19 compute-0 sudo[52210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsljgegdufckdewkhxsxetwlsjubbasu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153479.2359736-160-150116304761801/AnsiballZ_stat.py'
Oct 11 03:31:19 compute-0 sudo[52210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:19 compute-0 python3.9[52212]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:31:19 compute-0 sudo[52210]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:20 compute-0 sudo[52362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldhwjcqrhbshflbzssvzofyjuejalefs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153479.9777918-169-22417131662837/AnsiballZ_stat.py'
Oct 11 03:31:20 compute-0 sudo[52362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:20 compute-0 python3.9[52364]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:31:20 compute-0 sudo[52362]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:21 compute-0 sudo[52514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnrgbdppdwisqktpqqpjbjsklkfqvsme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153480.7983608-179-232059639761259/AnsiballZ_service_facts.py'
Oct 11 03:31:21 compute-0 sudo[52514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:21 compute-0 python3.9[52516]: ansible-service_facts Invoked
Oct 11 03:31:21 compute-0 network[52533]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 03:31:21 compute-0 network[52534]: 'network-scripts' will be removed from distribution in near future.
Oct 11 03:31:21 compute-0 network[52535]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 03:31:25 compute-0 sudo[52514]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:26 compute-0 sudo[52820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujyuulboegwzdhycgvvitinthubgtduq ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1760153485.9887342-192-240771960467091/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1760153485.9887342-192-240771960467091/args'
Oct 11 03:31:26 compute-0 sudo[52820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:26 compute-0 sudo[52820]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:27 compute-0 sudo[52987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmsmcpcbqoehagikmmrefowthazwbcgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153486.8685908-203-56427692022848/AnsiballZ_dnf.py'
Oct 11 03:31:27 compute-0 sudo[52987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:27 compute-0 python3.9[52989]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 03:31:28 compute-0 sudo[52987]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:29 compute-0 sudo[53140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhfgkfuahiebppfpqyxkewtefbykgbbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153488.8770523-216-97043019914152/AnsiballZ_package_facts.py'
Oct 11 03:31:29 compute-0 sudo[53140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:29 compute-0 python3.9[53142]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 11 03:31:29 compute-0 sudo[53140]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:30 compute-0 sudo[53292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdmpeudrhnbwmibyesotcqsqvpcmlubt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153490.4420776-226-53699983741298/AnsiballZ_stat.py'
Oct 11 03:31:30 compute-0 sudo[53292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:31 compute-0 python3.9[53294]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:31:31 compute-0 sudo[53292]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:31 compute-0 sudo[53417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taoxucegctmbgwezcqocrrsecovpnpjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153490.4420776-226-53699983741298/AnsiballZ_copy.py'
Oct 11 03:31:31 compute-0 sudo[53417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:31 compute-0 python3.9[53419]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760153490.4420776-226-53699983741298/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:31:31 compute-0 sudo[53417]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:32 compute-0 sudo[53571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxaijcsnhdyjmnfcbpjvehneqbzebnpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153492.022766-241-149368947564081/AnsiballZ_stat.py'
Oct 11 03:31:32 compute-0 sudo[53571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:32 compute-0 python3.9[53573]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:31:32 compute-0 sudo[53571]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:33 compute-0 sudo[53696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztpbamlvftucixpievkqxvtoybtwafoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153492.022766-241-149368947564081/AnsiballZ_copy.py'
Oct 11 03:31:33 compute-0 sudo[53696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:33 compute-0 python3.9[53698]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760153492.022766-241-149368947564081/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:31:33 compute-0 sudo[53696]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:34 compute-0 sudo[53850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtzdcxueodzcqetubjwpkwdpwgkzrlwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153493.8529415-262-271365465025763/AnsiballZ_lineinfile.py'
Oct 11 03:31:34 compute-0 sudo[53850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:34 compute-0 python3.9[53852]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:31:34 compute-0 sudo[53850]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:35 compute-0 sudo[54004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inowpixppnospoudkshybdvodeiizigz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153495.1298256-277-100567534104926/AnsiballZ_setup.py'
Oct 11 03:31:35 compute-0 sudo[54004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:35 compute-0 python3.9[54006]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 03:31:36 compute-0 sudo[54004]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:36 compute-0 sudo[54088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggphqudiskchiccoanobzgixfklzatre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153495.1298256-277-100567534104926/AnsiballZ_systemd.py'
Oct 11 03:31:36 compute-0 sudo[54088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:37 compute-0 python3.9[54090]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:31:37 compute-0 sudo[54088]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:37 compute-0 sudo[54242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pozurakeulspetyxqnkbqulwenhvbmkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153497.5490034-293-79296741453695/AnsiballZ_setup.py'
Oct 11 03:31:37 compute-0 sudo[54242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:38 compute-0 python3.9[54244]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 03:31:38 compute-0 sudo[54242]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:38 compute-0 sudo[54326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcnrezjbhyeiygglpfrscpapqvvkdovd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153497.5490034-293-79296741453695/AnsiballZ_systemd.py'
Oct 11 03:31:38 compute-0 sudo[54326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:38 compute-0 python3.9[54328]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 03:31:39 compute-0 chronyd[828]: chronyd exiting
Oct 11 03:31:39 compute-0 systemd[1]: Stopping NTP client/server...
Oct 11 03:31:39 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Oct 11 03:31:39 compute-0 systemd[1]: Stopped NTP client/server.
Oct 11 03:31:39 compute-0 systemd[1]: Starting NTP client/server...
Oct 11 03:31:39 compute-0 chronyd[54336]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 11 03:31:39 compute-0 chronyd[54336]: Frequency -27.962 +/- 0.289 ppm read from /var/lib/chrony/drift
Oct 11 03:31:39 compute-0 chronyd[54336]: Loaded seccomp filter (level 2)
Oct 11 03:31:39 compute-0 systemd[1]: Started NTP client/server.
Oct 11 03:31:39 compute-0 sudo[54326]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:39 compute-0 sshd-session[49537]: Connection closed by 192.168.122.30 port 41610
Oct 11 03:31:39 compute-0 sshd-session[49534]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:31:39 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Oct 11 03:31:39 compute-0 systemd[1]: session-11.scope: Consumed 29.859s CPU time.
Oct 11 03:31:39 compute-0 systemd-logind[820]: Session 11 logged out. Waiting for processes to exit.
Oct 11 03:31:39 compute-0 systemd-logind[820]: Removed session 11.
Oct 11 03:31:45 compute-0 sshd-session[54362]: Accepted publickey for zuul from 192.168.122.30 port 60568 ssh2: ECDSA SHA256:qo9+RMabHfLAOt2q/80W97JXaZUdeUCREBuTRaqgxBY
Oct 11 03:31:45 compute-0 systemd-logind[820]: New session 12 of user zuul.
Oct 11 03:31:45 compute-0 systemd[1]: Started Session 12 of User zuul.
Oct 11 03:31:45 compute-0 sshd-session[54362]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:31:45 compute-0 sudo[54515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlagjaazdypjfqyuervyqfgdkbpkcbfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153505.41393-22-181368607936249/AnsiballZ_file.py'
Oct 11 03:31:45 compute-0 sudo[54515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:46 compute-0 python3.9[54517]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:31:46 compute-0 sudo[54515]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:46 compute-0 sudo[54667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uibzaglsobdxhxnrseprrwamflfspimr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153506.386933-34-174313014280313/AnsiballZ_stat.py'
Oct 11 03:31:46 compute-0 sudo[54667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:47 compute-0 python3.9[54669]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:31:47 compute-0 sudo[54667]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:47 compute-0 sudo[54790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkvobsvzcoriurquznzpqeazjvnjudpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153506.386933-34-174313014280313/AnsiballZ_copy.py'
Oct 11 03:31:47 compute-0 sudo[54790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:47 compute-0 python3.9[54792]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760153506.386933-34-174313014280313/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:31:47 compute-0 sudo[54790]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:48 compute-0 sshd-session[54365]: Connection closed by 192.168.122.30 port 60568
Oct 11 03:31:48 compute-0 sshd-session[54362]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:31:48 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Oct 11 03:31:48 compute-0 systemd[1]: session-12.scope: Consumed 2.008s CPU time.
Oct 11 03:31:48 compute-0 systemd-logind[820]: Session 12 logged out. Waiting for processes to exit.
Oct 11 03:31:48 compute-0 systemd-logind[820]: Removed session 12.
Oct 11 03:31:53 compute-0 sshd-session[54817]: Accepted publickey for zuul from 192.168.122.30 port 60268 ssh2: ECDSA SHA256:qo9+RMabHfLAOt2q/80W97JXaZUdeUCREBuTRaqgxBY
Oct 11 03:31:53 compute-0 systemd-logind[820]: New session 13 of user zuul.
Oct 11 03:31:53 compute-0 systemd[1]: Started Session 13 of User zuul.
Oct 11 03:31:53 compute-0 sshd-session[54817]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:31:54 compute-0 python3.9[54970]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:31:55 compute-0 sudo[55124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peqjyjzeeewdbyibnpkstzmrlbgyayvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153514.7675178-33-137491135450978/AnsiballZ_file.py'
Oct 11 03:31:55 compute-0 sudo[55124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:55 compute-0 python3.9[55126]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:31:55 compute-0 sudo[55124]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:56 compute-0 sudo[55299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdpwflvrtlqfsyfvuzbboykkpluahdck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153515.8030837-41-93909580647098/AnsiballZ_stat.py'
Oct 11 03:31:56 compute-0 sudo[55299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:56 compute-0 python3.9[55301]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:31:56 compute-0 sudo[55299]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:57 compute-0 sudo[55422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvhimmqljupqlngftsfpwzfnoazspmmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153515.8030837-41-93909580647098/AnsiballZ_copy.py'
Oct 11 03:31:57 compute-0 sudo[55422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:57 compute-0 python3.9[55424]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1760153515.8030837-41-93909580647098/.source.json _original_basename=.tj222q37 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:31:57 compute-0 sudo[55422]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:58 compute-0 sudo[55574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hupzpnnjvobrmygdaxryfrrmlddwpuhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153517.8521326-64-273214326492538/AnsiballZ_stat.py'
Oct 11 03:31:58 compute-0 sudo[55574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:58 compute-0 python3.9[55576]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:31:58 compute-0 sudo[55574]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:58 compute-0 sudo[55697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhwsmiophwzzrkzukkrkousgyghnefxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153517.8521326-64-273214326492538/AnsiballZ_copy.py'
Oct 11 03:31:58 compute-0 sudo[55697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:59 compute-0 python3.9[55699]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760153517.8521326-64-273214326492538/.source _original_basename=.qcrzzh7v follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:31:59 compute-0 sudo[55697]: pam_unix(sudo:session): session closed for user root
Oct 11 03:31:59 compute-0 sudo[55849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emovpyocywbwvfwclizbogccpzrritku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153519.3493168-80-148724856854325/AnsiballZ_file.py'
Oct 11 03:31:59 compute-0 sudo[55849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:31:59 compute-0 python3.9[55851]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:31:59 compute-0 sudo[55849]: pam_unix(sudo:session): session closed for user root
Oct 11 03:32:00 compute-0 sudo[56001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltjppyvcofsdqrwxtvkluliwtflgjzle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153520.0846395-88-165264738058487/AnsiballZ_stat.py'
Oct 11 03:32:00 compute-0 sudo[56001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:32:00 compute-0 python3.9[56003]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:32:00 compute-0 sudo[56001]: pam_unix(sudo:session): session closed for user root
Oct 11 03:32:01 compute-0 sudo[56124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hncqsueydonbjgjmntrpginwqkmejwmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153520.0846395-88-165264738058487/AnsiballZ_copy.py'
Oct 11 03:32:01 compute-0 sudo[56124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:32:01 compute-0 python3.9[56126]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760153520.0846395-88-165264738058487/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:32:01 compute-0 sudo[56124]: pam_unix(sudo:session): session closed for user root
Oct 11 03:32:01 compute-0 sudo[56276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpnxatmrdudvzbvhyzlkwppevevgyfje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153521.5197258-88-177279746078671/AnsiballZ_stat.py'
Oct 11 03:32:01 compute-0 sudo[56276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:32:02 compute-0 python3.9[56278]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:32:02 compute-0 sudo[56276]: pam_unix(sudo:session): session closed for user root
Oct 11 03:32:02 compute-0 sudo[56399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-echkztjcvjvdyabrmkressxhsoaarnpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153521.5197258-88-177279746078671/AnsiballZ_copy.py'
Oct 11 03:32:02 compute-0 sudo[56399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:32:02 compute-0 python3.9[56401]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760153521.5197258-88-177279746078671/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:32:02 compute-0 sudo[56399]: pam_unix(sudo:session): session closed for user root
Oct 11 03:32:03 compute-0 sudo[56551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybkrrfvvvlflobvqwblywuxbqeszirvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153522.92523-117-186470430501198/AnsiballZ_file.py'
Oct 11 03:32:03 compute-0 sudo[56551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:32:03 compute-0 python3.9[56553]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:32:03 compute-0 sudo[56551]: pam_unix(sudo:session): session closed for user root
Oct 11 03:32:04 compute-0 sudo[56703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbfaqbujkhblcithxpqqkbbmqrsqohbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153523.718515-125-71034162246956/AnsiballZ_stat.py'
Oct 11 03:32:04 compute-0 sudo[56703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:32:04 compute-0 python3.9[56705]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:32:04 compute-0 sudo[56703]: pam_unix(sudo:session): session closed for user root
Oct 11 03:32:04 compute-0 sudo[56826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdjsxuuoohclwhasghuvxxsoznipjmuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153523.718515-125-71034162246956/AnsiballZ_copy.py'
Oct 11 03:32:04 compute-0 sudo[56826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:32:05 compute-0 python3.9[56828]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153523.718515-125-71034162246956/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:32:05 compute-0 sudo[56826]: pam_unix(sudo:session): session closed for user root
Oct 11 03:32:05 compute-0 sudo[56978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkvjhcriapihfweosqncfqirjhwyqsmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153525.2238681-140-246723891691630/AnsiballZ_stat.py'
Oct 11 03:32:05 compute-0 sudo[56978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:32:05 compute-0 python3.9[56980]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:32:05 compute-0 sudo[56978]: pam_unix(sudo:session): session closed for user root
Oct 11 03:32:06 compute-0 sudo[57101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jokszojizyaiswvzvipztplznaqnbsvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153525.2238681-140-246723891691630/AnsiballZ_copy.py'
Oct 11 03:32:06 compute-0 sudo[57101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:32:06 compute-0 python3.9[57103]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153525.2238681-140-246723891691630/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:32:06 compute-0 sudo[57101]: pam_unix(sudo:session): session closed for user root
Oct 11 03:32:07 compute-0 sudo[57253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sawffeuinguzysofepvkvdvpjuplcfos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153526.5759401-155-223707924071465/AnsiballZ_systemd.py'
Oct 11 03:32:07 compute-0 sudo[57253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:32:07 compute-0 python3.9[57255]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:32:07 compute-0 systemd[1]: Reloading.
Oct 11 03:32:07 compute-0 systemd-rc-local-generator[57281]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:32:07 compute-0 systemd-sysv-generator[57285]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:32:07 compute-0 systemd[1]: Reloading.
Oct 11 03:32:08 compute-0 systemd-rc-local-generator[57319]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:32:08 compute-0 systemd-sysv-generator[57323]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:32:08 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Oct 11 03:32:08 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Oct 11 03:32:08 compute-0 sudo[57253]: pam_unix(sudo:session): session closed for user root
Oct 11 03:32:08 compute-0 sudo[57480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjclaiwzgkxdgmxskbnbrdfbsrtpvgyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153528.3913114-163-132449998008130/AnsiballZ_stat.py'
Oct 11 03:32:08 compute-0 sudo[57480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:32:08 compute-0 python3.9[57482]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:32:08 compute-0 sudo[57480]: pam_unix(sudo:session): session closed for user root
Oct 11 03:32:09 compute-0 sudo[57603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alzvmuzmukrqntyjnrzxdmqdjiqdmina ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153528.3913114-163-132449998008130/AnsiballZ_copy.py'
Oct 11 03:32:09 compute-0 sudo[57603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:32:09 compute-0 python3.9[57605]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153528.3913114-163-132449998008130/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:32:09 compute-0 sudo[57603]: pam_unix(sudo:session): session closed for user root
Oct 11 03:32:10 compute-0 sudo[57755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzjxvpotedsmcympwlgtavrmymbczalm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153529.7167675-178-67485634379871/AnsiballZ_stat.py'
Oct 11 03:32:10 compute-0 sudo[57755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:32:10 compute-0 python3.9[57757]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:32:10 compute-0 sudo[57755]: pam_unix(sudo:session): session closed for user root
Oct 11 03:32:10 compute-0 sudo[57878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbhwybnrrvzsdwevgqukppakwqzvoffn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153529.7167675-178-67485634379871/AnsiballZ_copy.py'
Oct 11 03:32:10 compute-0 sudo[57878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:32:10 compute-0 python3.9[57880]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153529.7167675-178-67485634379871/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:32:10 compute-0 sudo[57878]: pam_unix(sudo:session): session closed for user root
Oct 11 03:32:11 compute-0 sudo[58030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlqpvswkskxnhuwwktwfdcxxrtgrbswv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153531.0930998-193-99424592278331/AnsiballZ_systemd.py'
Oct 11 03:32:11 compute-0 sudo[58030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:32:11 compute-0 python3.9[58032]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:32:11 compute-0 systemd[1]: Reloading.
Oct 11 03:32:11 compute-0 systemd-rc-local-generator[58059]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:32:11 compute-0 systemd-sysv-generator[58062]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:32:12 compute-0 systemd[1]: Reloading.
Oct 11 03:32:12 compute-0 systemd-rc-local-generator[58093]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:32:12 compute-0 systemd-sysv-generator[58098]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:32:12 compute-0 systemd[1]: Starting Create netns directory...
Oct 11 03:32:12 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 11 03:32:12 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 11 03:32:12 compute-0 systemd[1]: Finished Create netns directory.
Oct 11 03:32:12 compute-0 sudo[58030]: pam_unix(sudo:session): session closed for user root
Oct 11 03:32:13 compute-0 python3.9[58257]: ansible-ansible.builtin.service_facts Invoked
Oct 11 03:32:13 compute-0 network[58274]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 03:32:13 compute-0 network[58275]: 'network-scripts' will be removed from distribution in near future.
Oct 11 03:32:13 compute-0 network[58276]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 03:32:19 compute-0 sudo[58538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnjymjruvfxoszfbbkbzsfahkzpvjjwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153539.179294-209-123918596860059/AnsiballZ_systemd.py'
Oct 11 03:32:19 compute-0 sudo[58538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:32:19 compute-0 python3.9[58540]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:32:20 compute-0 systemd[1]: Reloading.
Oct 11 03:32:21 compute-0 systemd-rc-local-generator[58569]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:32:21 compute-0 systemd-sysv-generator[58573]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:32:21 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Oct 11 03:32:21 compute-0 iptables.init[58580]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct 11 03:32:21 compute-0 iptables.init[58580]: iptables: Flushing firewall rules: [  OK  ]
Oct 11 03:32:21 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Oct 11 03:32:21 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Oct 11 03:32:21 compute-0 sudo[58538]: pam_unix(sudo:session): session closed for user root
Oct 11 03:32:22 compute-0 sudo[58774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngjlszayeyaiiwcmaeehtfxobnumddag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153541.875245-209-81473651153674/AnsiballZ_systemd.py'
Oct 11 03:32:22 compute-0 sudo[58774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:32:22 compute-0 python3.9[58776]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:32:22 compute-0 sudo[58774]: pam_unix(sudo:session): session closed for user root
Oct 11 03:32:23 compute-0 sudo[58928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcymdweaxusfveunjqtxvxfyhiygnxlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153542.8887427-225-170955219639038/AnsiballZ_systemd.py'
Oct 11 03:32:23 compute-0 sudo[58928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:32:23 compute-0 python3.9[58930]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:32:23 compute-0 systemd[1]: Reloading.
Oct 11 03:32:23 compute-0 systemd-rc-local-generator[58960]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:32:23 compute-0 systemd-sysv-generator[58965]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:32:23 compute-0 systemd[1]: Starting Netfilter Tables...
Oct 11 03:32:23 compute-0 systemd[1]: Finished Netfilter Tables.
Oct 11 03:32:24 compute-0 sudo[58928]: pam_unix(sudo:session): session closed for user root
Oct 11 03:32:24 compute-0 sudo[59119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aduzaccfcatvdijuigexncxambcicple ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153544.2250402-233-70238016597591/AnsiballZ_command.py'
Oct 11 03:32:24 compute-0 sudo[59119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:32:24 compute-0 python3.9[59121]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:32:24 compute-0 sudo[59119]: pam_unix(sudo:session): session closed for user root
Oct 11 03:32:25 compute-0 sudo[59272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxpxnonkgcllhjxclnhzhlcsfkgpleof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153545.3239894-247-178612074162937/AnsiballZ_stat.py'
Oct 11 03:32:25 compute-0 sudo[59272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:32:25 compute-0 python3.9[59274]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:32:25 compute-0 sudo[59272]: pam_unix(sudo:session): session closed for user root
Oct 11 03:32:26 compute-0 sudo[59397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geoxnnzvbdomlnrrpdyabiqzwcxoqdmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153545.3239894-247-178612074162937/AnsiballZ_copy.py'
Oct 11 03:32:26 compute-0 sudo[59397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:32:26 compute-0 python3.9[59399]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760153545.3239894-247-178612074162937/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:32:26 compute-0 sudo[59397]: pam_unix(sudo:session): session closed for user root
Oct 11 03:32:27 compute-0 python3.9[59550]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 03:32:27 compute-0 polkitd[6259]: Registered Authentication Agent for unix-process:59552:193254 (system bus name :1.522 [/usr/bin/pkttyagent --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 11 03:32:52 compute-0 polkit-agent-helper-1[59564]: pam_unix(polkit-1:auth): conversation failed
Oct 11 03:32:52 compute-0 polkit-agent-helper-1[59564]: pam_unix(polkit-1:auth): auth could not identify password for [root]
Oct 11 03:32:52 compute-0 polkitd[6259]: Unregistered Authentication Agent for unix-process:59552:193254 (system bus name :1.522, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 11 03:32:52 compute-0 polkitd[6259]: Operator of unix-process:59552:193254 FAILED to authenticate to gain authorization for action org.freedesktop.systemd1.manage-units for system-bus-name::1.521 [<unknown>] (owned by unix-user:zuul)
Oct 11 03:32:52 compute-0 sshd-session[54820]: Connection closed by 192.168.122.30 port 60268
Oct 11 03:32:52 compute-0 sshd-session[54817]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:32:52 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Oct 11 03:32:52 compute-0 systemd[1]: session-13.scope: Consumed 24.205s CPU time.
Oct 11 03:32:52 compute-0 systemd-logind[820]: Session 13 logged out. Waiting for processes to exit.
Oct 11 03:32:52 compute-0 systemd-logind[820]: Removed session 13.
Oct 11 03:33:04 compute-0 sshd-session[59590]: Accepted publickey for zuul from 192.168.122.30 port 40708 ssh2: ECDSA SHA256:qo9+RMabHfLAOt2q/80W97JXaZUdeUCREBuTRaqgxBY
Oct 11 03:33:04 compute-0 systemd-logind[820]: New session 14 of user zuul.
Oct 11 03:33:04 compute-0 systemd[1]: Started Session 14 of User zuul.
Oct 11 03:33:04 compute-0 sshd-session[59590]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:33:05 compute-0 python3.9[59743]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:33:06 compute-0 sudo[59897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnzjvwitrbaxnenkmiqmggiiskxybvug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153586.3931181-33-207141408900634/AnsiballZ_file.py'
Oct 11 03:33:06 compute-0 sudo[59897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:07 compute-0 python3.9[59899]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:33:07 compute-0 sudo[59897]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:07 compute-0 sudo[60072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqjuvystaquxzhqddkxmlultodqgqgxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153587.3381546-41-140742377096832/AnsiballZ_stat.py'
Oct 11 03:33:07 compute-0 sudo[60072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:08 compute-0 python3.9[60074]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:33:08 compute-0 sudo[60072]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:08 compute-0 sudo[60150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubbnhkbpksybmpxkwiwcqcwblqrstwkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153587.3381546-41-140742377096832/AnsiballZ_file.py'
Oct 11 03:33:08 compute-0 sudo[60150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:08 compute-0 python3.9[60152]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.7g_m8i5h recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:33:08 compute-0 sudo[60150]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:09 compute-0 sudo[60302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygopladgcscketnlrniimvjtvmgjgjkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153589.0512092-61-193017635396434/AnsiballZ_stat.py'
Oct 11 03:33:09 compute-0 sudo[60302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:09 compute-0 python3.9[60304]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:33:09 compute-0 sudo[60302]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:09 compute-0 sudo[60380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqsowfpysjsfgabkmhgmzlvbtujwgcgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153589.0512092-61-193017635396434/AnsiballZ_file.py'
Oct 11 03:33:09 compute-0 sudo[60380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:10 compute-0 python3.9[60382]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.ujtc7a0u recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:33:10 compute-0 sudo[60380]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:10 compute-0 sudo[60532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okuxwoijiixtokkpcrgplsgicfihgcex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153590.3821187-74-219757677057598/AnsiballZ_file.py'
Oct 11 03:33:10 compute-0 sudo[60532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:10 compute-0 python3.9[60534]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:33:10 compute-0 sudo[60532]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:11 compute-0 sudo[60684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzkzygqpzasxjstnzlzutbcwgpxzruuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153591.124702-82-173600679964716/AnsiballZ_stat.py'
Oct 11 03:33:11 compute-0 sudo[60684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:11 compute-0 python3.9[60686]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:33:11 compute-0 sudo[60684]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:11 compute-0 sudo[60762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-undlcyguybiadiggwdgrqokwcqhrcdoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153591.124702-82-173600679964716/AnsiballZ_file.py'
Oct 11 03:33:11 compute-0 sudo[60762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:12 compute-0 python3.9[60764]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:33:12 compute-0 sudo[60762]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:12 compute-0 sudo[60914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wutuyphqvgaueyfglqbbxvpzllafwtgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153592.3565881-82-178112488374663/AnsiballZ_stat.py'
Oct 11 03:33:12 compute-0 sudo[60914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:12 compute-0 python3.9[60916]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:33:12 compute-0 sudo[60914]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:13 compute-0 sudo[60992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjhpgugjqruhgspmgtducktdkwuzygck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153592.3565881-82-178112488374663/AnsiballZ_file.py'
Oct 11 03:33:13 compute-0 sudo[60992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:13 compute-0 python3.9[60994]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:33:13 compute-0 sudo[60992]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:14 compute-0 sudo[61144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucovnezkdejyfxhzlockbmrthlcdpvis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153593.6714282-105-212453455122255/AnsiballZ_file.py'
Oct 11 03:33:14 compute-0 sudo[61144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:14 compute-0 python3.9[61146]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:33:14 compute-0 sudo[61144]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:14 compute-0 sudo[61296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cefmlgumngxcztndoslxjulkagtngnqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153594.4924786-113-72702346479322/AnsiballZ_stat.py'
Oct 11 03:33:14 compute-0 sudo[61296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:15 compute-0 python3.9[61298]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:33:15 compute-0 sudo[61296]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:15 compute-0 sudo[61374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcwmndjyfinnynvuarypucfifrwgmwjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153594.4924786-113-72702346479322/AnsiballZ_file.py'
Oct 11 03:33:15 compute-0 sudo[61374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:15 compute-0 python3.9[61376]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:33:15 compute-0 sudo[61374]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:16 compute-0 sudo[61526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xftdprsndbwufkelbfgnydzgmunpulli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153595.8137422-125-102231916220732/AnsiballZ_stat.py'
Oct 11 03:33:16 compute-0 sudo[61526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:16 compute-0 python3.9[61528]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:33:16 compute-0 sudo[61526]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:16 compute-0 sudo[61604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmdlyacjhvapgfzpgynibszxredwfufd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153595.8137422-125-102231916220732/AnsiballZ_file.py'
Oct 11 03:33:16 compute-0 sudo[61604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:16 compute-0 python3.9[61606]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:33:16 compute-0 sudo[61604]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:17 compute-0 sudo[61756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smozcqtysrifglvidtphvlbfzjciqjln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153597.109762-137-219305039269502/AnsiballZ_systemd.py'
Oct 11 03:33:17 compute-0 sudo[61756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:18 compute-0 python3.9[61758]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:33:18 compute-0 systemd[1]: Reloading.
Oct 11 03:33:18 compute-0 systemd-rc-local-generator[61785]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:33:18 compute-0 systemd-sysv-generator[61788]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:33:18 compute-0 sudo[61756]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:18 compute-0 sudo[61944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejerkbiphoybpqqflpbxwmhhlxxajguu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153598.5113528-145-174573599887588/AnsiballZ_stat.py'
Oct 11 03:33:18 compute-0 sudo[61944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:19 compute-0 python3.9[61946]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:33:19 compute-0 sudo[61944]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:19 compute-0 sudo[62022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pscynqvnvythcarwdosjbozinjotepcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153598.5113528-145-174573599887588/AnsiballZ_file.py'
Oct 11 03:33:19 compute-0 sudo[62022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:19 compute-0 python3.9[62024]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:33:19 compute-0 sudo[62022]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:20 compute-0 sudo[62174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipewyxlevhtppiexbjpwjzqwmknguetr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153599.7192588-157-29227337112916/AnsiballZ_stat.py'
Oct 11 03:33:20 compute-0 sudo[62174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:20 compute-0 python3.9[62176]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:33:20 compute-0 sudo[62174]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:20 compute-0 sudo[62252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yinzaxogzgwfszhabftmpdovjptvydzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153599.7192588-157-29227337112916/AnsiballZ_file.py'
Oct 11 03:33:20 compute-0 sudo[62252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:20 compute-0 python3.9[62254]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:33:20 compute-0 sudo[62252]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:21 compute-0 sudo[62404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkuhzatarrnqbvazaevdqjazpfqjmrug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153600.955096-169-90135332786101/AnsiballZ_systemd.py'
Oct 11 03:33:21 compute-0 sudo[62404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:21 compute-0 python3.9[62406]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:33:21 compute-0 systemd[1]: Reloading.
Oct 11 03:33:21 compute-0 systemd-sysv-generator[62438]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:33:21 compute-0 systemd-rc-local-generator[62434]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:33:21 compute-0 systemd[1]: Starting Create netns directory...
Oct 11 03:33:21 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 11 03:33:21 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 11 03:33:21 compute-0 systemd[1]: Finished Create netns directory.
Oct 11 03:33:22 compute-0 sudo[62404]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:22 compute-0 python3.9[62601]: ansible-ansible.builtin.service_facts Invoked
Oct 11 03:33:22 compute-0 network[62618]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 03:33:22 compute-0 network[62619]: 'network-scripts' will be removed from distribution in near future.
Oct 11 03:33:22 compute-0 network[62620]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 03:33:27 compute-0 sudo[62881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjsswvjgtdhdytxocqwkiravmjezncmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153606.9225605-195-253191999601329/AnsiballZ_stat.py'
Oct 11 03:33:27 compute-0 sudo[62881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:27 compute-0 python3.9[62883]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:33:27 compute-0 sudo[62881]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:27 compute-0 sudo[62959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgbybvmvzjzxrcloeqfdqxdmylpcoqmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153606.9225605-195-253191999601329/AnsiballZ_file.py'
Oct 11 03:33:27 compute-0 sudo[62959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:28 compute-0 python3.9[62961]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:33:28 compute-0 sudo[62959]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:28 compute-0 sudo[63111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zatsseaekokcwalffcqfucqcdyzxpldp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153608.1990037-208-251643889769089/AnsiballZ_file.py'
Oct 11 03:33:28 compute-0 sudo[63111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:28 compute-0 python3.9[63113]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:33:28 compute-0 sudo[63111]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:29 compute-0 sudo[63263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcecnjqvdkpeprjsiijheivegvgwcymj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153608.806172-216-82873545564960/AnsiballZ_stat.py'
Oct 11 03:33:29 compute-0 sudo[63263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:29 compute-0 python3.9[63265]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:33:29 compute-0 sudo[63263]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:29 compute-0 sudo[63386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ididslokntbmhtmyzxroszamybxdzjey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153608.806172-216-82873545564960/AnsiballZ_copy.py'
Oct 11 03:33:29 compute-0 sudo[63386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:30 compute-0 python3.9[63388]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153608.806172-216-82873545564960/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:33:30 compute-0 sudo[63386]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:30 compute-0 sudo[63538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcvnuhavqmktceelrnbdzyitbxsbedoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153610.435044-234-198814153962928/AnsiballZ_timezone.py'
Oct 11 03:33:30 compute-0 sudo[63538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:31 compute-0 python3.9[63540]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 11 03:33:31 compute-0 systemd[1]: Starting Time & Date Service...
Oct 11 03:33:31 compute-0 systemd[1]: Started Time & Date Service.
Oct 11 03:33:31 compute-0 sudo[63538]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:31 compute-0 sudo[63694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzcwmplexjbzqtvdtubnbrpsjvpxtgdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153611.5560908-243-234063256086568/AnsiballZ_file.py'
Oct 11 03:33:31 compute-0 sudo[63694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:32 compute-0 python3.9[63696]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:33:32 compute-0 sudo[63694]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:32 compute-0 sudo[63846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuwlnnoyitodwkfzipcqvnyzvlkmurqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153612.329757-251-91469534601222/AnsiballZ_stat.py'
Oct 11 03:33:32 compute-0 sudo[63846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:32 compute-0 python3.9[63848]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:33:32 compute-0 sudo[63846]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:33 compute-0 sudo[63969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaolgiuznynxiihylswqtvnrpdgznorh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153612.329757-251-91469534601222/AnsiballZ_copy.py'
Oct 11 03:33:33 compute-0 sudo[63969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:33 compute-0 python3.9[63971]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760153612.329757-251-91469534601222/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:33:33 compute-0 sudo[63969]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:34 compute-0 sudo[64121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkydupptrlusmddcdymyjqmdwfowubnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153613.7719202-266-102210156107359/AnsiballZ_stat.py'
Oct 11 03:33:34 compute-0 sudo[64121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:34 compute-0 python3.9[64123]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:33:34 compute-0 sudo[64121]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:34 compute-0 sudo[64244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srlxxvqbthvyhjvpayvinmqxilfdamwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153613.7719202-266-102210156107359/AnsiballZ_copy.py'
Oct 11 03:33:34 compute-0 sudo[64244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:34 compute-0 python3.9[64246]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760153613.7719202-266-102210156107359/.source.yaml _original_basename=.pf7tgx17 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:33:34 compute-0 sudo[64244]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:35 compute-0 sudo[64396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcxlyszefcogwxrhwzwgdcexfrgtaekk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153615.1949458-281-253423503648514/AnsiballZ_stat.py'
Oct 11 03:33:35 compute-0 sudo[64396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:35 compute-0 python3.9[64398]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:33:35 compute-0 sudo[64396]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:36 compute-0 sudo[64519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcirgvinmdeobesiaaosjutdameearmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153615.1949458-281-253423503648514/AnsiballZ_copy.py'
Oct 11 03:33:36 compute-0 sudo[64519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:36 compute-0 python3.9[64521]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153615.1949458-281-253423503648514/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:33:36 compute-0 sudo[64519]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:37 compute-0 sudo[64671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxnkucmqyrtsrvikhkoqimrmsanakxxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153616.6520596-296-273355681285778/AnsiballZ_command.py'
Oct 11 03:33:37 compute-0 sudo[64671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:37 compute-0 python3.9[64673]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:33:37 compute-0 sudo[64671]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:37 compute-0 sudo[64824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twgxhtdpfuwuibcdscezyltimglncict ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153617.566193-304-279040453167036/AnsiballZ_command.py'
Oct 11 03:33:37 compute-0 sudo[64824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:38 compute-0 python3.9[64826]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:33:38 compute-0 sudo[64824]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:38 compute-0 sudo[64977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smuyfzwoiqfeadrarymsqthasirkfjly ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760153618.3886592-312-201727598665049/AnsiballZ_edpm_nftables_from_files.py'
Oct 11 03:33:38 compute-0 sudo[64977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:39 compute-0 python3[64979]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 11 03:33:39 compute-0 sudo[64977]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:39 compute-0 sudo[65129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dshysyajcskcoauxcfrhrowustnedvkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153619.3141766-320-51864167742458/AnsiballZ_stat.py'
Oct 11 03:33:39 compute-0 sudo[65129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:39 compute-0 python3.9[65131]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:33:39 compute-0 sudo[65129]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:40 compute-0 sudo[65252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyipscisdgppfbjxgafbjcxxnzdlwkyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153619.3141766-320-51864167742458/AnsiballZ_copy.py'
Oct 11 03:33:40 compute-0 sudo[65252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:40 compute-0 python3.9[65254]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153619.3141766-320-51864167742458/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:33:40 compute-0 sudo[65252]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:41 compute-0 sudo[65404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbmflpusuvbanpgnicvomnokpliejonc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153620.7001047-335-227597879684960/AnsiballZ_stat.py'
Oct 11 03:33:41 compute-0 sudo[65404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:41 compute-0 python3.9[65406]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:33:41 compute-0 sudo[65404]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:41 compute-0 sudo[65527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eharjvofjvnhpbyrxxfclvdpspznlkil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153620.7001047-335-227597879684960/AnsiballZ_copy.py'
Oct 11 03:33:41 compute-0 sudo[65527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:41 compute-0 python3.9[65529]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153620.7001047-335-227597879684960/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:33:41 compute-0 sudo[65527]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:42 compute-0 sudo[65679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueiagltmqnbtzmtumpvhfevzdyzqvgbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153622.0600593-350-194207375705710/AnsiballZ_stat.py'
Oct 11 03:33:42 compute-0 sudo[65679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:42 compute-0 python3.9[65681]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:33:42 compute-0 sudo[65679]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:43 compute-0 sudo[65802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjsppgfdczyjpazsdwhcudzacvlonlpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153622.0600593-350-194207375705710/AnsiballZ_copy.py'
Oct 11 03:33:43 compute-0 sudo[65802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:43 compute-0 python3.9[65804]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153622.0600593-350-194207375705710/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:33:43 compute-0 sudo[65802]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:43 compute-0 sudo[65954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxmymlqrlqaikpmohsqaojaphrmtewgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153623.497261-365-165318192326203/AnsiballZ_stat.py'
Oct 11 03:33:43 compute-0 sudo[65954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:44 compute-0 python3.9[65956]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:33:44 compute-0 sudo[65954]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:44 compute-0 sudo[66077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipzvaaeuoorbczizupkzvcorxfzgubce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153623.497261-365-165318192326203/AnsiballZ_copy.py'
Oct 11 03:33:44 compute-0 sudo[66077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:44 compute-0 python3.9[66079]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153623.497261-365-165318192326203/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:33:44 compute-0 sudo[66077]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:45 compute-0 sudo[66229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nujkhhzvyevivjzzfheuxiumqtkefdvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153624.9887896-380-210988687615340/AnsiballZ_stat.py'
Oct 11 03:33:45 compute-0 sudo[66229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:45 compute-0 python3.9[66231]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:33:45 compute-0 sudo[66229]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:46 compute-0 sudo[66352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xubpezdjsddtcgjdeuwekeokyuhsbydz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153624.9887896-380-210988687615340/AnsiballZ_copy.py'
Oct 11 03:33:46 compute-0 sudo[66352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:46 compute-0 python3.9[66354]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153624.9887896-380-210988687615340/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:33:46 compute-0 sudo[66352]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:46 compute-0 sudo[66504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxvlckkvjrczacftyedmcnwrbwjfurie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153626.4482646-395-182108357433252/AnsiballZ_file.py'
Oct 11 03:33:46 compute-0 sudo[66504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:46 compute-0 python3.9[66506]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:33:47 compute-0 sudo[66504]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:47 compute-0 sudo[66656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gffhlxeledufcnvcucxthqdrmoststxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153627.1835723-403-87290838698534/AnsiballZ_command.py'
Oct 11 03:33:47 compute-0 sudo[66656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:47 compute-0 python3.9[66658]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:33:47 compute-0 sudo[66656]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:48 compute-0 sudo[66815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eooqpjpjmoxhmwulbstikumbwbowqjda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153627.962851-411-1103562371898/AnsiballZ_blockinfile.py'
Oct 11 03:33:48 compute-0 sudo[66815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:48 compute-0 python3.9[66817]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:33:48 compute-0 chronyd[54336]: Selected source 138.197.135.239 (pool.ntp.org)
Oct 11 03:33:48 compute-0 sudo[66815]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:49 compute-0 sudo[66969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xotwcowutujfwxvmrvsdhfpckkekklok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153628.9781408-420-41378459644283/AnsiballZ_file.py'
Oct 11 03:33:49 compute-0 sudo[66969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:49 compute-0 python3.9[66971]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:33:49 compute-0 sudo[66969]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:50 compute-0 sudo[67121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjhtutezowgzmujtrtlicfjyyuvlqmog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153629.7057126-420-65151915317491/AnsiballZ_file.py'
Oct 11 03:33:50 compute-0 sudo[67121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:50 compute-0 python3.9[67123]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:33:50 compute-0 sudo[67121]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:51 compute-0 sudo[67273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzpiazxpfyioszmdhrnolecgzksjhgik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153630.5456297-435-206060158393232/AnsiballZ_mount.py'
Oct 11 03:33:51 compute-0 sudo[67273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:51 compute-0 python3.9[67275]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 11 03:33:51 compute-0 sudo[67273]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:51 compute-0 sudo[67426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irybesbvisslswkqtqsahtlhbynsdjer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153631.4144843-435-31364945813044/AnsiballZ_mount.py'
Oct 11 03:33:51 compute-0 sudo[67426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:51 compute-0 python3.9[67428]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 11 03:33:52 compute-0 sudo[67426]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:52 compute-0 sshd-session[59593]: Connection closed by 192.168.122.30 port 40708
Oct 11 03:33:52 compute-0 sshd-session[59590]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:33:52 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Oct 11 03:33:52 compute-0 systemd[1]: session-14.scope: Consumed 38.681s CPU time.
Oct 11 03:33:52 compute-0 systemd-logind[820]: Session 14 logged out. Waiting for processes to exit.
Oct 11 03:33:52 compute-0 systemd-logind[820]: Removed session 14.
Oct 11 03:33:53 compute-0 sshd-session[67454]: Received disconnect from 193.46.255.7 port 49574:11:  [preauth]
Oct 11 03:33:53 compute-0 sshd-session[67454]: Disconnected from authenticating user root 193.46.255.7 port 49574 [preauth]
Oct 11 03:33:57 compute-0 sshd-session[67456]: Accepted publickey for zuul from 192.168.122.30 port 45958 ssh2: ECDSA SHA256:qo9+RMabHfLAOt2q/80W97JXaZUdeUCREBuTRaqgxBY
Oct 11 03:33:57 compute-0 systemd-logind[820]: New session 15 of user zuul.
Oct 11 03:33:57 compute-0 systemd[1]: Started Session 15 of User zuul.
Oct 11 03:33:57 compute-0 sshd-session[67456]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:33:58 compute-0 sudo[67609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bagumcelyvdwilnscrjaytnadseyecht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153637.6005929-16-15532407209036/AnsiballZ_tempfile.py'
Oct 11 03:33:58 compute-0 sudo[67609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:58 compute-0 python3.9[67611]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 11 03:33:58 compute-0 sudo[67609]: pam_unix(sudo:session): session closed for user root
Oct 11 03:33:59 compute-0 sudo[67761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgcbzcixrodmwzlbdiwtznaozuslvlrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153638.5429125-28-191822913879465/AnsiballZ_stat.py'
Oct 11 03:33:59 compute-0 sudo[67761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:33:59 compute-0 python3.9[67763]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:33:59 compute-0 sudo[67761]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:00 compute-0 sudo[67913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edaqwisrpmwowchdqkkrvkhjbrauezyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153639.4857123-38-175359295974935/AnsiballZ_setup.py'
Oct 11 03:34:00 compute-0 sudo[67913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:00 compute-0 python3.9[67915]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:34:00 compute-0 sudo[67913]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:01 compute-0 sudo[68065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggaopqxplmauswsmlmsddcuemqpggizt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153640.68484-47-98630980582980/AnsiballZ_blockinfile.py'
Oct 11 03:34:01 compute-0 sudo[68065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:01 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 11 03:34:01 compute-0 python3.9[68067]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCmVtx50w2Ce2BfePsAxe42wtfQuybkCFnQ+I2wKBdvA+hHDGHKq+DK0r0MLsjknW+B6oLz7z83ONuCSI5fnEYMb6H8z3rFIW9mdAsCheBoEcRPQSdsEr1zoV+Lv7A+HyKWCln0chhVjM/32sWu15LGXmQorZF/GWzY1NOxhihAQtcIqeMT/3Ua2PANdYB0fdrnFkb+3YzO84UBMzDk8jdHKd/7U3YMrD+kPoytRTEVSpo5OvNuBM1OtTrDNBt/j+ftF4YOc18YwJqu7X9wBLwb9xO071ScxcKpyHsBBrC0Mv75H6BF6LQH5rL1Un6T/ewz/3gkpzNbm+04c9OFAH44gTl6zh4XfklWhAbff0bb1vm3n/8G/NcKRmHB0qeM8UEmmrHKyTtqF41fpNChphqfswUDGB+9FLfONvHYzJeldie9EXYZFpbG3Ov0TnUSyQk9YAWPQzbqKMg7Cz2zKcExApU/ZMwSQ4tTzzNkHOxmfgkaDEyh1ByhBS2ocb/FoZc=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpKnRjzn6GUq2BdxYSnAaefVvunenomnLuP3H43+vw4
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPYbrjRTf9G+akEKWCGs7xCkq0HSionPcF1rxn4XZxvd/UFlbPUo5VosqUj/1lwDnQIVl+rXU6w4H/eH4SjxsN0=
                                             create=True mode=0644 path=/tmp/ansible.n134jpj8 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:34:01 compute-0 sudo[68065]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:02 compute-0 sudo[68219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cewghxqdpnndjtreqcupizpzgnihctmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153641.594667-55-49955164153082/AnsiballZ_command.py'
Oct 11 03:34:02 compute-0 sudo[68219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:02 compute-0 python3.9[68221]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.n134jpj8' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:34:02 compute-0 sudo[68219]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:03 compute-0 sudo[68373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfmubydupxmoxlqywrrfkjufgtterrun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153642.5732982-63-280454354091697/AnsiballZ_file.py'
Oct 11 03:34:03 compute-0 sudo[68373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:03 compute-0 python3.9[68375]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.n134jpj8 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:34:03 compute-0 sudo[68373]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:03 compute-0 sshd-session[67459]: Connection closed by 192.168.122.30 port 45958
Oct 11 03:34:03 compute-0 sshd-session[67456]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:34:03 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Oct 11 03:34:03 compute-0 systemd[1]: session-15.scope: Consumed 4.260s CPU time.
Oct 11 03:34:03 compute-0 systemd-logind[820]: Session 15 logged out. Waiting for processes to exit.
Oct 11 03:34:03 compute-0 systemd-logind[820]: Removed session 15.
Oct 11 03:34:08 compute-0 sshd-session[68400]: Accepted publickey for zuul from 192.168.122.30 port 40964 ssh2: ECDSA SHA256:qo9+RMabHfLAOt2q/80W97JXaZUdeUCREBuTRaqgxBY
Oct 11 03:34:08 compute-0 systemd-logind[820]: New session 16 of user zuul.
Oct 11 03:34:08 compute-0 systemd[1]: Started Session 16 of User zuul.
Oct 11 03:34:08 compute-0 sshd-session[68400]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:34:09 compute-0 python3.9[68553]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:34:10 compute-0 sudo[68707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzmuocasatpqygqsjhzibfcnbmtjiobd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153649.96717-32-48726315119985/AnsiballZ_systemd.py'
Oct 11 03:34:10 compute-0 sudo[68707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:10 compute-0 python3.9[68709]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 11 03:34:10 compute-0 sudo[68707]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:11 compute-0 sudo[68861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfsywjqvsnudogjihzxmxtewtixecnan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153651.073266-40-37774768558866/AnsiballZ_systemd.py'
Oct 11 03:34:11 compute-0 sudo[68861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:11 compute-0 python3.9[68863]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 03:34:11 compute-0 sudo[68861]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:12 compute-0 sudo[69014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dudtyiflegyugaeaqhltlgqwxfltnnzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153651.8243706-49-206262173529494/AnsiballZ_command.py'
Oct 11 03:34:12 compute-0 sudo[69014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:12 compute-0 python3.9[69016]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:34:12 compute-0 sudo[69014]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:13 compute-0 sudo[69167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiorxdckuslttlfytsbaikhqsurrcavw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153652.6806195-57-179726931509338/AnsiballZ_stat.py'
Oct 11 03:34:13 compute-0 sudo[69167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:13 compute-0 python3.9[69169]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:34:13 compute-0 sudo[69167]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:13 compute-0 sudo[69321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dghiqewvxekpppnjraellwfjsgaihzwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153653.6280882-65-9212446243085/AnsiballZ_command.py'
Oct 11 03:34:13 compute-0 sudo[69321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:14 compute-0 python3.9[69323]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:34:14 compute-0 sudo[69321]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:14 compute-0 sudo[69476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmzctiinoeanjlniplkldzbaaxtximrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153654.4252548-73-53796248618080/AnsiballZ_file.py'
Oct 11 03:34:14 compute-0 sudo[69476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:15 compute-0 python3.9[69478]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:34:15 compute-0 sudo[69476]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:15 compute-0 sshd-session[68403]: Connection closed by 192.168.122.30 port 40964
Oct 11 03:34:15 compute-0 sshd-session[68400]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:34:15 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Oct 11 03:34:15 compute-0 systemd[1]: session-16.scope: Consumed 5.275s CPU time.
Oct 11 03:34:15 compute-0 systemd-logind[820]: Session 16 logged out. Waiting for processes to exit.
Oct 11 03:34:15 compute-0 systemd-logind[820]: Removed session 16.
Oct 11 03:34:20 compute-0 sshd-session[69503]: Accepted publickey for zuul from 192.168.122.30 port 34258 ssh2: ECDSA SHA256:qo9+RMabHfLAOt2q/80W97JXaZUdeUCREBuTRaqgxBY
Oct 11 03:34:20 compute-0 systemd-logind[820]: New session 17 of user zuul.
Oct 11 03:34:20 compute-0 systemd[1]: Started Session 17 of User zuul.
Oct 11 03:34:20 compute-0 sshd-session[69503]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:34:21 compute-0 python3.9[69656]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:34:22 compute-0 sudo[69810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebxgnvynxlojmoayqnxvisjgmrguuprk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153662.0995517-34-247555793927303/AnsiballZ_setup.py'
Oct 11 03:34:22 compute-0 sudo[69810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:22 compute-0 python3.9[69812]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 03:34:23 compute-0 sudo[69810]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:23 compute-0 sudo[69894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwiagnsebosuxkvfkynsbmiwhfjmopvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153662.0995517-34-247555793927303/AnsiballZ_dnf.py'
Oct 11 03:34:23 compute-0 sudo[69894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:23 compute-0 python3.9[69896]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 11 03:34:24 compute-0 sudo[69894]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:25 compute-0 python3.9[70047]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:34:27 compute-0 python3.9[70198]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 11 03:34:28 compute-0 python3.9[70348]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:34:28 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 03:34:28 compute-0 python3.9[70499]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:34:29 compute-0 sshd-session[69506]: Connection closed by 192.168.122.30 port 34258
Oct 11 03:34:29 compute-0 sshd-session[69503]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:34:29 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Oct 11 03:34:29 compute-0 systemd[1]: session-17.scope: Consumed 6.614s CPU time.
Oct 11 03:34:29 compute-0 systemd-logind[820]: Session 17 logged out. Waiting for processes to exit.
Oct 11 03:34:29 compute-0 systemd-logind[820]: Removed session 17.
Oct 11 03:34:37 compute-0 sshd-session[70524]: Accepted publickey for zuul from 38.102.83.159 port 48286 ssh2: RSA SHA256:kxWsFSq8COsYLodRw7mhPmCkhu5z7pyatmccmmT74Lc
Oct 11 03:34:37 compute-0 systemd-logind[820]: New session 18 of user zuul.
Oct 11 03:34:37 compute-0 systemd[1]: Started Session 18 of User zuul.
Oct 11 03:34:37 compute-0 sshd-session[70524]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:34:37 compute-0 sudo[70600]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mewrqjnwsvvqburxipoounffxwnxhokn ; /usr/bin/python3'
Oct 11 03:34:37 compute-0 sudo[70600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:38 compute-0 useradd[70604]: new group: name=ceph-admin, GID=42478
Oct 11 03:34:38 compute-0 useradd[70604]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Oct 11 03:34:38 compute-0 sudo[70600]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:38 compute-0 sudo[70686]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dexzlxypkejciiiilmuqnuujclcteiyp ; /usr/bin/python3'
Oct 11 03:34:38 compute-0 sudo[70686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:38 compute-0 sudo[70686]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:39 compute-0 sudo[70759]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fygbsfulmzlispqdvehzvokyegbaccsw ; /usr/bin/python3'
Oct 11 03:34:39 compute-0 sudo[70759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:39 compute-0 sudo[70759]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:39 compute-0 sudo[70809]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvresracsmnrvhsrjezomrotihjfbtln ; /usr/bin/python3'
Oct 11 03:34:39 compute-0 sudo[70809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:39 compute-0 sudo[70809]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:40 compute-0 sudo[70835]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgoirovoyutqoslptuzwrhjncrrdnfku ; /usr/bin/python3'
Oct 11 03:34:40 compute-0 sudo[70835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:40 compute-0 sudo[70835]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:40 compute-0 sudo[70861]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxdeleozwywdtwlysyuehsphbrwqdjbo ; /usr/bin/python3'
Oct 11 03:34:40 compute-0 sudo[70861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:40 compute-0 sudo[70861]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:41 compute-0 sudo[70887]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arkytdwmkvtcdwmmafuxppesfyltqfcn ; /usr/bin/python3'
Oct 11 03:34:41 compute-0 sudo[70887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:41 compute-0 sudo[70887]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:41 compute-0 sudo[70965]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnpjapyphmtqdwwwrtvjkgjrqsckmert ; /usr/bin/python3'
Oct 11 03:34:41 compute-0 sudo[70965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:41 compute-0 sudo[70965]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:41 compute-0 sudo[71038]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpnsrhnaxrhbhmtrcnjcmssctclmnoob ; /usr/bin/python3'
Oct 11 03:34:41 compute-0 sudo[71038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:42 compute-0 sudo[71038]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:42 compute-0 sudo[71140]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbwpupehwmdvbkotykhninfwwfxsmijc ; /usr/bin/python3'
Oct 11 03:34:42 compute-0 sudo[71140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:42 compute-0 sudo[71140]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:43 compute-0 sudo[71213]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdncwupcagkmadkyosdckbymaygzfssf ; /usr/bin/python3'
Oct 11 03:34:43 compute-0 sudo[71213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:43 compute-0 sudo[71213]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:43 compute-0 sudo[71263]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmfddeqsktlrholgqhffovufmbgvejod ; /usr/bin/python3'
Oct 11 03:34:43 compute-0 sudo[71263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:43 compute-0 python3[71265]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:34:45 compute-0 sudo[71263]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:45 compute-0 sudo[71358]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlejkpwozfsoienxnvzedcybtkxscvka ; /usr/bin/python3'
Oct 11 03:34:45 compute-0 sudo[71358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:45 compute-0 python3[71360]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 11 03:34:47 compute-0 sudo[71358]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:47 compute-0 sudo[71385]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaoanrfgehreitibqlysowjlitstloca ; /usr/bin/python3'
Oct 11 03:34:47 compute-0 sudo[71385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:47 compute-0 python3[71387]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:34:47 compute-0 sudo[71385]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:47 compute-0 sudo[71411]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyrarhtdrqgzxghydruhcvmvybzeoqet ; /usr/bin/python3'
Oct 11 03:34:47 compute-0 sudo[71411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:47 compute-0 python3[71413]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:34:47 compute-0 kernel: loop: module loaded
Oct 11 03:34:47 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Oct 11 03:34:47 compute-0 sudo[71411]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:48 compute-0 sudo[71446]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eupgbbqkcaeclusuhmkrwtwsvfsgtrhe ; /usr/bin/python3'
Oct 11 03:34:48 compute-0 sudo[71446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:48 compute-0 python3[71448]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:34:48 compute-0 lvm[71451]: PV /dev/loop3 not used.
Oct 11 03:34:48 compute-0 lvm[71460]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 11 03:34:48 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct 11 03:34:48 compute-0 lvm[71462]:   1 logical volume(s) in volume group "ceph_vg0" now active
Oct 11 03:34:48 compute-0 sudo[71446]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:48 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Oct 11 03:34:48 compute-0 sudo[71539]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlmbnupqprnliprotwiomezgdgzrrcso ; /usr/bin/python3'
Oct 11 03:34:48 compute-0 sudo[71539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:49 compute-0 python3[71541]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:34:49 compute-0 sudo[71539]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:49 compute-0 sudo[71612]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyclccuiaryrbbvpumnaoohcifmqnqny ; /usr/bin/python3'
Oct 11 03:34:49 compute-0 sudo[71612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:49 compute-0 python3[71614]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760153688.7741826-32733-80565165149013/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:34:49 compute-0 sudo[71612]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:50 compute-0 sudo[71662]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exvsuparocytwzunsvbtmzbajdupsujv ; /usr/bin/python3'
Oct 11 03:34:50 compute-0 sudo[71662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:50 compute-0 python3[71664]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:34:50 compute-0 systemd[1]: Reloading.
Oct 11 03:34:50 compute-0 systemd-rc-local-generator[71695]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:34:50 compute-0 systemd-sysv-generator[71698]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:34:50 compute-0 systemd[1]: Starting Ceph OSD losetup...
Oct 11 03:34:50 compute-0 bash[71705]: /dev/loop3: [64513]:4555666 (/var/lib/ceph-osd-0.img)
Oct 11 03:34:50 compute-0 systemd[1]: Finished Ceph OSD losetup.
Oct 11 03:34:50 compute-0 lvm[71706]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 11 03:34:50 compute-0 lvm[71706]: VG ceph_vg0 finished
Oct 11 03:34:50 compute-0 sudo[71662]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:51 compute-0 sudo[71730]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zarbvqsonxrbsowrmnbthmpxdjfakjhs ; /usr/bin/python3'
Oct 11 03:34:51 compute-0 sudo[71730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:51 compute-0 python3[71732]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 11 03:34:52 compute-0 sudo[71730]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:52 compute-0 sudo[71757]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgrntgffwlculjeraltakjdtnwkqjtdc ; /usr/bin/python3'
Oct 11 03:34:52 compute-0 sudo[71757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:52 compute-0 python3[71759]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:34:52 compute-0 sudo[71757]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:53 compute-0 sudo[71783]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktjouighzvhxrcalioypuewmaccirafm ; /usr/bin/python3'
Oct 11 03:34:53 compute-0 sudo[71783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:53 compute-0 python3[71785]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:34:53 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Oct 11 03:34:53 compute-0 sudo[71783]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:53 compute-0 sudo[71815]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnhyugstfozfyloccrauyuvjvosrlnzr ; /usr/bin/python3'
Oct 11 03:34:53 compute-0 sudo[71815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:53 compute-0 python3[71817]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:34:53 compute-0 lvm[71820]: PV /dev/loop4 not used.
Oct 11 03:34:53 compute-0 lvm[71822]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 11 03:34:53 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Oct 11 03:34:53 compute-0 lvm[71832]:   1 logical volume(s) in volume group "ceph_vg1" now active
Oct 11 03:34:53 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Oct 11 03:34:53 compute-0 sudo[71815]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:54 compute-0 sudo[71908]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twuqetcgtpxvlqgxmireuotoonnpkygr ; /usr/bin/python3'
Oct 11 03:34:54 compute-0 sudo[71908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:54 compute-0 python3[71910]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:34:54 compute-0 sudo[71908]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:54 compute-0 sudo[71981]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsfbdftbdmlhejjzmrfcwozujlshvhzz ; /usr/bin/python3'
Oct 11 03:34:54 compute-0 sudo[71981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:54 compute-0 python3[71983]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760153694.0893087-32760-170143627092704/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:34:54 compute-0 sudo[71981]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:55 compute-0 sudo[72031]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itumamxeaijjjvbqwroxlbsbbrorpehf ; /usr/bin/python3'
Oct 11 03:34:55 compute-0 sudo[72031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:55 compute-0 python3[72033]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:34:55 compute-0 systemd[1]: Reloading.
Oct 11 03:34:55 compute-0 systemd-rc-local-generator[72062]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:34:55 compute-0 systemd-sysv-generator[72066]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:34:55 compute-0 systemd[1]: Starting Ceph OSD losetup...
Oct 11 03:34:55 compute-0 bash[72073]: /dev/loop4: [64513]:4782317 (/var/lib/ceph-osd-1.img)
Oct 11 03:34:55 compute-0 systemd[1]: Finished Ceph OSD losetup.
Oct 11 03:34:56 compute-0 lvm[72074]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 11 03:34:56 compute-0 lvm[72074]: VG ceph_vg1 finished
Oct 11 03:34:56 compute-0 sudo[72031]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:56 compute-0 sudo[72098]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nteyvmhfflsfntyniqgaenfpibimgprr ; /usr/bin/python3'
Oct 11 03:34:56 compute-0 sudo[72098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:56 compute-0 python3[72100]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 11 03:34:57 compute-0 sudo[72098]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:57 compute-0 sudo[72125]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dchqbgbjixmprlhtlspysgfldqtuajnb ; /usr/bin/python3'
Oct 11 03:34:57 compute-0 sudo[72125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:57 compute-0 python3[72127]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:34:57 compute-0 sudo[72125]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:58 compute-0 sudo[72151]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynscinujamighautazrtpkgryrjwoqsj ; /usr/bin/python3'
Oct 11 03:34:58 compute-0 sudo[72151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:58 compute-0 python3[72153]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:34:58 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Oct 11 03:34:58 compute-0 sudo[72151]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:58 compute-0 sudo[72183]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idynencvdxzfbfqwrbximjmlknweswbf ; /usr/bin/python3'
Oct 11 03:34:58 compute-0 sudo[72183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:58 compute-0 python3[72185]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:34:58 compute-0 lvm[72188]: PV /dev/loop5 not used.
Oct 11 03:34:58 compute-0 lvm[72190]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 11 03:34:58 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Oct 11 03:34:58 compute-0 lvm[72201]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 11 03:34:58 compute-0 lvm[72201]: VG ceph_vg2 finished
Oct 11 03:34:58 compute-0 lvm[72199]:   1 logical volume(s) in volume group "ceph_vg2" now active
Oct 11 03:34:58 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Oct 11 03:34:58 compute-0 sudo[72183]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:59 compute-0 sudo[72277]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uduqbxpsramcssgqtcoegwigmyibjnjo ; /usr/bin/python3'
Oct 11 03:34:59 compute-0 sudo[72277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:34:59 compute-0 python3[72279]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:34:59 compute-0 sudo[72277]: pam_unix(sudo:session): session closed for user root
Oct 11 03:34:59 compute-0 sudo[72350]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfklefjcujrzkzhhhiywbemdlybupjui ; /usr/bin/python3'
Oct 11 03:34:59 compute-0 sudo[72350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:35:00 compute-0 python3[72352]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760153699.1741042-32787-139881013434717/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:35:00 compute-0 sudo[72350]: pam_unix(sudo:session): session closed for user root
Oct 11 03:35:00 compute-0 sudo[72400]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgnmprlhsscapnhtrebxpvqbzrfnzvwi ; /usr/bin/python3'
Oct 11 03:35:00 compute-0 sudo[72400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:35:00 compute-0 python3[72402]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:35:00 compute-0 systemd[1]: Reloading.
Oct 11 03:35:00 compute-0 systemd-sysv-generator[72434]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:35:00 compute-0 systemd-rc-local-generator[72429]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:35:00 compute-0 systemd[1]: Starting Ceph OSD losetup...
Oct 11 03:35:00 compute-0 bash[72442]: /dev/loop5: [64513]:4812232 (/var/lib/ceph-osd-2.img)
Oct 11 03:35:00 compute-0 systemd[1]: Finished Ceph OSD losetup.
Oct 11 03:35:00 compute-0 sudo[72400]: pam_unix(sudo:session): session closed for user root
Oct 11 03:35:00 compute-0 lvm[72443]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 11 03:35:00 compute-0 lvm[72443]: VG ceph_vg2 finished
Oct 11 03:35:03 compute-0 python3[72467]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:35:05 compute-0 sudo[72559]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvkfrndzvbllcibyvhxlmnxdbiisflut ; /usr/bin/python3'
Oct 11 03:35:05 compute-0 sudo[72559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:35:05 compute-0 python3[72561]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 11 03:35:06 compute-0 groupadd[72567]: group added to /etc/group: name=cephadm, GID=992
Oct 11 03:35:06 compute-0 groupadd[72567]: group added to /etc/gshadow: name=cephadm
Oct 11 03:35:06 compute-0 groupadd[72567]: new group: name=cephadm, GID=992
Oct 11 03:35:06 compute-0 useradd[72574]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Oct 11 03:35:06 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 11 03:35:06 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 11 03:35:06 compute-0 sudo[72559]: pam_unix(sudo:session): session closed for user root
Oct 11 03:35:07 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 11 03:35:07 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 11 03:35:07 compute-0 systemd[1]: run-r90762178cdb84d498726a76b71977dc8.service: Deactivated successfully.
Oct 11 03:35:07 compute-0 sudo[72675]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xztcydezdoxgsqolqfcjrctpasghotmv ; /usr/bin/python3'
Oct 11 03:35:07 compute-0 sudo[72675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:35:07 compute-0 python3[72677]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:35:07 compute-0 sudo[72675]: pam_unix(sudo:session): session closed for user root
Oct 11 03:35:07 compute-0 sudo[72703]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmqfdbvgdvieppcbvcqgxcsrqvwbcgkt ; /usr/bin/python3'
Oct 11 03:35:07 compute-0 sudo[72703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:35:07 compute-0 python3[72705]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:35:07 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 03:35:07 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 03:35:08 compute-0 sudo[72703]: pam_unix(sudo:session): session closed for user root
Oct 11 03:35:08 compute-0 sudo[72766]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fghwnljuztvgohlsogqkzepbszhlqjox ; /usr/bin/python3'
Oct 11 03:35:08 compute-0 sudo[72766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:35:08 compute-0 python3[72768]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:35:08 compute-0 sudo[72766]: pam_unix(sudo:session): session closed for user root
Oct 11 03:35:08 compute-0 sudo[72792]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzkpqmfsugolsuxcevkiivsidrycymav ; /usr/bin/python3'
Oct 11 03:35:08 compute-0 sudo[72792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:35:08 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 03:35:08 compute-0 python3[72794]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:35:08 compute-0 sudo[72792]: pam_unix(sudo:session): session closed for user root
Oct 11 03:35:09 compute-0 sudo[72870]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlglutvtvvptstypdmowgbvrgxkwdzwt ; /usr/bin/python3'
Oct 11 03:35:09 compute-0 sudo[72870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:35:09 compute-0 python3[72872]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:35:09 compute-0 sudo[72870]: pam_unix(sudo:session): session closed for user root
Oct 11 03:35:09 compute-0 sudo[72943]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpuopawablthmdxvatkofojtdnarwaww ; /usr/bin/python3'
Oct 11 03:35:09 compute-0 sudo[72943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:35:10 compute-0 python3[72945]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760153709.3899665-32934-233711656612142/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:35:10 compute-0 sudo[72943]: pam_unix(sudo:session): session closed for user root
Oct 11 03:35:10 compute-0 sudo[73045]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msnuepsrrbbwnbxfxgycgnfojkhtdvgv ; /usr/bin/python3'
Oct 11 03:35:10 compute-0 sudo[73045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:35:11 compute-0 python3[73047]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:35:11 compute-0 sudo[73045]: pam_unix(sudo:session): session closed for user root
Oct 11 03:35:11 compute-0 sudo[73118]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkraqidaidingnnsfabzxgsdsqpkvpsb ; /usr/bin/python3'
Oct 11 03:35:11 compute-0 sudo[73118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:35:11 compute-0 python3[73120]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760153710.6443796-32952-103947228490280/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:35:11 compute-0 sudo[73118]: pam_unix(sudo:session): session closed for user root
Oct 11 03:35:11 compute-0 sudo[73168]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hffnisfryqrxbzvwmzezuqjmatkfdgwl ; /usr/bin/python3'
Oct 11 03:35:11 compute-0 sudo[73168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:35:11 compute-0 python3[73170]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:35:12 compute-0 sudo[73168]: pam_unix(sudo:session): session closed for user root
Oct 11 03:35:12 compute-0 sudo[73196]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpmbkfzdalbloyzeqbfrhexiqzoeoodl ; /usr/bin/python3'
Oct 11 03:35:12 compute-0 sudo[73196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:35:12 compute-0 python3[73198]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:35:12 compute-0 sudo[73196]: pam_unix(sudo:session): session closed for user root
Oct 11 03:35:12 compute-0 sudo[73224]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrrzlnshljxajurqfsnebsoxjggkjstq ; /usr/bin/python3'
Oct 11 03:35:12 compute-0 sudo[73224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:35:12 compute-0 python3[73226]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:35:12 compute-0 sudo[73224]: pam_unix(sudo:session): session closed for user root
Oct 11 03:35:12 compute-0 sudo[73252]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwijyxdgjukztmuofsynhwihhdbjteym ; /usr/bin/python3'
Oct 11 03:35:13 compute-0 sudo[73252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:35:13 compute-0 python3[73254]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:35:13 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 03:35:13 compute-0 sshd-session[73271]: Accepted publickey for ceph-admin from 192.168.122.100 port 46866 ssh2: RSA SHA256:zq0SbJ37OVxJQ9NCID+839O2GCdjjA3YZoJ895MeqUE
Oct 11 03:35:13 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Oct 11 03:35:13 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 11 03:35:13 compute-0 systemd-logind[820]: New session 19 of user ceph-admin.
Oct 11 03:35:13 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 11 03:35:13 compute-0 systemd[1]: Starting User Manager for UID 42477...
Oct 11 03:35:13 compute-0 systemd[73275]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 03:35:13 compute-0 systemd[73275]: Queued start job for default target Main User Target.
Oct 11 03:35:13 compute-0 systemd[73275]: Created slice User Application Slice.
Oct 11 03:35:13 compute-0 systemd[73275]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 11 03:35:13 compute-0 systemd[73275]: Started Daily Cleanup of User's Temporary Directories.
Oct 11 03:35:13 compute-0 systemd[73275]: Reached target Paths.
Oct 11 03:35:13 compute-0 systemd[73275]: Reached target Timers.
Oct 11 03:35:13 compute-0 systemd[73275]: Starting D-Bus User Message Bus Socket...
Oct 11 03:35:13 compute-0 systemd[73275]: Starting Create User's Volatile Files and Directories...
Oct 11 03:35:13 compute-0 systemd[73275]: Listening on D-Bus User Message Bus Socket.
Oct 11 03:35:13 compute-0 systemd[73275]: Reached target Sockets.
Oct 11 03:35:13 compute-0 systemd[73275]: Finished Create User's Volatile Files and Directories.
Oct 11 03:35:13 compute-0 systemd[73275]: Reached target Basic System.
Oct 11 03:35:13 compute-0 systemd[73275]: Reached target Main User Target.
Oct 11 03:35:13 compute-0 systemd[73275]: Startup finished in 160ms.
Oct 11 03:35:13 compute-0 systemd[1]: Started User Manager for UID 42477.
Oct 11 03:35:13 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Oct 11 03:35:13 compute-0 sshd-session[73271]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 03:35:13 compute-0 sudo[73292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Oct 11 03:35:13 compute-0 sudo[73292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:35:13 compute-0 sudo[73292]: pam_unix(sudo:session): session closed for user root
Oct 11 03:35:13 compute-0 sshd-session[73291]: Received disconnect from 192.168.122.100 port 46866:11: disconnected by user
Oct 11 03:35:13 compute-0 sshd-session[73291]: Disconnected from user ceph-admin 192.168.122.100 port 46866
Oct 11 03:35:13 compute-0 sshd-session[73271]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 11 03:35:13 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Oct 11 03:35:13 compute-0 systemd-logind[820]: Session 19 logged out. Waiting for processes to exit.
Oct 11 03:35:13 compute-0 systemd-logind[820]: Removed session 19.
Oct 11 03:35:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3528256394-lower\x2dmapped.mount: Deactivated successfully.
Oct 11 03:35:24 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Oct 11 03:35:24 compute-0 systemd[73275]: Activating special unit Exit the Session...
Oct 11 03:35:24 compute-0 systemd[73275]: Stopped target Main User Target.
Oct 11 03:35:24 compute-0 systemd[73275]: Stopped target Basic System.
Oct 11 03:35:24 compute-0 systemd[73275]: Stopped target Paths.
Oct 11 03:35:24 compute-0 systemd[73275]: Stopped target Sockets.
Oct 11 03:35:24 compute-0 systemd[73275]: Stopped target Timers.
Oct 11 03:35:24 compute-0 systemd[73275]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 11 03:35:24 compute-0 systemd[73275]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 11 03:35:24 compute-0 systemd[73275]: Closed D-Bus User Message Bus Socket.
Oct 11 03:35:24 compute-0 systemd[73275]: Stopped Create User's Volatile Files and Directories.
Oct 11 03:35:24 compute-0 systemd[73275]: Removed slice User Application Slice.
Oct 11 03:35:24 compute-0 systemd[73275]: Reached target Shutdown.
Oct 11 03:35:24 compute-0 systemd[73275]: Finished Exit the Session.
Oct 11 03:35:24 compute-0 systemd[73275]: Reached target Exit the Session.
Oct 11 03:35:24 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Oct 11 03:35:24 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Oct 11 03:35:24 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct 11 03:35:24 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct 11 03:35:24 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct 11 03:35:24 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct 11 03:35:24 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Oct 11 03:35:26 compute-0 podman[73329]: 2025-10-11 03:35:26.781698451 +0000 UTC m=+12.877191750 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:35:26 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 03:35:26 compute-0 podman[73391]: 2025-10-11 03:35:26.87402165 +0000 UTC m=+0.063268574 container create cea08351d9cdd034d57f03b211a39be44a08875070deec33fa5d4ac116483d88 (image=quay.io/ceph/ceph:v18, name=busy_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 03:35:26 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct 11 03:35:26 compute-0 systemd[1]: Started libpod-conmon-cea08351d9cdd034d57f03b211a39be44a08875070deec33fa5d4ac116483d88.scope.
Oct 11 03:35:26 compute-0 podman[73391]: 2025-10-11 03:35:26.840785199 +0000 UTC m=+0.030032183 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:35:26 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:35:26 compute-0 podman[73391]: 2025-10-11 03:35:26.979629467 +0000 UTC m=+0.168876401 container init cea08351d9cdd034d57f03b211a39be44a08875070deec33fa5d4ac116483d88 (image=quay.io/ceph/ceph:v18, name=busy_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 03:35:26 compute-0 podman[73391]: 2025-10-11 03:35:26.990335554 +0000 UTC m=+0.179582448 container start cea08351d9cdd034d57f03b211a39be44a08875070deec33fa5d4ac116483d88 (image=quay.io/ceph/ceph:v18, name=busy_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Oct 11 03:35:26 compute-0 podman[73391]: 2025-10-11 03:35:26.99380961 +0000 UTC m=+0.183056514 container attach cea08351d9cdd034d57f03b211a39be44a08875070deec33fa5d4ac116483d88 (image=quay.io/ceph/ceph:v18, name=busy_mcnulty, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 03:35:27 compute-0 busy_mcnulty[73408]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct 11 03:35:27 compute-0 systemd[1]: libpod-cea08351d9cdd034d57f03b211a39be44a08875070deec33fa5d4ac116483d88.scope: Deactivated successfully.
Oct 11 03:35:27 compute-0 podman[73391]: 2025-10-11 03:35:27.312540424 +0000 UTC m=+0.501787358 container died cea08351d9cdd034d57f03b211a39be44a08875070deec33fa5d4ac116483d88 (image=quay.io/ceph/ceph:v18, name=busy_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:35:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-36d8f47c11c41ca25bbc79ffe87adc2593a633fea38d5ee3ca73cf34b5074c36-merged.mount: Deactivated successfully.
Oct 11 03:35:27 compute-0 podman[73391]: 2025-10-11 03:35:27.368757522 +0000 UTC m=+0.558004416 container remove cea08351d9cdd034d57f03b211a39be44a08875070deec33fa5d4ac116483d88 (image=quay.io/ceph/ceph:v18, name=busy_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:35:27 compute-0 systemd[1]: libpod-conmon-cea08351d9cdd034d57f03b211a39be44a08875070deec33fa5d4ac116483d88.scope: Deactivated successfully.
Oct 11 03:35:27 compute-0 podman[73426]: 2025-10-11 03:35:27.469315159 +0000 UTC m=+0.069683263 container create 44b262e240ca6c9e62e89b268b1691502b5a064518064454606cfea541ad3633 (image=quay.io/ceph/ceph:v18, name=sharp_villani, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:35:27 compute-0 systemd[1]: Started libpod-conmon-44b262e240ca6c9e62e89b268b1691502b5a064518064454606cfea541ad3633.scope.
Oct 11 03:35:27 compute-0 podman[73426]: 2025-10-11 03:35:27.440521831 +0000 UTC m=+0.040889995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:35:27 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:35:27 compute-0 podman[73426]: 2025-10-11 03:35:27.569133965 +0000 UTC m=+0.169502129 container init 44b262e240ca6c9e62e89b268b1691502b5a064518064454606cfea541ad3633 (image=quay.io/ceph/ceph:v18, name=sharp_villani, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:35:27 compute-0 podman[73426]: 2025-10-11 03:35:27.578611208 +0000 UTC m=+0.178979292 container start 44b262e240ca6c9e62e89b268b1691502b5a064518064454606cfea541ad3633 (image=quay.io/ceph/ceph:v18, name=sharp_villani, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 03:35:27 compute-0 podman[73426]: 2025-10-11 03:35:27.582546257 +0000 UTC m=+0.182914341 container attach 44b262e240ca6c9e62e89b268b1691502b5a064518064454606cfea541ad3633 (image=quay.io/ceph/ceph:v18, name=sharp_villani, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:35:27 compute-0 sharp_villani[73442]: 167 167
Oct 11 03:35:27 compute-0 systemd[1]: libpod-44b262e240ca6c9e62e89b268b1691502b5a064518064454606cfea541ad3633.scope: Deactivated successfully.
Oct 11 03:35:27 compute-0 podman[73426]: 2025-10-11 03:35:27.585954791 +0000 UTC m=+0.186322905 container died 44b262e240ca6c9e62e89b268b1691502b5a064518064454606cfea541ad3633 (image=quay.io/ceph/ceph:v18, name=sharp_villani, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 03:35:27 compute-0 podman[73426]: 2025-10-11 03:35:27.63754608 +0000 UTC m=+0.237914194 container remove 44b262e240ca6c9e62e89b268b1691502b5a064518064454606cfea541ad3633 (image=quay.io/ceph/ceph:v18, name=sharp_villani, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:35:27 compute-0 systemd[1]: libpod-conmon-44b262e240ca6c9e62e89b268b1691502b5a064518064454606cfea541ad3633.scope: Deactivated successfully.
Oct 11 03:35:27 compute-0 podman[73458]: 2025-10-11 03:35:27.71910355 +0000 UTC m=+0.057338300 container create ad4cfc9ebc3ade0dcdfbaae082cb54c3a04a987f0925eb22552fd5d94c93f98d (image=quay.io/ceph/ceph:v18, name=stoic_kilby, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 03:35:27 compute-0 systemd[1]: Started libpod-conmon-ad4cfc9ebc3ade0dcdfbaae082cb54c3a04a987f0925eb22552fd5d94c93f98d.scope.
Oct 11 03:35:27 compute-0 podman[73458]: 2025-10-11 03:35:27.689203432 +0000 UTC m=+0.027438232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:35:27 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:35:27 compute-0 podman[73458]: 2025-10-11 03:35:27.8035053 +0000 UTC m=+0.141740060 container init ad4cfc9ebc3ade0dcdfbaae082cb54c3a04a987f0925eb22552fd5d94c93f98d (image=quay.io/ceph/ceph:v18, name=stoic_kilby, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:35:27 compute-0 podman[73458]: 2025-10-11 03:35:27.813949309 +0000 UTC m=+0.152184049 container start ad4cfc9ebc3ade0dcdfbaae082cb54c3a04a987f0925eb22552fd5d94c93f98d (image=quay.io/ceph/ceph:v18, name=stoic_kilby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 03:35:27 compute-0 podman[73458]: 2025-10-11 03:35:27.818234948 +0000 UTC m=+0.156469738 container attach ad4cfc9ebc3ade0dcdfbaae082cb54c3a04a987f0925eb22552fd5d94c93f98d (image=quay.io/ceph/ceph:v18, name=stoic_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 03:35:27 compute-0 stoic_kilby[73475]: AQB/0Olou4iWMhAAVTDDg6Svs6ZdXPlrBvRaxA==
Oct 11 03:35:27 compute-0 systemd[1]: libpod-ad4cfc9ebc3ade0dcdfbaae082cb54c3a04a987f0925eb22552fd5d94c93f98d.scope: Deactivated successfully.
Oct 11 03:35:27 compute-0 podman[73458]: 2025-10-11 03:35:27.854257396 +0000 UTC m=+0.192492186 container died ad4cfc9ebc3ade0dcdfbaae082cb54c3a04a987f0925eb22552fd5d94c93f98d (image=quay.io/ceph/ceph:v18, name=stoic_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 03:35:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cc5daec0b0905a926e1d92ecfdc16719eddc2aa43f673eb24d6f138904a06c3-merged.mount: Deactivated successfully.
Oct 11 03:35:27 compute-0 podman[73458]: 2025-10-11 03:35:27.892832545 +0000 UTC m=+0.231067265 container remove ad4cfc9ebc3ade0dcdfbaae082cb54c3a04a987f0925eb22552fd5d94c93f98d (image=quay.io/ceph/ceph:v18, name=stoic_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:35:27 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 03:35:27 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 03:35:27 compute-0 systemd[1]: libpod-conmon-ad4cfc9ebc3ade0dcdfbaae082cb54c3a04a987f0925eb22552fd5d94c93f98d.scope: Deactivated successfully.
Oct 11 03:35:27 compute-0 podman[73495]: 2025-10-11 03:35:27.968570955 +0000 UTC m=+0.051041626 container create 8b73adab3938ede6ff7bc2b59702b263e3c8ab3b2a976ee4c9183de4cef486c4 (image=quay.io/ceph/ceph:v18, name=amazing_lichterman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:35:28 compute-0 systemd[1]: Started libpod-conmon-8b73adab3938ede6ff7bc2b59702b263e3c8ab3b2a976ee4c9183de4cef486c4.scope.
Oct 11 03:35:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:35:28 compute-0 podman[73495]: 2025-10-11 03:35:27.944331953 +0000 UTC m=+0.026802634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:35:28 compute-0 podman[73495]: 2025-10-11 03:35:28.04921635 +0000 UTC m=+0.131687171 container init 8b73adab3938ede6ff7bc2b59702b263e3c8ab3b2a976ee4c9183de4cef486c4 (image=quay.io/ceph/ceph:v18, name=amazing_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:35:28 compute-0 podman[73495]: 2025-10-11 03:35:28.06185344 +0000 UTC m=+0.144324111 container start 8b73adab3938ede6ff7bc2b59702b263e3c8ab3b2a976ee4c9183de4cef486c4 (image=quay.io/ceph/ceph:v18, name=amazing_lichterman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 03:35:28 compute-0 podman[73495]: 2025-10-11 03:35:28.066518669 +0000 UTC m=+0.148989390 container attach 8b73adab3938ede6ff7bc2b59702b263e3c8ab3b2a976ee4c9183de4cef486c4 (image=quay.io/ceph/ceph:v18, name=amazing_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:35:28 compute-0 amazing_lichterman[73511]: AQCA0OloZ/fNBRAA/FnPR67EK3V50ZRdwmgTbw==
Oct 11 03:35:28 compute-0 systemd[1]: libpod-8b73adab3938ede6ff7bc2b59702b263e3c8ab3b2a976ee4c9183de4cef486c4.scope: Deactivated successfully.
Oct 11 03:35:28 compute-0 podman[73495]: 2025-10-11 03:35:28.100908402 +0000 UTC m=+0.183379043 container died 8b73adab3938ede6ff7bc2b59702b263e3c8ab3b2a976ee4c9183de4cef486c4 (image=quay.io/ceph/ceph:v18, name=amazing_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 03:35:28 compute-0 podman[73495]: 2025-10-11 03:35:28.129848594 +0000 UTC m=+0.212319225 container remove 8b73adab3938ede6ff7bc2b59702b263e3c8ab3b2a976ee4c9183de4cef486c4 (image=quay.io/ceph/ceph:v18, name=amazing_lichterman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 03:35:28 compute-0 systemd[1]: libpod-conmon-8b73adab3938ede6ff7bc2b59702b263e3c8ab3b2a976ee4c9183de4cef486c4.scope: Deactivated successfully.
Oct 11 03:35:28 compute-0 podman[73530]: 2025-10-11 03:35:28.207492466 +0000 UTC m=+0.046728136 container create 3d942f0537b9742aeaa36535b6615918da1f8e7683cdf8d265600104929b5612 (image=quay.io/ceph/ceph:v18, name=jovial_curie, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 03:35:28 compute-0 systemd[1]: Started libpod-conmon-3d942f0537b9742aeaa36535b6615918da1f8e7683cdf8d265600104929b5612.scope.
Oct 11 03:35:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:35:28 compute-0 podman[73530]: 2025-10-11 03:35:28.186575547 +0000 UTC m=+0.025811267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:35:28 compute-0 podman[73530]: 2025-10-11 03:35:28.487634 +0000 UTC m=+0.326869740 container init 3d942f0537b9742aeaa36535b6615918da1f8e7683cdf8d265600104929b5612 (image=quay.io/ceph/ceph:v18, name=jovial_curie, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 03:35:28 compute-0 podman[73530]: 2025-10-11 03:35:28.493612946 +0000 UTC m=+0.332848626 container start 3d942f0537b9742aeaa36535b6615918da1f8e7683cdf8d265600104929b5612 (image=quay.io/ceph/ceph:v18, name=jovial_curie, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:35:28 compute-0 podman[73530]: 2025-10-11 03:35:28.496953979 +0000 UTC m=+0.336189729 container attach 3d942f0537b9742aeaa36535b6615918da1f8e7683cdf8d265600104929b5612 (image=quay.io/ceph/ceph:v18, name=jovial_curie, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:35:28 compute-0 jovial_curie[73546]: AQCA0Olo8WPVHhAAvCGKXunGjAocLj1EpWhn3A==
Oct 11 03:35:28 compute-0 systemd[1]: libpod-3d942f0537b9742aeaa36535b6615918da1f8e7683cdf8d265600104929b5612.scope: Deactivated successfully.
Oct 11 03:35:28 compute-0 podman[73553]: 2025-10-11 03:35:28.558573747 +0000 UTC m=+0.025343254 container died 3d942f0537b9742aeaa36535b6615918da1f8e7683cdf8d265600104929b5612 (image=quay.io/ceph/ceph:v18, name=jovial_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 03:35:28 compute-0 podman[73553]: 2025-10-11 03:35:28.597805944 +0000 UTC m=+0.064575421 container remove 3d942f0537b9742aeaa36535b6615918da1f8e7683cdf8d265600104929b5612 (image=quay.io/ceph/ceph:v18, name=jovial_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:35:28 compute-0 systemd[1]: libpod-conmon-3d942f0537b9742aeaa36535b6615918da1f8e7683cdf8d265600104929b5612.scope: Deactivated successfully.
Oct 11 03:35:28 compute-0 podman[73569]: 2025-10-11 03:35:28.70264656 +0000 UTC m=+0.066057752 container create 2f68e7745d37c2b6501864f7261d7082e65d302cbb67444ad819d58e7de1d5ab (image=quay.io/ceph/ceph:v18, name=frosty_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 03:35:28 compute-0 systemd[1]: Started libpod-conmon-2f68e7745d37c2b6501864f7261d7082e65d302cbb67444ad819d58e7de1d5ab.scope.
Oct 11 03:35:28 compute-0 podman[73569]: 2025-10-11 03:35:28.673625865 +0000 UTC m=+0.037037107 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:35:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da273faf647bd13ae1efb898a73c0cb2665a82f2a9774c6567966773c19308e0/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:28 compute-0 podman[73569]: 2025-10-11 03:35:28.795850873 +0000 UTC m=+0.159262085 container init 2f68e7745d37c2b6501864f7261d7082e65d302cbb67444ad819d58e7de1d5ab (image=quay.io/ceph/ceph:v18, name=frosty_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Oct 11 03:35:28 compute-0 podman[73569]: 2025-10-11 03:35:28.804289447 +0000 UTC m=+0.167700639 container start 2f68e7745d37c2b6501864f7261d7082e65d302cbb67444ad819d58e7de1d5ab (image=quay.io/ceph/ceph:v18, name=frosty_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:35:28 compute-0 podman[73569]: 2025-10-11 03:35:28.807782453 +0000 UTC m=+0.171193645 container attach 2f68e7745d37c2b6501864f7261d7082e65d302cbb67444ad819d58e7de1d5ab (image=quay.io/ceph/ceph:v18, name=frosty_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 03:35:28 compute-0 frosty_elgamal[73585]: /usr/bin/monmaptool: monmap file /tmp/monmap
Oct 11 03:35:28 compute-0 frosty_elgamal[73585]: setting min_mon_release = pacific
Oct 11 03:35:28 compute-0 frosty_elgamal[73585]: /usr/bin/monmaptool: set fsid to 23b68101-59a9-532f-ab6b-9acf78fb2162
Oct 11 03:35:28 compute-0 frosty_elgamal[73585]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Oct 11 03:35:28 compute-0 systemd[1]: libpod-2f68e7745d37c2b6501864f7261d7082e65d302cbb67444ad819d58e7de1d5ab.scope: Deactivated successfully.
Oct 11 03:35:28 compute-0 podman[73569]: 2025-10-11 03:35:28.857300716 +0000 UTC m=+0.220711908 container died 2f68e7745d37c2b6501864f7261d7082e65d302cbb67444ad819d58e7de1d5ab (image=quay.io/ceph/ceph:v18, name=frosty_elgamal, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 03:35:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-da273faf647bd13ae1efb898a73c0cb2665a82f2a9774c6567966773c19308e0-merged.mount: Deactivated successfully.
Oct 11 03:35:28 compute-0 podman[73569]: 2025-10-11 03:35:28.906436058 +0000 UTC m=+0.269847250 container remove 2f68e7745d37c2b6501864f7261d7082e65d302cbb67444ad819d58e7de1d5ab (image=quay.io/ceph/ceph:v18, name=frosty_elgamal, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 03:35:28 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 03:35:28 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 03:35:28 compute-0 systemd[1]: libpod-conmon-2f68e7745d37c2b6501864f7261d7082e65d302cbb67444ad819d58e7de1d5ab.scope: Deactivated successfully.
Oct 11 03:35:28 compute-0 podman[73605]: 2025-10-11 03:35:28.993367637 +0000 UTC m=+0.056599370 container create 550243fbc226ddaa594008632e28e7bdfd67c4027225c1d1b058f7582f9f8fdb (image=quay.io/ceph/ceph:v18, name=vibrant_jepsen, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:35:29 compute-0 systemd[1]: Started libpod-conmon-550243fbc226ddaa594008632e28e7bdfd67c4027225c1d1b058f7582f9f8fdb.scope.
Oct 11 03:35:29 compute-0 podman[73605]: 2025-10-11 03:35:28.967340906 +0000 UTC m=+0.030572679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:35:29 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:35:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/608770db4ba6d1cc7bdff64d7d7e7ab83991515813c77fad553805351116bb32/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/608770db4ba6d1cc7bdff64d7d7e7ab83991515813c77fad553805351116bb32/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/608770db4ba6d1cc7bdff64d7d7e7ab83991515813c77fad553805351116bb32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/608770db4ba6d1cc7bdff64d7d7e7ab83991515813c77fad553805351116bb32/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:29 compute-0 podman[73605]: 2025-10-11 03:35:29.097270497 +0000 UTC m=+0.160502270 container init 550243fbc226ddaa594008632e28e7bdfd67c4027225c1d1b058f7582f9f8fdb (image=quay.io/ceph/ceph:v18, name=vibrant_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:35:29 compute-0 podman[73605]: 2025-10-11 03:35:29.108537709 +0000 UTC m=+0.171769432 container start 550243fbc226ddaa594008632e28e7bdfd67c4027225c1d1b058f7582f9f8fdb (image=quay.io/ceph/ceph:v18, name=vibrant_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 03:35:29 compute-0 podman[73605]: 2025-10-11 03:35:29.112636462 +0000 UTC m=+0.175868245 container attach 550243fbc226ddaa594008632e28e7bdfd67c4027225c1d1b058f7582f9f8fdb (image=quay.io/ceph/ceph:v18, name=vibrant_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 03:35:29 compute-0 systemd[1]: libpod-550243fbc226ddaa594008632e28e7bdfd67c4027225c1d1b058f7582f9f8fdb.scope: Deactivated successfully.
Oct 11 03:35:29 compute-0 podman[73605]: 2025-10-11 03:35:29.216795639 +0000 UTC m=+0.280027362 container died 550243fbc226ddaa594008632e28e7bdfd67c4027225c1d1b058f7582f9f8fdb (image=quay.io/ceph/ceph:v18, name=vibrant_jepsen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:35:29 compute-0 podman[73605]: 2025-10-11 03:35:29.261356944 +0000 UTC m=+0.324588667 container remove 550243fbc226ddaa594008632e28e7bdfd67c4027225c1d1b058f7582f9f8fdb (image=quay.io/ceph/ceph:v18, name=vibrant_jepsen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:35:29 compute-0 systemd[1]: libpod-conmon-550243fbc226ddaa594008632e28e7bdfd67c4027225c1d1b058f7582f9f8fdb.scope: Deactivated successfully.
Oct 11 03:35:29 compute-0 systemd[1]: Reloading.
Oct 11 03:35:29 compute-0 systemd-sysv-generator[73688]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:35:29 compute-0 systemd-rc-local-generator[73685]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:35:29 compute-0 systemd[1]: Reloading.
Oct 11 03:35:29 compute-0 systemd-sysv-generator[73726]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:35:29 compute-0 systemd-rc-local-generator[73723]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:35:29 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Oct 11 03:35:29 compute-0 systemd[1]: Reloading.
Oct 11 03:35:29 compute-0 systemd-sysv-generator[73767]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:35:29 compute-0 systemd-rc-local-generator[73761]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:35:30 compute-0 systemd[1]: Reached target Ceph cluster 23b68101-59a9-532f-ab6b-9acf78fb2162.
Oct 11 03:35:30 compute-0 systemd[1]: Reloading.
Oct 11 03:35:30 compute-0 systemd-sysv-generator[73805]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:35:30 compute-0 systemd-rc-local-generator[73801]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:35:30 compute-0 systemd[1]: Reloading.
Oct 11 03:35:30 compute-0 systemd-sysv-generator[73838]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:35:30 compute-0 systemd-rc-local-generator[73834]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:35:30 compute-0 systemd[1]: Created slice Slice /system/ceph-23b68101-59a9-532f-ab6b-9acf78fb2162.
Oct 11 03:35:30 compute-0 systemd[1]: Reached target System Time Set.
Oct 11 03:35:30 compute-0 systemd[1]: Reached target System Time Synchronized.
Oct 11 03:35:30 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 23b68101-59a9-532f-ab6b-9acf78fb2162...
Oct 11 03:35:30 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 03:35:30 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 03:35:30 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 03:35:30 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 03:35:30 compute-0 podman[73898]: 2025-10-11 03:35:30.970402151 +0000 UTC m=+0.057031142 container create c5d85eabee408d8e1c37887b45e4bed5e301407e18f7db308ce02ab451cf0dcd (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:35:31 compute-0 podman[73898]: 2025-10-11 03:35:30.942119337 +0000 UTC m=+0.028748368 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1d992e6f25627153bd55ae562b2f664994ab4830310ee83de5b43d90ccbe569/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1d992e6f25627153bd55ae562b2f664994ab4830310ee83de5b43d90ccbe569/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1d992e6f25627153bd55ae562b2f664994ab4830310ee83de5b43d90ccbe569/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1d992e6f25627153bd55ae562b2f664994ab4830310ee83de5b43d90ccbe569/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:31 compute-0 podman[73898]: 2025-10-11 03:35:31.082453726 +0000 UTC m=+0.169082757 container init c5d85eabee408d8e1c37887b45e4bed5e301407e18f7db308ce02ab451cf0dcd (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 03:35:31 compute-0 podman[73898]: 2025-10-11 03:35:31.093249885 +0000 UTC m=+0.179878876 container start c5d85eabee408d8e1c37887b45e4bed5e301407e18f7db308ce02ab451cf0dcd (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 03:35:31 compute-0 bash[73898]: c5d85eabee408d8e1c37887b45e4bed5e301407e18f7db308ce02ab451cf0dcd
Oct 11 03:35:31 compute-0 systemd[1]: Started Ceph mon.compute-0 for 23b68101-59a9-532f-ab6b-9acf78fb2162.
Oct 11 03:35:31 compute-0 ceph-mon[73917]: set uid:gid to 167:167 (ceph:ceph)
Oct 11 03:35:31 compute-0 ceph-mon[73917]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct 11 03:35:31 compute-0 ceph-mon[73917]: pidfile_write: ignore empty --pid-file
Oct 11 03:35:31 compute-0 ceph-mon[73917]: load: jerasure load: lrc 
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: RocksDB version: 7.9.2
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: Git sha 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: DB SUMMARY
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: DB Session ID:  L0KU8GFCJSFEADATDTA8
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: CURRENT file:  CURRENT
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: IDENTITY file:  IDENTITY
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                         Options.error_if_exists: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                       Options.create_if_missing: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                         Options.paranoid_checks: 1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                                     Options.env: 0x55a2d40ffc40
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                                Options.info_log: 0x55a2d68c4e80
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                Options.max_file_opening_threads: 16
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                              Options.statistics: (nil)
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                               Options.use_fsync: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                       Options.max_log_file_size: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                         Options.allow_fallocate: 1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                        Options.use_direct_reads: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:          Options.create_missing_column_families: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                              Options.db_log_dir: 
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                                 Options.wal_dir: 
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                   Options.advise_random_on_open: 1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                    Options.write_buffer_manager: 0x55a2d68d4b40
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                            Options.rate_limiter: (nil)
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                  Options.unordered_write: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                               Options.row_cache: None
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                              Options.wal_filter: None
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:             Options.allow_ingest_behind: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:             Options.two_write_queues: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:             Options.manual_wal_flush: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:             Options.wal_compression: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:             Options.atomic_flush: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                 Options.log_readahead_size: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:             Options.allow_data_in_errors: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:             Options.db_host_id: __hostname__
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:             Options.max_background_jobs: 2
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:             Options.max_background_compactions: -1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:             Options.max_subcompactions: 1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:             Options.max_total_wal_size: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                          Options.max_open_files: -1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                          Options.bytes_per_sync: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:       Options.compaction_readahead_size: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                  Options.max_background_flushes: -1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: Compression algorithms supported:
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:         kZSTD supported: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:         kXpressCompression supported: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:         kBZip2Compression supported: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:         kLZ4Compression supported: 1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:         kZlibCompression supported: 1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:         kLZ4HCCompression supported: 1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:         kSnappyCompression supported: 1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:           Options.merge_operator: 
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:        Options.compaction_filter: None
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a2d68c4a80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a2d68bd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:        Options.write_buffer_size: 33554432
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:  Options.max_write_buffer_number: 2
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:          Options.compression: NoCompression
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:             Options.num_levels: 7
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c5ef686a-ea96-43f3-b64e-136aeef6150d
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153731160994, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153731163363, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153731, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "L0KU8GFCJSFEADATDTA8", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153731163518, "job": 1, "event": "recovery_finished"}
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a2d68e6e00
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: DB pointer 0x55a2d6970000
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 03:35:31 compute-0 ceph-mon[73917]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.09 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.09 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a2d68bd1f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 11 03:35:31 compute-0 ceph-mon[73917]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 23b68101-59a9-532f-ab6b-9acf78fb2162
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@-1(???) e0 preinit fsid 23b68101-59a9-532f-ab6b-9acf78fb2162
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(probing) e0 win_standalone_election
Oct 11 03:35:31 compute-0 ceph-mon[73917]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 11 03:35:31 compute-0 ceph-mon[73917]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(probing) e1 win_standalone_election
Oct 11 03:35:31 compute-0 ceph-mon[73917]: paxos.0).electionLogic(2) init, last seen epoch 2
Oct 11 03:35:31 compute-0 podman[73918]: 2025-10-11 03:35:31.203100059 +0000 UTC m=+0.062749909 container create d870bd654ea8039443bfde5468c4dd9a48ef0a8121cfe58cbe6e69c70bccd703 (image=quay.io/ceph/ceph:v18, name=vigilant_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 11 03:35:31 compute-0 ceph-mon[73917]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 11 03:35:31 compute-0 ceph-mon[73917]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-10-11T03:35:29.165473Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025,kernel_version=5.14.0-621.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864348,os=Linux}
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(leader).mds e1 new map
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct 11 03:35:31 compute-0 ceph-mon[73917]: log_channel(cluster) log [DBG] : fsmap 
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mkfs 23b68101-59a9-532f-ab6b-9acf78fb2162
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Oct 11 03:35:31 compute-0 ceph-mon[73917]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct 11 03:35:31 compute-0 ceph-mon[73917]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct 11 03:35:31 compute-0 systemd[1]: Started libpod-conmon-d870bd654ea8039443bfde5468c4dd9a48ef0a8121cfe58cbe6e69c70bccd703.scope.
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 11 03:35:31 compute-0 podman[73918]: 2025-10-11 03:35:31.175889636 +0000 UTC m=+0.035539576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:35:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eefb2bb23075792b3071093e47bcd248f4aa953da494f7ffb694e3377617d0a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eefb2bb23075792b3071093e47bcd248f4aa953da494f7ffb694e3377617d0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eefb2bb23075792b3071093e47bcd248f4aa953da494f7ffb694e3377617d0a/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:31 compute-0 podman[73918]: 2025-10-11 03:35:31.300284952 +0000 UTC m=+0.159934842 container init d870bd654ea8039443bfde5468c4dd9a48ef0a8121cfe58cbe6e69c70bccd703 (image=quay.io/ceph/ceph:v18, name=vigilant_wing, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 03:35:31 compute-0 podman[73918]: 2025-10-11 03:35:31.314218548 +0000 UTC m=+0.173868408 container start d870bd654ea8039443bfde5468c4dd9a48ef0a8121cfe58cbe6e69c70bccd703 (image=quay.io/ceph/ceph:v18, name=vigilant_wing, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:35:31 compute-0 podman[73918]: 2025-10-11 03:35:31.317635333 +0000 UTC m=+0.177285223 container attach d870bd654ea8039443bfde5468c4dd9a48ef0a8121cfe58cbe6e69c70bccd703 (image=quay.io/ceph/ceph:v18, name=vigilant_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:35:31 compute-0 ceph-mon[73917]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 11 03:35:31 compute-0 ceph-mon[73917]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3464425039' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 11 03:35:31 compute-0 vigilant_wing[73973]:   cluster:
Oct 11 03:35:31 compute-0 vigilant_wing[73973]:     id:     23b68101-59a9-532f-ab6b-9acf78fb2162
Oct 11 03:35:31 compute-0 vigilant_wing[73973]:     health: HEALTH_OK
Oct 11 03:35:31 compute-0 vigilant_wing[73973]:  
Oct 11 03:35:31 compute-0 vigilant_wing[73973]:   services:
Oct 11 03:35:31 compute-0 vigilant_wing[73973]:     mon: 1 daemons, quorum compute-0 (age 0.507249s)
Oct 11 03:35:31 compute-0 vigilant_wing[73973]:     mgr: no daemons active
Oct 11 03:35:31 compute-0 vigilant_wing[73973]:     osd: 0 osds: 0 up, 0 in
Oct 11 03:35:31 compute-0 vigilant_wing[73973]:  
Oct 11 03:35:31 compute-0 vigilant_wing[73973]:   data:
Oct 11 03:35:31 compute-0 vigilant_wing[73973]:     pools:   0 pools, 0 pgs
Oct 11 03:35:31 compute-0 vigilant_wing[73973]:     objects: 0 objects, 0 B
Oct 11 03:35:31 compute-0 vigilant_wing[73973]:     usage:   0 B used, 0 B / 0 B avail
Oct 11 03:35:31 compute-0 vigilant_wing[73973]:     pgs:     
Oct 11 03:35:31 compute-0 vigilant_wing[73973]:  
Oct 11 03:35:31 compute-0 systemd[1]: libpod-d870bd654ea8039443bfde5468c4dd9a48ef0a8121cfe58cbe6e69c70bccd703.scope: Deactivated successfully.
Oct 11 03:35:31 compute-0 podman[73918]: 2025-10-11 03:35:31.728893631 +0000 UTC m=+0.588543571 container died d870bd654ea8039443bfde5468c4dd9a48ef0a8121cfe58cbe6e69c70bccd703 (image=quay.io/ceph/ceph:v18, name=vigilant_wing, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:35:31 compute-0 podman[73918]: 2025-10-11 03:35:31.789645415 +0000 UTC m=+0.649295305 container remove d870bd654ea8039443bfde5468c4dd9a48ef0a8121cfe58cbe6e69c70bccd703 (image=quay.io/ceph/ceph:v18, name=vigilant_wing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:35:31 compute-0 systemd[1]: libpod-conmon-d870bd654ea8039443bfde5468c4dd9a48ef0a8121cfe58cbe6e69c70bccd703.scope: Deactivated successfully.
Oct 11 03:35:31 compute-0 podman[74009]: 2025-10-11 03:35:31.856841207 +0000 UTC m=+0.042816357 container create 4372defc81d1259be0e6b5b20e4db21312d1e28067134550e3e249bede3bae3e (image=quay.io/ceph/ceph:v18, name=musing_rubin, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:35:31 compute-0 systemd[1]: Started libpod-conmon-4372defc81d1259be0e6b5b20e4db21312d1e28067134550e3e249bede3bae3e.scope.
Oct 11 03:35:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a6b933f8e7d55dc4678977018d73368d5dc458f026ceda34a221b7ef3476e8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a6b933f8e7d55dc4678977018d73368d5dc458f026ceda34a221b7ef3476e8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a6b933f8e7d55dc4678977018d73368d5dc458f026ceda34a221b7ef3476e8f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a6b933f8e7d55dc4678977018d73368d5dc458f026ceda34a221b7ef3476e8f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:31 compute-0 podman[74009]: 2025-10-11 03:35:31.837988925 +0000 UTC m=+0.023964065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:35:31 compute-0 podman[74009]: 2025-10-11 03:35:31.94859417 +0000 UTC m=+0.134569360 container init 4372defc81d1259be0e6b5b20e4db21312d1e28067134550e3e249bede3bae3e (image=quay.io/ceph/ceph:v18, name=musing_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 03:35:31 compute-0 podman[74009]: 2025-10-11 03:35:31.958665919 +0000 UTC m=+0.144641079 container start 4372defc81d1259be0e6b5b20e4db21312d1e28067134550e3e249bede3bae3e (image=quay.io/ceph/ceph:v18, name=musing_rubin, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:35:31 compute-0 podman[74009]: 2025-10-11 03:35:31.962849495 +0000 UTC m=+0.148824715 container attach 4372defc81d1259be0e6b5b20e4db21312d1e28067134550e3e249bede3bae3e (image=quay.io/ceph/ceph:v18, name=musing_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 03:35:32 compute-0 ceph-mon[73917]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 11 03:35:32 compute-0 ceph-mon[73917]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 11 03:35:32 compute-0 ceph-mon[73917]: fsmap 
Oct 11 03:35:32 compute-0 ceph-mon[73917]: osdmap e1: 0 total, 0 up, 0 in
Oct 11 03:35:32 compute-0 ceph-mon[73917]: mgrmap e1: no daemons active
Oct 11 03:35:32 compute-0 ceph-mon[73917]: from='client.? 192.168.122.100:0/3464425039' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 11 03:35:32 compute-0 ceph-mon[73917]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 11 03:35:32 compute-0 ceph-mon[73917]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1640832481' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 11 03:35:32 compute-0 ceph-mon[73917]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1640832481' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 11 03:35:32 compute-0 musing_rubin[74026]: 
Oct 11 03:35:32 compute-0 musing_rubin[74026]: [global]
Oct 11 03:35:32 compute-0 musing_rubin[74026]:         fsid = 23b68101-59a9-532f-ab6b-9acf78fb2162
Oct 11 03:35:32 compute-0 musing_rubin[74026]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Oct 11 03:35:32 compute-0 musing_rubin[74026]:         osd_crush_chooseleaf_type = 0
Oct 11 03:35:32 compute-0 systemd[1]: libpod-4372defc81d1259be0e6b5b20e4db21312d1e28067134550e3e249bede3bae3e.scope: Deactivated successfully.
Oct 11 03:35:32 compute-0 podman[74009]: 2025-10-11 03:35:32.380298745 +0000 UTC m=+0.566273895 container died 4372defc81d1259be0e6b5b20e4db21312d1e28067134550e3e249bede3bae3e (image=quay.io/ceph/ceph:v18, name=musing_rubin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 03:35:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a6b933f8e7d55dc4678977018d73368d5dc458f026ceda34a221b7ef3476e8f-merged.mount: Deactivated successfully.
Oct 11 03:35:32 compute-0 podman[74009]: 2025-10-11 03:35:32.425386105 +0000 UTC m=+0.611361265 container remove 4372defc81d1259be0e6b5b20e4db21312d1e28067134550e3e249bede3bae3e (image=quay.io/ceph/ceph:v18, name=musing_rubin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:35:32 compute-0 systemd[1]: libpod-conmon-4372defc81d1259be0e6b5b20e4db21312d1e28067134550e3e249bede3bae3e.scope: Deactivated successfully.
Oct 11 03:35:32 compute-0 podman[74063]: 2025-10-11 03:35:32.485749828 +0000 UTC m=+0.039248499 container create 20cc3cbb0802593e190a123291d419058f4784e8d4711ee5f677d807be08abbe (image=quay.io/ceph/ceph:v18, name=gracious_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:35:32 compute-0 systemd[1]: Started libpod-conmon-20cc3cbb0802593e190a123291d419058f4784e8d4711ee5f677d807be08abbe.scope.
Oct 11 03:35:32 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:35:32 compute-0 podman[74063]: 2025-10-11 03:35:32.469742574 +0000 UTC m=+0.023241235 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c145e9060ad88383fc899a11d109a607281a4845b7e5e17f93a3971e844e3f78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c145e9060ad88383fc899a11d109a607281a4845b7e5e17f93a3971e844e3f78/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c145e9060ad88383fc899a11d109a607281a4845b7e5e17f93a3971e844e3f78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c145e9060ad88383fc899a11d109a607281a4845b7e5e17f93a3971e844e3f78/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:32 compute-0 podman[74063]: 2025-10-11 03:35:32.59264278 +0000 UTC m=+0.146141491 container init 20cc3cbb0802593e190a123291d419058f4784e8d4711ee5f677d807be08abbe (image=quay.io/ceph/ceph:v18, name=gracious_taussig, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:35:32 compute-0 podman[74063]: 2025-10-11 03:35:32.605290551 +0000 UTC m=+0.158789222 container start 20cc3cbb0802593e190a123291d419058f4784e8d4711ee5f677d807be08abbe (image=quay.io/ceph/ceph:v18, name=gracious_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:35:32 compute-0 podman[74063]: 2025-10-11 03:35:32.608707765 +0000 UTC m=+0.162206446 container attach 20cc3cbb0802593e190a123291d419058f4784e8d4711ee5f677d807be08abbe (image=quay.io/ceph/ceph:v18, name=gracious_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:35:33 compute-0 ceph-mon[73917]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:35:33 compute-0 ceph-mon[73917]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3386447037' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:35:33 compute-0 systemd[1]: libpod-20cc3cbb0802593e190a123291d419058f4784e8d4711ee5f677d807be08abbe.scope: Deactivated successfully.
Oct 11 03:35:33 compute-0 podman[74107]: 2025-10-11 03:35:33.052788223 +0000 UTC m=+0.021776314 container died 20cc3cbb0802593e190a123291d419058f4784e8d4711ee5f677d807be08abbe (image=quay.io/ceph/ceph:v18, name=gracious_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Oct 11 03:35:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-c145e9060ad88383fc899a11d109a607281a4845b7e5e17f93a3971e844e3f78-merged.mount: Deactivated successfully.
Oct 11 03:35:33 compute-0 podman[74107]: 2025-10-11 03:35:33.105025051 +0000 UTC m=+0.074013082 container remove 20cc3cbb0802593e190a123291d419058f4784e8d4711ee5f677d807be08abbe (image=quay.io/ceph/ceph:v18, name=gracious_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 03:35:33 compute-0 systemd[1]: libpod-conmon-20cc3cbb0802593e190a123291d419058f4784e8d4711ee5f677d807be08abbe.scope: Deactivated successfully.
Oct 11 03:35:33 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 23b68101-59a9-532f-ab6b-9acf78fb2162...
Oct 11 03:35:33 compute-0 ceph-mon[73917]: from='client.? 192.168.122.100:0/1640832481' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 11 03:35:33 compute-0 ceph-mon[73917]: from='client.? 192.168.122.100:0/1640832481' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 11 03:35:33 compute-0 ceph-mon[73917]: from='client.? 192.168.122.100:0/3386447037' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:35:33 compute-0 ceph-mon[73917]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct 11 03:35:33 compute-0 ceph-mon[73917]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct 11 03:35:33 compute-0 ceph-mon[73917]: mon.compute-0@0(leader) e1 shutdown
Oct 11 03:35:33 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0[73913]: 2025-10-11T03:35:33.380+0000 7f5f95229640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct 11 03:35:33 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0[73913]: 2025-10-11T03:35:33.380+0000 7f5f95229640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct 11 03:35:33 compute-0 ceph-mon[73917]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 11 03:35:33 compute-0 ceph-mon[73917]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 11 03:35:33 compute-0 podman[74150]: 2025-10-11 03:35:33.422302994 +0000 UTC m=+0.087766543 container died c5d85eabee408d8e1c37887b45e4bed5e301407e18f7db308ce02ab451cf0dcd (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:35:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1d992e6f25627153bd55ae562b2f664994ab4830310ee83de5b43d90ccbe569-merged.mount: Deactivated successfully.
Oct 11 03:35:33 compute-0 podman[74150]: 2025-10-11 03:35:33.468208267 +0000 UTC m=+0.133671816 container remove c5d85eabee408d8e1c37887b45e4bed5e301407e18f7db308ce02ab451cf0dcd (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:35:33 compute-0 bash[74150]: ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0
Oct 11 03:35:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 03:35:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 03:35:33 compute-0 systemd[1]: ceph-23b68101-59a9-532f-ab6b-9acf78fb2162@mon.compute-0.service: Deactivated successfully.
Oct 11 03:35:33 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 23b68101-59a9-532f-ab6b-9acf78fb2162.
Oct 11 03:35:33 compute-0 systemd[1]: ceph-23b68101-59a9-532f-ab6b-9acf78fb2162@mon.compute-0.service: Consumed 1.286s CPU time.
Oct 11 03:35:33 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 23b68101-59a9-532f-ab6b-9acf78fb2162...
Oct 11 03:35:33 compute-0 podman[74254]: 2025-10-11 03:35:33.977019438 +0000 UTC m=+0.065177827 container create 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb918e791d167be87789817c53984de583fcaa3e77b301462d8912632766909/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb918e791d167be87789817c53984de583fcaa3e77b301462d8912632766909/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb918e791d167be87789817c53984de583fcaa3e77b301462d8912632766909/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb918e791d167be87789817c53984de583fcaa3e77b301462d8912632766909/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:34 compute-0 podman[74254]: 2025-10-11 03:35:33.950547935 +0000 UTC m=+0.038706394 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:35:34 compute-0 podman[74254]: 2025-10-11 03:35:34.05754859 +0000 UTC m=+0.145707009 container init 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:35:34 compute-0 podman[74254]: 2025-10-11 03:35:34.069417039 +0000 UTC m=+0.157575428 container start 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:35:34 compute-0 bash[74254]: 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9
Oct 11 03:35:34 compute-0 systemd[1]: Started Ceph mon.compute-0 for 23b68101-59a9-532f-ab6b-9acf78fb2162.
Oct 11 03:35:34 compute-0 ceph-mon[74273]: set uid:gid to 167:167 (ceph:ceph)
Oct 11 03:35:34 compute-0 ceph-mon[74273]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct 11 03:35:34 compute-0 ceph-mon[74273]: pidfile_write: ignore empty --pid-file
Oct 11 03:35:34 compute-0 ceph-mon[74273]: load: jerasure load: lrc 
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: RocksDB version: 7.9.2
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: Git sha 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: DB SUMMARY
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: DB Session ID:  5PENSWPAYOBU0GJSS6OB
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: CURRENT file:  CURRENT
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: IDENTITY file:  IDENTITY
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 55680 ; 
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                         Options.error_if_exists: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                       Options.create_if_missing: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                         Options.paranoid_checks: 1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                                     Options.env: 0x558494adcc40
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                                Options.info_log: 0x558495a65040
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                Options.max_file_opening_threads: 16
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                              Options.statistics: (nil)
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                               Options.use_fsync: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                       Options.max_log_file_size: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                         Options.allow_fallocate: 1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                        Options.use_direct_reads: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:          Options.create_missing_column_families: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                              Options.db_log_dir: 
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                                 Options.wal_dir: 
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                   Options.advise_random_on_open: 1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                    Options.write_buffer_manager: 0x558495a74b40
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                            Options.rate_limiter: (nil)
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                  Options.unordered_write: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                               Options.row_cache: None
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                              Options.wal_filter: None
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:             Options.allow_ingest_behind: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:             Options.two_write_queues: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:             Options.manual_wal_flush: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:             Options.wal_compression: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:             Options.atomic_flush: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                 Options.log_readahead_size: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:             Options.allow_data_in_errors: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:             Options.db_host_id: __hostname__
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:             Options.max_background_jobs: 2
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:             Options.max_background_compactions: -1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:             Options.max_subcompactions: 1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:             Options.max_total_wal_size: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                          Options.max_open_files: -1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                          Options.bytes_per_sync: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:       Options.compaction_readahead_size: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                  Options.max_background_flushes: -1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: Compression algorithms supported:
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:         kZSTD supported: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:         kXpressCompression supported: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:         kBZip2Compression supported: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:         kLZ4Compression supported: 1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:         kZlibCompression supported: 1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:         kLZ4HCCompression supported: 1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:         kSnappyCompression supported: 1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:           Options.merge_operator: 
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:        Options.compaction_filter: None
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558495a64c40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558495a5d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:        Options.write_buffer_size: 33554432
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:  Options.max_write_buffer_number: 2
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:          Options.compression: NoCompression
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:             Options.num_levels: 7
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c5ef686a-ea96-43f3-b64e-136aeef6150d
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153734144672, "job": 1, "event": "recovery_started", "wal_files": [9]}
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153734149135, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 55261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 138, "table_properties": {"data_size": 53801, "index_size": 166, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 3050, "raw_average_key_size": 30, "raw_value_size": 51390, "raw_average_value_size": 508, "num_data_blocks": 9, "num_entries": 101, "num_filter_entries": 101, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153734, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153734149269, "job": 1, "event": "recovery_finished"}
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x558495a86e00
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: DB pointer 0x558495b10000
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 03:35:34 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   55.86 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.2      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0   55.86 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.2      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.2      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.2      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 2.91 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 2.91 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558495a5d1f0#2 capacity: 512.00 MB usage: 0.78 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 11 03:35:34 compute-0 ceph-mon[74273]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 23b68101-59a9-532f-ab6b-9acf78fb2162
Oct 11 03:35:34 compute-0 ceph-mon[74273]: mon.compute-0@-1(???) e1 preinit fsid 23b68101-59a9-532f-ab6b-9acf78fb2162
Oct 11 03:35:34 compute-0 ceph-mon[74273]: mon.compute-0@-1(???).mds e1 new map
Oct 11 03:35:34 compute-0 ceph-mon[74273]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Oct 11 03:35:34 compute-0 ceph-mon[74273]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct 11 03:35:34 compute-0 ceph-mon[74273]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 11 03:35:34 compute-0 ceph-mon[74273]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 11 03:35:34 compute-0 ceph-mon[74273]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 11 03:35:34 compute-0 ceph-mon[74273]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Oct 11 03:35:34 compute-0 ceph-mon[74273]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Oct 11 03:35:34 compute-0 ceph-mon[74273]: mon.compute-0@0(probing) e1 win_standalone_election
Oct 11 03:35:34 compute-0 ceph-mon[74273]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Oct 11 03:35:34 compute-0 ceph-mon[74273]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 11 03:35:34 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 11 03:35:34 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 11 03:35:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 11 03:35:34 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : fsmap 
Oct 11 03:35:34 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct 11 03:35:34 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct 11 03:35:34 compute-0 podman[74274]: 2025-10-11 03:35:34.18778916 +0000 UTC m=+0.063150791 container create 24b579597efdc8f0ea564dd3f28c552ec8413d37a3318eb1e3faf409b332ce50 (image=quay.io/ceph/ceph:v18, name=elastic_meitner, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 03:35:34 compute-0 ceph-mon[74273]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 11 03:35:34 compute-0 ceph-mon[74273]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 11 03:35:34 compute-0 ceph-mon[74273]: fsmap 
Oct 11 03:35:34 compute-0 ceph-mon[74273]: osdmap e1: 0 total, 0 up, 0 in
Oct 11 03:35:34 compute-0 ceph-mon[74273]: mgrmap e1: no daemons active
Oct 11 03:35:34 compute-0 systemd[1]: Started libpod-conmon-24b579597efdc8f0ea564dd3f28c552ec8413d37a3318eb1e3faf409b332ce50.scope.
Oct 11 03:35:34 compute-0 podman[74274]: 2025-10-11 03:35:34.162343545 +0000 UTC m=+0.037705216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:35:34 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57c0e3d9cc07b1358dd18561728ca614ddbf4656923a8232409373f8d6b33f99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57c0e3d9cc07b1358dd18561728ca614ddbf4656923a8232409373f8d6b33f99/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57c0e3d9cc07b1358dd18561728ca614ddbf4656923a8232409373f8d6b33f99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:34 compute-0 podman[74274]: 2025-10-11 03:35:34.302771526 +0000 UTC m=+0.178133157 container init 24b579597efdc8f0ea564dd3f28c552ec8413d37a3318eb1e3faf409b332ce50 (image=quay.io/ceph/ceph:v18, name=elastic_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 03:35:34 compute-0 podman[74274]: 2025-10-11 03:35:34.316119266 +0000 UTC m=+0.191480887 container start 24b579597efdc8f0ea564dd3f28c552ec8413d37a3318eb1e3faf409b332ce50 (image=quay.io/ceph/ceph:v18, name=elastic_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:35:34 compute-0 podman[74274]: 2025-10-11 03:35:34.320672763 +0000 UTC m=+0.196034384 container attach 24b579597efdc8f0ea564dd3f28c552ec8413d37a3318eb1e3faf409b332ce50 (image=quay.io/ceph/ceph:v18, name=elastic_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:35:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Oct 11 03:35:34 compute-0 systemd[1]: libpod-24b579597efdc8f0ea564dd3f28c552ec8413d37a3318eb1e3faf409b332ce50.scope: Deactivated successfully.
Oct 11 03:35:34 compute-0 podman[74274]: 2025-10-11 03:35:34.733070322 +0000 UTC m=+0.608431963 container died 24b579597efdc8f0ea564dd3f28c552ec8413d37a3318eb1e3faf409b332ce50 (image=quay.io/ceph/ceph:v18, name=elastic_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:35:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-57c0e3d9cc07b1358dd18561728ca614ddbf4656923a8232409373f8d6b33f99-merged.mount: Deactivated successfully.
Oct 11 03:35:34 compute-0 podman[74274]: 2025-10-11 03:35:34.787422608 +0000 UTC m=+0.662784209 container remove 24b579597efdc8f0ea564dd3f28c552ec8413d37a3318eb1e3faf409b332ce50 (image=quay.io/ceph/ceph:v18, name=elastic_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 03:35:34 compute-0 systemd[1]: libpod-conmon-24b579597efdc8f0ea564dd3f28c552ec8413d37a3318eb1e3faf409b332ce50.scope: Deactivated successfully.
Oct 11 03:35:34 compute-0 podman[74363]: 2025-10-11 03:35:34.892772287 +0000 UTC m=+0.077478638 container create b7325a9e55aec3a82729d8f932fdcf0463b31bd2a07b8e8090fa0020c480cd9a (image=quay.io/ceph/ceph:v18, name=fervent_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:35:34 compute-0 systemd[1]: Started libpod-conmon-b7325a9e55aec3a82729d8f932fdcf0463b31bd2a07b8e8090fa0020c480cd9a.scope.
Oct 11 03:35:34 compute-0 podman[74363]: 2025-10-11 03:35:34.857074758 +0000 UTC m=+0.041781179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:35:34 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eb42eeb7adf760b8f47b8d1fe9ece17c80ab95fd69e3dcf4ff808a6f654e5d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eb42eeb7adf760b8f47b8d1fe9ece17c80ab95fd69e3dcf4ff808a6f654e5d9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eb42eeb7adf760b8f47b8d1fe9ece17c80ab95fd69e3dcf4ff808a6f654e5d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:34 compute-0 podman[74363]: 2025-10-11 03:35:34.983269576 +0000 UTC m=+0.167975947 container init b7325a9e55aec3a82729d8f932fdcf0463b31bd2a07b8e8090fa0020c480cd9a (image=quay.io/ceph/ceph:v18, name=fervent_elgamal, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 03:35:34 compute-0 podman[74363]: 2025-10-11 03:35:34.994468896 +0000 UTC m=+0.179175267 container start b7325a9e55aec3a82729d8f932fdcf0463b31bd2a07b8e8090fa0020c480cd9a (image=quay.io/ceph/ceph:v18, name=fervent_elgamal, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:35:34 compute-0 podman[74363]: 2025-10-11 03:35:34.998136198 +0000 UTC m=+0.182842569 container attach b7325a9e55aec3a82729d8f932fdcf0463b31bd2a07b8e8090fa0020c480cd9a (image=quay.io/ceph/ceph:v18, name=fervent_elgamal, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 03:35:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Oct 11 03:35:35 compute-0 systemd[1]: libpod-b7325a9e55aec3a82729d8f932fdcf0463b31bd2a07b8e8090fa0020c480cd9a.scope: Deactivated successfully.
Oct 11 03:35:35 compute-0 podman[74363]: 2025-10-11 03:35:35.420236136 +0000 UTC m=+0.604942497 container died b7325a9e55aec3a82729d8f932fdcf0463b31bd2a07b8e8090fa0020c480cd9a (image=quay.io/ceph/ceph:v18, name=fervent_elgamal, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:35:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-9eb42eeb7adf760b8f47b8d1fe9ece17c80ab95fd69e3dcf4ff808a6f654e5d9-merged.mount: Deactivated successfully.
Oct 11 03:35:35 compute-0 podman[74363]: 2025-10-11 03:35:35.478532702 +0000 UTC m=+0.663239033 container remove b7325a9e55aec3a82729d8f932fdcf0463b31bd2a07b8e8090fa0020c480cd9a (image=quay.io/ceph/ceph:v18, name=fervent_elgamal, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct 11 03:35:35 compute-0 systemd[1]: libpod-conmon-b7325a9e55aec3a82729d8f932fdcf0463b31bd2a07b8e8090fa0020c480cd9a.scope: Deactivated successfully.
Oct 11 03:35:35 compute-0 systemd[1]: Reloading.
Oct 11 03:35:35 compute-0 systemd-sysv-generator[74448]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:35:35 compute-0 systemd-rc-local-generator[74441]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:35:35 compute-0 systemd[1]: Reloading.
Oct 11 03:35:35 compute-0 systemd-rc-local-generator[74480]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:35:35 compute-0 systemd-sysv-generator[74485]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:35:36 compute-0 systemd[1]: Starting Ceph mgr.compute-0.jhqlii for 23b68101-59a9-532f-ab6b-9acf78fb2162...
Oct 11 03:35:36 compute-0 podman[74543]: 2025-10-11 03:35:36.410668906 +0000 UTC m=+0.066069342 container create e47365f8d8930ef3ec515e4f06adf77a4aadf7c27e160e96d105714b9d44e3d3 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 03:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12284a1ac3c34de89090177db9b3925de234cd44539a36098c0875d3564021c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12284a1ac3c34de89090177db9b3925de234cd44539a36098c0875d3564021c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12284a1ac3c34de89090177db9b3925de234cd44539a36098c0875d3564021c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12284a1ac3c34de89090177db9b3925de234cd44539a36098c0875d3564021c2/merged/var/lib/ceph/mgr/ceph-compute-0.jhqlii supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:36 compute-0 podman[74543]: 2025-10-11 03:35:36.383312708 +0000 UTC m=+0.038713194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:35:36 compute-0 podman[74543]: 2025-10-11 03:35:36.493664236 +0000 UTC m=+0.149064732 container init e47365f8d8930ef3ec515e4f06adf77a4aadf7c27e160e96d105714b9d44e3d3 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:35:36 compute-0 podman[74543]: 2025-10-11 03:35:36.507285804 +0000 UTC m=+0.162686240 container start e47365f8d8930ef3ec515e4f06adf77a4aadf7c27e160e96d105714b9d44e3d3 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:35:36 compute-0 bash[74543]: e47365f8d8930ef3ec515e4f06adf77a4aadf7c27e160e96d105714b9d44e3d3
Oct 11 03:35:36 compute-0 systemd[1]: Started Ceph mgr.compute-0.jhqlii for 23b68101-59a9-532f-ab6b-9acf78fb2162.
Oct 11 03:35:36 compute-0 ceph-mgr[74563]: set uid:gid to 167:167 (ceph:ceph)
Oct 11 03:35:36 compute-0 ceph-mgr[74563]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct 11 03:35:36 compute-0 ceph-mgr[74563]: pidfile_write: ignore empty --pid-file
Oct 11 03:35:36 compute-0 podman[74564]: 2025-10-11 03:35:36.633275106 +0000 UTC m=+0.078294331 container create 43d7bf83368c07e6e17d6477f82a2194183d58c6181f6e1bb69b5d0028859082 (image=quay.io/ceph/ceph:v18, name=determined_hamilton, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct 11 03:35:36 compute-0 systemd[1]: Started libpod-conmon-43d7bf83368c07e6e17d6477f82a2194183d58c6181f6e1bb69b5d0028859082.scope.
Oct 11 03:35:36 compute-0 podman[74564]: 2025-10-11 03:35:36.60314483 +0000 UTC m=+0.048164105 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:35:36 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ca554708f4c78ad42ea670c66583617c39454d561a5e3d0bd502e7665d69977/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ca554708f4c78ad42ea670c66583617c39454d561a5e3d0bd502e7665d69977/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ca554708f4c78ad42ea670c66583617c39454d561a5e3d0bd502e7665d69977/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:36 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'alerts'
Oct 11 03:35:36 compute-0 podman[74564]: 2025-10-11 03:35:36.727508647 +0000 UTC m=+0.172527932 container init 43d7bf83368c07e6e17d6477f82a2194183d58c6181f6e1bb69b5d0028859082 (image=quay.io/ceph/ceph:v18, name=determined_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 03:35:36 compute-0 podman[74564]: 2025-10-11 03:35:36.740092676 +0000 UTC m=+0.185111861 container start 43d7bf83368c07e6e17d6477f82a2194183d58c6181f6e1bb69b5d0028859082 (image=quay.io/ceph/ceph:v18, name=determined_hamilton, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:35:36 compute-0 podman[74564]: 2025-10-11 03:35:36.743740407 +0000 UTC m=+0.188759682 container attach 43d7bf83368c07e6e17d6477f82a2194183d58c6181f6e1bb69b5d0028859082 (image=quay.io/ceph/ceph:v18, name=determined_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Oct 11 03:35:36 compute-0 ceph-mgr[74563]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 11 03:35:36 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'balancer'
Oct 11 03:35:36 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:35:36.996+0000 7f86a4c73140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 11 03:35:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 11 03:35:37 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4059342301' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 03:35:37 compute-0 determined_hamilton[74604]: 
Oct 11 03:35:37 compute-0 determined_hamilton[74604]: {
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:     "fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:     "health": {
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "status": "HEALTH_OK",
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "checks": {},
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "mutes": []
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:     },
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:     "election_epoch": 5,
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:     "quorum": [
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         0
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:     ],
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:     "quorum_names": [
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "compute-0"
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:     ],
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:     "quorum_age": 2,
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:     "monmap": {
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "epoch": 1,
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "min_mon_release_name": "reef",
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "num_mons": 1
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:     },
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:     "osdmap": {
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "epoch": 1,
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "num_osds": 0,
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "num_up_osds": 0,
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "osd_up_since": 0,
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "num_in_osds": 0,
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "osd_in_since": 0,
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "num_remapped_pgs": 0
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:     },
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:     "pgmap": {
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "pgs_by_state": [],
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "num_pgs": 0,
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "num_pools": 0,
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "num_objects": 0,
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "data_bytes": 0,
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "bytes_used": 0,
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "bytes_avail": 0,
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "bytes_total": 0
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:     },
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:     "fsmap": {
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "epoch": 1,
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "by_rank": [],
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "up:standby": 0
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:     },
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:     "mgrmap": {
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "available": false,
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "num_standbys": 0,
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "modules": [
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:             "iostat",
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:             "nfs",
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:             "restful"
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         ],
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "services": {}
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:     },
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:     "servicemap": {
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "epoch": 1,
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "modified": "2025-10-11T03:35:31.210977+0000",
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:         "services": {}
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:     },
Oct 11 03:35:37 compute-0 determined_hamilton[74604]:     "progress_events": {}
Oct 11 03:35:37 compute-0 determined_hamilton[74604]: }
Oct 11 03:35:37 compute-0 systemd[1]: libpod-43d7bf83368c07e6e17d6477f82a2194183d58c6181f6e1bb69b5d0028859082.scope: Deactivated successfully.
Oct 11 03:35:37 compute-0 podman[74564]: 2025-10-11 03:35:37.151501478 +0000 UTC m=+0.596520723 container died 43d7bf83368c07e6e17d6477f82a2194183d58c6181f6e1bb69b5d0028859082 (image=quay.io/ceph/ceph:v18, name=determined_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 03:35:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ca554708f4c78ad42ea670c66583617c39454d561a5e3d0bd502e7665d69977-merged.mount: Deactivated successfully.
Oct 11 03:35:37 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4059342301' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 03:35:37 compute-0 podman[74564]: 2025-10-11 03:35:37.216555991 +0000 UTC m=+0.661575186 container remove 43d7bf83368c07e6e17d6477f82a2194183d58c6181f6e1bb69b5d0028859082 (image=quay.io/ceph/ceph:v18, name=determined_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:35:37 compute-0 systemd[1]: libpod-conmon-43d7bf83368c07e6e17d6477f82a2194183d58c6181f6e1bb69b5d0028859082.scope: Deactivated successfully.
Oct 11 03:35:37 compute-0 ceph-mgr[74563]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 11 03:35:37 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'cephadm'
Oct 11 03:35:37 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:35:37.288+0000 7f86a4c73140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 11 03:35:39 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'crash'
Oct 11 03:35:39 compute-0 podman[74653]: 2025-10-11 03:35:39.337816931 +0000 UTC m=+0.087041293 container create f11196e7802a09995cf8456c4fd45aedf8d702482997c0b4746e62904b3297d2 (image=quay.io/ceph/ceph:v18, name=admiring_bardeen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 03:35:39 compute-0 systemd[1]: Started libpod-conmon-f11196e7802a09995cf8456c4fd45aedf8d702482997c0b4746e62904b3297d2.scope.
Oct 11 03:35:39 compute-0 podman[74653]: 2025-10-11 03:35:39.296943488 +0000 UTC m=+0.046167870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:35:39 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:35:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0655cc46f1df247569a1ba30bd9101ae8fb6d7d5cada1f5026082cd1e0e61275/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0655cc46f1df247569a1ba30bd9101ae8fb6d7d5cada1f5026082cd1e0e61275/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0655cc46f1df247569a1ba30bd9101ae8fb6d7d5cada1f5026082cd1e0e61275/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:39 compute-0 podman[74653]: 2025-10-11 03:35:39.426963332 +0000 UTC m=+0.176187694 container init f11196e7802a09995cf8456c4fd45aedf8d702482997c0b4746e62904b3297d2 (image=quay.io/ceph/ceph:v18, name=admiring_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 03:35:39 compute-0 podman[74653]: 2025-10-11 03:35:39.437950536 +0000 UTC m=+0.187174858 container start f11196e7802a09995cf8456c4fd45aedf8d702482997c0b4746e62904b3297d2 (image=quay.io/ceph/ceph:v18, name=admiring_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 03:35:39 compute-0 ceph-mgr[74563]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 11 03:35:39 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'dashboard'
Oct 11 03:35:39 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:35:39.437+0000 7f86a4c73140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 11 03:35:39 compute-0 podman[74653]: 2025-10-11 03:35:39.44206203 +0000 UTC m=+0.191286432 container attach f11196e7802a09995cf8456c4fd45aedf8d702482997c0b4746e62904b3297d2 (image=quay.io/ceph/ceph:v18, name=admiring_bardeen, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 03:35:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 11 03:35:39 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2100903107' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]: 
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]: {
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:     "fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:     "health": {
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "status": "HEALTH_OK",
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "checks": {},
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "mutes": []
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:     },
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:     "election_epoch": 5,
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:     "quorum": [
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         0
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:     ],
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:     "quorum_names": [
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "compute-0"
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:     ],
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:     "quorum_age": 5,
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:     "monmap": {
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "epoch": 1,
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "min_mon_release_name": "reef",
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "num_mons": 1
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:     },
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:     "osdmap": {
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "epoch": 1,
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "num_osds": 0,
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "num_up_osds": 0,
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "osd_up_since": 0,
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "num_in_osds": 0,
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "osd_in_since": 0,
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "num_remapped_pgs": 0
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:     },
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:     "pgmap": {
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "pgs_by_state": [],
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "num_pgs": 0,
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "num_pools": 0,
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "num_objects": 0,
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "data_bytes": 0,
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "bytes_used": 0,
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "bytes_avail": 0,
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "bytes_total": 0
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:     },
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:     "fsmap": {
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "epoch": 1,
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "by_rank": [],
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "up:standby": 0
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:     },
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:     "mgrmap": {
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "available": false,
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "num_standbys": 0,
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "modules": [
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:             "iostat",
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:             "nfs",
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:             "restful"
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         ],
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "services": {}
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:     },
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:     "servicemap": {
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "epoch": 1,
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "modified": "2025-10-11T03:35:31.210977+0000",
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:         "services": {}
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:     },
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]:     "progress_events": {}
Oct 11 03:35:39 compute-0 admiring_bardeen[74670]: }
Oct 11 03:35:39 compute-0 systemd[1]: libpod-f11196e7802a09995cf8456c4fd45aedf8d702482997c0b4746e62904b3297d2.scope: Deactivated successfully.
Oct 11 03:35:39 compute-0 podman[74653]: 2025-10-11 03:35:39.858629825 +0000 UTC m=+0.607854187 container died f11196e7802a09995cf8456c4fd45aedf8d702482997c0b4746e62904b3297d2 (image=quay.io/ceph/ceph:v18, name=admiring_bardeen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:35:39 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2100903107' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 03:35:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-0655cc46f1df247569a1ba30bd9101ae8fb6d7d5cada1f5026082cd1e0e61275-merged.mount: Deactivated successfully.
Oct 11 03:35:39 compute-0 podman[74653]: 2025-10-11 03:35:39.91327337 +0000 UTC m=+0.662497692 container remove f11196e7802a09995cf8456c4fd45aedf8d702482997c0b4746e62904b3297d2 (image=quay.io/ceph/ceph:v18, name=admiring_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:35:39 compute-0 systemd[1]: libpod-conmon-f11196e7802a09995cf8456c4fd45aedf8d702482997c0b4746e62904b3297d2.scope: Deactivated successfully.
Oct 11 03:35:40 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'devicehealth'
Oct 11 03:35:41 compute-0 ceph-mgr[74563]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 11 03:35:41 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'diskprediction_local'
Oct 11 03:35:41 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:35:41.057+0000 7f86a4c73140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 11 03:35:41 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 11 03:35:41 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 11 03:35:41 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]:   from numpy import show_config as show_numpy_config
Oct 11 03:35:41 compute-0 ceph-mgr[74563]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 11 03:35:41 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'influx'
Oct 11 03:35:41 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:35:41.579+0000 7f86a4c73140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 11 03:35:41 compute-0 ceph-mgr[74563]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 11 03:35:41 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'insights'
Oct 11 03:35:41 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:35:41.796+0000 7f86a4c73140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 11 03:35:41 compute-0 podman[74709]: 2025-10-11 03:35:41.992960187 +0000 UTC m=+0.052675931 container create e45c9d830efed017fbebe138d6cb84eb651bae923a2afbb0b0dafd9b83c9bcbd (image=quay.io/ceph/ceph:v18, name=brave_proskuriakova, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Oct 11 03:35:42 compute-0 systemd[1]: Started libpod-conmon-e45c9d830efed017fbebe138d6cb84eb651bae923a2afbb0b0dafd9b83c9bcbd.scope.
Oct 11 03:35:42 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'iostat'
Oct 11 03:35:42 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adc69a2e0b0eff7bb6750cf01e9b8cb1afb0329c85369f17e3337dc5d71ea58d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adc69a2e0b0eff7bb6750cf01e9b8cb1afb0329c85369f17e3337dc5d71ea58d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adc69a2e0b0eff7bb6750cf01e9b8cb1afb0329c85369f17e3337dc5d71ea58d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:42 compute-0 podman[74709]: 2025-10-11 03:35:41.966984317 +0000 UTC m=+0.026700161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:35:42 compute-0 podman[74709]: 2025-10-11 03:35:42.077202782 +0000 UTC m=+0.136918536 container init e45c9d830efed017fbebe138d6cb84eb651bae923a2afbb0b0dafd9b83c9bcbd (image=quay.io/ceph/ceph:v18, name=brave_proskuriakova, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct 11 03:35:42 compute-0 podman[74709]: 2025-10-11 03:35:42.088412713 +0000 UTC m=+0.148128447 container start e45c9d830efed017fbebe138d6cb84eb651bae923a2afbb0b0dafd9b83c9bcbd (image=quay.io/ceph/ceph:v18, name=brave_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:35:42 compute-0 podman[74709]: 2025-10-11 03:35:42.091821437 +0000 UTC m=+0.151537171 container attach e45c9d830efed017fbebe138d6cb84eb651bae923a2afbb0b0dafd9b83c9bcbd (image=quay.io/ceph/ceph:v18, name=brave_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:35:42 compute-0 ceph-mgr[74563]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 11 03:35:42 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'k8sevents'
Oct 11 03:35:42 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:35:42.254+0000 7f86a4c73140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 11 03:35:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 11 03:35:42 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/619283888' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]: 
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]: {
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:     "fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:     "health": {
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "status": "HEALTH_OK",
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "checks": {},
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "mutes": []
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:     },
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:     "election_epoch": 5,
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:     "quorum": [
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         0
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:     ],
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:     "quorum_names": [
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "compute-0"
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:     ],
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:     "quorum_age": 8,
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:     "monmap": {
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "epoch": 1,
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "min_mon_release_name": "reef",
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "num_mons": 1
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:     },
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:     "osdmap": {
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "epoch": 1,
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "num_osds": 0,
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "num_up_osds": 0,
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "osd_up_since": 0,
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "num_in_osds": 0,
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "osd_in_since": 0,
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "num_remapped_pgs": 0
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:     },
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:     "pgmap": {
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "pgs_by_state": [],
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "num_pgs": 0,
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "num_pools": 0,
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "num_objects": 0,
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "data_bytes": 0,
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "bytes_used": 0,
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "bytes_avail": 0,
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "bytes_total": 0
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:     },
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:     "fsmap": {
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "epoch": 1,
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "by_rank": [],
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "up:standby": 0
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:     },
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:     "mgrmap": {
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "available": false,
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "num_standbys": 0,
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "modules": [
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:             "iostat",
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:             "nfs",
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:             "restful"
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         ],
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "services": {}
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:     },
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:     "servicemap": {
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "epoch": 1,
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "modified": "2025-10-11T03:35:31.210977+0000",
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:         "services": {}
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:     },
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]:     "progress_events": {}
Oct 11 03:35:42 compute-0 brave_proskuriakova[74726]: }
Oct 11 03:35:42 compute-0 systemd[1]: libpod-e45c9d830efed017fbebe138d6cb84eb651bae923a2afbb0b0dafd9b83c9bcbd.scope: Deactivated successfully.
Oct 11 03:35:42 compute-0 podman[74709]: 2025-10-11 03:35:42.483936575 +0000 UTC m=+0.543652319 container died e45c9d830efed017fbebe138d6cb84eb651bae923a2afbb0b0dafd9b83c9bcbd (image=quay.io/ceph/ceph:v18, name=brave_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 03:35:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-adc69a2e0b0eff7bb6750cf01e9b8cb1afb0329c85369f17e3337dc5d71ea58d-merged.mount: Deactivated successfully.
Oct 11 03:35:42 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/619283888' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 03:35:42 compute-0 podman[74709]: 2025-10-11 03:35:42.533684644 +0000 UTC m=+0.593400368 container remove e45c9d830efed017fbebe138d6cb84eb651bae923a2afbb0b0dafd9b83c9bcbd (image=quay.io/ceph/ceph:v18, name=brave_proskuriakova, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 03:35:42 compute-0 systemd[1]: libpod-conmon-e45c9d830efed017fbebe138d6cb84eb651bae923a2afbb0b0dafd9b83c9bcbd.scope: Deactivated successfully.
Oct 11 03:35:43 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'localpool'
Oct 11 03:35:44 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'mds_autoscaler'
Oct 11 03:35:44 compute-0 podman[74764]: 2025-10-11 03:35:44.619908303 +0000 UTC m=+0.061265929 container create 523730f6042846e071fa88067ace23e3ad99ef8d97e47da0c00ae26ee26e5e64 (image=quay.io/ceph/ceph:v18, name=sharp_hawking, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 03:35:44 compute-0 systemd[1]: Started libpod-conmon-523730f6042846e071fa88067ace23e3ad99ef8d97e47da0c00ae26ee26e5e64.scope.
Oct 11 03:35:44 compute-0 podman[74764]: 2025-10-11 03:35:44.589860461 +0000 UTC m=+0.031218107 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:35:44 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:35:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5808d99e1ca4b1877e2bb60305ed1f48a4ce0b8cb81d4d5cde7e86bc97b8f9fb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5808d99e1ca4b1877e2bb60305ed1f48a4ce0b8cb81d4d5cde7e86bc97b8f9fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5808d99e1ca4b1877e2bb60305ed1f48a4ce0b8cb81d4d5cde7e86bc97b8f9fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:44 compute-0 podman[74764]: 2025-10-11 03:35:44.710011851 +0000 UTC m=+0.151369537 container init 523730f6042846e071fa88067ace23e3ad99ef8d97e47da0c00ae26ee26e5e64 (image=quay.io/ceph/ceph:v18, name=sharp_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 03:35:44 compute-0 podman[74764]: 2025-10-11 03:35:44.719770541 +0000 UTC m=+0.161128137 container start 523730f6042846e071fa88067ace23e3ad99ef8d97e47da0c00ae26ee26e5e64 (image=quay.io/ceph/ceph:v18, name=sharp_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:35:44 compute-0 podman[74764]: 2025-10-11 03:35:44.723285198 +0000 UTC m=+0.164642894 container attach 523730f6042846e071fa88067ace23e3ad99ef8d97e47da0c00ae26ee26e5e64 (image=quay.io/ceph/ceph:v18, name=sharp_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 03:35:44 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'mirroring'
Oct 11 03:35:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 11 03:35:45 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1493115921' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 03:35:45 compute-0 sharp_hawking[74780]: 
Oct 11 03:35:45 compute-0 sharp_hawking[74780]: {
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:     "fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:     "health": {
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "status": "HEALTH_OK",
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "checks": {},
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "mutes": []
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:     },
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:     "election_epoch": 5,
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:     "quorum": [
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         0
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:     ],
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:     "quorum_names": [
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "compute-0"
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:     ],
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:     "quorum_age": 10,
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:     "monmap": {
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "epoch": 1,
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "min_mon_release_name": "reef",
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "num_mons": 1
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:     },
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:     "osdmap": {
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "epoch": 1,
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "num_osds": 0,
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "num_up_osds": 0,
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "osd_up_since": 0,
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "num_in_osds": 0,
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "osd_in_since": 0,
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "num_remapped_pgs": 0
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:     },
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:     "pgmap": {
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "pgs_by_state": [],
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "num_pgs": 0,
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "num_pools": 0,
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "num_objects": 0,
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "data_bytes": 0,
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "bytes_used": 0,
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "bytes_avail": 0,
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "bytes_total": 0
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:     },
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:     "fsmap": {
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "epoch": 1,
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "by_rank": [],
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "up:standby": 0
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:     },
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:     "mgrmap": {
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "available": false,
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "num_standbys": 0,
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "modules": [
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:             "iostat",
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:             "nfs",
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:             "restful"
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         ],
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "services": {}
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:     },
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:     "servicemap": {
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "epoch": 1,
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "modified": "2025-10-11T03:35:31.210977+0000",
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:         "services": {}
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:     },
Oct 11 03:35:45 compute-0 sharp_hawking[74780]:     "progress_events": {}
Oct 11 03:35:45 compute-0 sharp_hawking[74780]: }
Oct 11 03:35:45 compute-0 systemd[1]: libpod-523730f6042846e071fa88067ace23e3ad99ef8d97e47da0c00ae26ee26e5e64.scope: Deactivated successfully.
Oct 11 03:35:45 compute-0 podman[74764]: 2025-10-11 03:35:45.099850865 +0000 UTC m=+0.541208441 container died 523730f6042846e071fa88067ace23e3ad99ef8d97e47da0c00ae26ee26e5e64 (image=quay.io/ceph/ceph:v18, name=sharp_hawking, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:35:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-5808d99e1ca4b1877e2bb60305ed1f48a4ce0b8cb81d4d5cde7e86bc97b8f9fb-merged.mount: Deactivated successfully.
Oct 11 03:35:45 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'nfs'
Oct 11 03:35:45 compute-0 podman[74764]: 2025-10-11 03:35:45.14008083 +0000 UTC m=+0.581438406 container remove 523730f6042846e071fa88067ace23e3ad99ef8d97e47da0c00ae26ee26e5e64 (image=quay.io/ceph/ceph:v18, name=sharp_hawking, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 03:35:45 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1493115921' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 03:35:45 compute-0 systemd[1]: libpod-conmon-523730f6042846e071fa88067ace23e3ad99ef8d97e47da0c00ae26ee26e5e64.scope: Deactivated successfully.
Oct 11 03:35:45 compute-0 ceph-mgr[74563]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 11 03:35:45 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'orchestrator'
Oct 11 03:35:45 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:35:45.750+0000 7f86a4c73140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 11 03:35:46 compute-0 ceph-mgr[74563]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 11 03:35:46 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'osd_perf_query'
Oct 11 03:35:46 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:35:46.347+0000 7f86a4c73140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 11 03:35:46 compute-0 ceph-mgr[74563]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 11 03:35:46 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'osd_support'
Oct 11 03:35:46 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:35:46.600+0000 7f86a4c73140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 11 03:35:46 compute-0 ceph-mgr[74563]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 11 03:35:46 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'pg_autoscaler'
Oct 11 03:35:46 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:35:46.842+0000 7f86a4c73140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 11 03:35:47 compute-0 ceph-mgr[74563]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 11 03:35:47 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'progress'
Oct 11 03:35:47 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:35:47.111+0000 7f86a4c73140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 11 03:35:47 compute-0 podman[74817]: 2025-10-11 03:35:47.221591898 +0000 UTC m=+0.054026568 container create 061af41aecfd2726738a51b7bafa608cc07d086274c44811e251b69bf928c07d (image=quay.io/ceph/ceph:v18, name=gracious_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 03:35:47 compute-0 systemd[1]: Started libpod-conmon-061af41aecfd2726738a51b7bafa608cc07d086274c44811e251b69bf928c07d.scope.
Oct 11 03:35:47 compute-0 podman[74817]: 2025-10-11 03:35:47.193925671 +0000 UTC m=+0.026360401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:35:47 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:35:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aef3ce46dbc0d5be8160e7f0727f7d45b879b7f5dc86b04735f4e15ce523a29c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aef3ce46dbc0d5be8160e7f0727f7d45b879b7f5dc86b04735f4e15ce523a29c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aef3ce46dbc0d5be8160e7f0727f7d45b879b7f5dc86b04735f4e15ce523a29c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:47 compute-0 podman[74817]: 2025-10-11 03:35:47.313501445 +0000 UTC m=+0.145936125 container init 061af41aecfd2726738a51b7bafa608cc07d086274c44811e251b69bf928c07d (image=quay.io/ceph/ceph:v18, name=gracious_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct 11 03:35:47 compute-0 podman[74817]: 2025-10-11 03:35:47.322567297 +0000 UTC m=+0.155001977 container start 061af41aecfd2726738a51b7bafa608cc07d086274c44811e251b69bf928c07d (image=quay.io/ceph/ceph:v18, name=gracious_haibt, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:35:47 compute-0 podman[74817]: 2025-10-11 03:35:47.326520606 +0000 UTC m=+0.158955296 container attach 061af41aecfd2726738a51b7bafa608cc07d086274c44811e251b69bf928c07d (image=quay.io/ceph/ceph:v18, name=gracious_haibt, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 03:35:47 compute-0 ceph-mgr[74563]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 11 03:35:47 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'prometheus'
Oct 11 03:35:47 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:35:47.354+0000 7f86a4c73140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 11 03:35:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 11 03:35:47 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2108770165' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 03:35:47 compute-0 gracious_haibt[74834]: 
Oct 11 03:35:47 compute-0 gracious_haibt[74834]: {
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:     "fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:     "health": {
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "status": "HEALTH_OK",
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "checks": {},
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "mutes": []
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:     },
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:     "election_epoch": 5,
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:     "quorum": [
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         0
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:     ],
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:     "quorum_names": [
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "compute-0"
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:     ],
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:     "quorum_age": 13,
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:     "monmap": {
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "epoch": 1,
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "min_mon_release_name": "reef",
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "num_mons": 1
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:     },
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:     "osdmap": {
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "epoch": 1,
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "num_osds": 0,
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "num_up_osds": 0,
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "osd_up_since": 0,
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "num_in_osds": 0,
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "osd_in_since": 0,
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "num_remapped_pgs": 0
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:     },
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:     "pgmap": {
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "pgs_by_state": [],
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "num_pgs": 0,
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "num_pools": 0,
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "num_objects": 0,
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "data_bytes": 0,
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "bytes_used": 0,
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "bytes_avail": 0,
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "bytes_total": 0
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:     },
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:     "fsmap": {
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "epoch": 1,
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "by_rank": [],
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "up:standby": 0
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:     },
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:     "mgrmap": {
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "available": false,
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "num_standbys": 0,
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "modules": [
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:             "iostat",
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:             "nfs",
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:             "restful"
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         ],
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "services": {}
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:     },
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:     "servicemap": {
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "epoch": 1,
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "modified": "2025-10-11T03:35:31.210977+0000",
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:         "services": {}
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:     },
Oct 11 03:35:47 compute-0 gracious_haibt[74834]:     "progress_events": {}
Oct 11 03:35:47 compute-0 gracious_haibt[74834]: }
Oct 11 03:35:47 compute-0 systemd[1]: libpod-061af41aecfd2726738a51b7bafa608cc07d086274c44811e251b69bf928c07d.scope: Deactivated successfully.
Oct 11 03:35:47 compute-0 podman[74817]: 2025-10-11 03:35:47.706377944 +0000 UTC m=+0.538812594 container died 061af41aecfd2726738a51b7bafa608cc07d086274c44811e251b69bf928c07d (image=quay.io/ceph/ceph:v18, name=gracious_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 03:35:47 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2108770165' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 03:35:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-aef3ce46dbc0d5be8160e7f0727f7d45b879b7f5dc86b04735f4e15ce523a29c-merged.mount: Deactivated successfully.
Oct 11 03:35:47 compute-0 podman[74817]: 2025-10-11 03:35:47.778988956 +0000 UTC m=+0.611423636 container remove 061af41aecfd2726738a51b7bafa608cc07d086274c44811e251b69bf928c07d (image=quay.io/ceph/ceph:v18, name=gracious_haibt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 03:35:47 compute-0 systemd[1]: libpod-conmon-061af41aecfd2726738a51b7bafa608cc07d086274c44811e251b69bf928c07d.scope: Deactivated successfully.
Oct 11 03:35:48 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:35:48.326+0000 7f86a4c73140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 11 03:35:48 compute-0 ceph-mgr[74563]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 11 03:35:48 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'rbd_support'
Oct 11 03:35:48 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:35:48.598+0000 7f86a4c73140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 11 03:35:48 compute-0 ceph-mgr[74563]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 11 03:35:48 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'restful'
Oct 11 03:35:49 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'rgw'
Oct 11 03:35:49 compute-0 podman[74874]: 2025-10-11 03:35:49.886427883 +0000 UTC m=+0.071662987 container create 8f2a15bb0e3d6c822e24dbea002b7dad2820ca4f68625e710cd0ba1c516855ef (image=quay.io/ceph/ceph:v18, name=frosty_hamilton, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:35:49 compute-0 systemd[1]: Started libpod-conmon-8f2a15bb0e3d6c822e24dbea002b7dad2820ca4f68625e710cd0ba1c516855ef.scope.
Oct 11 03:35:49 compute-0 podman[74874]: 2025-10-11 03:35:49.853596023 +0000 UTC m=+0.038831197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:35:49 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:35:49.964+0000 7f86a4c73140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 11 03:35:49 compute-0 ceph-mgr[74563]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 11 03:35:49 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'rook'
Oct 11 03:35:49 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:35:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d48a7886d0e2d4dbf636e95d80e7e20149389449250c4137a41d0cec548d8a41/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d48a7886d0e2d4dbf636e95d80e7e20149389449250c4137a41d0cec548d8a41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d48a7886d0e2d4dbf636e95d80e7e20149389449250c4137a41d0cec548d8a41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:50 compute-0 podman[74874]: 2025-10-11 03:35:50.008795935 +0000 UTC m=+0.194031019 container init 8f2a15bb0e3d6c822e24dbea002b7dad2820ca4f68625e710cd0ba1c516855ef (image=quay.io/ceph/ceph:v18, name=frosty_hamilton, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 03:35:50 compute-0 podman[74874]: 2025-10-11 03:35:50.018252747 +0000 UTC m=+0.203487841 container start 8f2a15bb0e3d6c822e24dbea002b7dad2820ca4f68625e710cd0ba1c516855ef (image=quay.io/ceph/ceph:v18, name=frosty_hamilton, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:35:50 compute-0 podman[74874]: 2025-10-11 03:35:50.022082363 +0000 UTC m=+0.207317457 container attach 8f2a15bb0e3d6c822e24dbea002b7dad2820ca4f68625e710cd0ba1c516855ef (image=quay.io/ceph/ceph:v18, name=frosty_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:35:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 11 03:35:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1773738111' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]: 
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]: {
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:     "fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:     "health": {
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "status": "HEALTH_OK",
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "checks": {},
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "mutes": []
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:     },
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:     "election_epoch": 5,
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:     "quorum": [
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         0
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:     ],
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:     "quorum_names": [
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "compute-0"
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:     ],
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:     "quorum_age": 16,
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:     "monmap": {
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "epoch": 1,
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "min_mon_release_name": "reef",
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "num_mons": 1
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:     },
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:     "osdmap": {
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "epoch": 1,
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "num_osds": 0,
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "num_up_osds": 0,
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "osd_up_since": 0,
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "num_in_osds": 0,
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "osd_in_since": 0,
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "num_remapped_pgs": 0
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:     },
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:     "pgmap": {
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "pgs_by_state": [],
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "num_pgs": 0,
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "num_pools": 0,
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "num_objects": 0,
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "data_bytes": 0,
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "bytes_used": 0,
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "bytes_avail": 0,
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "bytes_total": 0
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:     },
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:     "fsmap": {
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "epoch": 1,
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "by_rank": [],
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "up:standby": 0
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:     },
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:     "mgrmap": {
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "available": false,
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "num_standbys": 0,
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "modules": [
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:             "iostat",
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:             "nfs",
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:             "restful"
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         ],
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "services": {}
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:     },
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:     "servicemap": {
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "epoch": 1,
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "modified": "2025-10-11T03:35:31.210977+0000",
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:         "services": {}
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:     },
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]:     "progress_events": {}
Oct 11 03:35:50 compute-0 frosty_hamilton[74891]: }
Oct 11 03:35:50 compute-0 systemd[1]: libpod-8f2a15bb0e3d6c822e24dbea002b7dad2820ca4f68625e710cd0ba1c516855ef.scope: Deactivated successfully.
Oct 11 03:35:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1773738111' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 03:35:50 compute-0 podman[74917]: 2025-10-11 03:35:50.491950705 +0000 UTC m=+0.037573422 container died 8f2a15bb0e3d6c822e24dbea002b7dad2820ca4f68625e710cd0ba1c516855ef (image=quay.io/ceph/ceph:v18, name=frosty_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 03:35:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d48a7886d0e2d4dbf636e95d80e7e20149389449250c4137a41d0cec548d8a41-merged.mount: Deactivated successfully.
Oct 11 03:35:50 compute-0 podman[74917]: 2025-10-11 03:35:50.541859719 +0000 UTC m=+0.087482406 container remove 8f2a15bb0e3d6c822e24dbea002b7dad2820ca4f68625e710cd0ba1c516855ef (image=quay.io/ceph/ceph:v18, name=frosty_hamilton, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:35:50 compute-0 systemd[1]: libpod-conmon-8f2a15bb0e3d6c822e24dbea002b7dad2820ca4f68625e710cd0ba1c516855ef.scope: Deactivated successfully.
Oct 11 03:35:51 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:35:51.957+0000 7f86a4c73140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 11 03:35:51 compute-0 ceph-mgr[74563]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 11 03:35:51 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'selftest'
Oct 11 03:35:52 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:35:52.179+0000 7f86a4c73140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 11 03:35:52 compute-0 ceph-mgr[74563]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 11 03:35:52 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'snap_schedule'
Oct 11 03:35:52 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:35:52.423+0000 7f86a4c73140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 11 03:35:52 compute-0 ceph-mgr[74563]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 11 03:35:52 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'stats'
Oct 11 03:35:52 compute-0 podman[74933]: 2025-10-11 03:35:52.649844642 +0000 UTC m=+0.061799404 container create 4cb391bf69878efb1c60340b152e98e71bc0c47f58ee867e60dcc89990586d7e (image=quay.io/ceph/ceph:v18, name=elegant_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:35:52 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'status'
Oct 11 03:35:52 compute-0 systemd[1]: Started libpod-conmon-4cb391bf69878efb1c60340b152e98e71bc0c47f58ee867e60dcc89990586d7e.scope.
Oct 11 03:35:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:35:52 compute-0 podman[74933]: 2025-10-11 03:35:52.627298067 +0000 UTC m=+0.039252829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64eb07d30185ecb25fbed6a27d26b6aea8c18c5c3c87fa46975234f5cd30093e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64eb07d30185ecb25fbed6a27d26b6aea8c18c5c3c87fa46975234f5cd30093e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64eb07d30185ecb25fbed6a27d26b6aea8c18c5c3c87fa46975234f5cd30093e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:52 compute-0 podman[74933]: 2025-10-11 03:35:52.741199252 +0000 UTC m=+0.153154074 container init 4cb391bf69878efb1c60340b152e98e71bc0c47f58ee867e60dcc89990586d7e (image=quay.io/ceph/ceph:v18, name=elegant_brown, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 03:35:52 compute-0 podman[74933]: 2025-10-11 03:35:52.753837323 +0000 UTC m=+0.165792055 container start 4cb391bf69878efb1c60340b152e98e71bc0c47f58ee867e60dcc89990586d7e (image=quay.io/ceph/ceph:v18, name=elegant_brown, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 03:35:52 compute-0 podman[74933]: 2025-10-11 03:35:52.757501774 +0000 UTC m=+0.169456576 container attach 4cb391bf69878efb1c60340b152e98e71bc0c47f58ee867e60dcc89990586d7e (image=quay.io/ceph/ceph:v18, name=elegant_brown, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 03:35:52 compute-0 ceph-mgr[74563]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 11 03:35:52 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'telegraf'
Oct 11 03:35:52 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:35:52.955+0000 7f86a4c73140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 11 03:35:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 11 03:35:53 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3067224797' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 03:35:53 compute-0 elegant_brown[74949]: 
Oct 11 03:35:53 compute-0 elegant_brown[74949]: {
Oct 11 03:35:53 compute-0 elegant_brown[74949]:     "fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:35:53 compute-0 elegant_brown[74949]:     "health": {
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "status": "HEALTH_OK",
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "checks": {},
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "mutes": []
Oct 11 03:35:53 compute-0 elegant_brown[74949]:     },
Oct 11 03:35:53 compute-0 elegant_brown[74949]:     "election_epoch": 5,
Oct 11 03:35:53 compute-0 elegant_brown[74949]:     "quorum": [
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         0
Oct 11 03:35:53 compute-0 elegant_brown[74949]:     ],
Oct 11 03:35:53 compute-0 elegant_brown[74949]:     "quorum_names": [
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "compute-0"
Oct 11 03:35:53 compute-0 elegant_brown[74949]:     ],
Oct 11 03:35:53 compute-0 elegant_brown[74949]:     "quorum_age": 18,
Oct 11 03:35:53 compute-0 elegant_brown[74949]:     "monmap": {
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "epoch": 1,
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "min_mon_release_name": "reef",
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "num_mons": 1
Oct 11 03:35:53 compute-0 elegant_brown[74949]:     },
Oct 11 03:35:53 compute-0 elegant_brown[74949]:     "osdmap": {
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "epoch": 1,
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "num_osds": 0,
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "num_up_osds": 0,
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "osd_up_since": 0,
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "num_in_osds": 0,
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "osd_in_since": 0,
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "num_remapped_pgs": 0
Oct 11 03:35:53 compute-0 elegant_brown[74949]:     },
Oct 11 03:35:53 compute-0 elegant_brown[74949]:     "pgmap": {
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "pgs_by_state": [],
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "num_pgs": 0,
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "num_pools": 0,
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "num_objects": 0,
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "data_bytes": 0,
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "bytes_used": 0,
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "bytes_avail": 0,
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "bytes_total": 0
Oct 11 03:35:53 compute-0 elegant_brown[74949]:     },
Oct 11 03:35:53 compute-0 elegant_brown[74949]:     "fsmap": {
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "epoch": 1,
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "by_rank": [],
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "up:standby": 0
Oct 11 03:35:53 compute-0 elegant_brown[74949]:     },
Oct 11 03:35:53 compute-0 elegant_brown[74949]:     "mgrmap": {
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "available": false,
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "num_standbys": 0,
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "modules": [
Oct 11 03:35:53 compute-0 elegant_brown[74949]:             "iostat",
Oct 11 03:35:53 compute-0 elegant_brown[74949]:             "nfs",
Oct 11 03:35:53 compute-0 elegant_brown[74949]:             "restful"
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         ],
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "services": {}
Oct 11 03:35:53 compute-0 elegant_brown[74949]:     },
Oct 11 03:35:53 compute-0 elegant_brown[74949]:     "servicemap": {
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "epoch": 1,
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "modified": "2025-10-11T03:35:31.210977+0000",
Oct 11 03:35:53 compute-0 elegant_brown[74949]:         "services": {}
Oct 11 03:35:53 compute-0 elegant_brown[74949]:     },
Oct 11 03:35:53 compute-0 elegant_brown[74949]:     "progress_events": {}
Oct 11 03:35:53 compute-0 elegant_brown[74949]: }
Oct 11 03:35:53 compute-0 systemd[1]: libpod-4cb391bf69878efb1c60340b152e98e71bc0c47f58ee867e60dcc89990586d7e.scope: Deactivated successfully.
Oct 11 03:35:53 compute-0 podman[74933]: 2025-10-11 03:35:53.142183156 +0000 UTC m=+0.554137968 container died 4cb391bf69878efb1c60340b152e98e71bc0c47f58ee867e60dcc89990586d7e (image=quay.io/ceph/ceph:v18, name=elegant_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:35:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-64eb07d30185ecb25fbed6a27d26b6aea8c18c5c3c87fa46975234f5cd30093e-merged.mount: Deactivated successfully.
Oct 11 03:35:53 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3067224797' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 03:35:53 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:35:53.186+0000 7f86a4c73140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 11 03:35:53 compute-0 ceph-mgr[74563]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 11 03:35:53 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'telemetry'
Oct 11 03:35:53 compute-0 podman[74933]: 2025-10-11 03:35:53.194021822 +0000 UTC m=+0.605976554 container remove 4cb391bf69878efb1c60340b152e98e71bc0c47f58ee867e60dcc89990586d7e (image=quay.io/ceph/ceph:v18, name=elegant_brown, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:35:53 compute-0 systemd[1]: libpod-conmon-4cb391bf69878efb1c60340b152e98e71bc0c47f58ee867e60dcc89990586d7e.scope: Deactivated successfully.
Oct 11 03:35:53 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:35:53.783+0000 7f86a4c73140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 11 03:35:53 compute-0 ceph-mgr[74563]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 11 03:35:53 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'test_orchestrator'
Oct 11 03:35:54 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:35:54.419+0000 7f86a4c73140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 11 03:35:54 compute-0 ceph-mgr[74563]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 11 03:35:54 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'volumes'
Oct 11 03:35:55 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:35:55.096+0000 7f86a4c73140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'zabbix'
Oct 11 03:35:55 compute-0 podman[74989]: 2025-10-11 03:35:55.28737435 +0000 UTC m=+0.059958493 container create fa642af56d6b1916f6cb0053d175e2b6a188f7b7a42658063ef1fb6705115e52 (image=quay.io/ceph/ceph:v18, name=sleepy_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:35:55 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:35:55.325+0000 7f86a4c73140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 11 03:35:55 compute-0 systemd[1]: Started libpod-conmon-fa642af56d6b1916f6cb0053d175e2b6a188f7b7a42658063ef1fb6705115e52.scope.
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: ms_deliver_dispatch: unhandled message 0x5629a59df1e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct 11 03:35:55 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.jhqlii
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: mgr handle_mgr_map Activating!
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: mgr handle_mgr_map I am now activating
Oct 11 03:35:55 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.jhqlii(active, starting, since 0.0143608s)
Oct 11 03:35:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct 11 03:35:55 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2456549360' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 11 03:35:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).mds e1 all = 1
Oct 11 03:35:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 11 03:35:55 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2456549360' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 11 03:35:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct 11 03:35:55 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2456549360' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 11 03:35:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 11 03:35:55 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2456549360' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 11 03:35:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.jhqlii", "id": "compute-0.jhqlii"} v 0) v1
Oct 11 03:35:55 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2456549360' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "mgr metadata", "who": "compute-0.jhqlii", "id": "compute-0.jhqlii"}]: dispatch
Oct 11 03:35:55 compute-0 podman[74989]: 2025-10-11 03:35:55.26357293 +0000 UTC m=+0.036157103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:35:55 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:35:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93acb8969fbc4c1399670ed396da089e09bb64ddcc51a446e8aa1c27c101fac1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93acb8969fbc4c1399670ed396da089e09bb64ddcc51a446e8aa1c27c101fac1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93acb8969fbc4c1399670ed396da089e09bb64ddcc51a446e8aa1c27c101fac1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:55 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : Manager daemon compute-0.jhqlii is now available
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: mgr load Constructed class from module: balancer
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [balancer INFO root] Starting
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_03:35:55
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [balancer INFO root] No pools available
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: mgr load Constructed class from module: crash
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: mgr load Constructed class from module: devicehealth
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [devicehealth INFO root] Starting
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: mgr load Constructed class from module: iostat
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: mgr load Constructed class from module: nfs
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: mgr load Constructed class from module: orchestrator
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: mgr load Constructed class from module: pg_autoscaler
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: mgr load Constructed class from module: progress
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 03:35:55 compute-0 podman[74989]: 2025-10-11 03:35:55.387495665 +0000 UTC m=+0.160079868 container init fa642af56d6b1916f6cb0053d175e2b6a188f7b7a42658063ef1fb6705115e52 (image=quay.io/ceph/ceph:v18, name=sleepy_allen, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [progress INFO root] Loading...
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [progress INFO root] No stored events to load
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [progress INFO root] Loaded [] historic events
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [progress INFO root] Loaded OSDMap, ready.
Oct 11 03:35:55 compute-0 ceph-mon[74273]: Activating manager daemon compute-0.jhqlii
Oct 11 03:35:55 compute-0 ceph-mon[74273]: mgrmap e2: compute-0.jhqlii(active, starting, since 0.0143608s)
Oct 11 03:35:55 compute-0 ceph-mon[74273]: from='mgr.14102 192.168.122.100:0/2456549360' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 11 03:35:55 compute-0 ceph-mon[74273]: from='mgr.14102 192.168.122.100:0/2456549360' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 11 03:35:55 compute-0 ceph-mon[74273]: from='mgr.14102 192.168.122.100:0/2456549360' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 11 03:35:55 compute-0 ceph-mon[74273]: from='mgr.14102 192.168.122.100:0/2456549360' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 11 03:35:55 compute-0 ceph-mon[74273]: from='mgr.14102 192.168.122.100:0/2456549360' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "mgr metadata", "who": "compute-0.jhqlii", "id": "compute-0.jhqlii"}]: dispatch
Oct 11 03:35:55 compute-0 ceph-mon[74273]: Manager daemon compute-0.jhqlii is now available
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [rbd_support INFO root] recovery thread starting
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [rbd_support INFO root] starting setup
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: mgr load Constructed class from module: rbd_support
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: mgr load Constructed class from module: restful
Oct 11 03:35:55 compute-0 podman[74989]: 2025-10-11 03:35:55.398819409 +0000 UTC m=+0.171403562 container start fa642af56d6b1916f6cb0053d175e2b6a188f7b7a42658063ef1fb6705115e52 (image=quay.io/ceph/ceph:v18, name=sleepy_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [restful INFO root] server_addr: :: server_port: 8003
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [restful WARNING root] server not running: no certificate configured
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: mgr load Constructed class from module: status
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 03:35:55 compute-0 podman[74989]: 2025-10-11 03:35:55.402358037 +0000 UTC m=+0.174942340 container attach fa642af56d6b1916f6cb0053d175e2b6a188f7b7a42658063ef1fb6705115e52 (image=quay.io/ceph/ceph:v18, name=sleepy_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: mgr load Constructed class from module: telemetry
Oct 11 03:35:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jhqlii/mirror_snapshot_schedule"} v 0) v1
Oct 11 03:35:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2456549360' entity='mgr.compute-0.jhqlii' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jhqlii/mirror_snapshot_schedule"}]: dispatch
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 11 03:35:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [rbd_support INFO root] PerfHandler: starting
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TaskHandler: starting
Oct 11 03:35:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jhqlii/trash_purge_schedule"} v 0) v1
Oct 11 03:35:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2456549360' entity='mgr.compute-0.jhqlii' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jhqlii/trash_purge_schedule"}]: dispatch
Oct 11 03:35:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2456549360' entity='mgr.compute-0.jhqlii' 
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 03:35:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: [rbd_support INFO root] setup complete
Oct 11 03:35:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2456549360' entity='mgr.compute-0.jhqlii' 
Oct 11 03:35:55 compute-0 ceph-mgr[74563]: mgr load Constructed class from module: volumes
Oct 11 03:35:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Oct 11 03:35:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2456549360' entity='mgr.compute-0.jhqlii' 
Oct 11 03:35:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 11 03:35:55 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2948675985' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 03:35:55 compute-0 sleepy_allen[75005]: 
Oct 11 03:35:55 compute-0 sleepy_allen[75005]: {
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:     "fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:     "health": {
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "status": "HEALTH_OK",
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "checks": {},
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "mutes": []
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:     },
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:     "election_epoch": 5,
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:     "quorum": [
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         0
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:     ],
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:     "quorum_names": [
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "compute-0"
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:     ],
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:     "quorum_age": 21,
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:     "monmap": {
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "epoch": 1,
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "min_mon_release_name": "reef",
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "num_mons": 1
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:     },
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:     "osdmap": {
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "epoch": 1,
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "num_osds": 0,
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "num_up_osds": 0,
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "osd_up_since": 0,
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "num_in_osds": 0,
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "osd_in_since": 0,
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "num_remapped_pgs": 0
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:     },
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:     "pgmap": {
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "pgs_by_state": [],
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "num_pgs": 0,
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "num_pools": 0,
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "num_objects": 0,
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "data_bytes": 0,
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "bytes_used": 0,
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "bytes_avail": 0,
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "bytes_total": 0
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:     },
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:     "fsmap": {
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "epoch": 1,
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "by_rank": [],
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "up:standby": 0
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:     },
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:     "mgrmap": {
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "available": false,
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "num_standbys": 0,
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "modules": [
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:             "iostat",
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:             "nfs",
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:             "restful"
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         ],
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "services": {}
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:     },
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:     "servicemap": {
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "epoch": 1,
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "modified": "2025-10-11T03:35:31.210977+0000",
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:         "services": {}
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:     },
Oct 11 03:35:55 compute-0 sleepy_allen[75005]:     "progress_events": {}
Oct 11 03:35:55 compute-0 sleepy_allen[75005]: }
Oct 11 03:35:55 compute-0 systemd[1]: libpod-fa642af56d6b1916f6cb0053d175e2b6a188f7b7a42658063ef1fb6705115e52.scope: Deactivated successfully.
Oct 11 03:35:55 compute-0 podman[74989]: 2025-10-11 03:35:55.800320136 +0000 UTC m=+0.572904269 container died fa642af56d6b1916f6cb0053d175e2b6a188f7b7a42658063ef1fb6705115e52 (image=quay.io/ceph/ceph:v18, name=sleepy_allen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 03:35:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-93acb8969fbc4c1399670ed396da089e09bb64ddcc51a446e8aa1c27c101fac1-merged.mount: Deactivated successfully.
Oct 11 03:35:55 compute-0 podman[74989]: 2025-10-11 03:35:55.846479385 +0000 UTC m=+0.619063548 container remove fa642af56d6b1916f6cb0053d175e2b6a188f7b7a42658063ef1fb6705115e52 (image=quay.io/ceph/ceph:v18, name=sleepy_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 03:35:55 compute-0 systemd[1]: libpod-conmon-fa642af56d6b1916f6cb0053d175e2b6a188f7b7a42658063ef1fb6705115e52.scope: Deactivated successfully.
Oct 11 03:35:56 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.jhqlii(active, since 1.02845s)
Oct 11 03:35:56 compute-0 ceph-mon[74273]: from='mgr.14102 192.168.122.100:0/2456549360' entity='mgr.compute-0.jhqlii' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jhqlii/mirror_snapshot_schedule"}]: dispatch
Oct 11 03:35:56 compute-0 ceph-mon[74273]: from='mgr.14102 192.168.122.100:0/2456549360' entity='mgr.compute-0.jhqlii' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jhqlii/trash_purge_schedule"}]: dispatch
Oct 11 03:35:56 compute-0 ceph-mon[74273]: from='mgr.14102 192.168.122.100:0/2456549360' entity='mgr.compute-0.jhqlii' 
Oct 11 03:35:56 compute-0 ceph-mon[74273]: from='mgr.14102 192.168.122.100:0/2456549360' entity='mgr.compute-0.jhqlii' 
Oct 11 03:35:56 compute-0 ceph-mon[74273]: from='mgr.14102 192.168.122.100:0/2456549360' entity='mgr.compute-0.jhqlii' 
Oct 11 03:35:56 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2948675985' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 03:35:56 compute-0 ceph-mon[74273]: mgrmap e3: compute-0.jhqlii(active, since 1.02845s)
Oct 11 03:35:57 compute-0 ceph-mgr[74563]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 03:35:57 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.jhqlii(active, since 2s)
Oct 11 03:35:57 compute-0 podman[75123]: 2025-10-11 03:35:57.940393988 +0000 UTC m=+0.058386169 container create 90f2733cd4695acc203108c69917938781d7e995d7415c5dee031650868f714e (image=quay.io/ceph/ceph:v18, name=pensive_kepler, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:35:57 compute-0 systemd[1]: Started libpod-conmon-90f2733cd4695acc203108c69917938781d7e995d7415c5dee031650868f714e.scope.
Oct 11 03:35:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:35:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2e26f5891b1a4d5185d40fd86faefc592680311a819fcc519b6f5bb16e4af1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2e26f5891b1a4d5185d40fd86faefc592680311a819fcc519b6f5bb16e4af1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2e26f5891b1a4d5185d40fd86faefc592680311a819fcc519b6f5bb16e4af1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:58 compute-0 podman[75123]: 2025-10-11 03:35:57.922522363 +0000 UTC m=+0.040514564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:35:58 compute-0 podman[75123]: 2025-10-11 03:35:58.027291176 +0000 UTC m=+0.145283357 container init 90f2733cd4695acc203108c69917938781d7e995d7415c5dee031650868f714e (image=quay.io/ceph/ceph:v18, name=pensive_kepler, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:35:58 compute-0 podman[75123]: 2025-10-11 03:35:58.035905015 +0000 UTC m=+0.153897196 container start 90f2733cd4695acc203108c69917938781d7e995d7415c5dee031650868f714e (image=quay.io/ceph/ceph:v18, name=pensive_kepler, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:35:58 compute-0 podman[75123]: 2025-10-11 03:35:58.038747214 +0000 UTC m=+0.156739395 container attach 90f2733cd4695acc203108c69917938781d7e995d7415c5dee031650868f714e (image=quay.io/ceph/ceph:v18, name=pensive_kepler, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 03:35:58 compute-0 ceph-mon[74273]: mgrmap e4: compute-0.jhqlii(active, since 2s)
Oct 11 03:35:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 11 03:35:58 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1488605903' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 03:35:58 compute-0 pensive_kepler[75140]: 
Oct 11 03:35:58 compute-0 pensive_kepler[75140]: {
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:     "fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:     "health": {
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "status": "HEALTH_OK",
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "checks": {},
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "mutes": []
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:     },
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:     "election_epoch": 5,
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:     "quorum": [
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         0
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:     ],
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:     "quorum_names": [
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "compute-0"
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:     ],
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:     "quorum_age": 24,
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:     "monmap": {
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "epoch": 1,
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "min_mon_release_name": "reef",
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "num_mons": 1
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:     },
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:     "osdmap": {
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "epoch": 1,
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "num_osds": 0,
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "num_up_osds": 0,
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "osd_up_since": 0,
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "num_in_osds": 0,
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "osd_in_since": 0,
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "num_remapped_pgs": 0
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:     },
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:     "pgmap": {
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "pgs_by_state": [],
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "num_pgs": 0,
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "num_pools": 0,
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "num_objects": 0,
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "data_bytes": 0,
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "bytes_used": 0,
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "bytes_avail": 0,
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "bytes_total": 0
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:     },
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:     "fsmap": {
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "epoch": 1,
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "by_rank": [],
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "up:standby": 0
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:     },
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:     "mgrmap": {
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "available": true,
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "num_standbys": 0,
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "modules": [
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:             "iostat",
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:             "nfs",
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:             "restful"
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         ],
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "services": {}
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:     },
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:     "servicemap": {
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "epoch": 1,
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "modified": "2025-10-11T03:35:31.210977+0000",
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:         "services": {}
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:     },
Oct 11 03:35:58 compute-0 pensive_kepler[75140]:     "progress_events": {}
Oct 11 03:35:58 compute-0 pensive_kepler[75140]: }
Oct 11 03:35:58 compute-0 systemd[1]: libpod-90f2733cd4695acc203108c69917938781d7e995d7415c5dee031650868f714e.scope: Deactivated successfully.
Oct 11 03:35:58 compute-0 podman[75123]: 2025-10-11 03:35:58.652754521 +0000 UTC m=+0.770746752 container died 90f2733cd4695acc203108c69917938781d7e995d7415c5dee031650868f714e (image=quay.io/ceph/ceph:v18, name=pensive_kepler, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:35:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e2e26f5891b1a4d5185d40fd86faefc592680311a819fcc519b6f5bb16e4af1-merged.mount: Deactivated successfully.
Oct 11 03:35:58 compute-0 podman[75123]: 2025-10-11 03:35:58.702398977 +0000 UTC m=+0.820391168 container remove 90f2733cd4695acc203108c69917938781d7e995d7415c5dee031650868f714e (image=quay.io/ceph/ceph:v18, name=pensive_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 03:35:58 compute-0 systemd[1]: libpod-conmon-90f2733cd4695acc203108c69917938781d7e995d7415c5dee031650868f714e.scope: Deactivated successfully.
Oct 11 03:35:58 compute-0 podman[75179]: 2025-10-11 03:35:58.798569272 +0000 UTC m=+0.064166249 container create af5dca892d65e1977a7100b10b4cae94758eede487384bf598d36752bb6cf618 (image=quay.io/ceph/ceph:v18, name=charming_hermann, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:35:58 compute-0 systemd[1]: Started libpod-conmon-af5dca892d65e1977a7100b10b4cae94758eede487384bf598d36752bb6cf618.scope.
Oct 11 03:35:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:35:58 compute-0 podman[75179]: 2025-10-11 03:35:58.772268033 +0000 UTC m=+0.037865050 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:35:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ac7749540e7a25e93ce34910b1d2935500274839715f032aa2ec761e62b88ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ac7749540e7a25e93ce34910b1d2935500274839715f032aa2ec761e62b88ff/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ac7749540e7a25e93ce34910b1d2935500274839715f032aa2ec761e62b88ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ac7749540e7a25e93ce34910b1d2935500274839715f032aa2ec761e62b88ff/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:35:58 compute-0 podman[75179]: 2025-10-11 03:35:58.881512561 +0000 UTC m=+0.147109578 container init af5dca892d65e1977a7100b10b4cae94758eede487384bf598d36752bb6cf618 (image=quay.io/ceph/ceph:v18, name=charming_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 03:35:58 compute-0 podman[75179]: 2025-10-11 03:35:58.894013309 +0000 UTC m=+0.159610286 container start af5dca892d65e1977a7100b10b4cae94758eede487384bf598d36752bb6cf618 (image=quay.io/ceph/ceph:v18, name=charming_hermann, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:35:58 compute-0 podman[75179]: 2025-10-11 03:35:58.89783485 +0000 UTC m=+0.163431877 container attach af5dca892d65e1977a7100b10b4cae94758eede487384bf598d36752bb6cf618 (image=quay.io/ceph/ceph:v18, name=charming_hermann, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 03:35:59 compute-0 ceph-mgr[74563]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 03:35:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 11 03:35:59 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2541568419' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 11 03:35:59 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1488605903' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 03:35:59 compute-0 systemd[1]: libpod-af5dca892d65e1977a7100b10b4cae94758eede487384bf598d36752bb6cf618.scope: Deactivated successfully.
Oct 11 03:35:59 compute-0 podman[75179]: 2025-10-11 03:35:59.492040129 +0000 UTC m=+0.757637066 container died af5dca892d65e1977a7100b10b4cae94758eede487384bf598d36752bb6cf618 (image=quay.io/ceph/ceph:v18, name=charming_hermann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 03:36:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ac7749540e7a25e93ce34910b1d2935500274839715f032aa2ec761e62b88ff-merged.mount: Deactivated successfully.
Oct 11 03:36:00 compute-0 podman[75179]: 2025-10-11 03:36:00.11683652 +0000 UTC m=+1.382433487 container remove af5dca892d65e1977a7100b10b4cae94758eede487384bf598d36752bb6cf618 (image=quay.io/ceph/ceph:v18, name=charming_hermann, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 03:36:00 compute-0 systemd[1]: libpod-conmon-af5dca892d65e1977a7100b10b4cae94758eede487384bf598d36752bb6cf618.scope: Deactivated successfully.
Oct 11 03:36:00 compute-0 podman[75234]: 2025-10-11 03:36:00.210673732 +0000 UTC m=+0.061952368 container create 8d7ddd792d34379791a170858bb6f2b4bff74e86a8fc71c362f71401c949ecb4 (image=quay.io/ceph/ceph:v18, name=sad_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:36:00 compute-0 systemd[1]: Started libpod-conmon-8d7ddd792d34379791a170858bb6f2b4bff74e86a8fc71c362f71401c949ecb4.scope.
Oct 11 03:36:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf5eb26d3bb088e472d2c498d31d9d0bed9c71d896f04f58599e029a769b8443/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf5eb26d3bb088e472d2c498d31d9d0bed9c71d896f04f58599e029a769b8443/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf5eb26d3bb088e472d2c498d31d9d0bed9c71d896f04f58599e029a769b8443/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:00 compute-0 podman[75234]: 2025-10-11 03:36:00.188317006 +0000 UTC m=+0.039595612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:36:00 compute-0 podman[75234]: 2025-10-11 03:36:00.301107959 +0000 UTC m=+0.152386566 container init 8d7ddd792d34379791a170858bb6f2b4bff74e86a8fc71c362f71401c949ecb4 (image=quay.io/ceph/ceph:v18, name=sad_margulis, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 03:36:00 compute-0 podman[75234]: 2025-10-11 03:36:00.312832605 +0000 UTC m=+0.164111231 container start 8d7ddd792d34379791a170858bb6f2b4bff74e86a8fc71c362f71401c949ecb4 (image=quay.io/ceph/ceph:v18, name=sad_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 03:36:00 compute-0 podman[75234]: 2025-10-11 03:36:00.316970489 +0000 UTC m=+0.168249115 container attach 8d7ddd792d34379791a170858bb6f2b4bff74e86a8fc71c362f71401c949ecb4 (image=quay.io/ceph/ceph:v18, name=sad_margulis, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 03:36:00 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2541568419' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 11 03:36:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Oct 11 03:36:00 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2316030152' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct 11 03:36:01 compute-0 ceph-mgr[74563]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 03:36:01 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2316030152' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct 11 03:36:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2316030152' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct 11 03:36:01 compute-0 ceph-mgr[74563]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 11 03:36:01 compute-0 ceph-mgr[74563]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 11 03:36:01 compute-0 ceph-mgr[74563]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 11 03:36:01 compute-0 ceph-mgr[74563]: mgr respawn  1: '-n'
Oct 11 03:36:01 compute-0 ceph-mgr[74563]: mgr respawn  2: 'mgr.compute-0.jhqlii'
Oct 11 03:36:01 compute-0 ceph-mgr[74563]: mgr respawn  3: '-f'
Oct 11 03:36:01 compute-0 ceph-mgr[74563]: mgr respawn  4: '--setuser'
Oct 11 03:36:01 compute-0 ceph-mgr[74563]: mgr respawn  5: 'ceph'
Oct 11 03:36:01 compute-0 ceph-mgr[74563]: mgr respawn  6: '--setgroup'
Oct 11 03:36:01 compute-0 ceph-mgr[74563]: mgr respawn  7: 'ceph'
Oct 11 03:36:01 compute-0 ceph-mgr[74563]: mgr respawn  8: '--default-log-to-file=false'
Oct 11 03:36:01 compute-0 ceph-mgr[74563]: mgr respawn  9: '--default-log-to-journald=true'
Oct 11 03:36:01 compute-0 ceph-mgr[74563]: mgr respawn  10: '--default-log-to-stderr=false'
Oct 11 03:36:01 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.jhqlii(active, since 6s)
Oct 11 03:36:01 compute-0 ceph-mgr[74563]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct 11 03:36:01 compute-0 ceph-mgr[74563]: mgr respawn  exe_path /proc/self/exe
Oct 11 03:36:01 compute-0 systemd[1]: libpod-8d7ddd792d34379791a170858bb6f2b4bff74e86a8fc71c362f71401c949ecb4.scope: Deactivated successfully.
Oct 11 03:36:01 compute-0 conmon[75250]: conmon 8d7ddd792d34379791a1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8d7ddd792d34379791a170858bb6f2b4bff74e86a8fc71c362f71401c949ecb4.scope/container/memory.events
Oct 11 03:36:01 compute-0 podman[75234]: 2025-10-11 03:36:01.537844444 +0000 UTC m=+1.389123050 container died 8d7ddd792d34379791a170858bb6f2b4bff74e86a8fc71c362f71401c949ecb4 (image=quay.io/ceph/ceph:v18, name=sad_margulis, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 03:36:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf5eb26d3bb088e472d2c498d31d9d0bed9c71d896f04f58599e029a769b8443-merged.mount: Deactivated successfully.
Oct 11 03:36:01 compute-0 podman[75234]: 2025-10-11 03:36:01.600947737 +0000 UTC m=+1.452226333 container remove 8d7ddd792d34379791a170858bb6f2b4bff74e86a8fc71c362f71401c949ecb4 (image=quay.io/ceph/ceph:v18, name=sad_margulis, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:36:01 compute-0 systemd[1]: libpod-conmon-8d7ddd792d34379791a170858bb6f2b4bff74e86a8fc71c362f71401c949ecb4.scope: Deactivated successfully.
Oct 11 03:36:01 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: ignoring --setuser ceph since I am not root
Oct 11 03:36:01 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: ignoring --setgroup ceph since I am not root
Oct 11 03:36:01 compute-0 ceph-mgr[74563]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct 11 03:36:01 compute-0 ceph-mgr[74563]: pidfile_write: ignore empty --pid-file
Oct 11 03:36:01 compute-0 podman[75288]: 2025-10-11 03:36:01.662092062 +0000 UTC m=+0.040121512 container create 6de88af92a763cc9c16b397e77cbc8657d74600efefec8aebec32544df692b13 (image=quay.io/ceph/ceph:v18, name=boring_ritchie, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:36:01 compute-0 systemd[1]: Started libpod-conmon-6de88af92a763cc9c16b397e77cbc8657d74600efefec8aebec32544df692b13.scope.
Oct 11 03:36:01 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33e9e22639fd8cdd0f59a36c17377f3352dc92cd4558ed48878a661a0710089c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33e9e22639fd8cdd0f59a36c17377f3352dc92cd4558ed48878a661a0710089c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33e9e22639fd8cdd0f59a36c17377f3352dc92cd4558ed48878a661a0710089c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:01 compute-0 podman[75288]: 2025-10-11 03:36:01.645547036 +0000 UTC m=+0.023576516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:36:01 compute-0 podman[75288]: 2025-10-11 03:36:01.742544653 +0000 UTC m=+0.120574113 container init 6de88af92a763cc9c16b397e77cbc8657d74600efefec8aebec32544df692b13 (image=quay.io/ceph/ceph:v18, name=boring_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:36:01 compute-0 podman[75288]: 2025-10-11 03:36:01.748408036 +0000 UTC m=+0.126437486 container start 6de88af92a763cc9c16b397e77cbc8657d74600efefec8aebec32544df692b13 (image=quay.io/ceph/ceph:v18, name=boring_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 03:36:01 compute-0 podman[75288]: 2025-10-11 03:36:01.752253258 +0000 UTC m=+0.130282728 container attach 6de88af92a763cc9c16b397e77cbc8657d74600efefec8aebec32544df692b13 (image=quay.io/ceph/ceph:v18, name=boring_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:36:01 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'alerts'
Oct 11 03:36:02 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:36:02.078+0000 7f0d3a059140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 11 03:36:02 compute-0 ceph-mgr[74563]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 11 03:36:02 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'balancer'
Oct 11 03:36:02 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:36:02.344+0000 7f0d3a059140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 11 03:36:02 compute-0 ceph-mgr[74563]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 11 03:36:02 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'cephadm'
Oct 11 03:36:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 11 03:36:02 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/283238026' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 11 03:36:02 compute-0 boring_ritchie[75328]: {
Oct 11 03:36:02 compute-0 boring_ritchie[75328]:     "epoch": 5,
Oct 11 03:36:02 compute-0 boring_ritchie[75328]:     "available": true,
Oct 11 03:36:02 compute-0 boring_ritchie[75328]:     "active_name": "compute-0.jhqlii",
Oct 11 03:36:02 compute-0 boring_ritchie[75328]:     "num_standby": 0
Oct 11 03:36:02 compute-0 boring_ritchie[75328]: }
Oct 11 03:36:02 compute-0 systemd[1]: libpod-6de88af92a763cc9c16b397e77cbc8657d74600efefec8aebec32544df692b13.scope: Deactivated successfully.
Oct 11 03:36:02 compute-0 podman[75288]: 2025-10-11 03:36:02.376475278 +0000 UTC m=+0.754504728 container died 6de88af92a763cc9c16b397e77cbc8657d74600efefec8aebec32544df692b13 (image=quay.io/ceph/ceph:v18, name=boring_ritchie, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:36:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-33e9e22639fd8cdd0f59a36c17377f3352dc92cd4558ed48878a661a0710089c-merged.mount: Deactivated successfully.
Oct 11 03:36:02 compute-0 podman[75288]: 2025-10-11 03:36:02.437879543 +0000 UTC m=+0.815909033 container remove 6de88af92a763cc9c16b397e77cbc8657d74600efefec8aebec32544df692b13 (image=quay.io/ceph/ceph:v18, name=boring_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 03:36:02 compute-0 systemd[1]: libpod-conmon-6de88af92a763cc9c16b397e77cbc8657d74600efefec8aebec32544df692b13.scope: Deactivated successfully.
Oct 11 03:36:02 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2316030152' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct 11 03:36:02 compute-0 ceph-mon[74273]: mgrmap e5: compute-0.jhqlii(active, since 6s)
Oct 11 03:36:02 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/283238026' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 11 03:36:02 compute-0 podman[75367]: 2025-10-11 03:36:02.54696993 +0000 UTC m=+0.076958133 container create e0f30ce6f8b371ab5387f8725c2a663ef6480eb54d86fc799741f024b87195af (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 03:36:02 compute-0 systemd[1]: Started libpod-conmon-e0f30ce6f8b371ab5387f8725c2a663ef6480eb54d86fc799741f024b87195af.scope.
Oct 11 03:36:02 compute-0 podman[75367]: 2025-10-11 03:36:02.515819935 +0000 UTC m=+0.045808208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:36:02 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93022109c5295c84ecd012e9255f6cc8331fcdde4ea0e7f8d87c705c1f3d3c66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93022109c5295c84ecd012e9255f6cc8331fcdde4ea0e7f8d87c705c1f3d3c66/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93022109c5295c84ecd012e9255f6cc8331fcdde4ea0e7f8d87c705c1f3d3c66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:02 compute-0 podman[75367]: 2025-10-11 03:36:02.639951558 +0000 UTC m=+0.169939721 container init e0f30ce6f8b371ab5387f8725c2a663ef6480eb54d86fc799741f024b87195af (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 03:36:02 compute-0 podman[75367]: 2025-10-11 03:36:02.644417385 +0000 UTC m=+0.174405578 container start e0f30ce6f8b371ab5387f8725c2a663ef6480eb54d86fc799741f024b87195af (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 03:36:02 compute-0 podman[75367]: 2025-10-11 03:36:02.648209726 +0000 UTC m=+0.178197919 container attach e0f30ce6f8b371ab5387f8725c2a663ef6480eb54d86fc799741f024b87195af (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 03:36:04 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'crash'
Oct 11 03:36:04 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:36:04.586+0000 7f0d3a059140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 11 03:36:04 compute-0 ceph-mgr[74563]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 11 03:36:04 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'dashboard'
Oct 11 03:36:05 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'devicehealth'
Oct 11 03:36:06 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:36:06.185+0000 7f0d3a059140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 11 03:36:06 compute-0 ceph-mgr[74563]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 11 03:36:06 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'diskprediction_local'
Oct 11 03:36:06 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 11 03:36:06 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 11 03:36:06 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]:   from numpy import show_config as show_numpy_config
Oct 11 03:36:06 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:36:06.729+0000 7f0d3a059140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 11 03:36:06 compute-0 ceph-mgr[74563]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 11 03:36:06 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'influx'
Oct 11 03:36:06 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:36:06.963+0000 7f0d3a059140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 11 03:36:06 compute-0 ceph-mgr[74563]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 11 03:36:06 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'insights'
Oct 11 03:36:07 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'iostat'
Oct 11 03:36:07 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:36:07.425+0000 7f0d3a059140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 11 03:36:07 compute-0 ceph-mgr[74563]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 11 03:36:07 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'k8sevents'
Oct 11 03:36:09 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'localpool'
Oct 11 03:36:09 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'mds_autoscaler'
Oct 11 03:36:09 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'mirroring'
Oct 11 03:36:10 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'nfs'
Oct 11 03:36:10 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:36:10.909+0000 7f0d3a059140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 11 03:36:10 compute-0 ceph-mgr[74563]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 11 03:36:10 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'orchestrator'
Oct 11 03:36:11 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:36:11.565+0000 7f0d3a059140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 11 03:36:11 compute-0 ceph-mgr[74563]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 11 03:36:11 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'osd_perf_query'
Oct 11 03:36:11 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:36:11.807+0000 7f0d3a059140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 11 03:36:11 compute-0 ceph-mgr[74563]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 11 03:36:11 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'osd_support'
Oct 11 03:36:12 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:36:12.038+0000 7f0d3a059140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 11 03:36:12 compute-0 ceph-mgr[74563]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 11 03:36:12 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'pg_autoscaler'
Oct 11 03:36:12 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:36:12.297+0000 7f0d3a059140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 11 03:36:12 compute-0 ceph-mgr[74563]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 11 03:36:12 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'progress'
Oct 11 03:36:12 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:36:12.529+0000 7f0d3a059140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 11 03:36:12 compute-0 ceph-mgr[74563]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 11 03:36:12 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'prometheus'
Oct 11 03:36:13 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:36:13.517+0000 7f0d3a059140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 11 03:36:13 compute-0 ceph-mgr[74563]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 11 03:36:13 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'rbd_support'
Oct 11 03:36:13 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:36:13.825+0000 7f0d3a059140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 11 03:36:13 compute-0 ceph-mgr[74563]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 11 03:36:13 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'restful'
Oct 11 03:36:14 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'rgw'
Oct 11 03:36:15 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:36:15.222+0000 7f0d3a059140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 11 03:36:15 compute-0 ceph-mgr[74563]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 11 03:36:15 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'rook'
Oct 11 03:36:17 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:36:17.244+0000 7f0d3a059140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 11 03:36:17 compute-0 ceph-mgr[74563]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 11 03:36:17 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'selftest'
Oct 11 03:36:17 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:36:17.489+0000 7f0d3a059140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 11 03:36:17 compute-0 ceph-mgr[74563]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 11 03:36:17 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'snap_schedule'
Oct 11 03:36:17 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:36:17.744+0000 7f0d3a059140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 11 03:36:17 compute-0 ceph-mgr[74563]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 11 03:36:17 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'stats'
Oct 11 03:36:17 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'status'
Oct 11 03:36:18 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:36:18.249+0000 7f0d3a059140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 11 03:36:18 compute-0 ceph-mgr[74563]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 11 03:36:18 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'telegraf'
Oct 11 03:36:18 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:36:18.484+0000 7f0d3a059140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 11 03:36:18 compute-0 ceph-mgr[74563]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 11 03:36:18 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'telemetry'
Oct 11 03:36:19 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:36:19.067+0000 7f0d3a059140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 11 03:36:19 compute-0 ceph-mgr[74563]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 11 03:36:19 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'test_orchestrator'
Oct 11 03:36:19 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:36:19.711+0000 7f0d3a059140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 11 03:36:19 compute-0 ceph-mgr[74563]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 11 03:36:19 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'volumes'
Oct 11 03:36:20 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:36:20.396+0000 7f0d3a059140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: mgr[py] Loading python module 'zabbix'
Oct 11 03:36:20 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T03:36:20.631+0000 7f0d3a059140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 11 03:36:20 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : Active manager daemon compute-0.jhqlii restarted
Oct 11 03:36:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Oct 11 03:36:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 03:36:20 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.jhqlii
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: ms_deliver_dispatch: unhandled message 0x564d4ed131e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct 11 03:36:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 11 03:36:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 11 03:36:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Oct 11 03:36:20 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Oct 11 03:36:20 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.jhqlii(active, starting, since 0.0145751s)
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: mgr handle_mgr_map Activating!
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: mgr handle_mgr_map I am now activating
Oct 11 03:36:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 11 03:36:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 11 03:36:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.jhqlii", "id": "compute-0.jhqlii"} v 0) v1
Oct 11 03:36:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "mgr metadata", "who": "compute-0.jhqlii", "id": "compute-0.jhqlii"}]: dispatch
Oct 11 03:36:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct 11 03:36:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 11 03:36:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).mds e1 all = 1
Oct 11 03:36:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 11 03:36:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 11 03:36:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct 11 03:36:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: mgr load Constructed class from module: balancer
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Starting
Oct 11 03:36:20 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : Manager daemon compute-0.jhqlii is now available
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_03:36:20
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [balancer INFO root] No pools available
Oct 11 03:36:20 compute-0 ceph-mon[74273]: Active manager daemon compute-0.jhqlii restarted
Oct 11 03:36:20 compute-0 ceph-mon[74273]: Activating manager daemon compute-0.jhqlii
Oct 11 03:36:20 compute-0 ceph-mon[74273]: osdmap e2: 0 total, 0 up, 0 in
Oct 11 03:36:20 compute-0 ceph-mon[74273]: mgrmap e6: compute-0.jhqlii(active, starting, since 0.0145751s)
Oct 11 03:36:20 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 11 03:36:20 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "mgr metadata", "who": "compute-0.jhqlii", "id": "compute-0.jhqlii"}]: dispatch
Oct 11 03:36:20 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 11 03:36:20 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 11 03:36:20 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 11 03:36:20 compute-0 ceph-mon[74273]: Manager daemon compute-0.jhqlii is now available
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Oct 11 03:36:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Oct 11 03:36:20 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Oct 11 03:36:20 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: mgr load Constructed class from module: cephadm
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: mgr load Constructed class from module: crash
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: mgr load Constructed class from module: devicehealth
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: mgr load Constructed class from module: iostat
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [devicehealth INFO root] Starting
Oct 11 03:36:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 11 03:36:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: mgr load Constructed class from module: nfs
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: mgr load Constructed class from module: orchestrator
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: mgr load Constructed class from module: pg_autoscaler
Oct 11 03:36:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 11 03:36:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: mgr load Constructed class from module: progress
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [progress INFO root] Loading...
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [progress INFO root] No stored events to load
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [progress INFO root] Loaded [] historic events
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [progress INFO root] Loaded OSDMap, ready.
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] recovery thread starting
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] starting setup
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: mgr load Constructed class from module: rbd_support
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: mgr load Constructed class from module: restful
Oct 11 03:36:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jhqlii/mirror_snapshot_schedule"} v 0) v1
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [restful INFO root] server_addr: :: server_port: 8003
Oct 11 03:36:20 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jhqlii/mirror_snapshot_schedule"}]: dispatch
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: mgr load Constructed class from module: status
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: mgr load Constructed class from module: telemetry
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] PerfHandler: starting
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TaskHandler: starting
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [restful WARNING root] server not running: no certificate configured
Oct 11 03:36:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jhqlii/trash_purge_schedule"} v 0) v1
Oct 11 03:36:20 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jhqlii/trash_purge_schedule"}]: dispatch
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] setup complete
Oct 11 03:36:20 compute-0 ceph-mgr[74563]: mgr load Constructed class from module: volumes
Oct 11 03:36:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Oct 11 03:36:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Oct 11 03:36:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:21 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.jhqlii(active, since 1.02793s)
Oct 11 03:36:21 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct 11 03:36:21 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct 11 03:36:21 compute-0 infallible_mahavira[75384]: {
Oct 11 03:36:21 compute-0 infallible_mahavira[75384]:     "mgrmap_epoch": 7,
Oct 11 03:36:21 compute-0 infallible_mahavira[75384]:     "initialized": true
Oct 11 03:36:21 compute-0 infallible_mahavira[75384]: }
Oct 11 03:36:21 compute-0 systemd[1]: libpod-e0f30ce6f8b371ab5387f8725c2a663ef6480eb54d86fc799741f024b87195af.scope: Deactivated successfully.
Oct 11 03:36:21 compute-0 podman[75367]: 2025-10-11 03:36:21.708518637 +0000 UTC m=+19.238506830 container died e0f30ce6f8b371ab5387f8725c2a663ef6480eb54d86fc799741f024b87195af (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:36:21 compute-0 ceph-mon[74273]: Found migration_current of "None". Setting to last migration.
Oct 11 03:36:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 03:36:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 03:36:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jhqlii/mirror_snapshot_schedule"}]: dispatch
Oct 11 03:36:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jhqlii/trash_purge_schedule"}]: dispatch
Oct 11 03:36:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:21 compute-0 ceph-mon[74273]: mgrmap e7: compute-0.jhqlii(active, since 1.02793s)
Oct 11 03:36:21 compute-0 ceph-mon[74273]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct 11 03:36:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-93022109c5295c84ecd012e9255f6cc8331fcdde4ea0e7f8d87c705c1f3d3c66-merged.mount: Deactivated successfully.
Oct 11 03:36:21 compute-0 podman[75367]: 2025-10-11 03:36:21.755887816 +0000 UTC m=+19.285875979 container remove e0f30ce6f8b371ab5387f8725c2a663ef6480eb54d86fc799741f024b87195af (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:36:21 compute-0 systemd[1]: libpod-conmon-e0f30ce6f8b371ab5387f8725c2a663ef6480eb54d86fc799741f024b87195af.scope: Deactivated successfully.
Oct 11 03:36:21 compute-0 podman[75544]: 2025-10-11 03:36:21.85052985 +0000 UTC m=+0.064613854 container create c13aea04e59a9f806e0a7dd829244ef24f43b4df182b9ed6fa1e238e078d0d2c (image=quay.io/ceph/ceph:v18, name=sleepy_pascal, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:36:21 compute-0 systemd[1]: Started libpod-conmon-c13aea04e59a9f806e0a7dd829244ef24f43b4df182b9ed6fa1e238e078d0d2c.scope.
Oct 11 03:36:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b941386a42801e8915a5df9d7cafc66326888951d93ec4da4ee7a14af57782a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b941386a42801e8915a5df9d7cafc66326888951d93ec4da4ee7a14af57782a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b941386a42801e8915a5df9d7cafc66326888951d93ec4da4ee7a14af57782a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:21 compute-0 podman[75544]: 2025-10-11 03:36:21.821477748 +0000 UTC m=+0.035561792 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:36:21 compute-0 podman[75544]: 2025-10-11 03:36:21.924356878 +0000 UTC m=+0.138440862 container init c13aea04e59a9f806e0a7dd829244ef24f43b4df182b9ed6fa1e238e078d0d2c (image=quay.io/ceph/ceph:v18, name=sleepy_pascal, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 03:36:21 compute-0 podman[75544]: 2025-10-11 03:36:21.932640167 +0000 UTC m=+0.146724141 container start c13aea04e59a9f806e0a7dd829244ef24f43b4df182b9ed6fa1e238e078d0d2c (image=quay.io/ceph/ceph:v18, name=sleepy_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:36:21 compute-0 podman[75544]: 2025-10-11 03:36:21.935754721 +0000 UTC m=+0.149838695 container attach c13aea04e59a9f806e0a7dd829244ef24f43b4df182b9ed6fa1e238e078d0d2c (image=quay.io/ceph/ceph:v18, name=sleepy_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:36:22 compute-0 ceph-mgr[74563]: [cephadm INFO cherrypy.error] [11/Oct/2025:03:36:22] ENGINE Bus STARTING
Oct 11 03:36:22 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : [11/Oct/2025:03:36:22] ENGINE Bus STARTING
Oct 11 03:36:22 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:36:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Oct 11 03:36:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 11 03:36:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 03:36:22 compute-0 systemd[1]: libpod-c13aea04e59a9f806e0a7dd829244ef24f43b4df182b9ed6fa1e238e078d0d2c.scope: Deactivated successfully.
Oct 11 03:36:22 compute-0 podman[75544]: 2025-10-11 03:36:22.484211905 +0000 UTC m=+0.698295869 container died c13aea04e59a9f806e0a7dd829244ef24f43b4df182b9ed6fa1e238e078d0d2c (image=quay.io/ceph/ceph:v18, name=sleepy_pascal, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 03:36:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b941386a42801e8915a5df9d7cafc66326888951d93ec4da4ee7a14af57782a-merged.mount: Deactivated successfully.
Oct 11 03:36:22 compute-0 podman[75544]: 2025-10-11 03:36:22.521251864 +0000 UTC m=+0.735335828 container remove c13aea04e59a9f806e0a7dd829244ef24f43b4df182b9ed6fa1e238e078d0d2c (image=quay.io/ceph/ceph:v18, name=sleepy_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 03:36:22 compute-0 systemd[1]: libpod-conmon-c13aea04e59a9f806e0a7dd829244ef24f43b4df182b9ed6fa1e238e078d0d2c.scope: Deactivated successfully.
Oct 11 03:36:22 compute-0 ceph-mgr[74563]: [cephadm INFO cherrypy.error] [11/Oct/2025:03:36:22] ENGINE Serving on http://192.168.122.100:8765
Oct 11 03:36:22 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : [11/Oct/2025:03:36:22] ENGINE Serving on http://192.168.122.100:8765
Oct 11 03:36:22 compute-0 podman[75610]: 2025-10-11 03:36:22.601391163 +0000 UTC m=+0.052325757 container create 6a1f8df68a7597719c478de8261cd95656561c25315a621d91e97fff5d85c597 (image=quay.io/ceph/ceph:v18, name=epic_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:36:22 compute-0 systemd[1]: Started libpod-conmon-6a1f8df68a7597719c478de8261cd95656561c25315a621d91e97fff5d85c597.scope.
Oct 11 03:36:22 compute-0 ceph-mgr[74563]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 03:36:22 compute-0 ceph-mgr[74563]: [cephadm INFO cherrypy.error] [11/Oct/2025:03:36:22] ENGINE Serving on https://192.168.122.100:7150
Oct 11 03:36:22 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : [11/Oct/2025:03:36:22] ENGINE Serving on https://192.168.122.100:7150
Oct 11 03:36:22 compute-0 ceph-mgr[74563]: [cephadm INFO cherrypy.error] [11/Oct/2025:03:36:22] ENGINE Bus STARTED
Oct 11 03:36:22 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : [11/Oct/2025:03:36:22] ENGINE Bus STARTED
Oct 11 03:36:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 11 03:36:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 03:36:22 compute-0 ceph-mgr[74563]: [cephadm INFO cherrypy.error] [11/Oct/2025:03:36:22] ENGINE Client ('192.168.122.100', 60458) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 11 03:36:22 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : [11/Oct/2025:03:36:22] ENGINE Client ('192.168.122.100', 60458) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 11 03:36:22 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:22 compute-0 podman[75610]: 2025-10-11 03:36:22.579204183 +0000 UTC m=+0.030138807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:36:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e48cbc83d86e9efd0cb8833efe6697a681bac9e5bfd1d6a42ffe975e3739c182/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e48cbc83d86e9efd0cb8833efe6697a681bac9e5bfd1d6a42ffe975e3739c182/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e48cbc83d86e9efd0cb8833efe6697a681bac9e5bfd1d6a42ffe975e3739c182/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:22 compute-0 podman[75610]: 2025-10-11 03:36:22.695915452 +0000 UTC m=+0.146850096 container init 6a1f8df68a7597719c478de8261cd95656561c25315a621d91e97fff5d85c597 (image=quay.io/ceph/ceph:v18, name=epic_kirch, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 03:36:22 compute-0 podman[75610]: 2025-10-11 03:36:22.708591765 +0000 UTC m=+0.159526359 container start 6a1f8df68a7597719c478de8261cd95656561c25315a621d91e97fff5d85c597 (image=quay.io/ceph/ceph:v18, name=epic_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 03:36:22 compute-0 podman[75610]: 2025-10-11 03:36:22.712081183 +0000 UTC m=+0.163015847 container attach 6a1f8df68a7597719c478de8261cd95656561c25315a621d91e97fff5d85c597 (image=quay.io/ceph/ceph:v18, name=epic_kirch, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 03:36:23 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:36:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Oct 11 03:36:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:23 compute-0 ceph-mgr[74563]: [cephadm INFO root] Set ssh ssh_user
Oct 11 03:36:23 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Oct 11 03:36:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Oct 11 03:36:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:23 compute-0 ceph-mgr[74563]: [cephadm INFO root] Set ssh ssh_config
Oct 11 03:36:23 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Oct 11 03:36:23 compute-0 ceph-mgr[74563]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Oct 11 03:36:23 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Oct 11 03:36:23 compute-0 epic_kirch[75638]: ssh user set to ceph-admin. sudo will be used
Oct 11 03:36:23 compute-0 systemd[1]: libpod-6a1f8df68a7597719c478de8261cd95656561c25315a621d91e97fff5d85c597.scope: Deactivated successfully.
Oct 11 03:36:23 compute-0 podman[75610]: 2025-10-11 03:36:23.263580278 +0000 UTC m=+0.714514912 container died 6a1f8df68a7597719c478de8261cd95656561c25315a621d91e97fff5d85c597 (image=quay.io/ceph/ceph:v18, name=epic_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 03:36:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-e48cbc83d86e9efd0cb8833efe6697a681bac9e5bfd1d6a42ffe975e3739c182-merged.mount: Deactivated successfully.
Oct 11 03:36:23 compute-0 podman[75610]: 2025-10-11 03:36:23.311752069 +0000 UTC m=+0.762686693 container remove 6a1f8df68a7597719c478de8261cd95656561c25315a621d91e97fff5d85c597 (image=quay.io/ceph/ceph:v18, name=epic_kirch, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 03:36:23 compute-0 systemd[1]: libpod-conmon-6a1f8df68a7597719c478de8261cd95656561c25315a621d91e97fff5d85c597.scope: Deactivated successfully.
Oct 11 03:36:23 compute-0 podman[75678]: 2025-10-11 03:36:23.384380029 +0000 UTC m=+0.045438303 container create 6787248e5b69c4d25c3e38e2570e3853aa93b0d95bfe42e4ae49451d0d405f7e (image=quay.io/ceph/ceph:v18, name=gallant_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:36:23 compute-0 systemd[1]: Started libpod-conmon-6787248e5b69c4d25c3e38e2570e3853aa93b0d95bfe42e4ae49451d0d405f7e.scope.
Oct 11 03:36:23 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5b3ced6ca501945ba3464cccb528df426296d44e9e0b9b3f4906b5a80a90aa4/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5b3ced6ca501945ba3464cccb528df426296d44e9e0b9b3f4906b5a80a90aa4/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5b3ced6ca501945ba3464cccb528df426296d44e9e0b9b3f4906b5a80a90aa4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5b3ced6ca501945ba3464cccb528df426296d44e9e0b9b3f4906b5a80a90aa4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5b3ced6ca501945ba3464cccb528df426296d44e9e0b9b3f4906b5a80a90aa4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:23 compute-0 podman[75678]: 2025-10-11 03:36:23.368364454 +0000 UTC m=+0.029422748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:36:23 compute-0 ceph-mon[74273]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct 11 03:36:23 compute-0 ceph-mon[74273]: [11/Oct/2025:03:36:22] ENGINE Bus STARTING
Oct 11 03:36:23 compute-0 ceph-mon[74273]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:36:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 03:36:23 compute-0 ceph-mon[74273]: [11/Oct/2025:03:36:22] ENGINE Serving on http://192.168.122.100:8765
Oct 11 03:36:23 compute-0 ceph-mon[74273]: [11/Oct/2025:03:36:22] ENGINE Serving on https://192.168.122.100:7150
Oct 11 03:36:23 compute-0 ceph-mon[74273]: [11/Oct/2025:03:36:22] ENGINE Bus STARTED
Oct 11 03:36:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 03:36:23 compute-0 ceph-mon[74273]: [11/Oct/2025:03:36:22] ENGINE Client ('192.168.122.100', 60458) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 11 03:36:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:23 compute-0 podman[75678]: 2025-10-11 03:36:23.477019004 +0000 UTC m=+0.138077368 container init 6787248e5b69c4d25c3e38e2570e3853aa93b0d95bfe42e4ae49451d0d405f7e (image=quay.io/ceph/ceph:v18, name=gallant_shannon, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:36:23 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.jhqlii(active, since 2s)
Oct 11 03:36:23 compute-0 podman[75678]: 2025-10-11 03:36:23.48951949 +0000 UTC m=+0.150577804 container start 6787248e5b69c4d25c3e38e2570e3853aa93b0d95bfe42e4ae49451d0d405f7e (image=quay.io/ceph/ceph:v18, name=gallant_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 03:36:23 compute-0 podman[75678]: 2025-10-11 03:36:23.493034419 +0000 UTC m=+0.154092723 container attach 6787248e5b69c4d25c3e38e2570e3853aa93b0d95bfe42e4ae49451d0d405f7e (image=quay.io/ceph/ceph:v18, name=gallant_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 03:36:24 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:36:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Oct 11 03:36:24 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:24 compute-0 ceph-mgr[74563]: [cephadm INFO root] Set ssh ssh_identity_key
Oct 11 03:36:24 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Oct 11 03:36:24 compute-0 ceph-mgr[74563]: [cephadm INFO root] Set ssh private key
Oct 11 03:36:24 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Set ssh private key
Oct 11 03:36:24 compute-0 systemd[1]: libpod-6787248e5b69c4d25c3e38e2570e3853aa93b0d95bfe42e4ae49451d0d405f7e.scope: Deactivated successfully.
Oct 11 03:36:24 compute-0 podman[75678]: 2025-10-11 03:36:24.051374886 +0000 UTC m=+0.712433200 container died 6787248e5b69c4d25c3e38e2570e3853aa93b0d95bfe42e4ae49451d0d405f7e (image=quay.io/ceph/ceph:v18, name=gallant_shannon, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:36:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5b3ced6ca501945ba3464cccb528df426296d44e9e0b9b3f4906b5a80a90aa4-merged.mount: Deactivated successfully.
Oct 11 03:36:24 compute-0 podman[75678]: 2025-10-11 03:36:24.093937724 +0000 UTC m=+0.754996008 container remove 6787248e5b69c4d25c3e38e2570e3853aa93b0d95bfe42e4ae49451d0d405f7e (image=quay.io/ceph/ceph:v18, name=gallant_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 03:36:24 compute-0 systemd[1]: libpod-conmon-6787248e5b69c4d25c3e38e2570e3853aa93b0d95bfe42e4ae49451d0d405f7e.scope: Deactivated successfully.
Oct 11 03:36:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019921087 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:36:24 compute-0 podman[75732]: 2025-10-11 03:36:24.179404874 +0000 UTC m=+0.059458890 container create 33aa078ad494fb44c92cb1a57c4c7cd70f17f6b6b8822025df495cc84fde361d (image=quay.io/ceph/ceph:v18, name=trusting_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 03:36:24 compute-0 systemd[1]: Started libpod-conmon-33aa078ad494fb44c92cb1a57c4c7cd70f17f6b6b8822025df495cc84fde361d.scope.
Oct 11 03:36:24 compute-0 podman[75732]: 2025-10-11 03:36:24.151843771 +0000 UTC m=+0.031897877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:36:24 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c6d1d682f32fd83f4babbf43e8a48292cd542ae4c3fb84dcab2365fe177297e/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c6d1d682f32fd83f4babbf43e8a48292cd542ae4c3fb84dcab2365fe177297e/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c6d1d682f32fd83f4babbf43e8a48292cd542ae4c3fb84dcab2365fe177297e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c6d1d682f32fd83f4babbf43e8a48292cd542ae4c3fb84dcab2365fe177297e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c6d1d682f32fd83f4babbf43e8a48292cd542ae4c3fb84dcab2365fe177297e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:24 compute-0 podman[75732]: 2025-10-11 03:36:24.271262707 +0000 UTC m=+0.151316793 container init 33aa078ad494fb44c92cb1a57c4c7cd70f17f6b6b8822025df495cc84fde361d (image=quay.io/ceph/ceph:v18, name=trusting_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 03:36:24 compute-0 podman[75732]: 2025-10-11 03:36:24.280111378 +0000 UTC m=+0.160165424 container start 33aa078ad494fb44c92cb1a57c4c7cd70f17f6b6b8822025df495cc84fde361d (image=quay.io/ceph/ceph:v18, name=trusting_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 03:36:24 compute-0 podman[75732]: 2025-10-11 03:36:24.28394129 +0000 UTC m=+0.163995336 container attach 33aa078ad494fb44c92cb1a57c4c7cd70f17f6b6b8822025df495cc84fde361d (image=quay.io/ceph/ceph:v18, name=trusting_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 03:36:24 compute-0 ceph-mon[74273]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:36:24 compute-0 ceph-mon[74273]: Set ssh ssh_user
Oct 11 03:36:24 compute-0 ceph-mon[74273]: Set ssh ssh_config
Oct 11 03:36:24 compute-0 ceph-mon[74273]: ssh user set to ceph-admin. sudo will be used
Oct 11 03:36:24 compute-0 ceph-mon[74273]: mgrmap e8: compute-0.jhqlii(active, since 2s)
Oct 11 03:36:24 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:24 compute-0 ceph-mgr[74563]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 03:36:24 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:36:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Oct 11 03:36:24 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:24 compute-0 ceph-mgr[74563]: [cephadm INFO root] Set ssh ssh_identity_pub
Oct 11 03:36:24 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Oct 11 03:36:24 compute-0 systemd[1]: libpod-33aa078ad494fb44c92cb1a57c4c7cd70f17f6b6b8822025df495cc84fde361d.scope: Deactivated successfully.
Oct 11 03:36:24 compute-0 podman[75732]: 2025-10-11 03:36:24.849715501 +0000 UTC m=+0.729769557 container died 33aa078ad494fb44c92cb1a57c4c7cd70f17f6b6b8822025df495cc84fde361d (image=quay.io/ceph/ceph:v18, name=trusting_colden, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 03:36:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c6d1d682f32fd83f4babbf43e8a48292cd542ae4c3fb84dcab2365fe177297e-merged.mount: Deactivated successfully.
Oct 11 03:36:24 compute-0 podman[75732]: 2025-10-11 03:36:24.901538196 +0000 UTC m=+0.781592252 container remove 33aa078ad494fb44c92cb1a57c4c7cd70f17f6b6b8822025df495cc84fde361d (image=quay.io/ceph/ceph:v18, name=trusting_colden, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:36:24 compute-0 systemd[1]: libpod-conmon-33aa078ad494fb44c92cb1a57c4c7cd70f17f6b6b8822025df495cc84fde361d.scope: Deactivated successfully.
Oct 11 03:36:24 compute-0 podman[75785]: 2025-10-11 03:36:24.984137572 +0000 UTC m=+0.054481122 container create 04635f52a6f83f6dcdcb18394a75cc8d4493f131e380ad9029eb6477eb90ba03 (image=quay.io/ceph/ceph:v18, name=infallible_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 03:36:25 compute-0 systemd[1]: Started libpod-conmon-04635f52a6f83f6dcdcb18394a75cc8d4493f131e380ad9029eb6477eb90ba03.scope.
Oct 11 03:36:25 compute-0 podman[75785]: 2025-10-11 03:36:24.962049026 +0000 UTC m=+0.032392636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:36:25 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/932ad953aaa1e86e12df436c2d98bda5e3eecea9f1ab8d6b4172937485370cf7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/932ad953aaa1e86e12df436c2d98bda5e3eecea9f1ab8d6b4172937485370cf7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/932ad953aaa1e86e12df436c2d98bda5e3eecea9f1ab8d6b4172937485370cf7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:25 compute-0 podman[75785]: 2025-10-11 03:36:25.082070497 +0000 UTC m=+0.152414097 container init 04635f52a6f83f6dcdcb18394a75cc8d4493f131e380ad9029eb6477eb90ba03 (image=quay.io/ceph/ceph:v18, name=infallible_kepler, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 03:36:25 compute-0 podman[75785]: 2025-10-11 03:36:25.089775842 +0000 UTC m=+0.160119412 container start 04635f52a6f83f6dcdcb18394a75cc8d4493f131e380ad9029eb6477eb90ba03 (image=quay.io/ceph/ceph:v18, name=infallible_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 03:36:25 compute-0 podman[75785]: 2025-10-11 03:36:25.092831633 +0000 UTC m=+0.163175283 container attach 04635f52a6f83f6dcdcb18394a75cc8d4493f131e380ad9029eb6477eb90ba03 (image=quay.io/ceph/ceph:v18, name=infallible_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 03:36:25 compute-0 ceph-mon[74273]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:36:25 compute-0 ceph-mon[74273]: Set ssh ssh_identity_key
Oct 11 03:36:25 compute-0 ceph-mon[74273]: Set ssh private key
Oct 11 03:36:25 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:25 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:36:25 compute-0 infallible_kepler[75801]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCPGEHgctnaK17t+Ki1BNAKhSw72U/tHarv8fc97R5rkBJQZHk+XNCegp0DlSTUINA4FOl06MMf+cZ6zxcJTwNL/grtl5GHwGeqdimakBKByjOUORALUcqEOcarI0jvm42DTK85IO2cv5L7wYjEfEseP6LTK6vQuWAdg35YgARG0r1vzfg88ZlPBATX2xqaJWSPE9YKP6y5HA7OtP5hOZNIF7+Pv8eOhoiYCYhyErSMJXNuOOEjXjU46pTYm7SOjO58uSU3vIsdv2H+N4kRuW/ClKTHQvhOG78ZKKAj2t8atuhWTKEFsmRFm3qkHxITy+DEJaeKlcn4u2U9+D5SJxoXSUTDJQHj52hiYB9ARLw3zQ3rInPIldLIlRXwHK8FeoM5ArUG/RvHHQnfitMuSNsNEZruOq3ybfPuouA9IaLt0KeL7j4mGp3rsNNP2HeUcGLBYmvj3C3D5aZc+61cmeKvgPE3wS+2/xBzUCO3oZdGVIXqQuZwtZqLunt3cv3OqKk= zuul@controller
Oct 11 03:36:25 compute-0 systemd[1]: libpod-04635f52a6f83f6dcdcb18394a75cc8d4493f131e380ad9029eb6477eb90ba03.scope: Deactivated successfully.
Oct 11 03:36:25 compute-0 podman[75785]: 2025-10-11 03:36:25.628120015 +0000 UTC m=+0.698463605 container died 04635f52a6f83f6dcdcb18394a75cc8d4493f131e380ad9029eb6477eb90ba03 (image=quay.io/ceph/ceph:v18, name=infallible_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:36:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-932ad953aaa1e86e12df436c2d98bda5e3eecea9f1ab8d6b4172937485370cf7-merged.mount: Deactivated successfully.
Oct 11 03:36:25 compute-0 podman[75785]: 2025-10-11 03:36:25.684982211 +0000 UTC m=+0.755325791 container remove 04635f52a6f83f6dcdcb18394a75cc8d4493f131e380ad9029eb6477eb90ba03 (image=quay.io/ceph/ceph:v18, name=infallible_kepler, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Oct 11 03:36:25 compute-0 systemd[1]: libpod-conmon-04635f52a6f83f6dcdcb18394a75cc8d4493f131e380ad9029eb6477eb90ba03.scope: Deactivated successfully.
Oct 11 03:36:25 compute-0 podman[75840]: 2025-10-11 03:36:25.777062833 +0000 UTC m=+0.064859884 container create 02e981ea06bd849be41eccee76ad87cffb706531db7841ae23f4b28c58fc9ddd (image=quay.io/ceph/ceph:v18, name=focused_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 03:36:25 compute-0 systemd[1]: Started libpod-conmon-02e981ea06bd849be41eccee76ad87cffb706531db7841ae23f4b28c58fc9ddd.scope.
Oct 11 03:36:25 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:25 compute-0 podman[75840]: 2025-10-11 03:36:25.751645375 +0000 UTC m=+0.039442486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:36:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0171edd28d4990cb2352af98ccb662e8cfd13a6ce16613318c0622e23eb64fd2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0171edd28d4990cb2352af98ccb662e8cfd13a6ce16613318c0622e23eb64fd2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0171edd28d4990cb2352af98ccb662e8cfd13a6ce16613318c0622e23eb64fd2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:25 compute-0 podman[75840]: 2025-10-11 03:36:25.877616641 +0000 UTC m=+0.165413713 container init 02e981ea06bd849be41eccee76ad87cffb706531db7841ae23f4b28c58fc9ddd (image=quay.io/ceph/ceph:v18, name=focused_jennings, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:36:25 compute-0 podman[75840]: 2025-10-11 03:36:25.887527835 +0000 UTC m=+0.175324856 container start 02e981ea06bd849be41eccee76ad87cffb706531db7841ae23f4b28c58fc9ddd (image=quay.io/ceph/ceph:v18, name=focused_jennings, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:36:25 compute-0 podman[75840]: 2025-10-11 03:36:25.890674969 +0000 UTC m=+0.178472030 container attach 02e981ea06bd849be41eccee76ad87cffb706531db7841ae23f4b28c58fc9ddd (image=quay.io/ceph/ceph:v18, name=focused_jennings, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 03:36:26 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:36:26 compute-0 ceph-mon[74273]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:36:26 compute-0 ceph-mon[74273]: Set ssh ssh_identity_pub
Oct 11 03:36:26 compute-0 ceph-mon[74273]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:36:26 compute-0 sshd-session[75882]: Accepted publickey for ceph-admin from 192.168.122.100 port 39514 ssh2: RSA SHA256:zq0SbJ37OVxJQ9NCID+839O2GCdjjA3YZoJ895MeqUE
Oct 11 03:36:26 compute-0 ceph-mgr[74563]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 03:36:26 compute-0 systemd-logind[820]: New session 21 of user ceph-admin.
Oct 11 03:36:26 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Oct 11 03:36:26 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 11 03:36:26 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 11 03:36:26 compute-0 systemd[1]: Starting User Manager for UID 42477...
Oct 11 03:36:26 compute-0 systemd[75886]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 03:36:26 compute-0 systemd[75886]: Queued start job for default target Main User Target.
Oct 11 03:36:26 compute-0 systemd[75886]: Created slice User Application Slice.
Oct 11 03:36:26 compute-0 systemd[75886]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 11 03:36:26 compute-0 systemd[75886]: Started Daily Cleanup of User's Temporary Directories.
Oct 11 03:36:26 compute-0 systemd[75886]: Reached target Paths.
Oct 11 03:36:26 compute-0 systemd[75886]: Reached target Timers.
Oct 11 03:36:26 compute-0 systemd[75886]: Starting D-Bus User Message Bus Socket...
Oct 11 03:36:26 compute-0 systemd[75886]: Starting Create User's Volatile Files and Directories...
Oct 11 03:36:26 compute-0 sshd-session[75899]: Accepted publickey for ceph-admin from 192.168.122.100 port 39524 ssh2: RSA SHA256:zq0SbJ37OVxJQ9NCID+839O2GCdjjA3YZoJ895MeqUE
Oct 11 03:36:26 compute-0 systemd[75886]: Listening on D-Bus User Message Bus Socket.
Oct 11 03:36:26 compute-0 systemd[75886]: Reached target Sockets.
Oct 11 03:36:26 compute-0 systemd-logind[820]: New session 23 of user ceph-admin.
Oct 11 03:36:26 compute-0 systemd[75886]: Finished Create User's Volatile Files and Directories.
Oct 11 03:36:26 compute-0 systemd[75886]: Reached target Basic System.
Oct 11 03:36:26 compute-0 systemd[75886]: Reached target Main User Target.
Oct 11 03:36:26 compute-0 systemd[75886]: Startup finished in 148ms.
Oct 11 03:36:26 compute-0 systemd[1]: Started User Manager for UID 42477.
Oct 11 03:36:26 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Oct 11 03:36:26 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Oct 11 03:36:26 compute-0 sshd-session[75882]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 03:36:26 compute-0 sshd-session[75899]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 03:36:27 compute-0 sudo[75906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:27 compute-0 sudo[75906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:27 compute-0 sudo[75906]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:27 compute-0 sudo[75931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:36:27 compute-0 sudo[75931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:27 compute-0 sudo[75931]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:27 compute-0 sshd-session[75956]: Accepted publickey for ceph-admin from 192.168.122.100 port 39532 ssh2: RSA SHA256:zq0SbJ37OVxJQ9NCID+839O2GCdjjA3YZoJ895MeqUE
Oct 11 03:36:27 compute-0 systemd-logind[820]: New session 24 of user ceph-admin.
Oct 11 03:36:27 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Oct 11 03:36:27 compute-0 sshd-session[75956]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 03:36:27 compute-0 ceph-mon[74273]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:36:27 compute-0 sudo[75960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:27 compute-0 sudo[75960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:27 compute-0 sudo[75960]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:27 compute-0 sudo[75985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Oct 11 03:36:27 compute-0 sudo[75985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:27 compute-0 sudo[75985]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:27 compute-0 sshd-session[76010]: Accepted publickey for ceph-admin from 192.168.122.100 port 41868 ssh2: RSA SHA256:zq0SbJ37OVxJQ9NCID+839O2GCdjjA3YZoJ895MeqUE
Oct 11 03:36:27 compute-0 systemd-logind[820]: New session 25 of user ceph-admin.
Oct 11 03:36:27 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Oct 11 03:36:27 compute-0 sshd-session[76010]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 03:36:27 compute-0 sudo[76014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:27 compute-0 sudo[76014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:27 compute-0 sudo[76014]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:28 compute-0 sudo[76039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Oct 11 03:36:28 compute-0 sudo[76039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:28 compute-0 sudo[76039]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:28 compute-0 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Oct 11 03:36:28 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Oct 11 03:36:28 compute-0 sshd-session[76064]: Accepted publickey for ceph-admin from 192.168.122.100 port 41880 ssh2: RSA SHA256:zq0SbJ37OVxJQ9NCID+839O2GCdjjA3YZoJ895MeqUE
Oct 11 03:36:28 compute-0 systemd-logind[820]: New session 26 of user ceph-admin.
Oct 11 03:36:28 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Oct 11 03:36:28 compute-0 sshd-session[76064]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 03:36:28 compute-0 sudo[76068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:28 compute-0 sudo[76068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:28 compute-0 sudo[76068]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:28 compute-0 sudo[76093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162
Oct 11 03:36:28 compute-0 sudo[76093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:28 compute-0 sudo[76093]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:28 compute-0 ceph-mgr[74563]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 03:36:28 compute-0 ceph-mon[74273]: Deploying cephadm binary to compute-0
Oct 11 03:36:28 compute-0 sshd-session[76118]: Accepted publickey for ceph-admin from 192.168.122.100 port 41890 ssh2: RSA SHA256:zq0SbJ37OVxJQ9NCID+839O2GCdjjA3YZoJ895MeqUE
Oct 11 03:36:28 compute-0 systemd-logind[820]: New session 27 of user ceph-admin.
Oct 11 03:36:28 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Oct 11 03:36:28 compute-0 sshd-session[76118]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 03:36:28 compute-0 sudo[76122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:28 compute-0 sudo[76122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:28 compute-0 sudo[76122]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:29 compute-0 sudo[76147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162/var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162
Oct 11 03:36:29 compute-0 sudo[76147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:29 compute-0 sudo[76147]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053027 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:36:29 compute-0 sshd-session[76172]: Accepted publickey for ceph-admin from 192.168.122.100 port 41904 ssh2: RSA SHA256:zq0SbJ37OVxJQ9NCID+839O2GCdjjA3YZoJ895MeqUE
Oct 11 03:36:29 compute-0 systemd-logind[820]: New session 28 of user ceph-admin.
Oct 11 03:36:29 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Oct 11 03:36:29 compute-0 sshd-session[76172]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 03:36:29 compute-0 sudo[76176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:29 compute-0 sudo[76176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:29 compute-0 sudo[76176]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:29 compute-0 sudo[76201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162/var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Oct 11 03:36:29 compute-0 sudo[76201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:29 compute-0 sudo[76201]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:29 compute-0 sshd-session[76226]: Accepted publickey for ceph-admin from 192.168.122.100 port 41914 ssh2: RSA SHA256:zq0SbJ37OVxJQ9NCID+839O2GCdjjA3YZoJ895MeqUE
Oct 11 03:36:29 compute-0 systemd-logind[820]: New session 29 of user ceph-admin.
Oct 11 03:36:29 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Oct 11 03:36:29 compute-0 sshd-session[76226]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 03:36:29 compute-0 sudo[76230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:29 compute-0 sudo[76230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:29 compute-0 sudo[76230]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:30 compute-0 sudo[76255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162
Oct 11 03:36:30 compute-0 sudo[76255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:30 compute-0 sudo[76255]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:30 compute-0 sshd-session[76280]: Accepted publickey for ceph-admin from 192.168.122.100 port 41930 ssh2: RSA SHA256:zq0SbJ37OVxJQ9NCID+839O2GCdjjA3YZoJ895MeqUE
Oct 11 03:36:30 compute-0 systemd-logind[820]: New session 30 of user ceph-admin.
Oct 11 03:36:30 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Oct 11 03:36:30 compute-0 sshd-session[76280]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 03:36:30 compute-0 sudo[76284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:30 compute-0 sudo[76284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:30 compute-0 sudo[76284]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:30 compute-0 sudo[76309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162/var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Oct 11 03:36:30 compute-0 sudo[76309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:30 compute-0 sudo[76309]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:30 compute-0 ceph-mgr[74563]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 03:36:30 compute-0 sshd-session[76334]: Accepted publickey for ceph-admin from 192.168.122.100 port 41932 ssh2: RSA SHA256:zq0SbJ37OVxJQ9NCID+839O2GCdjjA3YZoJ895MeqUE
Oct 11 03:36:30 compute-0 systemd-logind[820]: New session 31 of user ceph-admin.
Oct 11 03:36:30 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Oct 11 03:36:30 compute-0 sshd-session[76334]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 03:36:31 compute-0 sshd-session[76361]: Accepted publickey for ceph-admin from 192.168.122.100 port 41942 ssh2: RSA SHA256:zq0SbJ37OVxJQ9NCID+839O2GCdjjA3YZoJ895MeqUE
Oct 11 03:36:31 compute-0 systemd-logind[820]: New session 32 of user ceph-admin.
Oct 11 03:36:31 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Oct 11 03:36:31 compute-0 sshd-session[76361]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 03:36:31 compute-0 sudo[76365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:31 compute-0 sudo[76365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:31 compute-0 sudo[76365]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:31 compute-0 sudo[76390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162/var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Oct 11 03:36:31 compute-0 sudo[76390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:31 compute-0 sudo[76390]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:31 compute-0 sshd-session[76415]: Accepted publickey for ceph-admin from 192.168.122.100 port 41954 ssh2: RSA SHA256:zq0SbJ37OVxJQ9NCID+839O2GCdjjA3YZoJ895MeqUE
Oct 11 03:36:31 compute-0 systemd-logind[820]: New session 33 of user ceph-admin.
Oct 11 03:36:31 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Oct 11 03:36:31 compute-0 sshd-session[76415]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 11 03:36:31 compute-0 sudo[76419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:31 compute-0 sudo[76419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:31 compute-0 sudo[76419]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:32 compute-0 sudo[76444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Oct 11 03:36:32 compute-0 sudo[76444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:32 compute-0 sudo[76444]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 11 03:36:32 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:32 compute-0 ceph-mgr[74563]: [cephadm INFO root] Added host compute-0
Oct 11 03:36:32 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Added host compute-0
Oct 11 03:36:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 11 03:36:32 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 03:36:32 compute-0 focused_jennings[75856]: Added host 'compute-0' with addr '192.168.122.100'
Oct 11 03:36:32 compute-0 systemd[1]: libpod-02e981ea06bd849be41eccee76ad87cffb706531db7841ae23f4b28c58fc9ddd.scope: Deactivated successfully.
Oct 11 03:36:32 compute-0 podman[75840]: 2025-10-11 03:36:32.385375856 +0000 UTC m=+6.673172917 container died 02e981ea06bd849be41eccee76ad87cffb706531db7841ae23f4b28c58fc9ddd (image=quay.io/ceph/ceph:v18, name=focused_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:36:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-0171edd28d4990cb2352af98ccb662e8cfd13a6ce16613318c0622e23eb64fd2-merged.mount: Deactivated successfully.
Oct 11 03:36:32 compute-0 sudo[76491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:32 compute-0 sudo[76491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:32 compute-0 podman[75840]: 2025-10-11 03:36:32.449678937 +0000 UTC m=+6.737475998 container remove 02e981ea06bd849be41eccee76ad87cffb706531db7841ae23f4b28c58fc9ddd (image=quay.io/ceph/ceph:v18, name=focused_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 03:36:32 compute-0 sudo[76491]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:32 compute-0 systemd[1]: libpod-conmon-02e981ea06bd849be41eccee76ad87cffb706531db7841ae23f4b28c58fc9ddd.scope: Deactivated successfully.
Oct 11 03:36:32 compute-0 sudo[76529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:36:32 compute-0 sudo[76529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:32 compute-0 sudo[76529]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:32 compute-0 podman[76530]: 2025-10-11 03:36:32.552257255 +0000 UTC m=+0.069989567 container create f6217e9f2e138a5e50ecc061ebb56b98bceb8e1ca55c7b8c5865275d26491541 (image=quay.io/ceph/ceph:v18, name=stupefied_tharp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:36:32 compute-0 systemd[1]: Started libpod-conmon-f6217e9f2e138a5e50ecc061ebb56b98bceb8e1ca55c7b8c5865275d26491541.scope.
Oct 11 03:36:32 compute-0 sudo[76568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:32 compute-0 sudo[76568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:32 compute-0 sudo[76568]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:32 compute-0 podman[76530]: 2025-10-11 03:36:32.522839619 +0000 UTC m=+0.040571971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:36:32 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2afd18b6d8fca58ffd29ee384bf8b42d8112ae17bb10965363058f2e556b357c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2afd18b6d8fca58ffd29ee384bf8b42d8112ae17bb10965363058f2e556b357c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2afd18b6d8fca58ffd29ee384bf8b42d8112ae17bb10965363058f2e556b357c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:32 compute-0 ceph-mgr[74563]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 03:36:32 compute-0 podman[76530]: 2025-10-11 03:36:32.666562799 +0000 UTC m=+0.184295151 container init f6217e9f2e138a5e50ecc061ebb56b98bceb8e1ca55c7b8c5865275d26491541 (image=quay.io/ceph/ceph:v18, name=stupefied_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:36:32 compute-0 podman[76530]: 2025-10-11 03:36:32.679115587 +0000 UTC m=+0.196847869 container start f6217e9f2e138a5e50ecc061ebb56b98bceb8e1ca55c7b8c5865275d26491541 (image=quay.io/ceph/ceph:v18, name=stupefied_tharp, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 03:36:32 compute-0 podman[76530]: 2025-10-11 03:36:32.682888997 +0000 UTC m=+0.200621309 container attach f6217e9f2e138a5e50ecc061ebb56b98bceb8e1ca55c7b8c5865275d26491541 (image=quay.io/ceph/ceph:v18, name=stupefied_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 03:36:32 compute-0 sudo[76598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph:v18 --timeout 895 inspect-image
Oct 11 03:36:32 compute-0 sudo[76598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:33 compute-0 podman[76651]: 2025-10-11 03:36:33.024730866 +0000 UTC m=+0.046917352 container create 5da3d4cd53b234da02b19d15a4af36841328358b1d1619f34ee4536329d35781 (image=quay.io/ceph/ceph:v18, name=nice_montalcini, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:36:33 compute-0 systemd[1]: Started libpod-conmon-5da3d4cd53b234da02b19d15a4af36841328358b1d1619f34ee4536329d35781.scope.
Oct 11 03:36:33 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:33 compute-0 podman[76651]: 2025-10-11 03:36:33.089587438 +0000 UTC m=+0.111773984 container init 5da3d4cd53b234da02b19d15a4af36841328358b1d1619f34ee4536329d35781 (image=quay.io/ceph/ceph:v18, name=nice_montalcini, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 03:36:33 compute-0 podman[76651]: 2025-10-11 03:36:33.09442859 +0000 UTC m=+0.116615096 container start 5da3d4cd53b234da02b19d15a4af36841328358b1d1619f34ee4536329d35781 (image=quay.io/ceph/ceph:v18, name=nice_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Oct 11 03:36:33 compute-0 podman[76651]: 2025-10-11 03:36:33.0976883 +0000 UTC m=+0.119874816 container attach 5da3d4cd53b234da02b19d15a4af36841328358b1d1619f34ee4536329d35781 (image=quay.io/ceph/ceph:v18, name=nice_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:36:33 compute-0 podman[76651]: 2025-10-11 03:36:33.005659469 +0000 UTC m=+0.027845985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:36:33 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:36:33 compute-0 ceph-mgr[74563]: [cephadm INFO root] Saving service mon spec with placement count:5
Oct 11 03:36:33 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Oct 11 03:36:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 11 03:36:33 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:33 compute-0 stupefied_tharp[76593]: Scheduled mon update...
Oct 11 03:36:33 compute-0 systemd[1]: libpod-f6217e9f2e138a5e50ecc061ebb56b98bceb8e1ca55c7b8c5865275d26491541.scope: Deactivated successfully.
Oct 11 03:36:33 compute-0 podman[76530]: 2025-10-11 03:36:33.31301014 +0000 UTC m=+0.830742472 container died f6217e9f2e138a5e50ecc061ebb56b98bceb8e1ca55c7b8c5865275d26491541 (image=quay.io/ceph/ceph:v18, name=stupefied_tharp, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 03:36:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-2afd18b6d8fca58ffd29ee384bf8b42d8112ae17bb10965363058f2e556b357c-merged.mount: Deactivated successfully.
Oct 11 03:36:33 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:33 compute-0 ceph-mon[74273]: Added host compute-0
Oct 11 03:36:33 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 03:36:33 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:33 compute-0 podman[76530]: 2025-10-11 03:36:33.37375561 +0000 UTC m=+0.891487922 container remove f6217e9f2e138a5e50ecc061ebb56b98bceb8e1ca55c7b8c5865275d26491541 (image=quay.io/ceph/ceph:v18, name=stupefied_tharp, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:36:33 compute-0 nice_montalcini[76686]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct 11 03:36:33 compute-0 systemd[1]: libpod-conmon-f6217e9f2e138a5e50ecc061ebb56b98bceb8e1ca55c7b8c5865275d26491541.scope: Deactivated successfully.
Oct 11 03:36:33 compute-0 systemd[1]: libpod-5da3d4cd53b234da02b19d15a4af36841328358b1d1619f34ee4536329d35781.scope: Deactivated successfully.
Oct 11 03:36:33 compute-0 podman[76651]: 2025-10-11 03:36:33.388427672 +0000 UTC m=+0.410614178 container died 5da3d4cd53b234da02b19d15a4af36841328358b1d1619f34ee4536329d35781 (image=quay.io/ceph/ceph:v18, name=nice_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 03:36:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-bab3a5854e2e9e0128754edfb1d10e266895682b509d5db5a158f2072ce1d0bd-merged.mount: Deactivated successfully.
Oct 11 03:36:33 compute-0 podman[76651]: 2025-10-11 03:36:33.445314798 +0000 UTC m=+0.467501314 container remove 5da3d4cd53b234da02b19d15a4af36841328358b1d1619f34ee4536329d35781 (image=quay.io/ceph/ceph:v18, name=nice_montalcini, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 03:36:33 compute-0 systemd[1]: libpod-conmon-5da3d4cd53b234da02b19d15a4af36841328358b1d1619f34ee4536329d35781.scope: Deactivated successfully.
Oct 11 03:36:33 compute-0 sudo[76598]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:33 compute-0 podman[76705]: 2025-10-11 03:36:33.487640067 +0000 UTC m=+0.081525055 container create ad8b359a5b97fdc24ba32629b4894b7f63ac60bfcab6fb555fd7557354a609ff (image=quay.io/ceph/ceph:v18, name=quizzical_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:36:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Oct 11 03:36:33 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:33 compute-0 systemd[1]: Started libpod-conmon-ad8b359a5b97fdc24ba32629b4894b7f63ac60bfcab6fb555fd7557354a609ff.scope.
Oct 11 03:36:33 compute-0 podman[76705]: 2025-10-11 03:36:33.459369175 +0000 UTC m=+0.053254173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:36:33 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b80d96b2f53990f9e0e08c55ad66f910bfa973fd725b9f5c3a556c405165d4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b80d96b2f53990f9e0e08c55ad66f910bfa973fd725b9f5c3a556c405165d4d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b80d96b2f53990f9e0e08c55ad66f910bfa973fd725b9f5c3a556c405165d4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:33 compute-0 podman[76705]: 2025-10-11 03:36:33.578194909 +0000 UTC m=+0.172079957 container init ad8b359a5b97fdc24ba32629b4894b7f63ac60bfcab6fb555fd7557354a609ff (image=quay.io/ceph/ceph:v18, name=quizzical_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 03:36:33 compute-0 podman[76705]: 2025-10-11 03:36:33.589235427 +0000 UTC m=+0.183120415 container start ad8b359a5b97fdc24ba32629b4894b7f63ac60bfcab6fb555fd7557354a609ff (image=quay.io/ceph/ceph:v18, name=quizzical_sammet, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 03:36:33 compute-0 podman[76705]: 2025-10-11 03:36:33.592741796 +0000 UTC m=+0.186626814 container attach ad8b359a5b97fdc24ba32629b4894b7f63ac60bfcab6fb555fd7557354a609ff (image=quay.io/ceph/ceph:v18, name=quizzical_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 03:36:33 compute-0 sudo[76736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:33 compute-0 sudo[76736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:33 compute-0 sudo[76736]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:33 compute-0 sudo[76766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:36:33 compute-0 sudo[76766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:33 compute-0 sudo[76766]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:33 compute-0 sudo[76791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:33 compute-0 sudo[76791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:33 compute-0 sudo[76791]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:33 compute-0 sudo[76816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 11 03:36:33 compute-0 sudo[76816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:34 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:36:34 compute-0 ceph-mgr[74563]: [cephadm INFO root] Saving service mgr spec with placement count:2
Oct 11 03:36:34 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Oct 11 03:36:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 11 03:36:34 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:34 compute-0 quizzical_sammet[76740]: Scheduled mgr update...
Oct 11 03:36:34 compute-0 sudo[76816]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:36:34 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:34 compute-0 systemd[1]: libpod-ad8b359a5b97fdc24ba32629b4894b7f63ac60bfcab6fb555fd7557354a609ff.scope: Deactivated successfully.
Oct 11 03:36:34 compute-0 podman[76705]: 2025-10-11 03:36:34.156683174 +0000 UTC m=+0.750568192 container died ad8b359a5b97fdc24ba32629b4894b7f63ac60bfcab6fb555fd7557354a609ff (image=quay.io/ceph/ceph:v18, name=quizzical_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 03:36:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:36:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b80d96b2f53990f9e0e08c55ad66f910bfa973fd725b9f5c3a556c405165d4d-merged.mount: Deactivated successfully.
Oct 11 03:36:34 compute-0 podman[76705]: 2025-10-11 03:36:34.198733222 +0000 UTC m=+0.792618200 container remove ad8b359a5b97fdc24ba32629b4894b7f63ac60bfcab6fb555fd7557354a609ff (image=quay.io/ceph/ceph:v18, name=quizzical_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 03:36:34 compute-0 systemd[1]: libpod-conmon-ad8b359a5b97fdc24ba32629b4894b7f63ac60bfcab6fb555fd7557354a609ff.scope: Deactivated successfully.
Oct 11 03:36:34 compute-0 sudo[76882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:34 compute-0 sudo[76882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:34 compute-0 sudo[76882]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:34 compute-0 podman[76917]: 2025-10-11 03:36:34.289271223 +0000 UTC m=+0.059134906 container create 2d17058e453225651c1f359abd38e1f18b136ffa89bbf103a23803f28117e3b9 (image=quay.io/ceph/ceph:v18, name=hungry_joliot, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 03:36:34 compute-0 sudo[76926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:36:34 compute-0 sudo[76926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:34 compute-0 sudo[76926]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:34 compute-0 systemd[1]: Started libpod-conmon-2d17058e453225651c1f359abd38e1f18b136ffa89bbf103a23803f28117e3b9.scope.
Oct 11 03:36:34 compute-0 podman[76917]: 2025-10-11 03:36:34.266849014 +0000 UTC m=+0.036712727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:36:34 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7777de55fbbe7ae89c9a33bb46ad55fbb6a66b08aff01d4b136e336c273bcb07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7777de55fbbe7ae89c9a33bb46ad55fbb6a66b08aff01d4b136e336c273bcb07/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7777de55fbbe7ae89c9a33bb46ad55fbb6a66b08aff01d4b136e336c273bcb07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:34 compute-0 sudo[76959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:34 compute-0 sudo[76959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:34 compute-0 sudo[76959]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:34 compute-0 podman[76917]: 2025-10-11 03:36:34.401793146 +0000 UTC m=+0.171656849 container init 2d17058e453225651c1f359abd38e1f18b136ffa89bbf103a23803f28117e3b9 (image=quay.io/ceph/ceph:v18, name=hungry_joliot, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:36:34 compute-0 podman[76917]: 2025-10-11 03:36:34.413424638 +0000 UTC m=+0.183288341 container start 2d17058e453225651c1f359abd38e1f18b136ffa89bbf103a23803f28117e3b9 (image=quay.io/ceph/ceph:v18, name=hungry_joliot, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 03:36:34 compute-0 podman[76917]: 2025-10-11 03:36:34.420437366 +0000 UTC m=+0.190301059 container attach 2d17058e453225651c1f359abd38e1f18b136ffa89bbf103a23803f28117e3b9 (image=quay.io/ceph/ceph:v18, name=hungry_joliot, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:36:34 compute-0 sudo[76990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 03:36:34 compute-0 sudo[76990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:34 compute-0 ceph-mon[74273]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:36:34 compute-0 ceph-mon[74273]: Saving service mon spec with placement count:5
Oct 11 03:36:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:34 compute-0 ceph-mgr[74563]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 03:36:34 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:36:34 compute-0 ceph-mgr[74563]: [cephadm INFO root] Saving service crash spec with placement *
Oct 11 03:36:34 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Oct 11 03:36:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 11 03:36:34 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:34 compute-0 hungry_joliot[76968]: Scheduled crash update...
Oct 11 03:36:35 compute-0 podman[76917]: 2025-10-11 03:36:35.001379008 +0000 UTC m=+0.771242701 container died 2d17058e453225651c1f359abd38e1f18b136ffa89bbf103a23803f28117e3b9 (image=quay.io/ceph/ceph:v18, name=hungry_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 03:36:35 compute-0 systemd[1]: libpod-2d17058e453225651c1f359abd38e1f18b136ffa89bbf103a23803f28117e3b9.scope: Deactivated successfully.
Oct 11 03:36:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-7777de55fbbe7ae89c9a33bb46ad55fbb6a66b08aff01d4b136e336c273bcb07-merged.mount: Deactivated successfully.
Oct 11 03:36:35 compute-0 podman[76917]: 2025-10-11 03:36:35.058525215 +0000 UTC m=+0.828388908 container remove 2d17058e453225651c1f359abd38e1f18b136ffa89bbf103a23803f28117e3b9 (image=quay.io/ceph/ceph:v18, name=hungry_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:36:35 compute-0 systemd[1]: libpod-conmon-2d17058e453225651c1f359abd38e1f18b136ffa89bbf103a23803f28117e3b9.scope: Deactivated successfully.
Oct 11 03:36:35 compute-0 podman[77105]: 2025-10-11 03:36:35.100795892 +0000 UTC m=+0.080411981 container exec 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:36:35 compute-0 podman[77132]: 2025-10-11 03:36:35.158663707 +0000 UTC m=+0.071329380 container create bf89b1b77d7ffe103e8608c8b45a48262acf1ad95b1d6a533bab7319c003123b (image=quay.io/ceph/ceph:v18, name=dreamy_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:36:35 compute-0 systemd[1]: Started libpod-conmon-bf89b1b77d7ffe103e8608c8b45a48262acf1ad95b1d6a533bab7319c003123b.scope.
Oct 11 03:36:35 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa35006618931340e28f3db60095ad31111e0d536b2713f9a9f321985654d640/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa35006618931340e28f3db60095ad31111e0d536b2713f9a9f321985654d640/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa35006618931340e28f3db60095ad31111e0d536b2713f9a9f321985654d640/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:35 compute-0 podman[77132]: 2025-10-11 03:36:35.131360174 +0000 UTC m=+0.044025887 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:36:35 compute-0 podman[77132]: 2025-10-11 03:36:35.230808239 +0000 UTC m=+0.143473962 container init bf89b1b77d7ffe103e8608c8b45a48262acf1ad95b1d6a533bab7319c003123b (image=quay.io/ceph/ceph:v18, name=dreamy_cohen, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 03:36:35 compute-0 podman[77132]: 2025-10-11 03:36:35.241381928 +0000 UTC m=+0.154047561 container start bf89b1b77d7ffe103e8608c8b45a48262acf1ad95b1d6a533bab7319c003123b (image=quay.io/ceph/ceph:v18, name=dreamy_cohen, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:36:35 compute-0 podman[77132]: 2025-10-11 03:36:35.245053734 +0000 UTC m=+0.157719397 container attach bf89b1b77d7ffe103e8608c8b45a48262acf1ad95b1d6a533bab7319c003123b (image=quay.io/ceph/ceph:v18, name=dreamy_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:36:35 compute-0 podman[77105]: 2025-10-11 03:36:35.432240982 +0000 UTC m=+0.411857141 container exec_died 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 03:36:35 compute-0 ceph-mon[74273]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:36:35 compute-0 ceph-mon[74273]: Saving service mgr spec with placement count:2
Oct 11 03:36:35 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:35 compute-0 sudo[76990]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:36:35 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:35 compute-0 sudo[77211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:35 compute-0 sudo[77211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:35 compute-0 sudo[77211]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:35 compute-0 sudo[77236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:36:35 compute-0 sudo[77236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:35 compute-0 sudo[77236]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Oct 11 03:36:35 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3929784892' entity='client.admin' 
Oct 11 03:36:35 compute-0 sudo[77261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:35 compute-0 sudo[77261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:35 compute-0 sudo[77261]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:35 compute-0 systemd[1]: libpod-bf89b1b77d7ffe103e8608c8b45a48262acf1ad95b1d6a533bab7319c003123b.scope: Deactivated successfully.
Oct 11 03:36:35 compute-0 podman[77132]: 2025-10-11 03:36:35.794349986 +0000 UTC m=+0.707015619 container died bf89b1b77d7ffe103e8608c8b45a48262acf1ad95b1d6a533bab7319c003123b (image=quay.io/ceph/ceph:v18, name=dreamy_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:36:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa35006618931340e28f3db60095ad31111e0d536b2713f9a9f321985654d640-merged.mount: Deactivated successfully.
Oct 11 03:36:35 compute-0 podman[77132]: 2025-10-11 03:36:35.850234828 +0000 UTC m=+0.762900471 container remove bf89b1b77d7ffe103e8608c8b45a48262acf1ad95b1d6a533bab7319c003123b (image=quay.io/ceph/ceph:v18, name=dreamy_cohen, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:36:35 compute-0 systemd[1]: libpod-conmon-bf89b1b77d7ffe103e8608c8b45a48262acf1ad95b1d6a533bab7319c003123b.scope: Deactivated successfully.
Oct 11 03:36:35 compute-0 sudo[77289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 03:36:35 compute-0 sudo[77289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:35 compute-0 podman[77324]: 2025-10-11 03:36:35.920613646 +0000 UTC m=+0.045448742 container create 2a4ad5fb9a4eb5f2915dea44b3900cbb7f22dd1502612aa3b83e95303116a84c (image=quay.io/ceph/ceph:v18, name=cool_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 03:36:35 compute-0 systemd[1]: Started libpod-conmon-2a4ad5fb9a4eb5f2915dea44b3900cbb7f22dd1502612aa3b83e95303116a84c.scope.
Oct 11 03:36:35 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68be1992f30d347b2fd4129e4399109786185655ea576bcfd57813cbca83d78c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68be1992f30d347b2fd4129e4399109786185655ea576bcfd57813cbca83d78c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68be1992f30d347b2fd4129e4399109786185655ea576bcfd57813cbca83d78c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:35 compute-0 podman[77324]: 2025-10-11 03:36:35.901869902 +0000 UTC m=+0.026705038 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:36:36 compute-0 podman[77324]: 2025-10-11 03:36:36.007337461 +0000 UTC m=+0.132172657 container init 2a4ad5fb9a4eb5f2915dea44b3900cbb7f22dd1502612aa3b83e95303116a84c (image=quay.io/ceph/ceph:v18, name=cool_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:36:36 compute-0 podman[77324]: 2025-10-11 03:36:36.014877752 +0000 UTC m=+0.139712878 container start 2a4ad5fb9a4eb5f2915dea44b3900cbb7f22dd1502612aa3b83e95303116a84c (image=quay.io/ceph/ceph:v18, name=cool_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 03:36:36 compute-0 podman[77324]: 2025-10-11 03:36:36.019278935 +0000 UTC m=+0.144114111 container attach 2a4ad5fb9a4eb5f2915dea44b3900cbb7f22dd1502612aa3b83e95303116a84c (image=quay.io/ceph/ceph:v18, name=cool_keldysh, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:36:36 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77358 (sysctl)
Oct 11 03:36:36 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct 11 03:36:36 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct 11 03:36:36 compute-0 sudo[77289]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:36 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:36:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Oct 11 03:36:36 compute-0 sudo[77399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:36 compute-0 sudo[77399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:36 compute-0 sudo[77399]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:36 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:36 compute-0 ceph-mon[74273]: from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:36:36 compute-0 ceph-mon[74273]: Saving service crash spec with placement *
Oct 11 03:36:36 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:36 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3929784892' entity='client.admin' 
Oct 11 03:36:36 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:36 compute-0 systemd[1]: libpod-2a4ad5fb9a4eb5f2915dea44b3900cbb7f22dd1502612aa3b83e95303116a84c.scope: Deactivated successfully.
Oct 11 03:36:36 compute-0 podman[77324]: 2025-10-11 03:36:36.628117598 +0000 UTC m=+0.752952704 container died 2a4ad5fb9a4eb5f2915dea44b3900cbb7f22dd1502612aa3b83e95303116a84c (image=quay.io/ceph/ceph:v18, name=cool_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:36:36 compute-0 ceph-mgr[74563]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 03:36:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-68be1992f30d347b2fd4129e4399109786185655ea576bcfd57813cbca83d78c-merged.mount: Deactivated successfully.
Oct 11 03:36:36 compute-0 podman[77324]: 2025-10-11 03:36:36.692952941 +0000 UTC m=+0.817788047 container remove 2a4ad5fb9a4eb5f2915dea44b3900cbb7f22dd1502612aa3b83e95303116a84c (image=quay.io/ceph/ceph:v18, name=cool_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 03:36:36 compute-0 sudo[77425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:36:36 compute-0 systemd[1]: libpod-conmon-2a4ad5fb9a4eb5f2915dea44b3900cbb7f22dd1502612aa3b83e95303116a84c.scope: Deactivated successfully.
Oct 11 03:36:36 compute-0 sudo[77425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:36 compute-0 sudo[77425]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:36 compute-0 podman[77464]: 2025-10-11 03:36:36.764976925 +0000 UTC m=+0.048945719 container create d9767f864303b3ca0022887e15b35f43fbee6e676f2892911e6090f2e22d9526 (image=quay.io/ceph/ceph:v18, name=beautiful_einstein, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 03:36:36 compute-0 systemd[1]: Started libpod-conmon-d9767f864303b3ca0022887e15b35f43fbee6e676f2892911e6090f2e22d9526.scope.
Oct 11 03:36:36 compute-0 sudo[77472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:36 compute-0 sudo[77472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:36 compute-0 sudo[77472]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:36 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce5d54ec40b62d4b5d2cd6102f0e6cc60e26c09ac0d5223e13eafb009c85e41a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce5d54ec40b62d4b5d2cd6102f0e6cc60e26c09ac0d5223e13eafb009c85e41a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce5d54ec40b62d4b5d2cd6102f0e6cc60e26c09ac0d5223e13eafb009c85e41a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:36 compute-0 podman[77464]: 2025-10-11 03:36:36.838844541 +0000 UTC m=+0.122813375 container init d9767f864303b3ca0022887e15b35f43fbee6e676f2892911e6090f2e22d9526 (image=quay.io/ceph/ceph:v18, name=beautiful_einstein, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:36:36 compute-0 podman[77464]: 2025-10-11 03:36:36.743308989 +0000 UTC m=+0.027277823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:36:36 compute-0 podman[77464]: 2025-10-11 03:36:36.847606156 +0000 UTC m=+0.131574950 container start d9767f864303b3ca0022887e15b35f43fbee6e676f2892911e6090f2e22d9526 (image=quay.io/ceph/ceph:v18, name=beautiful_einstein, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 03:36:36 compute-0 podman[77464]: 2025-10-11 03:36:36.851829684 +0000 UTC m=+0.135798488 container attach d9767f864303b3ca0022887e15b35f43fbee6e676f2892911e6090f2e22d9526 (image=quay.io/ceph/ceph:v18, name=beautiful_einstein, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:36:36 compute-0 sudo[77510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 11 03:36:36 compute-0 sudo[77510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:37 compute-0 sudo[77510]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:36:37 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:37 compute-0 sudo[77555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:37 compute-0 sudo[77555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:37 compute-0 sudo[77555]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:37 compute-0 sudo[77599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:36:37 compute-0 sudo[77599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:37 compute-0 sudo[77599]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:37 compute-0 sudo[77624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:37 compute-0 sudo[77624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:37 compute-0 sudo[77624]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:37 compute-0 sudo[77649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- inventory --format=json-pretty --filter-for-batch
Oct 11 03:36:37 compute-0 sudo[77649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:37 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:36:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 11 03:36:37 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:37 compute-0 ceph-mgr[74563]: [cephadm INFO root] Added label _admin to host compute-0
Oct 11 03:36:37 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Oct 11 03:36:37 compute-0 beautiful_einstein[77506]: Added label _admin to host compute-0
Oct 11 03:36:37 compute-0 systemd[1]: libpod-d9767f864303b3ca0022887e15b35f43fbee6e676f2892911e6090f2e22d9526.scope: Deactivated successfully.
Oct 11 03:36:37 compute-0 podman[77464]: 2025-10-11 03:36:37.414415044 +0000 UTC m=+0.698383838 container died d9767f864303b3ca0022887e15b35f43fbee6e676f2892911e6090f2e22d9526 (image=quay.io/ceph/ceph:v18, name=beautiful_einstein, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 03:36:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce5d54ec40b62d4b5d2cd6102f0e6cc60e26c09ac0d5223e13eafb009c85e41a-merged.mount: Deactivated successfully.
Oct 11 03:36:37 compute-0 podman[77464]: 2025-10-11 03:36:37.454458774 +0000 UTC m=+0.738427558 container remove d9767f864303b3ca0022887e15b35f43fbee6e676f2892911e6090f2e22d9526 (image=quay.io/ceph/ceph:v18, name=beautiful_einstein, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 03:36:37 compute-0 systemd[1]: libpod-conmon-d9767f864303b3ca0022887e15b35f43fbee6e676f2892911e6090f2e22d9526.scope: Deactivated successfully.
Oct 11 03:36:37 compute-0 podman[77687]: 2025-10-11 03:36:37.521634332 +0000 UTC m=+0.040303828 container create 9c9be8524bd2cfd0f176a543ace0922e3bfa00ca8ed64b6ee5826ed07554ae63 (image=quay.io/ceph/ceph:v18, name=gallant_elion, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 03:36:37 compute-0 systemd[1]: Started libpod-conmon-9c9be8524bd2cfd0f176a543ace0922e3bfa00ca8ed64b6ee5826ed07554ae63.scope.
Oct 11 03:36:37 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df99d5572b05ae38454ca7843f48dccba96e5eac9cedb59c9071912f28da431/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df99d5572b05ae38454ca7843f48dccba96e5eac9cedb59c9071912f28da431/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df99d5572b05ae38454ca7843f48dccba96e5eac9cedb59c9071912f28da431/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:37 compute-0 podman[77687]: 2025-10-11 03:36:37.502376213 +0000 UTC m=+0.021045709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:36:37 compute-0 podman[77687]: 2025-10-11 03:36:37.612949555 +0000 UTC m=+0.131619121 container init 9c9be8524bd2cfd0f176a543ace0922e3bfa00ca8ed64b6ee5826ed07554ae63 (image=quay.io/ceph/ceph:v18, name=gallant_elion, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:36:37 compute-0 ceph-mon[74273]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:36:37 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:37 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:37 compute-0 podman[77687]: 2025-10-11 03:36:37.624390995 +0000 UTC m=+0.143060471 container start 9c9be8524bd2cfd0f176a543ace0922e3bfa00ca8ed64b6ee5826ed07554ae63 (image=quay.io/ceph/ceph:v18, name=gallant_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 03:36:37 compute-0 podman[77687]: 2025-10-11 03:36:37.631089212 +0000 UTC m=+0.149758778 container attach 9c9be8524bd2cfd0f176a543ace0922e3bfa00ca8ed64b6ee5826ed07554ae63 (image=quay.io/ceph/ceph:v18, name=gallant_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:36:37 compute-0 podman[77748]: 2025-10-11 03:36:37.815487188 +0000 UTC m=+0.058440295 container create 630558392d5b58925ce5f84376719afa8b8bdd253fa791454ca49a81fc7904e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_euler, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 03:36:37 compute-0 systemd[1]: Started libpod-conmon-630558392d5b58925ce5f84376719afa8b8bdd253fa791454ca49a81fc7904e3.scope.
Oct 11 03:36:37 compute-0 podman[77748]: 2025-10-11 03:36:37.788637967 +0000 UTC m=+0.031591114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:36:37 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:37 compute-0 podman[77748]: 2025-10-11 03:36:37.907874851 +0000 UTC m=+0.150828018 container init 630558392d5b58925ce5f84376719afa8b8bdd253fa791454ca49a81fc7904e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_euler, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct 11 03:36:37 compute-0 podman[77748]: 2025-10-11 03:36:37.918386535 +0000 UTC m=+0.161339642 container start 630558392d5b58925ce5f84376719afa8b8bdd253fa791454ca49a81fc7904e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 03:36:37 compute-0 admiring_euler[77766]: 167 167
Oct 11 03:36:37 compute-0 systemd[1]: libpod-630558392d5b58925ce5f84376719afa8b8bdd253fa791454ca49a81fc7904e3.scope: Deactivated successfully.
Oct 11 03:36:37 compute-0 podman[77748]: 2025-10-11 03:36:37.926416859 +0000 UTC m=+0.169369966 container attach 630558392d5b58925ce5f84376719afa8b8bdd253fa791454ca49a81fc7904e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:36:37 compute-0 podman[77748]: 2025-10-11 03:36:37.926924584 +0000 UTC m=+0.169877681 container died 630558392d5b58925ce5f84376719afa8b8bdd253fa791454ca49a81fc7904e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_euler, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 03:36:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-58dc16a61b09613cd8d26d208b74d8403da8db2f846e4817ee2d166de395fbcf-merged.mount: Deactivated successfully.
Oct 11 03:36:37 compute-0 podman[77748]: 2025-10-11 03:36:37.976824479 +0000 UTC m=+0.219777586 container remove 630558392d5b58925ce5f84376719afa8b8bdd253fa791454ca49a81fc7904e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_euler, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:36:38 compute-0 systemd[1]: libpod-conmon-630558392d5b58925ce5f84376719afa8b8bdd253fa791454ca49a81fc7904e3.scope: Deactivated successfully.
Oct 11 03:36:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Oct 11 03:36:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3550741541' entity='client.admin' 
Oct 11 03:36:38 compute-0 systemd[1]: libpod-9c9be8524bd2cfd0f176a543ace0922e3bfa00ca8ed64b6ee5826ed07554ae63.scope: Deactivated successfully.
Oct 11 03:36:38 compute-0 conmon[77716]: conmon 9c9be8524bd2cfd0f176 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9c9be8524bd2cfd0f176a543ace0922e3bfa00ca8ed64b6ee5826ed07554ae63.scope/container/memory.events
Oct 11 03:36:38 compute-0 podman[77687]: 2025-10-11 03:36:38.195809902 +0000 UTC m=+0.714479468 container died 9c9be8524bd2cfd0f176a543ace0922e3bfa00ca8ed64b6ee5826ed07554ae63 (image=quay.io/ceph/ceph:v18, name=gallant_elion, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 03:36:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-9df99d5572b05ae38454ca7843f48dccba96e5eac9cedb59c9071912f28da431-merged.mount: Deactivated successfully.
Oct 11 03:36:38 compute-0 podman[77687]: 2025-10-11 03:36:38.2436564 +0000 UTC m=+0.762325906 container remove 9c9be8524bd2cfd0f176a543ace0922e3bfa00ca8ed64b6ee5826ed07554ae63 (image=quay.io/ceph/ceph:v18, name=gallant_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 03:36:38 compute-0 systemd[1]: libpod-conmon-9c9be8524bd2cfd0f176a543ace0922e3bfa00ca8ed64b6ee5826ed07554ae63.scope: Deactivated successfully.
Oct 11 03:36:38 compute-0 podman[77814]: 2025-10-11 03:36:38.306936899 +0000 UTC m=+0.043160358 container create d608e6f585fe08effb791de8b3b6a47cc4803e47aca6cb6837a86bcf890d2b77 (image=quay.io/ceph/ceph:v18, name=modest_mayer, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 03:36:38 compute-0 systemd[1]: Started libpod-conmon-d608e6f585fe08effb791de8b3b6a47cc4803e47aca6cb6837a86bcf890d2b77.scope.
Oct 11 03:36:38 compute-0 podman[77814]: 2025-10-11 03:36:38.288480393 +0000 UTC m=+0.024703882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:36:38 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a77609e5dfd8d37e0bf285956e8423dca4f5f4e8b9187f205301e609164a90e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a77609e5dfd8d37e0bf285956e8423dca4f5f4e8b9187f205301e609164a90e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a77609e5dfd8d37e0bf285956e8423dca4f5f4e8b9187f205301e609164a90e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:38 compute-0 podman[77814]: 2025-10-11 03:36:38.417567982 +0000 UTC m=+0.153791511 container init d608e6f585fe08effb791de8b3b6a47cc4803e47aca6cb6837a86bcf890d2b77 (image=quay.io/ceph/ceph:v18, name=modest_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 03:36:38 compute-0 podman[77814]: 2025-10-11 03:36:38.428234401 +0000 UTC m=+0.164457860 container start d608e6f585fe08effb791de8b3b6a47cc4803e47aca6cb6837a86bcf890d2b77 (image=quay.io/ceph/ceph:v18, name=modest_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 03:36:38 compute-0 podman[77814]: 2025-10-11 03:36:38.434194007 +0000 UTC m=+0.170417496 container attach d608e6f585fe08effb791de8b3b6a47cc4803e47aca6cb6837a86bcf890d2b77 (image=quay.io/ceph/ceph:v18, name=modest_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:36:38 compute-0 ceph-mon[74273]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:36:38 compute-0 ceph-mon[74273]: Added label _admin to host compute-0
Oct 11 03:36:38 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3550741541' entity='client.admin' 
Oct 11 03:36:38 compute-0 ceph-mgr[74563]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 03:36:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Oct 11 03:36:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4075276874' entity='client.admin' 
Oct 11 03:36:39 compute-0 modest_mayer[77830]: set mgr/dashboard/cluster/status
Oct 11 03:36:39 compute-0 systemd[1]: libpod-d608e6f585fe08effb791de8b3b6a47cc4803e47aca6cb6837a86bcf890d2b77.scope: Deactivated successfully.
Oct 11 03:36:39 compute-0 podman[77814]: 2025-10-11 03:36:39.082919786 +0000 UTC m=+0.819143265 container died d608e6f585fe08effb791de8b3b6a47cc4803e47aca6cb6837a86bcf890d2b77 (image=quay.io/ceph/ceph:v18, name=modest_mayer, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 03:36:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a77609e5dfd8d37e0bf285956e8423dca4f5f4e8b9187f205301e609164a90e-merged.mount: Deactivated successfully.
Oct 11 03:36:39 compute-0 podman[77814]: 2025-10-11 03:36:39.13812942 +0000 UTC m=+0.874352899 container remove d608e6f585fe08effb791de8b3b6a47cc4803e47aca6cb6837a86bcf890d2b77 (image=quay.io/ceph/ceph:v18, name=modest_mayer, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:36:39 compute-0 systemd[1]: libpod-conmon-d608e6f585fe08effb791de8b3b6a47cc4803e47aca6cb6837a86bcf890d2b77.scope: Deactivated successfully.
Oct 11 03:36:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:36:39 compute-0 sudo[73252]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:39 compute-0 podman[77877]: 2025-10-11 03:36:39.420821003 +0000 UTC m=+0.065042960 container create 42c1f7f60fa6f8817628240d72ae746075372b6e454fa1a978bbf8c555b89974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lamarr, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 03:36:39 compute-0 systemd[1]: Started libpod-conmon-42c1f7f60fa6f8817628240d72ae746075372b6e454fa1a978bbf8c555b89974.scope.
Oct 11 03:36:39 compute-0 podman[77877]: 2025-10-11 03:36:39.394725743 +0000 UTC m=+0.038947740 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:36:39 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb4337972deb9726f6b083f43304394af8fcb5e9b31faa8b54802713dc4bb08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb4337972deb9726f6b083f43304394af8fcb5e9b31faa8b54802713dc4bb08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb4337972deb9726f6b083f43304394af8fcb5e9b31faa8b54802713dc4bb08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb4337972deb9726f6b083f43304394af8fcb5e9b31faa8b54802713dc4bb08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:39 compute-0 podman[77877]: 2025-10-11 03:36:39.517421964 +0000 UTC m=+0.161643921 container init 42c1f7f60fa6f8817628240d72ae746075372b6e454fa1a978bbf8c555b89974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 03:36:39 compute-0 podman[77877]: 2025-10-11 03:36:39.532490465 +0000 UTC m=+0.176712422 container start 42c1f7f60fa6f8817628240d72ae746075372b6e454fa1a978bbf8c555b89974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lamarr, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct 11 03:36:39 compute-0 podman[77877]: 2025-10-11 03:36:39.537451454 +0000 UTC m=+0.181673471 container attach 42c1f7f60fa6f8817628240d72ae746075372b6e454fa1a978bbf8c555b89974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lamarr, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 03:36:39 compute-0 sudo[77921]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kckhfcamelahpfmniqfydpvxoydnzcwi ; /usr/bin/python3'
Oct 11 03:36:39 compute-0 sudo[77921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:36:39 compute-0 python3[77923]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:36:39 compute-0 podman[77924]: 2025-10-11 03:36:39.848234433 +0000 UTC m=+0.071349146 container create 2be6587665e19462613ab6b12f3be42249a84efe172b40c4566bd501f860be61 (image=quay.io/ceph/ceph:v18, name=friendly_turing, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:36:39 compute-0 systemd[1]: Started libpod-conmon-2be6587665e19462613ab6b12f3be42249a84efe172b40c4566bd501f860be61.scope.
Oct 11 03:36:39 compute-0 podman[77924]: 2025-10-11 03:36:39.814941282 +0000 UTC m=+0.038056055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:36:39 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd781181155ddccfb5219983741da8ea19ba114532e555de1d502f61dae34021/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd781181155ddccfb5219983741da8ea19ba114532e555de1d502f61dae34021/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:39 compute-0 podman[77924]: 2025-10-11 03:36:39.945501963 +0000 UTC m=+0.168616666 container init 2be6587665e19462613ab6b12f3be42249a84efe172b40c4566bd501f860be61 (image=quay.io/ceph/ceph:v18, name=friendly_turing, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct 11 03:36:39 compute-0 podman[77924]: 2025-10-11 03:36:39.95647523 +0000 UTC m=+0.179589933 container start 2be6587665e19462613ab6b12f3be42249a84efe172b40c4566bd501f860be61 (image=quay.io/ceph/ceph:v18, name=friendly_turing, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 03:36:39 compute-0 podman[77924]: 2025-10-11 03:36:39.960274216 +0000 UTC m=+0.183388889 container attach 2be6587665e19462613ab6b12f3be42249a84efe172b40c4566bd501f860be61 (image=quay.io/ceph/ceph:v18, name=friendly_turing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:36:40 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4075276874' entity='client.admin' 
Oct 11 03:36:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Oct 11 03:36:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2528428311' entity='client.admin' 
Oct 11 03:36:40 compute-0 systemd[1]: libpod-2be6587665e19462613ab6b12f3be42249a84efe172b40c4566bd501f860be61.scope: Deactivated successfully.
Oct 11 03:36:40 compute-0 conmon[77937]: conmon 2be6587665e19462613a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2be6587665e19462613ab6b12f3be42249a84efe172b40c4566bd501f860be61.scope/container/memory.events
Oct 11 03:36:40 compute-0 podman[77924]: 2025-10-11 03:36:40.514178903 +0000 UTC m=+0.737293586 container died 2be6587665e19462613ab6b12f3be42249a84efe172b40c4566bd501f860be61 (image=quay.io/ceph/ceph:v18, name=friendly_turing, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:36:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd781181155ddccfb5219983741da8ea19ba114532e555de1d502f61dae34021-merged.mount: Deactivated successfully.
Oct 11 03:36:40 compute-0 podman[77924]: 2025-10-11 03:36:40.57343978 +0000 UTC m=+0.796554483 container remove 2be6587665e19462613ab6b12f3be42249a84efe172b40c4566bd501f860be61 (image=quay.io/ceph/ceph:v18, name=friendly_turing, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 03:36:40 compute-0 systemd[1]: libpod-conmon-2be6587665e19462613ab6b12f3be42249a84efe172b40c4566bd501f860be61.scope: Deactivated successfully.
Oct 11 03:36:40 compute-0 sudo[77921]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:40 compute-0 ceph-mgr[74563]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Oct 11 03:36:40 compute-0 ceph-mon[74273]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct 11 03:36:40 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:36:41 compute-0 zen_lamarr[77893]: [
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:     {
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:         "available": false,
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:         "ceph_device": false,
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:         "lsm_data": {},
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:         "lvs": [],
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:         "path": "/dev/sr0",
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:         "rejected_reasons": [
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:             "Has a FileSystem",
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:             "Insufficient space (<5GB)"
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:         ],
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:         "sys_api": {
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:             "actuators": null,
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:             "device_nodes": "sr0",
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:             "devname": "sr0",
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:             "human_readable_size": "482.00 KB",
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:             "id_bus": "ata",
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:             "model": "QEMU DVD-ROM",
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:             "nr_requests": "2",
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:             "parent": "/dev/sr0",
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:             "partitions": {},
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:             "path": "/dev/sr0",
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:             "removable": "1",
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:             "rev": "2.5+",
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:             "ro": "0",
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:             "rotational": "0",
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:             "sas_address": "",
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:             "sas_device_handle": "",
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:             "scheduler_mode": "mq-deadline",
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:             "sectors": 0,
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:             "sectorsize": "2048",
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:             "size": 493568.0,
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:             "support_discard": "2048",
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:             "type": "disk",
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:             "vendor": "QEMU"
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:         }
Oct 11 03:36:41 compute-0 zen_lamarr[77893]:     }
Oct 11 03:36:41 compute-0 zen_lamarr[77893]: ]
Oct 11 03:36:41 compute-0 podman[77877]: 2025-10-11 03:36:41.146041651 +0000 UTC m=+1.790263598 container died 42c1f7f60fa6f8817628240d72ae746075372b6e454fa1a978bbf8c555b89974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lamarr, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 03:36:41 compute-0 systemd[1]: libpod-42c1f7f60fa6f8817628240d72ae746075372b6e454fa1a978bbf8c555b89974.scope: Deactivated successfully.
Oct 11 03:36:41 compute-0 systemd[1]: libpod-42c1f7f60fa6f8817628240d72ae746075372b6e454fa1a978bbf8c555b89974.scope: Consumed 1.655s CPU time.
Oct 11 03:36:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-4eb4337972deb9726f6b083f43304394af8fcb5e9b31faa8b54802713dc4bb08-merged.mount: Deactivated successfully.
Oct 11 03:36:41 compute-0 podman[77877]: 2025-10-11 03:36:41.194577338 +0000 UTC m=+1.838799255 container remove 42c1f7f60fa6f8817628240d72ae746075372b6e454fa1a978bbf8c555b89974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lamarr, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:36:41 compute-0 systemd[1]: libpod-conmon-42c1f7f60fa6f8817628240d72ae746075372b6e454fa1a978bbf8c555b89974.scope: Deactivated successfully.
Oct 11 03:36:41 compute-0 sudo[77649]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:36:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:36:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:36:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:36:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 03:36:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 03:36:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:36:41 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:36:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 03:36:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:36:41 compute-0 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct 11 03:36:41 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct 11 03:36:41 compute-0 sudo[79945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:41 compute-0 sudo[79945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:41 compute-0 sudo[79945]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:41 compute-0 sudo[79993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 11 03:36:41 compute-0 sudo[79993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:41 compute-0 sudo[79993]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:41 compute-0 sudo[80042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:41 compute-0 sudo[80042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:41 compute-0 sudo[80042]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:41 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2528428311' entity='client.admin' 
Oct 11 03:36:41 compute-0 ceph-mon[74273]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:36:41 compute-0 ceph-mon[74273]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct 11 03:36:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 03:36:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:36:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:36:41 compute-0 sudo[80092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehpecmjrufelczvarhysfbbksfahjyyl ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760153800.9999168-32994-203162637935888/async_wrapper.py j281775416265 30 /home/zuul/.ansible/tmp/ansible-tmp-1760153800.9999168-32994-203162637935888/AnsiballZ_command.py _'
Oct 11 03:36:41 compute-0 sudo[80092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:36:41 compute-0 sudo[80093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162/etc/ceph
Oct 11 03:36:41 compute-0 sudo[80093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:41 compute-0 sudo[80093]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:41 compute-0 sudo[80120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:41 compute-0 sudo[80120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:41 compute-0 sudo[80120]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:41 compute-0 ansible-async_wrapper.py[80112]: Invoked with j281775416265 30 /home/zuul/.ansible/tmp/ansible-tmp-1760153800.9999168-32994-203162637935888/AnsiballZ_command.py _
Oct 11 03:36:41 compute-0 ansible-async_wrapper.py[80156]: Starting module and watcher
Oct 11 03:36:41 compute-0 ansible-async_wrapper.py[80156]: Start watching 80160 (30)
Oct 11 03:36:41 compute-0 ansible-async_wrapper.py[80160]: Start module (80160)
Oct 11 03:36:41 compute-0 ansible-async_wrapper.py[80112]: Return async_wrapper task started.
Oct 11 03:36:41 compute-0 sudo[80092]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:41 compute-0 sudo[80145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162/etc/ceph/ceph.conf.new
Oct 11 03:36:41 compute-0 sudo[80145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:41 compute-0 sudo[80145]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:41 compute-0 sudo[80175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:41 compute-0 sudo[80175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:41 compute-0 sudo[80175]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:41 compute-0 sudo[80200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162
Oct 11 03:36:41 compute-0 sudo[80200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:41 compute-0 python3[80166]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:36:41 compute-0 sudo[80200]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:41 compute-0 podman[80225]: 2025-10-11 03:36:41.888826979 +0000 UTC m=+0.054986528 container create 211890a524f949a50e37a529ba17e23f3573f7d67dde7262b23f4d3cc6e65c99 (image=quay.io/ceph/ceph:v18, name=elated_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:36:41 compute-0 systemd[1]: Started libpod-conmon-211890a524f949a50e37a529ba17e23f3573f7d67dde7262b23f4d3cc6e65c99.scope.
Oct 11 03:36:41 compute-0 sudo[80231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:41 compute-0 sudo[80231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:41 compute-0 sudo[80231]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:41 compute-0 podman[80225]: 2025-10-11 03:36:41.862111682 +0000 UTC m=+0.028271281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:36:41 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93da7924e20a462723395f4485d3e482434dda6a95ab6f181ab1e4ef6db40018/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93da7924e20a462723395f4485d3e482434dda6a95ab6f181ab1e4ef6db40018/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:42 compute-0 podman[80225]: 2025-10-11 03:36:42.003536357 +0000 UTC m=+0.169695906 container init 211890a524f949a50e37a529ba17e23f3573f7d67dde7262b23f4d3cc6e65c99 (image=quay.io/ceph/ceph:v18, name=elated_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:36:42 compute-0 podman[80225]: 2025-10-11 03:36:42.011658194 +0000 UTC m=+0.177817713 container start 211890a524f949a50e37a529ba17e23f3573f7d67dde7262b23f4d3cc6e65c99 (image=quay.io/ceph/ceph:v18, name=elated_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct 11 03:36:42 compute-0 podman[80225]: 2025-10-11 03:36:42.015354887 +0000 UTC m=+0.181514406 container attach 211890a524f949a50e37a529ba17e23f3573f7d67dde7262b23f4d3cc6e65c99 (image=quay.io/ceph/ceph:v18, name=elated_sutherland, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 03:36:42 compute-0 sudo[80268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162/etc/ceph/ceph.conf.new
Oct 11 03:36:42 compute-0 sudo[80268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:42 compute-0 sudo[80268]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:42 compute-0 sudo[80317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:42 compute-0 sudo[80317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:42 compute-0 sudo[80317]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:42 compute-0 sudo[80342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162/etc/ceph/ceph.conf.new
Oct 11 03:36:42 compute-0 sudo[80342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:42 compute-0 sudo[80342]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:42 compute-0 sudo[80367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:42 compute-0 sudo[80367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:42 compute-0 sudo[80367]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:42 compute-0 sudo[80392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162/etc/ceph/ceph.conf.new
Oct 11 03:36:42 compute-0 sudo[80392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:42 compute-0 sudo[80392]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:42 compute-0 sudo[80436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:42 compute-0 sudo[80436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:42 compute-0 sudo[80436]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:42 compute-0 ceph-mon[74273]: Updating compute-0:/etc/ceph/ceph.conf
Oct 11 03:36:42 compute-0 sudo[80461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 11 03:36:42 compute-0 sudo[80461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:42 compute-0 sudo[80461]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:42 compute-0 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/config/ceph.conf
Oct 11 03:36:42 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/config/ceph.conf
Oct 11 03:36:42 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 03:36:42 compute-0 elated_sutherland[80264]: 
Oct 11 03:36:42 compute-0 elated_sutherland[80264]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 11 03:36:42 compute-0 systemd[1]: libpod-211890a524f949a50e37a529ba17e23f3573f7d67dde7262b23f4d3cc6e65c99.scope: Deactivated successfully.
Oct 11 03:36:42 compute-0 podman[80225]: 2025-10-11 03:36:42.614083858 +0000 UTC m=+0.780243377 container died 211890a524f949a50e37a529ba17e23f3573f7d67dde7262b23f4d3cc6e65c99 (image=quay.io/ceph/ceph:v18, name=elated_sutherland, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:36:42 compute-0 sudo[80486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:42 compute-0 sudo[80486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-93da7924e20a462723395f4485d3e482434dda6a95ab6f181ab1e4ef6db40018-merged.mount: Deactivated successfully.
Oct 11 03:36:42 compute-0 sudo[80486]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:42 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:36:42 compute-0 podman[80225]: 2025-10-11 03:36:42.670081694 +0000 UTC m=+0.836241203 container remove 211890a524f949a50e37a529ba17e23f3573f7d67dde7262b23f4d3cc6e65c99 (image=quay.io/ceph/ceph:v18, name=elated_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:36:42 compute-0 systemd[1]: libpod-conmon-211890a524f949a50e37a529ba17e23f3573f7d67dde7262b23f4d3cc6e65c99.scope: Deactivated successfully.
Oct 11 03:36:42 compute-0 ansible-async_wrapper.py[80160]: Module complete (80160)
Oct 11 03:36:42 compute-0 sudo[80527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/config
Oct 11 03:36:42 compute-0 sudo[80527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:42 compute-0 sudo[80527]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:42 compute-0 sudo[80573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:42 compute-0 sudo[80573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:42 compute-0 sudo[80573]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:42 compute-0 sudo[80600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162/var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/config
Oct 11 03:36:42 compute-0 sudo[80600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:42 compute-0 sudo[80600]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:42 compute-0 sudo[80625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:42 compute-0 sudo[80625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:43 compute-0 sudo[80625]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:43 compute-0 sudo[80671]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcykaygnkngztazqwaelbmsuyekjlncr ; /usr/bin/python3'
Oct 11 03:36:43 compute-0 sudo[80671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:36:43 compute-0 sudo[80675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162/var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/config/ceph.conf.new
Oct 11 03:36:43 compute-0 sudo[80675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:43 compute-0 sudo[80675]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:43 compute-0 sudo[80701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:43 compute-0 python3[80676]: ansible-ansible.legacy.async_status Invoked with jid=j281775416265.80112 mode=status _async_dir=/root/.ansible_async
Oct 11 03:36:43 compute-0 sudo[80701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:43 compute-0 sudo[80701]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:43 compute-0 sudo[80671]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:43 compute-0 sudo[80726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162
Oct 11 03:36:43 compute-0 sudo[80726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:43 compute-0 sudo[80726]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:43 compute-0 sudo[80771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:43 compute-0 sudo[80771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:43 compute-0 sudo[80771]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:43 compute-0 sudo[80840]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oezjgwfgaftqkgrvqgleapjezwjunbao ; /usr/bin/python3'
Oct 11 03:36:43 compute-0 sudo[80840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:36:43 compute-0 sudo[80808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162/var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/config/ceph.conf.new
Oct 11 03:36:43 compute-0 sudo[80808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:43 compute-0 sudo[80808]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:43 compute-0 sudo[80873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:43 compute-0 sudo[80873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:43 compute-0 sudo[80873]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:43 compute-0 python3[80847]: ansible-ansible.legacy.async_status Invoked with jid=j281775416265.80112 mode=cleanup _async_dir=/root/.ansible_async
Oct 11 03:36:43 compute-0 sudo[80840]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:43 compute-0 sudo[80898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162/var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/config/ceph.conf.new
Oct 11 03:36:43 compute-0 sudo[80898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:43 compute-0 sudo[80898]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:43 compute-0 ceph-mon[74273]: Updating compute-0:/var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/config/ceph.conf
Oct 11 03:36:43 compute-0 ceph-mon[74273]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 03:36:43 compute-0 ceph-mon[74273]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:36:43 compute-0 sudo[80923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:43 compute-0 sudo[80923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:43 compute-0 sudo[80923]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:43 compute-0 sudo[80948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162/var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/config/ceph.conf.new
Oct 11 03:36:43 compute-0 sudo[80948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:43 compute-0 sudo[80948]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:43 compute-0 sudo[80973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:43 compute-0 sudo[80973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:43 compute-0 sudo[80973]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:43 compute-0 sudo[80998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162/var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/config/ceph.conf.new /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/config/ceph.conf
Oct 11 03:36:43 compute-0 sudo[80998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:43 compute-0 sudo[81044]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emjkyzynlybqlixzfqpfsrksfegbyfih ; /usr/bin/python3'
Oct 11 03:36:43 compute-0 sudo[80998]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:43 compute-0 sudo[81044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:36:43 compute-0 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 11 03:36:43 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 11 03:36:43 compute-0 sudo[81049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:43 compute-0 sudo[81049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:43 compute-0 sudo[81049]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:43 compute-0 sudo[81074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 11 03:36:43 compute-0 python3[81048]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:36:43 compute-0 sudo[81074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:43 compute-0 sudo[81074]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:43 compute-0 sudo[81044]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:44 compute-0 sudo[81101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:44 compute-0 sudo[81101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:44 compute-0 sudo[81101]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:44 compute-0 sudo[81126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162/etc/ceph
Oct 11 03:36:44 compute-0 sudo[81126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:44 compute-0 sudo[81126]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:44 compute-0 sudo[81151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:44 compute-0 sudo[81151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:44 compute-0 sudo[81151]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:36:44 compute-0 sudo[81176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162/etc/ceph/ceph.client.admin.keyring.new
Oct 11 03:36:44 compute-0 sudo[81176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:44 compute-0 sudo[81176]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:44 compute-0 sudo[81247]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxlunjqoxxdicjrlbxpfopblboztjjeu ; /usr/bin/python3'
Oct 11 03:36:44 compute-0 sudo[81247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:36:44 compute-0 sudo[81202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:44 compute-0 sudo[81202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:44 compute-0 sudo[81202]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:44 compute-0 sudo[81252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162
Oct 11 03:36:44 compute-0 sudo[81252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:44 compute-0 sudo[81252]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:44 compute-0 python3[81249]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:36:44 compute-0 sudo[81277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:44 compute-0 sudo[81277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:44 compute-0 sudo[81277]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:44 compute-0 podman[81298]: 2025-10-11 03:36:44.577601578 +0000 UTC m=+0.062812067 container create d5cd82aa1df7add885887998d754cb464b201282296e464bcec1cddd26c07a12 (image=quay.io/ceph/ceph:v18, name=hungry_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 03:36:44 compute-0 systemd[1]: Started libpod-conmon-d5cd82aa1df7add885887998d754cb464b201282296e464bcec1cddd26c07a12.scope.
Oct 11 03:36:44 compute-0 sudo[81313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162/etc/ceph/ceph.client.admin.keyring.new
Oct 11 03:36:44 compute-0 sudo[81313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:44 compute-0 sudo[81313]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:44 compute-0 podman[81298]: 2025-10-11 03:36:44.553536725 +0000 UTC m=+0.038747244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:36:44 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:44 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:36:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e3cd36e94f206e381c8a7b2ace286ca55b334a01b0c11a9b539100c1cd21ff/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e3cd36e94f206e381c8a7b2ace286ca55b334a01b0c11a9b539100c1cd21ff/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e3cd36e94f206e381c8a7b2ace286ca55b334a01b0c11a9b539100c1cd21ff/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:44 compute-0 podman[81298]: 2025-10-11 03:36:44.672390749 +0000 UTC m=+0.157601308 container init d5cd82aa1df7add885887998d754cb464b201282296e464bcec1cddd26c07a12 (image=quay.io/ceph/ceph:v18, name=hungry_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Oct 11 03:36:44 compute-0 podman[81298]: 2025-10-11 03:36:44.683665684 +0000 UTC m=+0.168876193 container start d5cd82aa1df7add885887998d754cb464b201282296e464bcec1cddd26c07a12 (image=quay.io/ceph/ceph:v18, name=hungry_northcutt, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 03:36:44 compute-0 podman[81298]: 2025-10-11 03:36:44.688025966 +0000 UTC m=+0.173236545 container attach d5cd82aa1df7add885887998d754cb464b201282296e464bcec1cddd26c07a12 (image=quay.io/ceph/ceph:v18, name=hungry_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 03:36:44 compute-0 ceph-mon[74273]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 11 03:36:44 compute-0 ceph-mon[74273]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:36:44 compute-0 sudo[81369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:44 compute-0 sudo[81369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:44 compute-0 sudo[81369]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:44 compute-0 sudo[81394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162/etc/ceph/ceph.client.admin.keyring.new
Oct 11 03:36:44 compute-0 sudo[81394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:44 compute-0 sudo[81394]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:45 compute-0 sudo[81419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:45 compute-0 sudo[81419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:45 compute-0 sudo[81419]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:45 compute-0 sudo[81454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162/etc/ceph/ceph.client.admin.keyring.new
Oct 11 03:36:45 compute-0 sudo[81454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:45 compute-0 sudo[81454]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:45 compute-0 sudo[81488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:45 compute-0 sudo[81488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:45 compute-0 sudo[81488]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:45 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 03:36:45 compute-0 hungry_northcutt[81341]: 
Oct 11 03:36:45 compute-0 hungry_northcutt[81341]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 11 03:36:45 compute-0 systemd[1]: libpod-d5cd82aa1df7add885887998d754cb464b201282296e464bcec1cddd26c07a12.scope: Deactivated successfully.
Oct 11 03:36:45 compute-0 podman[81298]: 2025-10-11 03:36:45.261519531 +0000 UTC m=+0.746730030 container died d5cd82aa1df7add885887998d754cb464b201282296e464bcec1cddd26c07a12 (image=quay.io/ceph/ceph:v18, name=hungry_northcutt, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 03:36:45 compute-0 sudo[81513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 11 03:36:45 compute-0 sudo[81513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:45 compute-0 sudo[81513]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8e3cd36e94f206e381c8a7b2ace286ca55b334a01b0c11a9b539100c1cd21ff-merged.mount: Deactivated successfully.
Oct 11 03:36:45 compute-0 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/config/ceph.client.admin.keyring
Oct 11 03:36:45 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/config/ceph.client.admin.keyring
Oct 11 03:36:45 compute-0 podman[81298]: 2025-10-11 03:36:45.312441835 +0000 UTC m=+0.797652314 container remove d5cd82aa1df7add885887998d754cb464b201282296e464bcec1cddd26c07a12 (image=quay.io/ceph/ceph:v18, name=hungry_northcutt, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:36:45 compute-0 systemd[1]: libpod-conmon-d5cd82aa1df7add885887998d754cb464b201282296e464bcec1cddd26c07a12.scope: Deactivated successfully.
Oct 11 03:36:45 compute-0 sudo[81247]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:45 compute-0 sudo[81547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:45 compute-0 sudo[81547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:45 compute-0 sudo[81547]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:45 compute-0 sudo[81577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/config
Oct 11 03:36:45 compute-0 sudo[81577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:45 compute-0 sudo[81577]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:45 compute-0 sudo[81602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:45 compute-0 sudo[81602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:45 compute-0 sudo[81602]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:45 compute-0 sudo[81627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162/var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/config
Oct 11 03:36:45 compute-0 sudo[81627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:45 compute-0 sudo[81627]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:45 compute-0 sudo[81652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:45 compute-0 sudo[81652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:45 compute-0 sudo[81652]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:45 compute-0 sudo[81723]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvqmyeoickonvyizbbimolqquymrfiue ; /usr/bin/python3'
Oct 11 03:36:45 compute-0 sudo[81723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:36:45 compute-0 sudo[81680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162/var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/config/ceph.client.admin.keyring.new
Oct 11 03:36:45 compute-0 sudo[81680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:45 compute-0 sudo[81680]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:45 compute-0 ceph-mon[74273]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 03:36:45 compute-0 ceph-mon[74273]: Updating compute-0:/var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/config/ceph.client.admin.keyring
Oct 11 03:36:45 compute-0 sudo[81728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:45 compute-0 sudo[81728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:45 compute-0 sudo[81728]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:45 compute-0 python3[81725]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:36:45 compute-0 sudo[81753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162
Oct 11 03:36:45 compute-0 sudo[81753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:45 compute-0 sudo[81753]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:45 compute-0 podman[81773]: 2025-10-11 03:36:45.865978042 +0000 UTC m=+0.056148321 container create 3cb80ecb9736f5ccc2b1733383914d87237caa3c3b406247cca00759a824ac69 (image=quay.io/ceph/ceph:v18, name=clever_swirles, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:36:45 compute-0 systemd[1]: Started libpod-conmon-3cb80ecb9736f5ccc2b1733383914d87237caa3c3b406247cca00759a824ac69.scope.
Oct 11 03:36:45 compute-0 sudo[81790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:45 compute-0 sudo[81790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:45 compute-0 sudo[81790]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:45 compute-0 podman[81773]: 2025-10-11 03:36:45.838644118 +0000 UTC m=+0.028814437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:36:45 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/352d68b4a9bbcb95cb5d81c713306ac3d05bc693adc246a26dcc6e5a10b01088/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/352d68b4a9bbcb95cb5d81c713306ac3d05bc693adc246a26dcc6e5a10b01088/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/352d68b4a9bbcb95cb5d81c713306ac3d05bc693adc246a26dcc6e5a10b01088/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:45 compute-0 podman[81773]: 2025-10-11 03:36:45.971512813 +0000 UTC m=+0.161683092 container init 3cb80ecb9736f5ccc2b1733383914d87237caa3c3b406247cca00759a824ac69 (image=quay.io/ceph/ceph:v18, name=clever_swirles, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:36:45 compute-0 podman[81773]: 2025-10-11 03:36:45.981171213 +0000 UTC m=+0.171341462 container start 3cb80ecb9736f5ccc2b1733383914d87237caa3c3b406247cca00759a824ac69 (image=quay.io/ceph/ceph:v18, name=clever_swirles, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:36:45 compute-0 podman[81773]: 2025-10-11 03:36:45.987313545 +0000 UTC m=+0.177483804 container attach 3cb80ecb9736f5ccc2b1733383914d87237caa3c3b406247cca00759a824ac69 (image=quay.io/ceph/ceph:v18, name=clever_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:36:46 compute-0 sudo[81821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162/var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/config/ceph.client.admin.keyring.new
Oct 11 03:36:46 compute-0 sudo[81821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:46 compute-0 sudo[81821]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:46 compute-0 sudo[81871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:46 compute-0 sudo[81871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:46 compute-0 sudo[81871]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:46 compute-0 sudo[81896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162/var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/config/ceph.client.admin.keyring.new
Oct 11 03:36:46 compute-0 sudo[81896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:46 compute-0 sudo[81896]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:46 compute-0 sudo[81925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:46 compute-0 sudo[81925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:46 compute-0 sudo[81925]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:46 compute-0 sudo[81965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162/var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/config/ceph.client.admin.keyring.new
Oct 11 03:36:46 compute-0 sudo[81965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:46 compute-0 sudo[81965]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Oct 11 03:36:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1201488921' entity='client.admin' 
Oct 11 03:36:46 compute-0 sudo[81990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:46 compute-0 systemd[1]: libpod-3cb80ecb9736f5ccc2b1733383914d87237caa3c3b406247cca00759a824ac69.scope: Deactivated successfully.
Oct 11 03:36:46 compute-0 sudo[81990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:46 compute-0 podman[81773]: 2025-10-11 03:36:46.559452471 +0000 UTC m=+0.749622740 container died 3cb80ecb9736f5ccc2b1733383914d87237caa3c3b406247cca00759a824ac69 (image=quay.io/ceph/ceph:v18, name=clever_swirles, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 03:36:46 compute-0 sudo[81990]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-352d68b4a9bbcb95cb5d81c713306ac3d05bc693adc246a26dcc6e5a10b01088-merged.mount: Deactivated successfully.
Oct 11 03:36:46 compute-0 podman[81773]: 2025-10-11 03:36:46.620101227 +0000 UTC m=+0.810271466 container remove 3cb80ecb9736f5ccc2b1733383914d87237caa3c3b406247cca00759a824ac69 (image=quay.io/ceph/ceph:v18, name=clever_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:36:46 compute-0 systemd[1]: libpod-conmon-3cb80ecb9736f5ccc2b1733383914d87237caa3c3b406247cca00759a824ac69.scope: Deactivated successfully.
Oct 11 03:36:46 compute-0 sudo[81723]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:46 compute-0 ansible-async_wrapper.py[80156]: Done in kid B.
Oct 11 03:36:46 compute-0 sudo[82023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-23b68101-59a9-532f-ab6b-9acf78fb2162/var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/config/ceph.client.admin.keyring.new /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/config/ceph.client.admin.keyring
Oct 11 03:36:46 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:36:46 compute-0 sudo[82023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:46 compute-0 sudo[82023]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:36:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:36:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 03:36:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:46 compute-0 ceph-mgr[74563]: [progress INFO root] update: starting ev e5685b81-4eb0-4ab2-bbef-34ea8f7f2486 (Updating crash deployment (+1 -> 1))
Oct 11 03:36:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Oct 11 03:36:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 11 03:36:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 11 03:36:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:36:46 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:36:46 compute-0 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Oct 11 03:36:46 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Oct 11 03:36:46 compute-0 sudo[82055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:46 compute-0 sudo[82055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:46 compute-0 sudo[82055]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:46 compute-0 sudo[82102]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuexkijaswfpxvwdovsirwkqapokvgyt ; /usr/bin/python3'
Oct 11 03:36:46 compute-0 sudo[82102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:36:46 compute-0 sudo[82105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:36:46 compute-0 sudo[82105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:46 compute-0 sudo[82105]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:46 compute-0 python3[82107]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:36:46 compute-0 sudo[82131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:46 compute-0 sudo[82131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:46 compute-0 sudo[82131]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:47 compute-0 podman[82154]: 2025-10-11 03:36:47.058825434 +0000 UTC m=+0.067626142 container create 80f2a6950ebb4de54f4848f2ccf6f76dc3738bbba7e5e1631f5988e27a9c1600 (image=quay.io/ceph/ceph:v18, name=sleepy_cannon, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:36:47 compute-0 sudo[82162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162
Oct 11 03:36:47 compute-0 sudo[82162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:47 compute-0 systemd[1]: Started libpod-conmon-80f2a6950ebb4de54f4848f2ccf6f76dc3738bbba7e5e1631f5988e27a9c1600.scope.
Oct 11 03:36:47 compute-0 podman[82154]: 2025-10-11 03:36:47.03152517 +0000 UTC m=+0.040325978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:36:47 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11231450bd7b41e0058a90ed53d3c7965289426387034e73c313583b7a878472/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11231450bd7b41e0058a90ed53d3c7965289426387034e73c313583b7a878472/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11231450bd7b41e0058a90ed53d3c7965289426387034e73c313583b7a878472/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:47 compute-0 podman[82154]: 2025-10-11 03:36:47.173425188 +0000 UTC m=+0.182225926 container init 80f2a6950ebb4de54f4848f2ccf6f76dc3738bbba7e5e1631f5988e27a9c1600 (image=quay.io/ceph/ceph:v18, name=sleepy_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:36:47 compute-0 podman[82154]: 2025-10-11 03:36:47.181698609 +0000 UTC m=+0.190499357 container start 80f2a6950ebb4de54f4848f2ccf6f76dc3738bbba7e5e1631f5988e27a9c1600 (image=quay.io/ceph/ceph:v18, name=sleepy_cannon, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 03:36:47 compute-0 podman[82154]: 2025-10-11 03:36:47.185002612 +0000 UTC m=+0.193803340 container attach 80f2a6950ebb4de54f4848f2ccf6f76dc3738bbba7e5e1631f5988e27a9c1600 (image=quay.io/ceph/ceph:v18, name=sleepy_cannon, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:36:47 compute-0 podman[82241]: 2025-10-11 03:36:47.536101479 +0000 UTC m=+0.060824872 container create 1f528f5bb736b380ece9e9db3ba2e56636393bdd01581140a2edbe5028fc52ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_heisenberg, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 03:36:47 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1201488921' entity='client.admin' 
Oct 11 03:36:47 compute-0 ceph-mon[74273]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:36:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 11 03:36:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 11 03:36:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:36:47 compute-0 systemd[1]: Started libpod-conmon-1f528f5bb736b380ece9e9db3ba2e56636393bdd01581140a2edbe5028fc52ec.scope.
Oct 11 03:36:47 compute-0 podman[82241]: 2025-10-11 03:36:47.510893244 +0000 UTC m=+0.035616637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:36:47 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:47 compute-0 podman[82241]: 2025-10-11 03:36:47.651291979 +0000 UTC m=+0.176015412 container init 1f528f5bb736b380ece9e9db3ba2e56636393bdd01581140a2edbe5028fc52ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 03:36:47 compute-0 podman[82241]: 2025-10-11 03:36:47.657883894 +0000 UTC m=+0.182607247 container start 1f528f5bb736b380ece9e9db3ba2e56636393bdd01581140a2edbe5028fc52ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_heisenberg, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:36:47 compute-0 podman[82241]: 2025-10-11 03:36:47.660905448 +0000 UTC m=+0.185628891 container attach 1f528f5bb736b380ece9e9db3ba2e56636393bdd01581140a2edbe5028fc52ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 03:36:47 compute-0 agitated_heisenberg[82276]: 167 167
Oct 11 03:36:47 compute-0 systemd[1]: libpod-1f528f5bb736b380ece9e9db3ba2e56636393bdd01581140a2edbe5028fc52ec.scope: Deactivated successfully.
Oct 11 03:36:47 compute-0 podman[82241]: 2025-10-11 03:36:47.664011715 +0000 UTC m=+0.188735068 container died 1f528f5bb736b380ece9e9db3ba2e56636393bdd01581140a2edbe5028fc52ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:36:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f0feef30b8aedba6827c044191f335e16caef484ee60395596402e726716b6f-merged.mount: Deactivated successfully.
Oct 11 03:36:47 compute-0 podman[82241]: 2025-10-11 03:36:47.699732604 +0000 UTC m=+0.224455957 container remove 1f528f5bb736b380ece9e9db3ba2e56636393bdd01581140a2edbe5028fc52ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_heisenberg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 03:36:47 compute-0 systemd[1]: libpod-conmon-1f528f5bb736b380ece9e9db3ba2e56636393bdd01581140a2edbe5028fc52ec.scope: Deactivated successfully.
Oct 11 03:36:47 compute-0 systemd[1]: Reloading.
Oct 11 03:36:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Oct 11 03:36:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3729698903' entity='client.admin' 
Oct 11 03:36:47 compute-0 podman[82154]: 2025-10-11 03:36:47.797327993 +0000 UTC m=+0.806128741 container died 80f2a6950ebb4de54f4848f2ccf6f76dc3738bbba7e5e1631f5988e27a9c1600 (image=quay.io/ceph/ceph:v18, name=sleepy_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:36:47 compute-0 systemd-sysv-generator[82332]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:36:47 compute-0 systemd-rc-local-generator[82329]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:36:48 compute-0 systemd[1]: libpod-80f2a6950ebb4de54f4848f2ccf6f76dc3738bbba7e5e1631f5988e27a9c1600.scope: Deactivated successfully.
Oct 11 03:36:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-11231450bd7b41e0058a90ed53d3c7965289426387034e73c313583b7a878472-merged.mount: Deactivated successfully.
Oct 11 03:36:48 compute-0 podman[82154]: 2025-10-11 03:36:48.070773438 +0000 UTC m=+1.079574166 container remove 80f2a6950ebb4de54f4848f2ccf6f76dc3738bbba7e5e1631f5988e27a9c1600 (image=quay.io/ceph/ceph:v18, name=sleepy_cannon, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:36:48 compute-0 systemd[1]: libpod-conmon-80f2a6950ebb4de54f4848f2ccf6f76dc3738bbba7e5e1631f5988e27a9c1600.scope: Deactivated successfully.
Oct 11 03:36:48 compute-0 sudo[82102]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:48 compute-0 systemd[1]: Reloading.
Oct 11 03:36:48 compute-0 systemd-rc-local-generator[82370]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:36:48 compute-0 systemd-sysv-generator[82375]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:36:48 compute-0 sudo[82404]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqfbnepnqcabdfrgaretemedkuspuhyp ; /usr/bin/python3'
Oct 11 03:36:48 compute-0 sudo[82404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:36:48 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 23b68101-59a9-532f-ab6b-9acf78fb2162...
Oct 11 03:36:48 compute-0 ceph-mon[74273]: Deploying daemon crash.compute-0 on compute-0
Oct 11 03:36:48 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3729698903' entity='client.admin' 
Oct 11 03:36:48 compute-0 python3[82408]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:36:48 compute-0 podman[82435]: 2025-10-11 03:36:48.635889139 +0000 UTC m=+0.047325554 container create 5090cf985a0b2081f106fa2b9e958d15666234bc579a0f754a6f5f226216d323 (image=quay.io/ceph/ceph:v18, name=lucid_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:36:48 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:36:48 compute-0 systemd[1]: Started libpod-conmon-5090cf985a0b2081f106fa2b9e958d15666234bc579a0f754a6f5f226216d323.scope.
Oct 11 03:36:48 compute-0 podman[82435]: 2025-10-11 03:36:48.618665008 +0000 UTC m=+0.030101443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:36:48 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e8a93fb8572a62d409a360cb406e0baee7a5b58cada1c36a337dc1a442d678f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e8a93fb8572a62d409a360cb406e0baee7a5b58cada1c36a337dc1a442d678f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e8a93fb8572a62d409a360cb406e0baee7a5b58cada1c36a337dc1a442d678f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:48 compute-0 podman[82435]: 2025-10-11 03:36:48.740501834 +0000 UTC m=+0.151938299 container init 5090cf985a0b2081f106fa2b9e958d15666234bc579a0f754a6f5f226216d323 (image=quay.io/ceph/ceph:v18, name=lucid_lehmann, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:36:48 compute-0 podman[82435]: 2025-10-11 03:36:48.748602881 +0000 UTC m=+0.160039286 container start 5090cf985a0b2081f106fa2b9e958d15666234bc579a0f754a6f5f226216d323 (image=quay.io/ceph/ceph:v18, name=lucid_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 03:36:48 compute-0 podman[82470]: 2025-10-11 03:36:48.749578628 +0000 UTC m=+0.061208392 container create 1d7481d1ebe741dcd0eb93a0b8a67774bf714b4da0b4748f11a2a5967eb38c8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-crash-compute-0, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:36:48 compute-0 podman[82435]: 2025-10-11 03:36:48.755392151 +0000 UTC m=+0.166828576 container attach 5090cf985a0b2081f106fa2b9e958d15666234bc579a0f754a6f5f226216d323 (image=quay.io/ceph/ceph:v18, name=lucid_lehmann, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:36:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ff5e1893b43c238bd377b64c63d1dae734a781621c7f91c0a874c20257f1c65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ff5e1893b43c238bd377b64c63d1dae734a781621c7f91c0a874c20257f1c65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ff5e1893b43c238bd377b64c63d1dae734a781621c7f91c0a874c20257f1c65/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ff5e1893b43c238bd377b64c63d1dae734a781621c7f91c0a874c20257f1c65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:48 compute-0 podman[82470]: 2025-10-11 03:36:48.720376712 +0000 UTC m=+0.032006486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:36:48 compute-0 podman[82470]: 2025-10-11 03:36:48.816772247 +0000 UTC m=+0.128401991 container init 1d7481d1ebe741dcd0eb93a0b8a67774bf714b4da0b4748f11a2a5967eb38c8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-crash-compute-0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 03:36:48 compute-0 podman[82470]: 2025-10-11 03:36:48.823954628 +0000 UTC m=+0.135584372 container start 1d7481d1ebe741dcd0eb93a0b8a67774bf714b4da0b4748f11a2a5967eb38c8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-crash-compute-0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:36:48 compute-0 bash[82470]: 1d7481d1ebe741dcd0eb93a0b8a67774bf714b4da0b4748f11a2a5967eb38c8a
Oct 11 03:36:48 compute-0 systemd[1]: Started Ceph crash.compute-0 for 23b68101-59a9-532f-ab6b-9acf78fb2162.
Oct 11 03:36:48 compute-0 sudo[82162]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:36:48 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:36:48 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 11 03:36:48 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:48 compute-0 ceph-mgr[74563]: [progress INFO root] complete: finished ev e5685b81-4eb0-4ab2-bbef-34ea8f7f2486 (Updating crash deployment (+1 -> 1))
Oct 11 03:36:48 compute-0 ceph-mgr[74563]: [progress INFO root] Completed event e5685b81-4eb0-4ab2-bbef-34ea8f7f2486 (Updating crash deployment (+1 -> 1)) in 2 seconds
Oct 11 03:36:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 11 03:36:48 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:48 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 188c5eae-506e-4029-92fe-040f8773e481 does not exist
Oct 11 03:36:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 11 03:36:48 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:48 compute-0 ceph-mgr[74563]: [progress INFO root] update: starting ev 672647a0-3774-4452-bc7f-c99abc24bb0a (Updating mgr deployment (+1 -> 2))
Oct 11 03:36:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.xairjq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct 11 03:36:48 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.xairjq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 11 03:36:48 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.xairjq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 11 03:36:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 11 03:36:48 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 03:36:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:36:48 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:36:48 compute-0 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.xairjq on compute-0
Oct 11 03:36:48 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.xairjq on compute-0
Oct 11 03:36:49 compute-0 sudo[82494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:49 compute-0 sudo[82494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:49 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-crash-compute-0[82489]: INFO:ceph-crash:pinging cluster to exercise our key
Oct 11 03:36:49 compute-0 sudo[82494]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:49 compute-0 sudo[82521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:36:49 compute-0 sudo[82521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:49 compute-0 sudo[82521]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:49 compute-0 sudo[82548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:49 compute-0 sudo[82548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:49 compute-0 sudo[82548]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:49 compute-0 sudo[82590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162
Oct 11 03:36:49 compute-0 sudo[82590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:36:49 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-crash-compute-0[82489]: 2025-10-11T03:36:49.212+0000 7f752a5a9640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 11 03:36:49 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-crash-compute-0[82489]: 2025-10-11T03:36:49.212+0000 7f752a5a9640 -1 AuthRegistry(0x7f7524067440) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 11 03:36:49 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-crash-compute-0[82489]: 2025-10-11T03:36:49.213+0000 7f752a5a9640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 11 03:36:49 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-crash-compute-0[82489]: 2025-10-11T03:36:49.213+0000 7f752a5a9640 -1 AuthRegistry(0x7f752a5a8000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 11 03:36:49 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-crash-compute-0[82489]: 2025-10-11T03:36:49.220+0000 7f7523fff640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct 11 03:36:49 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-crash-compute-0[82489]: 2025-10-11T03:36:49.220+0000 7f752a5a9640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct 11 03:36:49 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-crash-compute-0[82489]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct 11 03:36:49 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-crash-compute-0[82489]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct 11 03:36:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Oct 11 03:36:49 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/663850120' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct 11 03:36:49 compute-0 ceph-mon[74273]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:36:49 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:49 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:49 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:49 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:49 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:49 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.xairjq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 11 03:36:49 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.xairjq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 11 03:36:49 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 03:36:49 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:36:49 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/663850120' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct 11 03:36:49 compute-0 podman[82667]: 2025-10-11 03:36:49.574075272 +0000 UTC m=+0.065825932 container create 7108a94d74ce0b57389caabccf611d6c1e0c55636df507ac1378d995cb8cbbbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 03:36:49 compute-0 systemd[1]: Started libpod-conmon-7108a94d74ce0b57389caabccf611d6c1e0c55636df507ac1378d995cb8cbbbf.scope.
Oct 11 03:36:49 compute-0 podman[82667]: 2025-10-11 03:36:49.545302597 +0000 UTC m=+0.037053307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:36:49 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:49 compute-0 podman[82667]: 2025-10-11 03:36:49.664903001 +0000 UTC m=+0.156653691 container init 7108a94d74ce0b57389caabccf611d6c1e0c55636df507ac1378d995cb8cbbbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 03:36:49 compute-0 podman[82667]: 2025-10-11 03:36:49.673253485 +0000 UTC m=+0.165004145 container start 7108a94d74ce0b57389caabccf611d6c1e0c55636df507ac1378d995cb8cbbbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 03:36:49 compute-0 friendly_brown[82683]: 167 167
Oct 11 03:36:49 compute-0 systemd[1]: libpod-7108a94d74ce0b57389caabccf611d6c1e0c55636df507ac1378d995cb8cbbbf.scope: Deactivated successfully.
Oct 11 03:36:49 compute-0 podman[82667]: 2025-10-11 03:36:49.677943166 +0000 UTC m=+0.169693866 container attach 7108a94d74ce0b57389caabccf611d6c1e0c55636df507ac1378d995cb8cbbbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 03:36:49 compute-0 podman[82667]: 2025-10-11 03:36:49.679045957 +0000 UTC m=+0.170796617 container died 7108a94d74ce0b57389caabccf611d6c1e0c55636df507ac1378d995cb8cbbbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 03:36:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c35c8b8fe166c94f584b06993d6df44179e27d1a5d1bfd3ad7b7a43f77a1150-merged.mount: Deactivated successfully.
Oct 11 03:36:49 compute-0 podman[82667]: 2025-10-11 03:36:49.734436145 +0000 UTC m=+0.226186795 container remove 7108a94d74ce0b57389caabccf611d6c1e0c55636df507ac1378d995cb8cbbbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:36:49 compute-0 systemd[1]: libpod-conmon-7108a94d74ce0b57389caabccf611d6c1e0c55636df507ac1378d995cb8cbbbf.scope: Deactivated successfully.
Oct 11 03:36:49 compute-0 systemd[1]: Reloading.
Oct 11 03:36:49 compute-0 systemd-rc-local-generator[82724]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:36:49 compute-0 systemd-sysv-generator[82728]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:36:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Oct 11 03:36:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 03:36:49 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/663850120' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct 11 03:36:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Oct 11 03:36:49 compute-0 lucid_lehmann[82476]: set require_min_compat_client to mimic
Oct 11 03:36:49 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Oct 11 03:36:49 compute-0 podman[82435]: 2025-10-11 03:36:49.949559359 +0000 UTC m=+1.360995784 container died 5090cf985a0b2081f106fa2b9e958d15666234bc579a0f754a6f5f226216d323 (image=quay.io/ceph/ceph:v18, name=lucid_lehmann, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 03:36:50 compute-0 systemd[1]: libpod-5090cf985a0b2081f106fa2b9e958d15666234bc579a0f754a6f5f226216d323.scope: Deactivated successfully.
Oct 11 03:36:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e8a93fb8572a62d409a360cb406e0baee7a5b58cada1c36a337dc1a442d678f-merged.mount: Deactivated successfully.
Oct 11 03:36:50 compute-0 podman[82435]: 2025-10-11 03:36:50.098604687 +0000 UTC m=+1.510041092 container remove 5090cf985a0b2081f106fa2b9e958d15666234bc579a0f754a6f5f226216d323 (image=quay.io/ceph/ceph:v18, name=lucid_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 03:36:50 compute-0 systemd[1]: libpod-conmon-5090cf985a0b2081f106fa2b9e958d15666234bc579a0f754a6f5f226216d323.scope: Deactivated successfully.
Oct 11 03:36:50 compute-0 sudo[82404]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:50 compute-0 systemd[1]: Reloading.
Oct 11 03:36:50 compute-0 systemd-rc-local-generator[82783]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:36:50 compute-0 systemd-sysv-generator[82787]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:36:50 compute-0 systemd[1]: Starting Ceph mgr.compute-0.xairjq for 23b68101-59a9-532f-ab6b-9acf78fb2162...
Oct 11 03:36:50 compute-0 ceph-mon[74273]: Deploying daemon mgr.compute-0.xairjq on compute-0
Oct 11 03:36:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/663850120' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct 11 03:36:50 compute-0 ceph-mon[74273]: osdmap e3: 0 total, 0 up, 0 in
Oct 11 03:36:50 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:36:50 compute-0 sudo[82858]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipavwuudkxvubnbcrbrwlwpeopcbrzet ; /usr/bin/python3'
Oct 11 03:36:50 compute-0 sudo[82858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:36:50 compute-0 ceph-mgr[74563]: [progress INFO root] Writing back 1 completed events
Oct 11 03:36:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 11 03:36:50 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:36:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:36:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:36:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:36:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:36:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:36:50 compute-0 podman[82865]: 2025-10-11 03:36:50.769108825 +0000 UTC m=+0.048761455 container create 4270d0cd953fc8aa60821433781f3976420f007c60c091497141308c4c8eec4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-xairjq, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 03:36:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d13a39712026e031ba331d9511bfa6acf0f88d87dc9dc907c779387d424d7e83/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d13a39712026e031ba331d9511bfa6acf0f88d87dc9dc907c779387d424d7e83/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d13a39712026e031ba331d9511bfa6acf0f88d87dc9dc907c779387d424d7e83/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d13a39712026e031ba331d9511bfa6acf0f88d87dc9dc907c779387d424d7e83/merged/var/lib/ceph/mgr/ceph-compute-0.xairjq supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:50 compute-0 podman[82865]: 2025-10-11 03:36:50.748858738 +0000 UTC m=+0.028511378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:36:50 compute-0 podman[82865]: 2025-10-11 03:36:50.852909268 +0000 UTC m=+0.132561958 container init 4270d0cd953fc8aa60821433781f3976420f007c60c091497141308c4c8eec4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-xairjq, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Oct 11 03:36:50 compute-0 podman[82865]: 2025-10-11 03:36:50.862668651 +0000 UTC m=+0.142321291 container start 4270d0cd953fc8aa60821433781f3976420f007c60c091497141308c4c8eec4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-xairjq, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:36:50 compute-0 bash[82865]: 4270d0cd953fc8aa60821433781f3976420f007c60c091497141308c4c8eec4a
Oct 11 03:36:50 compute-0 python3[82864]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:36:50 compute-0 systemd[1]: Started Ceph mgr.compute-0.xairjq for 23b68101-59a9-532f-ab6b-9acf78fb2162.
Oct 11 03:36:50 compute-0 ceph-mgr[82885]: set uid:gid to 167:167 (ceph:ceph)
Oct 11 03:36:50 compute-0 ceph-mgr[82885]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct 11 03:36:50 compute-0 ceph-mgr[82885]: pidfile_write: ignore empty --pid-file
Oct 11 03:36:50 compute-0 sudo[82590]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:36:50 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:36:50 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 11 03:36:50 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:50 compute-0 ceph-mgr[74563]: [progress INFO root] complete: finished ev 672647a0-3774-4452-bc7f-c99abc24bb0a (Updating mgr deployment (+1 -> 2))
Oct 11 03:36:50 compute-0 ceph-mgr[74563]: [progress INFO root] Completed event 672647a0-3774-4452-bc7f-c99abc24bb0a (Updating mgr deployment (+1 -> 2)) in 2 seconds
Oct 11 03:36:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 11 03:36:50 compute-0 podman[82886]: 2025-10-11 03:36:50.975926377 +0000 UTC m=+0.065006928 container create 354363de46982c65398c903c9f6d999708c744dfc3940afd136d73883fbc2006 (image=quay.io/ceph/ceph:v18, name=vibrant_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 03:36:50 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:51 compute-0 ceph-mgr[82885]: mgr[py] Loading python module 'alerts'
Oct 11 03:36:51 compute-0 systemd[1]: Started libpod-conmon-354363de46982c65398c903c9f6d999708c744dfc3940afd136d73883fbc2006.scope.
Oct 11 03:36:51 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:51 compute-0 podman[82886]: 2025-10-11 03:36:50.957000898 +0000 UTC m=+0.046081479 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:36:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/070e632a2ba4860ad4a9815dfeab2afe4467ba2b5ab1e3ef3096f3c5877fc486/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:51 compute-0 sudo[82923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/070e632a2ba4860ad4a9815dfeab2afe4467ba2b5ab1e3ef3096f3c5877fc486/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/070e632a2ba4860ad4a9815dfeab2afe4467ba2b5ab1e3ef3096f3c5877fc486/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:51 compute-0 sudo[82923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:51 compute-0 sudo[82923]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:51 compute-0 podman[82886]: 2025-10-11 03:36:51.102233579 +0000 UTC m=+0.191314230 container init 354363de46982c65398c903c9f6d999708c744dfc3940afd136d73883fbc2006 (image=quay.io/ceph/ceph:v18, name=vibrant_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 03:36:51 compute-0 podman[82886]: 2025-10-11 03:36:51.110070248 +0000 UTC m=+0.199150819 container start 354363de46982c65398c903c9f6d999708c744dfc3940afd136d73883fbc2006 (image=quay.io/ceph/ceph:v18, name=vibrant_ganguly, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:36:51 compute-0 podman[82886]: 2025-10-11 03:36:51.114264585 +0000 UTC m=+0.203345166 container attach 354363de46982c65398c903c9f6d999708c744dfc3940afd136d73883fbc2006 (image=quay.io/ceph/ceph:v18, name=vibrant_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 03:36:51 compute-0 sudo[82953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 03:36:51 compute-0 sudo[82953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:51 compute-0 sudo[82953]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:51 compute-0 sudo[82979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:51 compute-0 sudo[82979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:51 compute-0 sudo[82979]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:51 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-xairjq[82881]: 2025-10-11T03:36:51.317+0000 7f2d7db3e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 11 03:36:51 compute-0 ceph-mgr[82885]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 11 03:36:51 compute-0 ceph-mgr[82885]: mgr[py] Loading python module 'balancer'
Oct 11 03:36:51 compute-0 sudo[83004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:36:51 compute-0 sudo[83004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:51 compute-0 sudo[83004]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:51 compute-0 sudo[83029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:51 compute-0 sudo[83029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:51 compute-0 sudo[83029]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:51 compute-0 sudo[83054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 03:36:51 compute-0 sudo[83054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:51 compute-0 ceph-mgr[82885]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 11 03:36:51 compute-0 ceph-mgr[82885]: mgr[py] Loading python module 'cephadm'
Oct 11 03:36:51 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-xairjq[82881]: 2025-10-11T03:36:51.567+0000 7f2d7db3e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 11 03:36:51 compute-0 ceph-mon[74273]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:36:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:51 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:36:51 compute-0 sudo[83122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:51 compute-0 sudo[83122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:51 compute-0 sudo[83122]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:51 compute-0 sudo[83167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:36:51 compute-0 sudo[83167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:51 compute-0 sudo[83167]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:51 compute-0 sudo[83214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:51 compute-0 sudo[83214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:51 compute-0 sudo[83214]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:51 compute-0 podman[83224]: 2025-10-11 03:36:51.968069898 +0000 UTC m=+0.056949273 container exec 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 03:36:51 compute-0 sudo[83257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Oct 11 03:36:51 compute-0 sudo[83257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:52 compute-0 podman[83224]: 2025-10-11 03:36:52.054214817 +0000 UTC m=+0.143094212 container exec_died 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:36:52 compute-0 sudo[83257]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 11 03:36:52 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 11 03:36:52 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 11 03:36:52 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 11 03:36:52 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:52 compute-0 ceph-mgr[74563]: [cephadm INFO root] Added host compute-0
Oct 11 03:36:52 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Added host compute-0
Oct 11 03:36:52 compute-0 ceph-mgr[74563]: [cephadm INFO root] Saving service mon spec with placement compute-0
Oct 11 03:36:52 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Oct 11 03:36:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 11 03:36:52 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:52 compute-0 ceph-mgr[74563]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Oct 11 03:36:52 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Oct 11 03:36:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 11 03:36:52 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:52 compute-0 ceph-mgr[74563]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Oct 11 03:36:52 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Oct 11 03:36:52 compute-0 ceph-mgr[74563]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Oct 11 03:36:52 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Oct 11 03:36:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Oct 11 03:36:52 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:52 compute-0 vibrant_ganguly[82946]: Added host 'compute-0' with addr '192.168.122.100'
Oct 11 03:36:52 compute-0 vibrant_ganguly[82946]: Scheduled mon update...
Oct 11 03:36:52 compute-0 vibrant_ganguly[82946]: Scheduled mgr update...
Oct 11 03:36:52 compute-0 vibrant_ganguly[82946]: Scheduled osd.default_drive_group update...
Oct 11 03:36:52 compute-0 systemd[1]: libpod-354363de46982c65398c903c9f6d999708c744dfc3940afd136d73883fbc2006.scope: Deactivated successfully.
Oct 11 03:36:52 compute-0 podman[82886]: 2025-10-11 03:36:52.304721601 +0000 UTC m=+1.393802182 container died 354363de46982c65398c903c9f6d999708c744dfc3940afd136d73883fbc2006 (image=quay.io/ceph/ceph:v18, name=vibrant_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:36:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-070e632a2ba4860ad4a9815dfeab2afe4467ba2b5ab1e3ef3096f3c5877fc486-merged.mount: Deactivated successfully.
Oct 11 03:36:52 compute-0 podman[82886]: 2025-10-11 03:36:52.387715472 +0000 UTC m=+1.476796023 container remove 354363de46982c65398c903c9f6d999708c744dfc3940afd136d73883fbc2006 (image=quay.io/ceph/ceph:v18, name=vibrant_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 03:36:52 compute-0 systemd[1]: libpod-conmon-354363de46982c65398c903c9f6d999708c744dfc3940afd136d73883fbc2006.scope: Deactivated successfully.
Oct 11 03:36:52 compute-0 sudo[82858]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:52 compute-0 sudo[83054]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:36:52 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:36:52 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:36:52 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:36:52 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:36:52 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:36:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 03:36:52 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:36:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 03:36:52 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:52 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 158fcc29-b7c0-466d-a309-47e7f212bd1f does not exist
Oct 11 03:36:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 11 03:36:52 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:52 compute-0 ceph-mgr[74563]: [progress INFO root] update: starting ev 7742e374-3f28-44c1-b249-4026a0a84730 (Updating mgr deployment (-1 -> 1))
Oct 11 03:36:52 compute-0 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.xairjq from compute-0 -- ports [8765]
Oct 11 03:36:52 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.xairjq from compute-0 -- ports [8765]
Oct 11 03:36:52 compute-0 sudo[83387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:52 compute-0 sudo[83387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:52 compute-0 sudo[83387]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:52 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:36:52 compute-0 sudo[83412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:36:52 compute-0 sudo[83412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:52 compute-0 sudo[83412]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:52 compute-0 sudo[83460]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pybloaqknrmsmixjgsigsxnkghupikhh ; /usr/bin/python3'
Oct 11 03:36:52 compute-0 sudo[83460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:36:52 compute-0 sudo[83461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:52 compute-0 sudo[83461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:52 compute-0 sudo[83461]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:52 compute-0 python3[83465]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:36:52 compute-0 sudo[83488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 rm-daemon --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --name mgr.compute-0.xairjq --force --tcp-ports 8765
Oct 11 03:36:52 compute-0 sudo[83488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:52 compute-0 podman[83517]: 2025-10-11 03:36:52.947473523 +0000 UTC m=+0.043158218 container create b77ab7b2933058f9a8ba465aa3734e1a602cb3a6e7fe1022408b4213cf9867cb (image=quay.io/ceph/ceph:v18, name=epic_dirac, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 03:36:53 compute-0 systemd[1]: Started libpod-conmon-b77ab7b2933058f9a8ba465aa3734e1a602cb3a6e7fe1022408b4213cf9867cb.scope.
Oct 11 03:36:53 compute-0 podman[83517]: 2025-10-11 03:36:52.931106135 +0000 UTC m=+0.026790860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:36:53 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f89807eb224f47895543c9b97fff6d278ebb3f2325f4fb431b422bfe2c17018/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f89807eb224f47895543c9b97fff6d278ebb3f2325f4fb431b422bfe2c17018/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f89807eb224f47895543c9b97fff6d278ebb3f2325f4fb431b422bfe2c17018/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:53 compute-0 podman[83517]: 2025-10-11 03:36:53.071694546 +0000 UTC m=+0.167379341 container init b77ab7b2933058f9a8ba465aa3734e1a602cb3a6e7fe1022408b4213cf9867cb (image=quay.io/ceph/ceph:v18, name=epic_dirac, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 03:36:53 compute-0 podman[83517]: 2025-10-11 03:36:53.100308026 +0000 UTC m=+0.195992721 container start b77ab7b2933058f9a8ba465aa3734e1a602cb3a6e7fe1022408b4213cf9867cb (image=quay.io/ceph/ceph:v18, name=epic_dirac, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 03:36:53 compute-0 podman[83517]: 2025-10-11 03:36:53.114787461 +0000 UTC m=+0.210472186 container attach b77ab7b2933058f9a8ba465aa3734e1a602cb3a6e7fe1022408b4213cf9867cb (image=quay.io/ceph/ceph:v18, name=epic_dirac, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 03:36:53 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.xairjq for 23b68101-59a9-532f-ab6b-9acf78fb2162...
Oct 11 03:36:53 compute-0 ceph-mon[74273]: from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:36:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:53 compute-0 ceph-mon[74273]: Added host compute-0
Oct 11 03:36:53 compute-0 ceph-mon[74273]: Saving service mon spec with placement compute-0
Oct 11 03:36:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:53 compute-0 ceph-mon[74273]: Saving service mgr spec with placement compute-0
Oct 11 03:36:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:53 compute-0 ceph-mon[74273]: Marking host: compute-0 for OSDSpec preview refresh.
Oct 11 03:36:53 compute-0 ceph-mon[74273]: Saving service osd.default_drive_group spec with placement compute-0
Oct 11 03:36:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:36:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:36:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:53 compute-0 ceph-mon[74273]: Removing daemon mgr.compute-0.xairjq from compute-0 -- ports [8765]
Oct 11 03:36:53 compute-0 ceph-mon[74273]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:36:53 compute-0 podman[83616]: 2025-10-11 03:36:53.431401174 +0000 UTC m=+0.071890401 container stop 4270d0cd953fc8aa60821433781f3976420f007c60c091497141308c4c8eec4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-xairjq, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:36:53 compute-0 podman[83616]: 2025-10-11 03:36:53.462460032 +0000 UTC m=+0.102949329 container died 4270d0cd953fc8aa60821433781f3976420f007c60c091497141308c4c8eec4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-xairjq, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 03:36:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-d13a39712026e031ba331d9511bfa6acf0f88d87dc9dc907c779387d424d7e83-merged.mount: Deactivated successfully.
Oct 11 03:36:53 compute-0 podman[83616]: 2025-10-11 03:36:53.514442476 +0000 UTC m=+0.154931673 container remove 4270d0cd953fc8aa60821433781f3976420f007c60c091497141308c4c8eec4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-xairjq, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:36:53 compute-0 bash[83616]: ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-xairjq
Oct 11 03:36:53 compute-0 systemd[1]: ceph-23b68101-59a9-532f-ab6b-9acf78fb2162@mgr.compute-0.xairjq.service: Main process exited, code=exited, status=143/n/a
Oct 11 03:36:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 11 03:36:53 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/250512293' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 11 03:36:53 compute-0 epic_dirac[83542]: 
Oct 11 03:36:53 compute-0 epic_dirac[83542]: {"fsid":"23b68101-59a9-532f-ab6b-9acf78fb2162","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":79,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-10-11T03:35:31.210977+0000","services":{}},"progress_events":{"7742e374-3f28-44c1-b249-4026a0a84730":{"message":"Updating mgr deployment (-1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Oct 11 03:36:53 compute-0 systemd[1]: ceph-23b68101-59a9-532f-ab6b-9acf78fb2162@mgr.compute-0.xairjq.service: Failed with result 'exit-code'.
Oct 11 03:36:53 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.xairjq for 23b68101-59a9-532f-ab6b-9acf78fb2162.
Oct 11 03:36:53 compute-0 systemd[1]: ceph-23b68101-59a9-532f-ab6b-9acf78fb2162@mgr.compute-0.xairjq.service: Consumed 3.564s CPU time.
Oct 11 03:36:53 compute-0 systemd[1]: libpod-b77ab7b2933058f9a8ba465aa3734e1a602cb3a6e7fe1022408b4213cf9867cb.scope: Deactivated successfully.
Oct 11 03:36:53 compute-0 podman[83517]: 2025-10-11 03:36:53.706791473 +0000 UTC m=+0.802476178 container died b77ab7b2933058f9a8ba465aa3734e1a602cb3a6e7fe1022408b4213cf9867cb (image=quay.io/ceph/ceph:v18, name=epic_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:36:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f89807eb224f47895543c9b97fff6d278ebb3f2325f4fb431b422bfe2c17018-merged.mount: Deactivated successfully.
Oct 11 03:36:53 compute-0 systemd[1]: Reloading.
Oct 11 03:36:53 compute-0 podman[83517]: 2025-10-11 03:36:53.759209819 +0000 UTC m=+0.854894544 container remove b77ab7b2933058f9a8ba465aa3734e1a602cb3a6e7fe1022408b4213cf9867cb (image=quay.io/ceph/ceph:v18, name=epic_dirac, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:36:53 compute-0 sudo[83460]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:53 compute-0 systemd-rc-local-generator[83729]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:36:53 compute-0 systemd-sysv-generator[83735]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:36:53 compute-0 systemd[1]: libpod-conmon-b77ab7b2933058f9a8ba465aa3734e1a602cb3a6e7fe1022408b4213cf9867cb.scope: Deactivated successfully.
Oct 11 03:36:54 compute-0 sudo[83488]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:54 compute-0 ceph-mgr[74563]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.xairjq
Oct 11 03:36:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.xairjq"} v 0) v1
Oct 11 03:36:54 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.xairjq
Oct 11 03:36:54 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.xairjq"}]: dispatch
Oct 11 03:36:54 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.xairjq"}]': finished
Oct 11 03:36:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 11 03:36:54 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:54 compute-0 ceph-mgr[74563]: [progress INFO root] complete: finished ev 7742e374-3f28-44c1-b249-4026a0a84730 (Updating mgr deployment (-1 -> 1))
Oct 11 03:36:54 compute-0 ceph-mgr[74563]: [progress INFO root] Completed event 7742e374-3f28-44c1-b249-4026a0a84730 (Updating mgr deployment (-1 -> 1)) in 2 seconds
Oct 11 03:36:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 11 03:36:54 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:54 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 810e04a6-b35c-4138-9774-180ec07d5574 does not exist
Oct 11 03:36:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 03:36:54 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:36:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 03:36:54 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:36:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:36:54 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:36:54 compute-0 sudo[83742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:54 compute-0 sudo[83742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:54 compute-0 sudo[83742]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:36:54 compute-0 sudo[83767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:36:54 compute-0 sudo[83767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:54 compute-0 sudo[83767]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:54 compute-0 sudo[83792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:36:54 compute-0 sudo[83792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:54 compute-0 sudo[83792]: pam_unix(sudo:session): session closed for user root
Oct 11 03:36:54 compute-0 sudo[83817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 03:36:54 compute-0 sudo[83817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:36:54 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/250512293' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 11 03:36:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.xairjq"}]: dispatch
Oct 11 03:36:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.xairjq"}]': finished
Oct 11 03:36:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:36:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:36:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:36:54 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:36:54 compute-0 podman[83882]: 2025-10-11 03:36:54.877011243 +0000 UTC m=+0.067676153 container create 9ee93e505c910f4713bc1586ed2bb1d8655b0bdc993084fa9267de7271532ba4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 03:36:54 compute-0 systemd[1]: Started libpod-conmon-9ee93e505c910f4713bc1586ed2bb1d8655b0bdc993084fa9267de7271532ba4.scope.
Oct 11 03:36:54 compute-0 podman[83882]: 2025-10-11 03:36:54.847980251 +0000 UTC m=+0.038645221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:36:54 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:54 compute-0 podman[83882]: 2025-10-11 03:36:54.983163441 +0000 UTC m=+0.173828341 container init 9ee93e505c910f4713bc1586ed2bb1d8655b0bdc993084fa9267de7271532ba4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:36:54 compute-0 podman[83882]: 2025-10-11 03:36:54.995466565 +0000 UTC m=+0.186131465 container start 9ee93e505c910f4713bc1586ed2bb1d8655b0bdc993084fa9267de7271532ba4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_austin, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct 11 03:36:55 compute-0 podman[83882]: 2025-10-11 03:36:54.999881469 +0000 UTC m=+0.190546349 container attach 9ee93e505c910f4713bc1586ed2bb1d8655b0bdc993084fa9267de7271532ba4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 03:36:55 compute-0 condescending_austin[83898]: 167 167
Oct 11 03:36:55 compute-0 systemd[1]: libpod-9ee93e505c910f4713bc1586ed2bb1d8655b0bdc993084fa9267de7271532ba4.scope: Deactivated successfully.
Oct 11 03:36:55 compute-0 podman[83882]: 2025-10-11 03:36:55.004465517 +0000 UTC m=+0.195130427 container died 9ee93e505c910f4713bc1586ed2bb1d8655b0bdc993084fa9267de7271532ba4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_austin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 03:36:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-241910ce73ee5a1feaf60978f0bde31290a0cc2aea730b4ba02208d568c1932e-merged.mount: Deactivated successfully.
Oct 11 03:36:55 compute-0 podman[83882]: 2025-10-11 03:36:55.055825013 +0000 UTC m=+0.246489913 container remove 9ee93e505c910f4713bc1586ed2bb1d8655b0bdc993084fa9267de7271532ba4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_austin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 03:36:55 compute-0 systemd[1]: libpod-conmon-9ee93e505c910f4713bc1586ed2bb1d8655b0bdc993084fa9267de7271532ba4.scope: Deactivated successfully.
Oct 11 03:36:55 compute-0 podman[83922]: 2025-10-11 03:36:55.302330215 +0000 UTC m=+0.073241399 container create a1a06818ad62d2540bbf83d38cce76b4cbaf2c5d8ce8266531da627a9bae4b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ride, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 03:36:55 compute-0 systemd[1]: Started libpod-conmon-a1a06818ad62d2540bbf83d38cce76b4cbaf2c5d8ce8266531da627a9bae4b61.scope.
Oct 11 03:36:55 compute-0 podman[83922]: 2025-10-11 03:36:55.273912301 +0000 UTC m=+0.044823485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:36:55 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2847501bd062284e98e0a3527c6f6c1b262b9b664387c6870c8719a5266f1947/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2847501bd062284e98e0a3527c6f6c1b262b9b664387c6870c8719a5266f1947/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2847501bd062284e98e0a3527c6f6c1b262b9b664387c6870c8719a5266f1947/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2847501bd062284e98e0a3527c6f6c1b262b9b664387c6870c8719a5266f1947/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2847501bd062284e98e0a3527c6f6c1b262b9b664387c6870c8719a5266f1947/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:55 compute-0 podman[83922]: 2025-10-11 03:36:55.402858066 +0000 UTC m=+0.173769300 container init a1a06818ad62d2540bbf83d38cce76b4cbaf2c5d8ce8266531da627a9bae4b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ride, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 03:36:55 compute-0 podman[83922]: 2025-10-11 03:36:55.417838065 +0000 UTC m=+0.188749239 container start a1a06818ad62d2540bbf83d38cce76b4cbaf2c5d8ce8266531da627a9bae4b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:36:55 compute-0 podman[83922]: 2025-10-11 03:36:55.422034902 +0000 UTC m=+0.192946076 container attach a1a06818ad62d2540bbf83d38cce76b4cbaf2c5d8ce8266531da627a9bae4b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ride, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:36:55 compute-0 ceph-mon[74273]: Removing key for mgr.compute-0.xairjq
Oct 11 03:36:55 compute-0 ceph-mon[74273]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:36:55 compute-0 ceph-mgr[74563]: [progress INFO root] Writing back 3 completed events
Oct 11 03:36:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 11 03:36:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:56 compute-0 frosty_ride[83938]: --> passed data devices: 0 physical, 3 LVM
Oct 11 03:36:56 compute-0 frosty_ride[83938]: --> relative data size: 1.0
Oct 11 03:36:56 compute-0 frosty_ride[83938]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 11 03:36:56 compute-0 frosty_ride[83938]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new bd7ac921-1218-45c1-b1c6-7c594dbceccb
Oct 11 03:36:56 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:36:56 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:36:56 compute-0 ceph-mon[74273]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:36:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb"} v 0) v1
Oct 11 03:36:57 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2681504423' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb"}]: dispatch
Oct 11 03:36:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Oct 11 03:36:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 03:36:57 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2681504423' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb"}]': finished
Oct 11 03:36:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Oct 11 03:36:57 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Oct 11 03:36:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 03:36:57 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 03:36:57 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 11 03:36:57 compute-0 frosty_ride[83938]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 11 03:36:57 compute-0 lvm[84000]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 11 03:36:57 compute-0 lvm[84000]: VG ceph_vg0 finished
Oct 11 03:36:57 compute-0 frosty_ride[83938]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Oct 11 03:36:57 compute-0 frosty_ride[83938]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct 11 03:36:57 compute-0 frosty_ride[83938]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 11 03:36:57 compute-0 frosty_ride[83938]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 11 03:36:57 compute-0 frosty_ride[83938]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Oct 11 03:36:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct 11 03:36:57 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2869870240' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 11 03:36:57 compute-0 frosty_ride[83938]:  stderr: got monmap epoch 1
Oct 11 03:36:57 compute-0 frosty_ride[83938]: --> Creating keyring file for osd.0
Oct 11 03:36:57 compute-0 frosty_ride[83938]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Oct 11 03:36:57 compute-0 frosty_ride[83938]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Oct 11 03:36:57 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2681504423' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb"}]: dispatch
Oct 11 03:36:57 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2681504423' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb"}]': finished
Oct 11 03:36:57 compute-0 ceph-mon[74273]: osdmap e4: 1 total, 0 up, 1 in
Oct 11 03:36:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 03:36:57 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2869870240' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 11 03:36:57 compute-0 frosty_ride[83938]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid bd7ac921-1218-45c1-b1c6-7c594dbceccb --setuser ceph --setgroup ceph
Oct 11 03:36:58 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:36:58 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 11 03:36:58 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 11 03:36:58 compute-0 ceph-mon[74273]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:36:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:36:59 compute-0 ceph-mon[74273]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 11 03:36:59 compute-0 ceph-mon[74273]: Cluster is now healthy
Oct 11 03:37:00 compute-0 frosty_ride[83938]:  stderr: 2025-10-11T03:36:57.899+0000 7fe240e9c740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 11 03:37:00 compute-0 frosty_ride[83938]:  stderr: 2025-10-11T03:36:57.899+0000 7fe240e9c740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 11 03:37:00 compute-0 frosty_ride[83938]:  stderr: 2025-10-11T03:36:57.899+0000 7fe240e9c740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 11 03:37:00 compute-0 frosty_ride[83938]:  stderr: 2025-10-11T03:36:57.899+0000 7fe240e9c740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Oct 11 03:37:00 compute-0 frosty_ride[83938]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct 11 03:37:00 compute-0 frosty_ride[83938]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 11 03:37:00 compute-0 frosty_ride[83938]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct 11 03:37:00 compute-0 frosty_ride[83938]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 11 03:37:00 compute-0 frosty_ride[83938]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct 11 03:37:00 compute-0 frosty_ride[83938]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 11 03:37:00 compute-0 frosty_ride[83938]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 11 03:37:00 compute-0 frosty_ride[83938]: --> ceph-volume lvm activate successful for osd ID: 0
Oct 11 03:37:00 compute-0 frosty_ride[83938]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Oct 11 03:37:00 compute-0 frosty_ride[83938]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 11 03:37:00 compute-0 frosty_ride[83938]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 38da774d-7ecf-442f-9a7a-97978287cff8
Oct 11 03:37:00 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:37:00 compute-0 ceph-mon[74273]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:37:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "38da774d-7ecf-442f-9a7a-97978287cff8"} v 0) v1
Oct 11 03:37:00 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2579482292' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "38da774d-7ecf-442f-9a7a-97978287cff8"}]: dispatch
Oct 11 03:37:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Oct 11 03:37:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 03:37:00 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2579482292' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "38da774d-7ecf-442f-9a7a-97978287cff8"}]': finished
Oct 11 03:37:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Oct 11 03:37:00 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Oct 11 03:37:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 03:37:00 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 03:37:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 03:37:00 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 03:37:00 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 11 03:37:00 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 03:37:01 compute-0 lvm[84933]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 11 03:37:01 compute-0 lvm[84933]: VG ceph_vg1 finished
Oct 11 03:37:01 compute-0 frosty_ride[83938]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 11 03:37:01 compute-0 frosty_ride[83938]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Oct 11 03:37:01 compute-0 frosty_ride[83938]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Oct 11 03:37:01 compute-0 frosty_ride[83938]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 11 03:37:01 compute-0 frosty_ride[83938]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 11 03:37:01 compute-0 frosty_ride[83938]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Oct 11 03:37:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct 11 03:37:01 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/720770564' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 11 03:37:01 compute-0 frosty_ride[83938]:  stderr: got monmap epoch 1
Oct 11 03:37:01 compute-0 frosty_ride[83938]: --> Creating keyring file for osd.1
Oct 11 03:37:01 compute-0 frosty_ride[83938]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Oct 11 03:37:01 compute-0 frosty_ride[83938]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Oct 11 03:37:01 compute-0 frosty_ride[83938]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 38da774d-7ecf-442f-9a7a-97978287cff8 --setuser ceph --setgroup ceph
Oct 11 03:37:01 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2579482292' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "38da774d-7ecf-442f-9a7a-97978287cff8"}]: dispatch
Oct 11 03:37:01 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2579482292' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "38da774d-7ecf-442f-9a7a-97978287cff8"}]': finished
Oct 11 03:37:01 compute-0 ceph-mon[74273]: osdmap e5: 2 total, 0 up, 2 in
Oct 11 03:37:01 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 03:37:01 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 03:37:01 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/720770564' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 11 03:37:02 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:37:02 compute-0 ceph-mon[74273]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:37:04 compute-0 frosty_ride[83938]:  stderr: 2025-10-11T03:37:01.618+0000 7f2b8b619740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 11 03:37:04 compute-0 frosty_ride[83938]:  stderr: 2025-10-11T03:37:01.619+0000 7f2b8b619740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 11 03:37:04 compute-0 frosty_ride[83938]:  stderr: 2025-10-11T03:37:01.619+0000 7f2b8b619740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 11 03:37:04 compute-0 frosty_ride[83938]:  stderr: 2025-10-11T03:37:01.619+0000 7f2b8b619740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Oct 11 03:37:04 compute-0 frosty_ride[83938]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Oct 11 03:37:04 compute-0 frosty_ride[83938]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 11 03:37:04 compute-0 frosty_ride[83938]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct 11 03:37:04 compute-0 frosty_ride[83938]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 11 03:37:04 compute-0 frosty_ride[83938]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct 11 03:37:04 compute-0 frosty_ride[83938]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 11 03:37:04 compute-0 frosty_ride[83938]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 11 03:37:04 compute-0 frosty_ride[83938]: --> ceph-volume lvm activate successful for osd ID: 1
Oct 11 03:37:04 compute-0 frosty_ride[83938]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Oct 11 03:37:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:37:04 compute-0 frosty_ride[83938]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 11 03:37:04 compute-0 frosty_ride[83938]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85
Oct 11 03:37:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85"} v 0) v1
Oct 11 03:37:04 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3201746271' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85"}]: dispatch
Oct 11 03:37:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Oct 11 03:37:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 03:37:04 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3201746271' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85"}]': finished
Oct 11 03:37:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Oct 11 03:37:04 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Oct 11 03:37:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 03:37:04 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 03:37:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 03:37:04 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 11 03:37:04 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 03:37:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 03:37:04 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:04 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 03:37:04 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 03:37:04 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:37:04 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3201746271' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85"}]: dispatch
Oct 11 03:37:04 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3201746271' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85"}]': finished
Oct 11 03:37:04 compute-0 ceph-mon[74273]: osdmap e6: 3 total, 0 up, 3 in
Oct 11 03:37:04 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 03:37:04 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 03:37:04 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:04 compute-0 ceph-mon[74273]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:37:04 compute-0 lvm[85869]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 11 03:37:04 compute-0 lvm[85869]: VG ceph_vg2 finished
Oct 11 03:37:04 compute-0 frosty_ride[83938]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 11 03:37:04 compute-0 frosty_ride[83938]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Oct 11 03:37:04 compute-0 frosty_ride[83938]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Oct 11 03:37:04 compute-0 frosty_ride[83938]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 11 03:37:04 compute-0 frosty_ride[83938]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 11 03:37:04 compute-0 frosty_ride[83938]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Oct 11 03:37:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct 11 03:37:05 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/452887506' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 11 03:37:05 compute-0 frosty_ride[83938]:  stderr: got monmap epoch 1
Oct 11 03:37:05 compute-0 frosty_ride[83938]: --> Creating keyring file for osd.2
Oct 11 03:37:05 compute-0 frosty_ride[83938]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Oct 11 03:37:05 compute-0 frosty_ride[83938]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Oct 11 03:37:05 compute-0 frosty_ride[83938]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85 --setuser ceph --setgroup ceph
Oct 11 03:37:05 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/452887506' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 11 03:37:06 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:37:06 compute-0 ceph-mon[74273]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:37:07 compute-0 frosty_ride[83938]:  stderr: 2025-10-11T03:37:05.381+0000 7f7f7916b740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 11 03:37:07 compute-0 frosty_ride[83938]:  stderr: 2025-10-11T03:37:05.381+0000 7f7f7916b740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 11 03:37:07 compute-0 frosty_ride[83938]:  stderr: 2025-10-11T03:37:05.381+0000 7f7f7916b740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 11 03:37:07 compute-0 frosty_ride[83938]:  stderr: 2025-10-11T03:37:05.381+0000 7f7f7916b740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Oct 11 03:37:07 compute-0 frosty_ride[83938]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Oct 11 03:37:07 compute-0 frosty_ride[83938]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 11 03:37:07 compute-0 frosty_ride[83938]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Oct 11 03:37:07 compute-0 frosty_ride[83938]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 11 03:37:07 compute-0 frosty_ride[83938]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Oct 11 03:37:07 compute-0 frosty_ride[83938]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 11 03:37:07 compute-0 frosty_ride[83938]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 11 03:37:08 compute-0 frosty_ride[83938]: --> ceph-volume lvm activate successful for osd ID: 2
Oct 11 03:37:08 compute-0 frosty_ride[83938]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Oct 11 03:37:08 compute-0 systemd[1]: libpod-a1a06818ad62d2540bbf83d38cce76b4cbaf2c5d8ce8266531da627a9bae4b61.scope: Deactivated successfully.
Oct 11 03:37:08 compute-0 systemd[1]: libpod-a1a06818ad62d2540bbf83d38cce76b4cbaf2c5d8ce8266531da627a9bae4b61.scope: Consumed 6.492s CPU time.
Oct 11 03:37:08 compute-0 podman[86776]: 2025-10-11 03:37:08.100215072 +0000 UTC m=+0.030212821 container died a1a06818ad62d2540bbf83d38cce76b4cbaf2c5d8ce8266531da627a9bae4b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:37:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-2847501bd062284e98e0a3527c6f6c1b262b9b664387c6870c8719a5266f1947-merged.mount: Deactivated successfully.
Oct 11 03:37:08 compute-0 podman[86776]: 2025-10-11 03:37:08.175741976 +0000 UTC m=+0.105739625 container remove a1a06818ad62d2540bbf83d38cce76b4cbaf2c5d8ce8266531da627a9bae4b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ride, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 03:37:08 compute-0 systemd[1]: libpod-conmon-a1a06818ad62d2540bbf83d38cce76b4cbaf2c5d8ce8266531da627a9bae4b61.scope: Deactivated successfully.
Oct 11 03:37:08 compute-0 sudo[83817]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:08 compute-0 sudo[86791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:08 compute-0 sudo[86791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:08 compute-0 sudo[86791]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:08 compute-0 sudo[86816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:37:08 compute-0 sudo[86816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:08 compute-0 sudo[86816]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:08 compute-0 sudo[86841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:08 compute-0 sudo[86841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:08 compute-0 sudo[86841]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:08 compute-0 sudo[86866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 03:37:08 compute-0 sudo[86866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:08 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:37:08 compute-0 ceph-mon[74273]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:37:08 compute-0 podman[86929]: 2025-10-11 03:37:08.957327529 +0000 UTC m=+0.051813899 container create b109b0d9f5039733ed4bb0a4c632826ae266b6e3abf44462eb4a61194939126f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_antonelli, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:37:08 compute-0 systemd[1]: Started libpod-conmon-b109b0d9f5039733ed4bb0a4c632826ae266b6e3abf44462eb4a61194939126f.scope.
Oct 11 03:37:09 compute-0 podman[86929]: 2025-10-11 03:37:08.932677385 +0000 UTC m=+0.027163785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:09 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:09 compute-0 podman[86929]: 2025-10-11 03:37:09.042956167 +0000 UTC m=+0.137442577 container init b109b0d9f5039733ed4bb0a4c632826ae266b6e3abf44462eb4a61194939126f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:37:09 compute-0 podman[86929]: 2025-10-11 03:37:09.054330277 +0000 UTC m=+0.148816677 container start b109b0d9f5039733ed4bb0a4c632826ae266b6e3abf44462eb4a61194939126f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_antonelli, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:37:09 compute-0 podman[86929]: 2025-10-11 03:37:09.058633518 +0000 UTC m=+0.153119968 container attach b109b0d9f5039733ed4bb0a4c632826ae266b6e3abf44462eb4a61194939126f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_antonelli, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 03:37:09 compute-0 infallible_antonelli[86945]: 167 167
Oct 11 03:37:09 compute-0 systemd[1]: libpod-b109b0d9f5039733ed4bb0a4c632826ae266b6e3abf44462eb4a61194939126f.scope: Deactivated successfully.
Oct 11 03:37:09 compute-0 podman[86929]: 2025-10-11 03:37:09.062079415 +0000 UTC m=+0.156565825 container died b109b0d9f5039733ed4bb0a4c632826ae266b6e3abf44462eb4a61194939126f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 03:37:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac64d31fc2d1a18346f14f538d03d39f3216729e47a619d10fc2af6a0e406b3e-merged.mount: Deactivated successfully.
Oct 11 03:37:09 compute-0 podman[86929]: 2025-10-11 03:37:09.104470787 +0000 UTC m=+0.198957187 container remove b109b0d9f5039733ed4bb0a4c632826ae266b6e3abf44462eb4a61194939126f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_antonelli, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 03:37:09 compute-0 systemd[1]: libpod-conmon-b109b0d9f5039733ed4bb0a4c632826ae266b6e3abf44462eb4a61194939126f.scope: Deactivated successfully.
Oct 11 03:37:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:37:09 compute-0 podman[86969]: 2025-10-11 03:37:09.314371131 +0000 UTC m=+0.064057853 container create 8dfd15255eadf4fe00715be6983eecf2587bf71feb26babf720c14d9215a9696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:37:09 compute-0 systemd[1]: Started libpod-conmon-8dfd15255eadf4fe00715be6983eecf2587bf71feb26babf720c14d9215a9696.scope.
Oct 11 03:37:09 compute-0 podman[86969]: 2025-10-11 03:37:09.288863263 +0000 UTC m=+0.038549995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:09 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53036cc89d8cf828770a3dbe0cba1f6a586bca98e4e886133576e975d15841fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53036cc89d8cf828770a3dbe0cba1f6a586bca98e4e886133576e975d15841fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53036cc89d8cf828770a3dbe0cba1f6a586bca98e4e886133576e975d15841fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53036cc89d8cf828770a3dbe0cba1f6a586bca98e4e886133576e975d15841fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:09 compute-0 podman[86969]: 2025-10-11 03:37:09.408545349 +0000 UTC m=+0.158232081 container init 8dfd15255eadf4fe00715be6983eecf2587bf71feb26babf720c14d9215a9696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lumiere, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:37:09 compute-0 podman[86969]: 2025-10-11 03:37:09.424388015 +0000 UTC m=+0.174074737 container start 8dfd15255eadf4fe00715be6983eecf2587bf71feb26babf720c14d9215a9696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lumiere, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:37:09 compute-0 podman[86969]: 2025-10-11 03:37:09.428574203 +0000 UTC m=+0.178260965 container attach 8dfd15255eadf4fe00715be6983eecf2587bf71feb26babf720c14d9215a9696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lumiere, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]: {
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:     "0": [
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:         {
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "devices": [
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "/dev/loop3"
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             ],
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "lv_name": "ceph_lv0",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "lv_size": "21470642176",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "name": "ceph_lv0",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "tags": {
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.cluster_name": "ceph",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.crush_device_class": "",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.encrypted": "0",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.osd_id": "0",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.type": "block",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.vdo": "0"
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             },
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "type": "block",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "vg_name": "ceph_vg0"
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:         }
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:     ],
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:     "1": [
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:         {
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "devices": [
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "/dev/loop4"
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             ],
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "lv_name": "ceph_lv1",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "lv_size": "21470642176",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "name": "ceph_lv1",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "tags": {
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.cluster_name": "ceph",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.crush_device_class": "",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.encrypted": "0",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.osd_id": "1",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.type": "block",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.vdo": "0"
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             },
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "type": "block",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "vg_name": "ceph_vg1"
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:         }
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:     ],
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:     "2": [
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:         {
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "devices": [
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "/dev/loop5"
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             ],
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "lv_name": "ceph_lv2",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "lv_size": "21470642176",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "name": "ceph_lv2",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "tags": {
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.cluster_name": "ceph",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.crush_device_class": "",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.encrypted": "0",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.osd_id": "2",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.type": "block",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:                 "ceph.vdo": "0"
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             },
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "type": "block",
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:             "vg_name": "ceph_vg2"
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:         }
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]:     ]
Oct 11 03:37:10 compute-0 quirky_lumiere[86986]: }
Oct 11 03:37:10 compute-0 systemd[1]: libpod-8dfd15255eadf4fe00715be6983eecf2587bf71feb26babf720c14d9215a9696.scope: Deactivated successfully.
Oct 11 03:37:10 compute-0 conmon[86986]: conmon 8dfd15255eadf4fe0071 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8dfd15255eadf4fe00715be6983eecf2587bf71feb26babf720c14d9215a9696.scope/container/memory.events
Oct 11 03:37:10 compute-0 podman[86969]: 2025-10-11 03:37:10.22704267 +0000 UTC m=+0.976729392 container died 8dfd15255eadf4fe00715be6983eecf2587bf71feb26babf720c14d9215a9696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 03:37:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-53036cc89d8cf828770a3dbe0cba1f6a586bca98e4e886133576e975d15841fc-merged.mount: Deactivated successfully.
Oct 11 03:37:10 compute-0 podman[86969]: 2025-10-11 03:37:10.306445684 +0000 UTC m=+1.056132366 container remove 8dfd15255eadf4fe00715be6983eecf2587bf71feb26babf720c14d9215a9696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:37:10 compute-0 systemd[1]: libpod-conmon-8dfd15255eadf4fe00715be6983eecf2587bf71feb26babf720c14d9215a9696.scope: Deactivated successfully.
Oct 11 03:37:10 compute-0 sudo[86866]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Oct 11 03:37:10 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 11 03:37:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:37:10 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:37:10 compute-0 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Oct 11 03:37:10 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Oct 11 03:37:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 11 03:37:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:37:10 compute-0 sudo[87008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:10 compute-0 sudo[87008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:10 compute-0 sudo[87008]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:10 compute-0 sudo[87033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:37:10 compute-0 sudo[87033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:10 compute-0 sudo[87033]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:10 compute-0 sudo[87058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:10 compute-0 sudo[87058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:10 compute-0 sudo[87058]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:10 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:37:10 compute-0 sudo[87083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162
Oct 11 03:37:10 compute-0 sudo[87083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:11 compute-0 podman[87148]: 2025-10-11 03:37:11.170628139 +0000 UTC m=+0.060520433 container create 503df39db5bb159d3ffc73ecfd6687897272cfa773f5edee81310393a6fb3e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_leakey, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:37:11 compute-0 systemd[1]: Started libpod-conmon-503df39db5bb159d3ffc73ecfd6687897272cfa773f5edee81310393a6fb3e5c.scope.
Oct 11 03:37:11 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:11 compute-0 podman[87148]: 2025-10-11 03:37:11.144412432 +0000 UTC m=+0.034304796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:11 compute-0 podman[87148]: 2025-10-11 03:37:11.24993835 +0000 UTC m=+0.139830714 container init 503df39db5bb159d3ffc73ecfd6687897272cfa773f5edee81310393a6fb3e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 03:37:11 compute-0 podman[87148]: 2025-10-11 03:37:11.261131435 +0000 UTC m=+0.151023699 container start 503df39db5bb159d3ffc73ecfd6687897272cfa773f5edee81310393a6fb3e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_leakey, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 03:37:11 compute-0 podman[87148]: 2025-10-11 03:37:11.264911831 +0000 UTC m=+0.154804125 container attach 503df39db5bb159d3ffc73ecfd6687897272cfa773f5edee81310393a6fb3e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Oct 11 03:37:11 compute-0 peaceful_leakey[87164]: 167 167
Oct 11 03:37:11 compute-0 systemd[1]: libpod-503df39db5bb159d3ffc73ecfd6687897272cfa773f5edee81310393a6fb3e5c.scope: Deactivated successfully.
Oct 11 03:37:11 compute-0 podman[87148]: 2025-10-11 03:37:11.26735592 +0000 UTC m=+0.157248224 container died 503df39db5bb159d3ffc73ecfd6687897272cfa773f5edee81310393a6fb3e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_leakey, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 03:37:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-25e0a1ef7a1c3ea3f12a9115af9e7f801ad45a5fc361684e95ddc92e42d2602b-merged.mount: Deactivated successfully.
Oct 11 03:37:11 compute-0 podman[87148]: 2025-10-11 03:37:11.316869792 +0000 UTC m=+0.206762046 container remove 503df39db5bb159d3ffc73ecfd6687897272cfa773f5edee81310393a6fb3e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_leakey, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 03:37:11 compute-0 systemd[1]: libpod-conmon-503df39db5bb159d3ffc73ecfd6687897272cfa773f5edee81310393a6fb3e5c.scope: Deactivated successfully.
Oct 11 03:37:11 compute-0 ceph-mon[74273]: Deploying daemon osd.0 on compute-0
Oct 11 03:37:11 compute-0 ceph-mon[74273]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:37:11 compute-0 podman[87195]: 2025-10-11 03:37:11.610996444 +0000 UTC m=+0.040784088 container create a44fe0331570753c1ab6d3cdf7f9daab6987391525e856ca7334a420e6941bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct 11 03:37:11 compute-0 systemd[1]: Started libpod-conmon-a44fe0331570753c1ab6d3cdf7f9daab6987391525e856ca7334a420e6941bcb.scope.
Oct 11 03:37:11 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a1dfa58f982a8f6371e5b36469fbdf1370001f87ecc95bcc5049be5aa304292/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:11 compute-0 podman[87195]: 2025-10-11 03:37:11.592991198 +0000 UTC m=+0.022778872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a1dfa58f982a8f6371e5b36469fbdf1370001f87ecc95bcc5049be5aa304292/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a1dfa58f982a8f6371e5b36469fbdf1370001f87ecc95bcc5049be5aa304292/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a1dfa58f982a8f6371e5b36469fbdf1370001f87ecc95bcc5049be5aa304292/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a1dfa58f982a8f6371e5b36469fbdf1370001f87ecc95bcc5049be5aa304292/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:11 compute-0 podman[87195]: 2025-10-11 03:37:11.706990934 +0000 UTC m=+0.136778628 container init a44fe0331570753c1ab6d3cdf7f9daab6987391525e856ca7334a420e6941bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-0-activate-test, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:37:11 compute-0 podman[87195]: 2025-10-11 03:37:11.719344651 +0000 UTC m=+0.149132325 container start a44fe0331570753c1ab6d3cdf7f9daab6987391525e856ca7334a420e6941bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-0-activate-test, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:37:11 compute-0 podman[87195]: 2025-10-11 03:37:11.723771846 +0000 UTC m=+0.153559490 container attach a44fe0331570753c1ab6d3cdf7f9daab6987391525e856ca7334a420e6941bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-0-activate-test, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:37:12 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-0-activate-test[87212]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 11 03:37:12 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-0-activate-test[87212]:                             [--no-systemd] [--no-tmpfs]
Oct 11 03:37:12 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-0-activate-test[87212]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 11 03:37:12 compute-0 systemd[1]: libpod-a44fe0331570753c1ab6d3cdf7f9daab6987391525e856ca7334a420e6941bcb.scope: Deactivated successfully.
Oct 11 03:37:12 compute-0 podman[87195]: 2025-10-11 03:37:12.470457287 +0000 UTC m=+0.900244981 container died a44fe0331570753c1ab6d3cdf7f9daab6987391525e856ca7334a420e6941bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 03:37:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a1dfa58f982a8f6371e5b36469fbdf1370001f87ecc95bcc5049be5aa304292-merged.mount: Deactivated successfully.
Oct 11 03:37:12 compute-0 podman[87195]: 2025-10-11 03:37:12.534191969 +0000 UTC m=+0.963979623 container remove a44fe0331570753c1ab6d3cdf7f9daab6987391525e856ca7334a420e6941bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:37:12 compute-0 systemd[1]: libpod-conmon-a44fe0331570753c1ab6d3cdf7f9daab6987391525e856ca7334a420e6941bcb.scope: Deactivated successfully.
Oct 11 03:37:12 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:37:12 compute-0 ceph-mon[74273]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:37:12 compute-0 systemd[1]: Reloading.
Oct 11 03:37:12 compute-0 systemd-rc-local-generator[87274]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:37:12 compute-0 systemd-sysv-generator[87277]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:37:13 compute-0 systemd[1]: Reloading.
Oct 11 03:37:13 compute-0 systemd-rc-local-generator[87316]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:37:13 compute-0 systemd-sysv-generator[87320]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:37:13 compute-0 systemd[1]: Starting Ceph osd.0 for 23b68101-59a9-532f-ab6b-9acf78fb2162...
Oct 11 03:37:13 compute-0 podman[87371]: 2025-10-11 03:37:13.67944543 +0000 UTC m=+0.058064884 container create 188e3259b73fa5b77a053d8675fb18cce183d822186b91113db9ea6c3fcefabb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-0-activate, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 03:37:13 compute-0 podman[87371]: 2025-10-11 03:37:13.644388974 +0000 UTC m=+0.023008498 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:13 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d9b45c273f8e158c48192c88e3b08ffba5b414dbb6c9c5836957581b5031ae9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d9b45c273f8e158c48192c88e3b08ffba5b414dbb6c9c5836957581b5031ae9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d9b45c273f8e158c48192c88e3b08ffba5b414dbb6c9c5836957581b5031ae9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d9b45c273f8e158c48192c88e3b08ffba5b414dbb6c9c5836957581b5031ae9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d9b45c273f8e158c48192c88e3b08ffba5b414dbb6c9c5836957581b5031ae9/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:13 compute-0 podman[87371]: 2025-10-11 03:37:13.771547671 +0000 UTC m=+0.150167165 container init 188e3259b73fa5b77a053d8675fb18cce183d822186b91113db9ea6c3fcefabb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-0-activate, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:37:13 compute-0 podman[87371]: 2025-10-11 03:37:13.786333767 +0000 UTC m=+0.164953211 container start 188e3259b73fa5b77a053d8675fb18cce183d822186b91113db9ea6c3fcefabb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 03:37:13 compute-0 podman[87371]: 2025-10-11 03:37:13.790206936 +0000 UTC m=+0.168826430 container attach 188e3259b73fa5b77a053d8675fb18cce183d822186b91113db9ea6c3fcefabb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-0-activate, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:37:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:37:14 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:37:14 compute-0 ceph-mon[74273]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:37:14 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-0-activate[87386]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 11 03:37:14 compute-0 bash[87371]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 11 03:37:14 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-0-activate[87386]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct 11 03:37:14 compute-0 bash[87371]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct 11 03:37:14 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-0-activate[87386]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct 11 03:37:14 compute-0 bash[87371]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct 11 03:37:14 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-0-activate[87386]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 11 03:37:14 compute-0 bash[87371]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 11 03:37:14 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-0-activate[87386]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 11 03:37:14 compute-0 bash[87371]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 11 03:37:14 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-0-activate[87386]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 11 03:37:14 compute-0 bash[87371]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 11 03:37:14 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-0-activate[87386]: --> ceph-volume raw activate successful for osd ID: 0
Oct 11 03:37:14 compute-0 bash[87371]: --> ceph-volume raw activate successful for osd ID: 0
Oct 11 03:37:14 compute-0 systemd[1]: libpod-188e3259b73fa5b77a053d8675fb18cce183d822186b91113db9ea6c3fcefabb.scope: Deactivated successfully.
Oct 11 03:37:14 compute-0 podman[87371]: 2025-10-11 03:37:14.983992052 +0000 UTC m=+1.362611506 container died 188e3259b73fa5b77a053d8675fb18cce183d822186b91113db9ea6c3fcefabb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 03:37:14 compute-0 systemd[1]: libpod-188e3259b73fa5b77a053d8675fb18cce183d822186b91113db9ea6c3fcefabb.scope: Consumed 1.214s CPU time.
Oct 11 03:37:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d9b45c273f8e158c48192c88e3b08ffba5b414dbb6c9c5836957581b5031ae9-merged.mount: Deactivated successfully.
Oct 11 03:37:15 compute-0 podman[87371]: 2025-10-11 03:37:15.044826652 +0000 UTC m=+1.423446106 container remove 188e3259b73fa5b77a053d8675fb18cce183d822186b91113db9ea6c3fcefabb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-0-activate, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 03:37:15 compute-0 podman[87572]: 2025-10-11 03:37:15.325596388 +0000 UTC m=+0.066290775 container create 25bc18b533a9f532fabb25e1bc557b8630cd4db535536595d70e27f3c39872a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 03:37:15 compute-0 podman[87572]: 2025-10-11 03:37:15.298570118 +0000 UTC m=+0.039264565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/506385c6734e4fe98e1a3e37d9e582bcc66d29f2957656430fe20e222f16bc7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/506385c6734e4fe98e1a3e37d9e582bcc66d29f2957656430fe20e222f16bc7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/506385c6734e4fe98e1a3e37d9e582bcc66d29f2957656430fe20e222f16bc7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/506385c6734e4fe98e1a3e37d9e582bcc66d29f2957656430fe20e222f16bc7a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/506385c6734e4fe98e1a3e37d9e582bcc66d29f2957656430fe20e222f16bc7a/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:15 compute-0 podman[87572]: 2025-10-11 03:37:15.413879021 +0000 UTC m=+0.154573468 container init 25bc18b533a9f532fabb25e1bc557b8630cd4db535536595d70e27f3c39872a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 03:37:15 compute-0 podman[87572]: 2025-10-11 03:37:15.430334694 +0000 UTC m=+0.171029091 container start 25bc18b533a9f532fabb25e1bc557b8630cd4db535536595d70e27f3c39872a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 03:37:15 compute-0 bash[87572]: 25bc18b533a9f532fabb25e1bc557b8630cd4db535536595d70e27f3c39872a6
Oct 11 03:37:15 compute-0 systemd[1]: Started Ceph osd.0 for 23b68101-59a9-532f-ab6b-9acf78fb2162.
Oct 11 03:37:15 compute-0 sudo[87083]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:37:15 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:37:15 compute-0 ceph-osd[87591]: set uid:gid to 167:167 (ceph:ceph)
Oct 11 03:37:15 compute-0 ceph-osd[87591]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct 11 03:37:15 compute-0 ceph-osd[87591]: pidfile_write: ignore empty --pid-file
Oct 11 03:37:15 compute-0 ceph-osd[87591]: bdev(0x5651f2989800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 11 03:37:15 compute-0 ceph-osd[87591]: bdev(0x5651f2989800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 11 03:37:15 compute-0 ceph-osd[87591]: bdev(0x5651f2989800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 03:37:15 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 03:37:15 compute-0 ceph-osd[87591]: bdev(0x5651f37c1800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 11 03:37:15 compute-0 ceph-osd[87591]: bdev(0x5651f37c1800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 11 03:37:15 compute-0 ceph-osd[87591]: bdev(0x5651f37c1800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 03:37:15 compute-0 ceph-osd[87591]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 11 03:37:15 compute-0 ceph-osd[87591]: bdev(0x5651f37c1800 /var/lib/ceph/osd/ceph-0/block) close
Oct 11 03:37:15 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Oct 11 03:37:15 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 11 03:37:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:37:15 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:37:15 compute-0 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Oct 11 03:37:15 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Oct 11 03:37:15 compute-0 sudo[87604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:15 compute-0 sudo[87604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:15 compute-0 sudo[87604]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:15 compute-0 sudo[87629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:37:15 compute-0 sudo[87629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:15 compute-0 sudo[87629]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:15 compute-0 sudo[87654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:15 compute-0 sudo[87654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:15 compute-0 sudo[87654]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:15 compute-0 sudo[87679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162
Oct 11 03:37:15 compute-0 sudo[87679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:15 compute-0 ceph-osd[87591]: bdev(0x5651f2989800 /var/lib/ceph/osd/ceph-0/block) close
Oct 11 03:37:16 compute-0 ceph-osd[87591]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Oct 11 03:37:16 compute-0 ceph-osd[87591]: load: jerasure load: lrc 
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bdev(0x5651f3842c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bdev(0x5651f3842c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bdev(0x5651f3842c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bdev(0x5651f3842c00 /var/lib/ceph/osd/ceph-0/block) close
Oct 11 03:37:16 compute-0 podman[87752]: 2025-10-11 03:37:16.185491044 +0000 UTC m=+0.064692181 container create 24a1946d2634d23a634b5e1e5ec079dc4e5507d04ab498914a2d60d937b9c032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:37:16 compute-0 systemd[1]: Started libpod-conmon-24a1946d2634d23a634b5e1e5ec079dc4e5507d04ab498914a2d60d937b9c032.scope.
Oct 11 03:37:16 compute-0 podman[87752]: 2025-10-11 03:37:16.157870047 +0000 UTC m=+0.037071224 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:16 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:16 compute-0 podman[87752]: 2025-10-11 03:37:16.284689144 +0000 UTC m=+0.163890321 container init 24a1946d2634d23a634b5e1e5ec079dc4e5507d04ab498914a2d60d937b9c032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_babbage, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:37:16 compute-0 podman[87752]: 2025-10-11 03:37:16.2963079 +0000 UTC m=+0.175509027 container start 24a1946d2634d23a634b5e1e5ec079dc4e5507d04ab498914a2d60d937b9c032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_babbage, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 03:37:16 compute-0 podman[87752]: 2025-10-11 03:37:16.300017265 +0000 UTC m=+0.179218412 container attach 24a1946d2634d23a634b5e1e5ec079dc4e5507d04ab498914a2d60d937b9c032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_babbage, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:37:16 compute-0 naughty_babbage[87767]: 167 167
Oct 11 03:37:16 compute-0 systemd[1]: libpod-24a1946d2634d23a634b5e1e5ec079dc4e5507d04ab498914a2d60d937b9c032.scope: Deactivated successfully.
Oct 11 03:37:16 compute-0 conmon[87767]: conmon 24a1946d2634d23a634b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-24a1946d2634d23a634b5e1e5ec079dc4e5507d04ab498914a2d60d937b9c032.scope/container/memory.events
Oct 11 03:37:16 compute-0 podman[87752]: 2025-10-11 03:37:16.30660333 +0000 UTC m=+0.185804457 container died 24a1946d2634d23a634b5e1e5ec079dc4e5507d04ab498914a2d60d937b9c032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_babbage, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bdev(0x5651f3842c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bdev(0x5651f3842c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bdev(0x5651f3842c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bdev(0x5651f3842c00 /var/lib/ceph/osd/ceph-0/block) close
Oct 11 03:37:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7304f39236c3207be01d426ea472f02d3a5dfef15f09c3534759fe514775850-merged.mount: Deactivated successfully.
Oct 11 03:37:16 compute-0 podman[87752]: 2025-10-11 03:37:16.364629892 +0000 UTC m=+0.243831029 container remove 24a1946d2634d23a634b5e1e5ec079dc4e5507d04ab498914a2d60d937b9c032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 03:37:16 compute-0 systemd[1]: libpod-conmon-24a1946d2634d23a634b5e1e5ec079dc4e5507d04ab498914a2d60d937b9c032.scope: Deactivated successfully.
Oct 11 03:37:16 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:16 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:16 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 11 03:37:16 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:37:16 compute-0 ceph-mon[74273]: Deploying daemon osd.1 on compute-0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 11 03:37:16 compute-0 ceph-osd[87591]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bdev(0x5651f3842c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bdev(0x5651f3842c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bdev(0x5651f3842c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bdev(0x5651f3843400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bdev(0x5651f3843400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bdev(0x5651f3843400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bluefs mount
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bluefs mount shared_bdev_used = 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: RocksDB version: 7.9.2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Git sha 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: DB SUMMARY
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: DB Session ID:  2RAARMJ8GB7GVJ94BLNB
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: CURRENT file:  CURRENT
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: IDENTITY file:  IDENTITY
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                         Options.error_if_exists: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.create_if_missing: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                         Options.paranoid_checks: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                                     Options.env: 0x5651f3813d50
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                                Options.info_log: 0x5651f2a107e0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.max_file_opening_threads: 16
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                              Options.statistics: (nil)
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                               Options.use_fsync: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.max_log_file_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                         Options.allow_fallocate: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.use_direct_reads: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.create_missing_column_families: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                              Options.db_log_dir: 
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                                 Options.wal_dir: db.wal
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.advise_random_on_open: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.write_buffer_manager: 0x5651f391c460
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                            Options.rate_limiter: (nil)
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.unordered_write: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                               Options.row_cache: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                              Options.wal_filter: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.allow_ingest_behind: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.two_write_queues: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.manual_wal_flush: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.wal_compression: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.atomic_flush: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.log_readahead_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.allow_data_in_errors: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.db_host_id: __hostname__
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.max_background_jobs: 4
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.max_background_compactions: -1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.max_subcompactions: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.max_open_files: -1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.bytes_per_sync: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.max_background_flushes: -1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Compression algorithms supported:
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         kZSTD supported: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         kXpressCompression supported: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         kBZip2Compression supported: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         kLZ4Compression supported: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         kZlibCompression supported: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         kLZ4HCCompression supported: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         kSnappyCompression supported: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5651f2a10200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5651f29fd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5651f2a10200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5651f29fd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5651f2a10200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5651f29fd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5651f2a10200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5651f29fd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5651f2a10200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5651f29fd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5651f2a10200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5651f29fd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5651f2a10200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5651f29fd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5651f2a10180)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5651f29fd090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5651f2a10180)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5651f29fd090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5651f2a10180)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5651f29fd090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 58cf71d6-a2b8-499c-bb65-72a54e89bf3b
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153836626555, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153836626797, "job": 1, "event": "recovery_finished"}
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: freelist init
Oct 11 03:37:16 compute-0 ceph-osd[87591]: freelist _read_cfg
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bluefs umount
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bdev(0x5651f3843400 /var/lib/ceph/osd/ceph-0/block) close
Oct 11 03:37:16 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:37:16 compute-0 podman[87998]: 2025-10-11 03:37:16.730935564 +0000 UTC m=+0.057102707 container create cd5d43d38cb2254d8b2ca938bf36faba0799b3dce410c9083ef5ce985dd43b40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-1-activate-test, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:37:16 compute-0 systemd[1]: Started libpod-conmon-cd5d43d38cb2254d8b2ca938bf36faba0799b3dce410c9083ef5ce985dd43b40.scope.
Oct 11 03:37:16 compute-0 podman[87998]: 2025-10-11 03:37:16.706110466 +0000 UTC m=+0.032277689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:16 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6cf87c4ad9d317534b08ac214f2152a02c016a7cf53587fa04bb4ae1bc242a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6cf87c4ad9d317534b08ac214f2152a02c016a7cf53587fa04bb4ae1bc242a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6cf87c4ad9d317534b08ac214f2152a02c016a7cf53587fa04bb4ae1bc242a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6cf87c4ad9d317534b08ac214f2152a02c016a7cf53587fa04bb4ae1bc242a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6cf87c4ad9d317534b08ac214f2152a02c016a7cf53587fa04bb4ae1bc242a8/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:16 compute-0 podman[87998]: 2025-10-11 03:37:16.836497784 +0000 UTC m=+0.162664957 container init cd5d43d38cb2254d8b2ca938bf36faba0799b3dce410c9083ef5ce985dd43b40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:37:16 compute-0 podman[87998]: 2025-10-11 03:37:16.854961593 +0000 UTC m=+0.181128756 container start cd5d43d38cb2254d8b2ca938bf36faba0799b3dce410c9083ef5ce985dd43b40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-1-activate-test, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:37:16 compute-0 podman[87998]: 2025-10-11 03:37:16.864334826 +0000 UTC m=+0.190501979 container attach cd5d43d38cb2254d8b2ca938bf36faba0799b3dce410c9083ef5ce985dd43b40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-1-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bdev(0x5651f3843400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bdev(0x5651f3843400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bdev(0x5651f3843400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bluefs mount
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bluefs mount shared_bdev_used = 4718592
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: RocksDB version: 7.9.2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Git sha 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: DB SUMMARY
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: DB Session ID:  2RAARMJ8GB7GVJ94BLNA
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: CURRENT file:  CURRENT
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: IDENTITY file:  IDENTITY
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                         Options.error_if_exists: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.create_if_missing: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                         Options.paranoid_checks: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                                     Options.env: 0x5651f39ac230
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                                Options.info_log: 0x5651f2a10540
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.max_file_opening_threads: 16
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                              Options.statistics: (nil)
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                               Options.use_fsync: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.max_log_file_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                         Options.allow_fallocate: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.use_direct_reads: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.create_missing_column_families: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                              Options.db_log_dir: 
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                                 Options.wal_dir: db.wal
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.advise_random_on_open: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.write_buffer_manager: 0x5651f391c6e0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                            Options.rate_limiter: (nil)
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.unordered_write: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                               Options.row_cache: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                              Options.wal_filter: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.allow_ingest_behind: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.two_write_queues: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.manual_wal_flush: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.wal_compression: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.atomic_flush: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.log_readahead_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.allow_data_in_errors: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.db_host_id: __hostname__
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.max_background_jobs: 4
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.max_background_compactions: -1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.max_subcompactions: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.max_open_files: -1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.bytes_per_sync: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.max_background_flushes: -1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Compression algorithms supported:
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         kZSTD supported: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         kXpressCompression supported: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         kBZip2Compression supported: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         kLZ4Compression supported: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         kZlibCompression supported: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         kLZ4HCCompression supported: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         kSnappyCompression supported: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5651f2a06400)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5651f29fd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5651f2a06400)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5651f29fd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5651f2a06400)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5651f29fd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5651f2a06400)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5651f29fd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5651f2a06400)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5651f29fd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5651f2a06400)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5651f29fd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5651f2a06400)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5651f29fd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5651f2a063c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5651f29fd090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5651f2a063c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5651f29fd090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5651f2a063c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5651f29fd090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 58cf71d6-a2b8-499c-bb65-72a54e89bf3b
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153836906394, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153836912099, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153836, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "58cf71d6-a2b8-499c-bb65-72a54e89bf3b", "db_session_id": "2RAARMJ8GB7GVJ94BLNA", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153836915335, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153836, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "58cf71d6-a2b8-499c-bb65-72a54e89bf3b", "db_session_id": "2RAARMJ8GB7GVJ94BLNA", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153836918291, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153836, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "58cf71d6-a2b8-499c-bb65-72a54e89bf3b", "db_session_id": "2RAARMJ8GB7GVJ94BLNA", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153836919625, "job": 1, "event": "recovery_finished"}
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5651f39d1c00
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: DB pointer 0x5651f3905a00
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Oct 11 03:37:16 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 03:37:16 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 03:37:16 compute-0 ceph-osd[87591]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 11 03:37:16 compute-0 ceph-osd[87591]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 11 03:37:16 compute-0 ceph-osd[87591]: _get_class not permitted to load lua
Oct 11 03:37:16 compute-0 ceph-osd[87591]: _get_class not permitted to load sdk
Oct 11 03:37:16 compute-0 ceph-osd[87591]: _get_class not permitted to load test_remote_reads
Oct 11 03:37:16 compute-0 ceph-osd[87591]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 11 03:37:16 compute-0 ceph-osd[87591]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 11 03:37:16 compute-0 ceph-osd[87591]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 11 03:37:16 compute-0 ceph-osd[87591]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 11 03:37:16 compute-0 ceph-osd[87591]: osd.0 0 load_pgs
Oct 11 03:37:16 compute-0 ceph-osd[87591]: osd.0 0 load_pgs opened 0 pgs
Oct 11 03:37:16 compute-0 ceph-osd[87591]: osd.0 0 log_to_monitors true
Oct 11 03:37:16 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-0[87587]: 2025-10-11T03:37:16.950+0000 7f36231aa740 -1 osd.0 0 log_to_monitors true
Oct 11 03:37:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Oct 11 03:37:16 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3430557825,v1:192.168.122.100:6803/3430557825]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 11 03:37:17 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-1-activate-test[88014]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 11 03:37:17 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-1-activate-test[88014]:                             [--no-systemd] [--no-tmpfs]
Oct 11 03:37:17 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-1-activate-test[88014]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 11 03:37:17 compute-0 systemd[1]: libpod-cd5d43d38cb2254d8b2ca938bf36faba0799b3dce410c9083ef5ce985dd43b40.scope: Deactivated successfully.
Oct 11 03:37:17 compute-0 podman[87998]: 2025-10-11 03:37:17.485185828 +0000 UTC m=+0.811353031 container died cd5d43d38cb2254d8b2ca938bf36faba0799b3dce410c9083ef5ce985dd43b40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Oct 11 03:37:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Oct 11 03:37:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 03:37:17 compute-0 ceph-mon[74273]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:37:17 compute-0 ceph-mon[74273]: from='osd.0 [v2:192.168.122.100:6802/3430557825,v1:192.168.122.100:6803/3430557825]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 11 03:37:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6cf87c4ad9d317534b08ac214f2152a02c016a7cf53587fa04bb4ae1bc242a8-merged.mount: Deactivated successfully.
Oct 11 03:37:17 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3430557825,v1:192.168.122.100:6803/3430557825]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 11 03:37:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Oct 11 03:37:17 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Oct 11 03:37:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct 11 03:37:17 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3430557825,v1:192.168.122.100:6803/3430557825]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 11 03:37:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct 11 03:37:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 03:37:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 03:37:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 03:37:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 03:37:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 03:37:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:17 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 11 03:37:17 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 03:37:17 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 03:37:17 compute-0 podman[87998]: 2025-10-11 03:37:17.546647197 +0000 UTC m=+0.872814360 container remove cd5d43d38cb2254d8b2ca938bf36faba0799b3dce410c9083ef5ce985dd43b40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-1-activate-test, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:37:17 compute-0 systemd[1]: libpod-conmon-cd5d43d38cb2254d8b2ca938bf36faba0799b3dce410c9083ef5ce985dd43b40.scope: Deactivated successfully.
Oct 11 03:37:17 compute-0 systemd[1]: Reloading.
Oct 11 03:37:17 compute-0 systemd-rc-local-generator[88291]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:37:17 compute-0 systemd-sysv-generator[88296]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:37:17 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 11 03:37:17 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 11 03:37:18 compute-0 systemd[1]: Reloading.
Oct 11 03:37:18 compute-0 systemd-sysv-generator[88329]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:37:18 compute-0 systemd-rc-local-generator[88325]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:37:18 compute-0 systemd[1]: Starting Ceph osd.1 for 23b68101-59a9-532f-ab6b-9acf78fb2162...
Oct 11 03:37:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Oct 11 03:37:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 03:37:18 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3430557825,v1:192.168.122.100:6803/3430557825]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 11 03:37:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Oct 11 03:37:18 compute-0 ceph-osd[87591]: osd.0 0 done with init, starting boot process
Oct 11 03:37:18 compute-0 ceph-osd[87591]: osd.0 0 start_boot
Oct 11 03:37:18 compute-0 ceph-osd[87591]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 11 03:37:18 compute-0 ceph-osd[87591]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 11 03:37:18 compute-0 ceph-osd[87591]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 11 03:37:18 compute-0 ceph-osd[87591]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 11 03:37:18 compute-0 ceph-osd[87591]: osd.0 0  bench count 12288000 bsize 4 KiB
Oct 11 03:37:18 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Oct 11 03:37:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 03:37:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 03:37:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 03:37:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 03:37:18 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 11 03:37:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 03:37:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:18 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 03:37:18 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 03:37:18 compute-0 ceph-mon[74273]: from='osd.0 [v2:192.168.122.100:6802/3430557825,v1:192.168.122.100:6803/3430557825]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 11 03:37:18 compute-0 ceph-mon[74273]: osdmap e7: 3 total, 0 up, 3 in
Oct 11 03:37:18 compute-0 ceph-mon[74273]: from='osd.0 [v2:192.168.122.100:6802/3430557825,v1:192.168.122.100:6803/3430557825]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 11 03:37:18 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 03:37:18 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 03:37:18 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:18 compute-0 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3430557825; not ready for session (expect reconnect)
Oct 11 03:37:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 03:37:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 03:37:18 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 11 03:37:18 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:37:18 compute-0 podman[88387]: 2025-10-11 03:37:18.820257577 +0000 UTC m=+0.079037704 container create 762b339466782a95c47c88b176fc6ece435df5948d952fc0e3bc4bf442b75b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-1-activate, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 03:37:18 compute-0 podman[88387]: 2025-10-11 03:37:18.781286371 +0000 UTC m=+0.040066608 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:18 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28102d0605eacabd3ee797b60f9cbcfa614dd1e90c8ed050364b20889b244772/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28102d0605eacabd3ee797b60f9cbcfa614dd1e90c8ed050364b20889b244772/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28102d0605eacabd3ee797b60f9cbcfa614dd1e90c8ed050364b20889b244772/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28102d0605eacabd3ee797b60f9cbcfa614dd1e90c8ed050364b20889b244772/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28102d0605eacabd3ee797b60f9cbcfa614dd1e90c8ed050364b20889b244772/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:18 compute-0 podman[88387]: 2025-10-11 03:37:18.938320817 +0000 UTC m=+0.197100984 container init 762b339466782a95c47c88b176fc6ece435df5948d952fc0e3bc4bf442b75b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-1-activate, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:37:18 compute-0 podman[88387]: 2025-10-11 03:37:18.947229128 +0000 UTC m=+0.206009245 container start 762b339466782a95c47c88b176fc6ece435df5948d952fc0e3bc4bf442b75b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:37:18 compute-0 podman[88387]: 2025-10-11 03:37:18.956657333 +0000 UTC m=+0.215437550 container attach 762b339466782a95c47c88b176fc6ece435df5948d952fc0e3bc4bf442b75b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-1-activate, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 03:37:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:37:19 compute-0 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3430557825; not ready for session (expect reconnect)
Oct 11 03:37:19 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 11 03:37:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 03:37:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 03:37:19 compute-0 ceph-mon[74273]: from='osd.0 [v2:192.168.122.100:6802/3430557825,v1:192.168.122.100:6803/3430557825]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 11 03:37:19 compute-0 ceph-mon[74273]: osdmap e8: 3 total, 0 up, 3 in
Oct 11 03:37:19 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 03:37:19 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 03:37:19 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:19 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 03:37:19 compute-0 ceph-mon[74273]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:37:19 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-1-activate[88402]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 11 03:37:19 compute-0 bash[88387]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 11 03:37:19 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-1-activate[88402]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Oct 11 03:37:19 compute-0 bash[88387]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Oct 11 03:37:19 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-1-activate[88402]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Oct 11 03:37:19 compute-0 bash[88387]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Oct 11 03:37:19 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-1-activate[88402]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 11 03:37:19 compute-0 bash[88387]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 11 03:37:20 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-1-activate[88402]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 11 03:37:20 compute-0 bash[88387]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 11 03:37:20 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-1-activate[88402]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 11 03:37:20 compute-0 bash[88387]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 11 03:37:20 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-1-activate[88402]: --> ceph-volume raw activate successful for osd ID: 1
Oct 11 03:37:20 compute-0 bash[88387]: --> ceph-volume raw activate successful for osd ID: 1
Oct 11 03:37:20 compute-0 systemd[1]: libpod-762b339466782a95c47c88b176fc6ece435df5948d952fc0e3bc4bf442b75b80.scope: Deactivated successfully.
Oct 11 03:37:20 compute-0 systemd[1]: libpod-762b339466782a95c47c88b176fc6ece435df5948d952fc0e3bc4bf442b75b80.scope: Consumed 1.132s CPU time.
Oct 11 03:37:20 compute-0 podman[88387]: 2025-10-11 03:37:20.063466533 +0000 UTC m=+1.322246690 container died 762b339466782a95c47c88b176fc6ece435df5948d952fc0e3bc4bf442b75b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:37:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-28102d0605eacabd3ee797b60f9cbcfa614dd1e90c8ed050364b20889b244772-merged.mount: Deactivated successfully.
Oct 11 03:37:20 compute-0 podman[88387]: 2025-10-11 03:37:20.145424168 +0000 UTC m=+1.404204295 container remove 762b339466782a95c47c88b176fc6ece435df5948d952fc0e3bc4bf442b75b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 03:37:20 compute-0 podman[88575]: 2025-10-11 03:37:20.421856423 +0000 UTC m=+0.043380211 container create 57e2909688769832b7f6b737db71199f0b1729a8d0aa7921857b40670e084072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-1, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6512b9dbb136594eb82d0e8c4d4363284503bef2a89489855d7998409c3c7caf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6512b9dbb136594eb82d0e8c4d4363284503bef2a89489855d7998409c3c7caf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6512b9dbb136594eb82d0e8c4d4363284503bef2a89489855d7998409c3c7caf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6512b9dbb136594eb82d0e8c4d4363284503bef2a89489855d7998409c3c7caf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6512b9dbb136594eb82d0e8c4d4363284503bef2a89489855d7998409c3c7caf/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:20 compute-0 podman[88575]: 2025-10-11 03:37:20.482464518 +0000 UTC m=+0.103988306 container init 57e2909688769832b7f6b737db71199f0b1729a8d0aa7921857b40670e084072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:37:20 compute-0 podman[88575]: 2025-10-11 03:37:20.490769001 +0000 UTC m=+0.112292779 container start 57e2909688769832b7f6b737db71199f0b1729a8d0aa7921857b40670e084072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:37:20 compute-0 bash[88575]: 57e2909688769832b7f6b737db71199f0b1729a8d0aa7921857b40670e084072
Oct 11 03:37:20 compute-0 podman[88575]: 2025-10-11 03:37:20.397829707 +0000 UTC m=+0.019353505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:20 compute-0 systemd[1]: Started Ceph osd.1 for 23b68101-59a9-532f-ab6b-9acf78fb2162.
Oct 11 03:37:20 compute-0 ceph-osd[88594]: set uid:gid to 167:167 (ceph:ceph)
Oct 11 03:37:20 compute-0 ceph-osd[88594]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct 11 03:37:20 compute-0 ceph-osd[88594]: pidfile_write: ignore empty --pid-file
Oct 11 03:37:20 compute-0 ceph-osd[88594]: bdev(0x56493a2df800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 11 03:37:20 compute-0 ceph-osd[88594]: bdev(0x56493a2df800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 11 03:37:20 compute-0 ceph-osd[88594]: bdev(0x56493a2df800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 03:37:20 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 03:37:20 compute-0 ceph-osd[88594]: bdev(0x56493b117800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 11 03:37:20 compute-0 ceph-osd[88594]: bdev(0x56493b117800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 11 03:37:20 compute-0 sudo[87679]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:20 compute-0 ceph-osd[88594]: bdev(0x56493b117800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 03:37:20 compute-0 ceph-osd[88594]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 11 03:37:20 compute-0 ceph-osd[88594]: bdev(0x56493b117800 /var/lib/ceph/osd/ceph-1/block) close
Oct 11 03:37:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:37:20 compute-0 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3430557825; not ready for session (expect reconnect)
Oct 11 03:37:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 03:37:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 03:37:20 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 11 03:37:20 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:37:20 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Oct 11 03:37:20 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 11 03:37:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:37:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:37:20 compute-0 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Oct 11 03:37:20 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Oct 11 03:37:20 compute-0 ceph-mon[74273]: purged_snaps scrub starts
Oct 11 03:37:20 compute-0 ceph-mon[74273]: purged_snaps scrub ok
Oct 11 03:37:20 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 03:37:20 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 03:37:20 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:20 compute-0 sudo[88607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:20 compute-0 sudo[88607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:20 compute-0 sudo[88607]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:20 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:37:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_03:37:20
Oct 11 03:37:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 03:37:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 03:37:20 compute-0 ceph-mgr[74563]: [balancer INFO root] No pools available
Oct 11 03:37:20 compute-0 sudo[88632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:37:20 compute-0 sudo[88632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:20 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 03:37:20 compute-0 sudo[88632]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 03:37:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:37:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:37:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 03:37:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:37:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:37:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:37:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:37:20 compute-0 sudo[88657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:20 compute-0 sudo[88657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:20 compute-0 sudo[88657]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:20 compute-0 ceph-osd[88594]: bdev(0x56493a2df800 /var/lib/ceph/osd/ceph-1/block) close
Oct 11 03:37:20 compute-0 sudo[88682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162
Oct 11 03:37:20 compute-0 sudo[88682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:21 compute-0 ceph-osd[88594]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Oct 11 03:37:21 compute-0 ceph-osd[88594]: load: jerasure load: lrc 
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bdev(0x56493b198c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bdev(0x56493b198c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bdev(0x56493b198c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bdev(0x56493b198c00 /var/lib/ceph/osd/ceph-1/block) close
Oct 11 03:37:21 compute-0 ceph-osd[87591]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 37.453 iops: 9588.024 elapsed_sec: 0.313
Oct 11 03:37:21 compute-0 ceph-osd[87591]: log_channel(cluster) log [WRN] : OSD bench result of 9588.024328 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 11 03:37:21 compute-0 ceph-osd[87591]: osd.0 0 waiting for initial osdmap
Oct 11 03:37:21 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-0[87587]: 2025-10-11T03:37:21.111+0000 7f361f941640 -1 osd.0 0 waiting for initial osdmap
Oct 11 03:37:21 compute-0 ceph-osd[87591]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Oct 11 03:37:21 compute-0 ceph-osd[87591]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Oct 11 03:37:21 compute-0 ceph-osd[87591]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Oct 11 03:37:21 compute-0 ceph-osd[87591]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
Oct 11 03:37:21 compute-0 ceph-osd[87591]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 11 03:37:21 compute-0 ceph-osd[87591]: osd.0 8 set_numa_affinity not setting numa affinity
Oct 11 03:37:21 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-0[87587]: 2025-10-11T03:37:21.134+0000 7f361a752640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 11 03:37:21 compute-0 ceph-osd[87591]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Oct 11 03:37:21 compute-0 podman[88754]: 2025-10-11 03:37:21.234256802 +0000 UTC m=+0.039699677 container create 61a484b6d109e533a76a6ca46259287bbd0a0a312057d093aebd89e4c57fc013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khayyam, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:37:21 compute-0 systemd[1]: Started libpod-conmon-61a484b6d109e533a76a6ca46259287bbd0a0a312057d093aebd89e4c57fc013.scope.
Oct 11 03:37:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:21 compute-0 podman[88754]: 2025-10-11 03:37:21.215183926 +0000 UTC m=+0.020626781 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:21 compute-0 podman[88754]: 2025-10-11 03:37:21.325013465 +0000 UTC m=+0.130456320 container init 61a484b6d109e533a76a6ca46259287bbd0a0a312057d093aebd89e4c57fc013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khayyam, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 03:37:21 compute-0 podman[88754]: 2025-10-11 03:37:21.336795416 +0000 UTC m=+0.142238281 container start 61a484b6d109e533a76a6ca46259287bbd0a0a312057d093aebd89e4c57fc013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khayyam, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Oct 11 03:37:21 compute-0 podman[88754]: 2025-10-11 03:37:21.340562122 +0000 UTC m=+0.146004997 container attach 61a484b6d109e533a76a6ca46259287bbd0a0a312057d093aebd89e4c57fc013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khayyam, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 03:37:21 compute-0 jolly_khayyam[88770]: 167 167
Oct 11 03:37:21 compute-0 systemd[1]: libpod-61a484b6d109e533a76a6ca46259287bbd0a0a312057d093aebd89e4c57fc013.scope: Deactivated successfully.
Oct 11 03:37:21 compute-0 conmon[88770]: conmon 61a484b6d109e533a76a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-61a484b6d109e533a76a6ca46259287bbd0a0a312057d093aebd89e4c57fc013.scope/container/memory.events
Oct 11 03:37:21 compute-0 podman[88754]: 2025-10-11 03:37:21.344583265 +0000 UTC m=+0.150026120 container died 61a484b6d109e533a76a6ca46259287bbd0a0a312057d093aebd89e4c57fc013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bdev(0x56493b198c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bdev(0x56493b198c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bdev(0x56493b198c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bdev(0x56493b198c00 /var/lib/ceph/osd/ceph-1/block) close
Oct 11 03:37:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-e632cbb18f8964f17236d6cdf109754422a1fb5467f0df60fc32556f236034a6-merged.mount: Deactivated successfully.
Oct 11 03:37:21 compute-0 podman[88754]: 2025-10-11 03:37:21.389980862 +0000 UTC m=+0.195423697 container remove 61a484b6d109e533a76a6ca46259287bbd0a0a312057d093aebd89e4c57fc013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khayyam, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:37:21 compute-0 systemd[1]: libpod-conmon-61a484b6d109e533a76a6ca46259287bbd0a0a312057d093aebd89e4c57fc013.scope: Deactivated successfully.
Oct 11 03:37:21 compute-0 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3430557825; not ready for session (expect reconnect)
Oct 11 03:37:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 03:37:21 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 03:37:21 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 11 03:37:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Oct 11 03:37:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 03:37:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Oct 11 03:37:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 11 03:37:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:37:21 compute-0 ceph-mon[74273]: Deploying daemon osd.2 on compute-0
Oct 11 03:37:21 compute-0 ceph-mon[74273]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 03:37:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 03:37:21 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/3430557825,v1:192.168.122.100:6803/3430557825] boot
Oct 11 03:37:21 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Oct 11 03:37:21 compute-0 ceph-osd[87591]: osd.0 9 state: booting -> active
Oct 11 03:37:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 03:37:21 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 03:37:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 03:37:21 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 03:37:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 03:37:21 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 03:37:21 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:21 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 03:37:21 compute-0 ceph-osd[88594]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 11 03:37:21 compute-0 ceph-osd[88594]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bdev(0x56493b198c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bdev(0x56493b198c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bdev(0x56493b198c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bdev(0x56493b199400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bdev(0x56493b199400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bdev(0x56493b199400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bluefs mount
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bluefs mount shared_bdev_used = 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: RocksDB version: 7.9.2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Git sha 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: DB SUMMARY
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: DB Session ID:  896UZ1KL24HAO5WR3YIN
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: CURRENT file:  CURRENT
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: IDENTITY file:  IDENTITY
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                         Options.error_if_exists: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.create_if_missing: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                         Options.paranoid_checks: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                                     Options.env: 0x56493b169d50
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                                Options.info_log: 0x56493a3667e0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.max_file_opening_threads: 16
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                              Options.statistics: (nil)
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                               Options.use_fsync: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.max_log_file_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                         Options.allow_fallocate: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.use_direct_reads: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.create_missing_column_families: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                              Options.db_log_dir: 
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                                 Options.wal_dir: db.wal
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.advise_random_on_open: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.write_buffer_manager: 0x56493b272460
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                            Options.rate_limiter: (nil)
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.unordered_write: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                               Options.row_cache: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                              Options.wal_filter: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.allow_ingest_behind: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.two_write_queues: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.manual_wal_flush: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.wal_compression: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.atomic_flush: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.log_readahead_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.allow_data_in_errors: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.db_host_id: __hostname__
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.max_background_jobs: 4
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.max_background_compactions: -1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.max_subcompactions: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.max_open_files: -1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.bytes_per_sync: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.max_background_flushes: -1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Compression algorithms supported:
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         kZSTD supported: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         kXpressCompression supported: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         kBZip2Compression supported: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         kLZ4Compression supported: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         kZlibCompression supported: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         kLZ4HCCompression supported: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         kSnappyCompression supported: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56493a366200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56493a3531f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56493a366200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56493a3531f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56493a366200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56493a3531f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56493a366200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56493a3531f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56493a366200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56493a3531f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56493a366200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56493a3531f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56493a366200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56493a3531f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56493a366180)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56493a353090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56493a366180)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56493a353090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56493a366180)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56493a353090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 8afc8215-bd2b-439d-89a9-910130e41fd8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153841663995, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153841664498, "job": 1, "event": "recovery_finished"}
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: freelist init
Oct 11 03:37:21 compute-0 ceph-osd[88594]: freelist _read_cfg
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bluefs umount
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bdev(0x56493b199400 /var/lib/ceph/osd/ceph-1/block) close
Oct 11 03:37:21 compute-0 podman[88820]: 2025-10-11 03:37:21.718714548 +0000 UTC m=+0.061197942 container create 74eeab3d74c4d21fc48969a6c3310afa621327f32f9ee6f06730a9b7060b70fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-2-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct 11 03:37:21 compute-0 systemd[1]: Started libpod-conmon-74eeab3d74c4d21fc48969a6c3310afa621327f32f9ee6f06730a9b7060b70fb.scope.
Oct 11 03:37:21 compute-0 podman[88820]: 2025-10-11 03:37:21.690598797 +0000 UTC m=+0.033082171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa0821e1e285b5e1422953d07bc0b6724656d16dd9998d1f7ad99ff64b8d874/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa0821e1e285b5e1422953d07bc0b6724656d16dd9998d1f7ad99ff64b8d874/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa0821e1e285b5e1422953d07bc0b6724656d16dd9998d1f7ad99ff64b8d874/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa0821e1e285b5e1422953d07bc0b6724656d16dd9998d1f7ad99ff64b8d874/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa0821e1e285b5e1422953d07bc0b6724656d16dd9998d1f7ad99ff64b8d874/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:21 compute-0 podman[88820]: 2025-10-11 03:37:21.844831175 +0000 UTC m=+0.187314609 container init 74eeab3d74c4d21fc48969a6c3310afa621327f32f9ee6f06730a9b7060b70fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-2-activate-test, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:37:21 compute-0 podman[88820]: 2025-10-11 03:37:21.857247754 +0000 UTC m=+0.199731138 container start 74eeab3d74c4d21fc48969a6c3310afa621327f32f9ee6f06730a9b7060b70fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-2-activate-test, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:37:21 compute-0 podman[88820]: 2025-10-11 03:37:21.861382661 +0000 UTC m=+0.203866075 container attach 74eeab3d74c4d21fc48969a6c3310afa621327f32f9ee6f06730a9b7060b70fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bdev(0x56493b199400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bdev(0x56493b199400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bdev(0x56493b199400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bluefs mount
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bluefs mount shared_bdev_used = 4718592
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: RocksDB version: 7.9.2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Git sha 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: DB SUMMARY
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: DB Session ID:  896UZ1KL24HAO5WR3YIM
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: CURRENT file:  CURRENT
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: IDENTITY file:  IDENTITY
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                         Options.error_if_exists: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.create_if_missing: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                         Options.paranoid_checks: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                                     Options.env: 0x56493a4bb960
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                                Options.info_log: 0x56493a339400
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.max_file_opening_threads: 16
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                              Options.statistics: (nil)
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                               Options.use_fsync: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.max_log_file_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                         Options.allow_fallocate: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.use_direct_reads: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.create_missing_column_families: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                              Options.db_log_dir: 
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                                 Options.wal_dir: db.wal
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.advise_random_on_open: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.write_buffer_manager: 0x56493b2726e0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                            Options.rate_limiter: (nil)
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.unordered_write: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                               Options.row_cache: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                              Options.wal_filter: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.allow_ingest_behind: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.two_write_queues: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.manual_wal_flush: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.wal_compression: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.atomic_flush: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.log_readahead_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.allow_data_in_errors: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.db_host_id: __hostname__
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.max_background_jobs: 4
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.max_background_compactions: -1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.max_subcompactions: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.max_open_files: -1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.bytes_per_sync: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.max_background_flushes: -1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Compression algorithms supported:
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         kZSTD supported: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         kXpressCompression supported: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         kBZip2Compression supported: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         kLZ4Compression supported: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         kZlibCompression supported: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         kLZ4HCCompression supported: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         kSnappyCompression supported: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56493a35cf80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56493a3531f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56493a35cf80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56493a3531f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56493a35cf80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56493a3531f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56493a35cf80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56493a3531f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56493a35cf80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56493a3531f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56493a35cf80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56493a3531f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56493a35cf80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56493a3531f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56493a35cfe0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56493a353090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56493a35cfe0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56493a353090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56493a35cfe0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56493a353090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 8afc8215-bd2b-439d-89a9-910130e41fd8
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153841937553, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153841942843, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153841, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8afc8215-bd2b-439d-89a9-910130e41fd8", "db_session_id": "896UZ1KL24HAO5WR3YIM", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153841945620, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153841, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8afc8215-bd2b-439d-89a9-910130e41fd8", "db_session_id": "896UZ1KL24HAO5WR3YIM", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153841948199, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153841, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8afc8215-bd2b-439d-89a9-910130e41fd8", "db_session_id": "896UZ1KL24HAO5WR3YIM", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153841949516, "job": 1, "event": "recovery_finished"}
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56493a4c1c00
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: DB pointer 0x56493b25ba00
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Oct 11 03:37:21 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 03:37:21 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a3531f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a3531f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a3531f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a3531f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a3531f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a3531f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a3531f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a353090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a353090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a353090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a3531f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a3531f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 03:37:21 compute-0 ceph-osd[88594]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 11 03:37:21 compute-0 ceph-osd[88594]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 11 03:37:21 compute-0 ceph-osd[88594]: _get_class not permitted to load lua
Oct 11 03:37:21 compute-0 ceph-osd[88594]: _get_class not permitted to load sdk
Oct 11 03:37:21 compute-0 ceph-osd[88594]: _get_class not permitted to load test_remote_reads
Oct 11 03:37:21 compute-0 ceph-osd[88594]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 11 03:37:21 compute-0 ceph-osd[88594]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 11 03:37:21 compute-0 ceph-osd[88594]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 11 03:37:21 compute-0 ceph-osd[88594]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 11 03:37:21 compute-0 ceph-osd[88594]: osd.1 0 load_pgs
Oct 11 03:37:21 compute-0 ceph-osd[88594]: osd.1 0 load_pgs opened 0 pgs
Oct 11 03:37:21 compute-0 ceph-osd[88594]: osd.1 0 log_to_monitors true
Oct 11 03:37:21 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-1[88590]: 2025-10-11T03:37:21.983+0000 7f86638c8740 -1 osd.1 0 log_to_monitors true
Oct 11 03:37:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Oct 11 03:37:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1035388975,v1:192.168.122.100:6807/1035388975]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct 11 03:37:22 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-2-activate-test[89016]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 11 03:37:22 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-2-activate-test[89016]:                             [--no-systemd] [--no-tmpfs]
Oct 11 03:37:22 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-2-activate-test[89016]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 11 03:37:22 compute-0 systemd[1]: libpod-74eeab3d74c4d21fc48969a6c3310afa621327f32f9ee6f06730a9b7060b70fb.scope: Deactivated successfully.
Oct 11 03:37:22 compute-0 podman[88820]: 2025-10-11 03:37:22.517229456 +0000 UTC m=+0.859712850 container died 74eeab3d74c4d21fc48969a6c3310afa621327f32f9ee6f06730a9b7060b70fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-2-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 03:37:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-baa0821e1e285b5e1422953d07bc0b6724656d16dd9998d1f7ad99ff64b8d874-merged.mount: Deactivated successfully.
Oct 11 03:37:22 compute-0 podman[88820]: 2025-10-11 03:37:22.592719189 +0000 UTC m=+0.935202543 container remove 74eeab3d74c4d21fc48969a6c3310afa621327f32f9ee6f06730a9b7060b70fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 03:37:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Oct 11 03:37:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 03:37:22 compute-0 ceph-mon[74273]: OSD bench result of 9588.024328 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 11 03:37:22 compute-0 ceph-mon[74273]: osd.0 [v2:192.168.122.100:6802/3430557825,v1:192.168.122.100:6803/3430557825] boot
Oct 11 03:37:22 compute-0 ceph-mon[74273]: osdmap e9: 3 total, 1 up, 3 in
Oct 11 03:37:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 03:37:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 03:37:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:22 compute-0 ceph-mon[74273]: from='osd.1 [v2:192.168.122.100:6806/1035388975,v1:192.168.122.100:6807/1035388975]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct 11 03:37:22 compute-0 systemd[1]: libpod-conmon-74eeab3d74c4d21fc48969a6c3310afa621327f32f9ee6f06730a9b7060b70fb.scope: Deactivated successfully.
Oct 11 03:37:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1035388975,v1:192.168.122.100:6807/1035388975]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 11 03:37:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Oct 11 03:37:22 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Oct 11 03:37:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct 11 03:37:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1035388975,v1:192.168.122.100:6807/1035388975]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 11 03:37:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e10 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct 11 03:37:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 03:37:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 03:37:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 03:37:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:22 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 03:37:22 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 03:37:22 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 11 03:37:22 compute-0 ceph-mgr[74563]: [devicehealth INFO root] creating mgr pool
Oct 11 03:37:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Oct 11 03:37:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 11 03:37:22 compute-0 systemd[1]: Reloading.
Oct 11 03:37:22 compute-0 systemd-rc-local-generator[89294]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:37:22 compute-0 systemd-sysv-generator[89297]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:37:22 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 11 03:37:22 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 11 03:37:23 compute-0 systemd[1]: Reloading.
Oct 11 03:37:23 compute-0 systemd-rc-local-generator[89334]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:37:23 compute-0 systemd-sysv-generator[89338]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:37:23 compute-0 systemd[1]: Starting Ceph osd.2 for 23b68101-59a9-532f-ab6b-9acf78fb2162...
Oct 11 03:37:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Oct 11 03:37:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 03:37:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1035388975,v1:192.168.122.100:6807/1035388975]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 11 03:37:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 11 03:37:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Oct 11 03:37:23 compute-0 ceph-osd[88594]: osd.1 0 done with init, starting boot process
Oct 11 03:37:23 compute-0 ceph-osd[88594]: osd.1 0 start_boot
Oct 11 03:37:23 compute-0 ceph-osd[88594]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 11 03:37:23 compute-0 ceph-osd[88594]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 11 03:37:23 compute-0 ceph-osd[88594]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 11 03:37:23 compute-0 ceph-osd[88594]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 11 03:37:23 compute-0 ceph-osd[88594]: osd.1 0  bench count 12288000 bsize 4 KiB
Oct 11 03:37:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Oct 11 03:37:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Oct 11 03:37:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Oct 11 03:37:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Oct 11 03:37:23 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Oct 11 03:37:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 03:37:23 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 03:37:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 03:37:23 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:23 compute-0 ceph-mon[74273]: from='osd.1 [v2:192.168.122.100:6806/1035388975,v1:192.168.122.100:6807/1035388975]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 11 03:37:23 compute-0 ceph-osd[87591]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 11 03:37:23 compute-0 ceph-osd[87591]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Oct 11 03:37:23 compute-0 ceph-osd[87591]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 11 03:37:23 compute-0 ceph-mon[74273]: osdmap e10: 3 total, 1 up, 3 in
Oct 11 03:37:23 compute-0 ceph-mon[74273]: from='osd.1 [v2:192.168.122.100:6806/1035388975,v1:192.168.122.100:6807/1035388975]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 11 03:37:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 03:37:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:23 compute-0 ceph-mon[74273]: pgmap v32: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 11 03:37:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 11 03:37:23 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 03:37:23 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 03:37:23 compute-0 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1035388975; not ready for session (expect reconnect)
Oct 11 03:37:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Oct 11 03:37:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 11 03:37:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 03:37:23 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 03:37:23 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 03:37:23 compute-0 podman[89392]: 2025-10-11 03:37:23.806743624 +0000 UTC m=+0.076302947 container create 130acea4e28f7e816637db64320d8892fd90bb22cbee027881a877aaa9c320e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-2-activate, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:37:23 compute-0 podman[89392]: 2025-10-11 03:37:23.775004661 +0000 UTC m=+0.044564014 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:23 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b9493ac00156789115826402e78862bd0fc219f6e53cde944ec57831710d23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b9493ac00156789115826402e78862bd0fc219f6e53cde944ec57831710d23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b9493ac00156789115826402e78862bd0fc219f6e53cde944ec57831710d23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b9493ac00156789115826402e78862bd0fc219f6e53cde944ec57831710d23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b9493ac00156789115826402e78862bd0fc219f6e53cde944ec57831710d23/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:23 compute-0 podman[89392]: 2025-10-11 03:37:23.897062764 +0000 UTC m=+0.166622137 container init 130acea4e28f7e816637db64320d8892fd90bb22cbee027881a877aaa9c320e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-2-activate, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:37:23 compute-0 podman[89392]: 2025-10-11 03:37:23.9189476 +0000 UTC m=+0.188506953 container start 130acea4e28f7e816637db64320d8892fd90bb22cbee027881a877aaa9c320e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:37:23 compute-0 podman[89392]: 2025-10-11 03:37:23.932372108 +0000 UTC m=+0.201931531 container attach 130acea4e28f7e816637db64320d8892fd90bb22cbee027881a877aaa9c320e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-2-activate, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 03:37:23 compute-0 sudo[89434]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiegxacwhgqducurpischdmhzyedmpjn ; /usr/bin/python3'
Oct 11 03:37:23 compute-0 sudo[89434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:37:24 compute-0 python3[89437]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:37:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:37:24 compute-0 podman[89439]: 2025-10-11 03:37:24.21938115 +0000 UTC m=+0.078790057 container create ec7196c7dd62934e55a08ebbb20c0d932f612c28631208483a75a74874e8560b (image=quay.io/ceph/ceph:v18, name=jovial_murdock, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 03:37:24 compute-0 systemd[1]: Started libpod-conmon-ec7196c7dd62934e55a08ebbb20c0d932f612c28631208483a75a74874e8560b.scope.
Oct 11 03:37:24 compute-0 podman[89439]: 2025-10-11 03:37:24.189951282 +0000 UTC m=+0.049360219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:37:24 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/049e6bd9097a55974984d4775c0f11ec77dee8534ee43df729739823e19923be/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/049e6bd9097a55974984d4775c0f11ec77dee8534ee43df729739823e19923be/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/049e6bd9097a55974984d4775c0f11ec77dee8534ee43df729739823e19923be/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:24 compute-0 podman[89439]: 2025-10-11 03:37:24.354646004 +0000 UTC m=+0.214054951 container init ec7196c7dd62934e55a08ebbb20c0d932f612c28631208483a75a74874e8560b (image=quay.io/ceph/ceph:v18, name=jovial_murdock, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 03:37:24 compute-0 podman[89439]: 2025-10-11 03:37:24.362565897 +0000 UTC m=+0.221974784 container start ec7196c7dd62934e55a08ebbb20c0d932f612c28631208483a75a74874e8560b (image=quay.io/ceph/ceph:v18, name=jovial_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 03:37:24 compute-0 podman[89439]: 2025-10-11 03:37:24.369334627 +0000 UTC m=+0.228743594 container attach ec7196c7dd62934e55a08ebbb20c0d932f612c28631208483a75a74874e8560b (image=quay.io/ceph/ceph:v18, name=jovial_murdock, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 03:37:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Oct 11 03:37:24 compute-0 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1035388975; not ready for session (expect reconnect)
Oct 11 03:37:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 03:37:24 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 03:37:24 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 03:37:24 compute-0 ceph-mon[74273]: from='osd.1 [v2:192.168.122.100:6806/1035388975,v1:192.168.122.100:6807/1035388975]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 11 03:37:24 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 11 03:37:24 compute-0 ceph-mon[74273]: osdmap e11: 3 total, 1 up, 3 in
Oct 11 03:37:24 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 03:37:24 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:24 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 11 03:37:24 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 03:37:24 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 11 03:37:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Oct 11 03:37:24 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Oct 11 03:37:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 03:37:24 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 03:37:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 03:37:24 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:24 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 03:37:24 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 03:37:24 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v35: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 11 03:37:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 11 03:37:24 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/847946312' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 11 03:37:24 compute-0 jovial_murdock[89455]: 
Oct 11 03:37:24 compute-0 jovial_murdock[89455]: {"fsid":"23b68101-59a9-532f-ab6b-9acf78fb2162","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":110,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":12,"num_osds":3,"num_up_osds":1,"osd_up_since":1760153841,"num_in_osds":3,"osd_in_since":1760153824,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":446984192,"bytes_avail":21023657984,"bytes_total":21470642176},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-11T03:37:22.666401+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Oct 11 03:37:25 compute-0 podman[89439]: 2025-10-11 03:37:25.003652468 +0000 UTC m=+0.863061345 container died ec7196c7dd62934e55a08ebbb20c0d932f612c28631208483a75a74874e8560b (image=quay.io/ceph/ceph:v18, name=jovial_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:37:25 compute-0 systemd[1]: libpod-ec7196c7dd62934e55a08ebbb20c0d932f612c28631208483a75a74874e8560b.scope: Deactivated successfully.
Oct 11 03:37:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-049e6bd9097a55974984d4775c0f11ec77dee8534ee43df729739823e19923be-merged.mount: Deactivated successfully.
Oct 11 03:37:25 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-2-activate[89407]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 11 03:37:25 compute-0 bash[89392]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 11 03:37:25 compute-0 podman[89439]: 2025-10-11 03:37:25.065046365 +0000 UTC m=+0.924455242 container remove ec7196c7dd62934e55a08ebbb20c0d932f612c28631208483a75a74874e8560b (image=quay.io/ceph/ceph:v18, name=jovial_murdock, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:37:25 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-2-activate[89407]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Oct 11 03:37:25 compute-0 bash[89392]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Oct 11 03:37:25 compute-0 systemd[1]: libpod-conmon-ec7196c7dd62934e55a08ebbb20c0d932f612c28631208483a75a74874e8560b.scope: Deactivated successfully.
Oct 11 03:37:25 compute-0 sudo[89434]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:25 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-2-activate[89407]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Oct 11 03:37:25 compute-0 bash[89392]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Oct 11 03:37:25 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-2-activate[89407]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 11 03:37:25 compute-0 bash[89392]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 11 03:37:25 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-2-activate[89407]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 11 03:37:25 compute-0 bash[89392]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 11 03:37:25 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-2-activate[89407]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 11 03:37:25 compute-0 bash[89392]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 11 03:37:25 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-2-activate[89407]: --> ceph-volume raw activate successful for osd ID: 2
Oct 11 03:37:25 compute-0 bash[89392]: --> ceph-volume raw activate successful for osd ID: 2
Oct 11 03:37:25 compute-0 systemd[1]: libpod-130acea4e28f7e816637db64320d8892fd90bb22cbee027881a877aaa9c320e8.scope: Deactivated successfully.
Oct 11 03:37:25 compute-0 systemd[1]: libpod-130acea4e28f7e816637db64320d8892fd90bb22cbee027881a877aaa9c320e8.scope: Consumed 1.217s CPU time.
Oct 11 03:37:25 compute-0 podman[89392]: 2025-10-11 03:37:25.168327419 +0000 UTC m=+1.437886822 container died 130acea4e28f7e816637db64320d8892fd90bb22cbee027881a877aaa9c320e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-2-activate, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:37:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-03b9493ac00156789115826402e78862bd0fc219f6e53cde944ec57831710d23-merged.mount: Deactivated successfully.
Oct 11 03:37:25 compute-0 podman[89392]: 2025-10-11 03:37:25.291860204 +0000 UTC m=+1.561419527 container remove 130acea4e28f7e816637db64320d8892fd90bb22cbee027881a877aaa9c320e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 03:37:25 compute-0 sudo[89674]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbviawdxlapddwinhcjjhtrdvxqralyq ; /usr/bin/python3'
Oct 11 03:37:25 compute-0 sudo[89674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:37:25 compute-0 podman[89693]: 2025-10-11 03:37:25.572500427 +0000 UTC m=+0.042728813 container create dffaee05b3c0e58d15930bafc002ea9a421c3abd8fedda3a921c57ee82fd173e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-2, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:37:25 compute-0 python3[89681]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:37:25 compute-0 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1035388975; not ready for session (expect reconnect)
Oct 11 03:37:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 03:37:25 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 03:37:25 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 03:37:25 compute-0 ceph-mon[74273]: purged_snaps scrub starts
Oct 11 03:37:25 compute-0 ceph-mon[74273]: purged_snaps scrub ok
Oct 11 03:37:25 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 03:37:25 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 11 03:37:25 compute-0 ceph-mon[74273]: osdmap e12: 3 total, 1 up, 3 in
Oct 11 03:37:25 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 03:37:25 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:25 compute-0 ceph-mon[74273]: pgmap v35: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 11 03:37:25 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/847946312' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 11 03:37:25 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 03:37:25 compute-0 podman[89693]: 2025-10-11 03:37:25.554660855 +0000 UTC m=+0.024889271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b40046f2bd2490eaa1de0c3d9dee3753355182cd40c788398bc59a9cd8183147/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b40046f2bd2490eaa1de0c3d9dee3753355182cd40c788398bc59a9cd8183147/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b40046f2bd2490eaa1de0c3d9dee3753355182cd40c788398bc59a9cd8183147/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b40046f2bd2490eaa1de0c3d9dee3753355182cd40c788398bc59a9cd8183147/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b40046f2bd2490eaa1de0c3d9dee3753355182cd40c788398bc59a9cd8183147/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:25 compute-0 podman[89693]: 2025-10-11 03:37:25.668743254 +0000 UTC m=+0.138971700 container init dffaee05b3c0e58d15930bafc002ea9a421c3abd8fedda3a921c57ee82fd173e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 03:37:25 compute-0 podman[89693]: 2025-10-11 03:37:25.684271391 +0000 UTC m=+0.154499807 container start dffaee05b3c0e58d15930bafc002ea9a421c3abd8fedda3a921c57ee82fd173e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 03:37:25 compute-0 bash[89693]: dffaee05b3c0e58d15930bafc002ea9a421c3abd8fedda3a921c57ee82fd173e
Oct 11 03:37:25 compute-0 systemd[1]: Started Ceph osd.2 for 23b68101-59a9-532f-ab6b-9acf78fb2162.
Oct 11 03:37:25 compute-0 podman[89707]: 2025-10-11 03:37:25.710285322 +0000 UTC m=+0.079361753 container create d4e8adb6b6f7ae7dfb3757a61f976779a02a1fc9535c899822dad87a0eb05570 (image=quay.io/ceph/ceph:v18, name=unruffled_beaver, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 03:37:25 compute-0 sudo[88682]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:25 compute-0 ceph-osd[89722]: set uid:gid to 167:167 (ceph:ceph)
Oct 11 03:37:25 compute-0 ceph-osd[89722]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct 11 03:37:25 compute-0 ceph-osd[89722]: pidfile_write: ignore empty --pid-file
Oct 11 03:37:25 compute-0 ceph-osd[89722]: bdev(0x55f104159800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 11 03:37:25 compute-0 ceph-osd[89722]: bdev(0x55f104159800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 11 03:37:25 compute-0 ceph-osd[89722]: bdev(0x55f104159800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 03:37:25 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 03:37:25 compute-0 ceph-osd[89722]: bdev(0x55f104f9b000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 11 03:37:25 compute-0 ceph-osd[89722]: bdev(0x55f104f9b000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 11 03:37:25 compute-0 ceph-osd[89722]: bdev(0x55f104f9b000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 03:37:25 compute-0 ceph-osd[89722]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 11 03:37:25 compute-0 ceph-osd[89722]: bdev(0x55f104f9b000 /var/lib/ceph/osd/ceph-2/block) close
Oct 11 03:37:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:37:25 compute-0 systemd[1]: Started libpod-conmon-d4e8adb6b6f7ae7dfb3757a61f976779a02a1fc9535c899822dad87a0eb05570.scope.
Oct 11 03:37:25 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:37:25 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:25 compute-0 podman[89707]: 2025-10-11 03:37:25.683033876 +0000 UTC m=+0.052110337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:37:25 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5700ffa7e9ab261d877a7d917d469317d1653de2befd7c1c2ca40595a047681/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5700ffa7e9ab261d877a7d917d469317d1653de2befd7c1c2ca40595a047681/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:25 compute-0 podman[89707]: 2025-10-11 03:37:25.812407424 +0000 UTC m=+0.181483875 container init d4e8adb6b6f7ae7dfb3757a61f976779a02a1fc9535c899822dad87a0eb05570 (image=quay.io/ceph/ceph:v18, name=unruffled_beaver, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:37:25 compute-0 podman[89707]: 2025-10-11 03:37:25.820392998 +0000 UTC m=+0.189469419 container start d4e8adb6b6f7ae7dfb3757a61f976779a02a1fc9535c899822dad87a0eb05570 (image=quay.io/ceph/ceph:v18, name=unruffled_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:37:25 compute-0 podman[89707]: 2025-10-11 03:37:25.828758404 +0000 UTC m=+0.197834835 container attach d4e8adb6b6f7ae7dfb3757a61f976779a02a1fc9535c899822dad87a0eb05570 (image=quay.io/ceph/ceph:v18, name=unruffled_beaver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 03:37:25 compute-0 sudo[89743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:25 compute-0 sudo[89743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:25 compute-0 sudo[89743]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:25 compute-0 sudo[89769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:37:25 compute-0 sudo[89769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:25 compute-0 sudo[89769]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:25 compute-0 sudo[89794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:25 compute-0 sudo[89794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:25 compute-0 sudo[89794]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bdev(0x55f104159800 /var/lib/ceph/osd/ceph-2/block) close
Oct 11 03:37:26 compute-0 sudo[89819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 03:37:26 compute-0 sudo[89819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:26 compute-0 ceph-osd[88594]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 36.740 iops: 9405.552 elapsed_sec: 0.319
Oct 11 03:37:26 compute-0 ceph-osd[88594]: log_channel(cluster) log [WRN] : OSD bench result of 9405.552143 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 11 03:37:26 compute-0 ceph-osd[88594]: osd.1 0 waiting for initial osdmap
Oct 11 03:37:26 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-1[88590]: 2025-10-11T03:37:26.080+0000 7f866005f640 -1 osd.1 0 waiting for initial osdmap
Oct 11 03:37:26 compute-0 ceph-osd[88594]: osd.1 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 11 03:37:26 compute-0 ceph-osd[88594]: osd.1 12 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct 11 03:37:26 compute-0 ceph-osd[88594]: osd.1 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 11 03:37:26 compute-0 ceph-osd[88594]: osd.1 12 check_osdmap_features require_osd_release unknown -> reef
Oct 11 03:37:26 compute-0 ceph-osd[88594]: osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 11 03:37:26 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-1[88590]: 2025-10-11T03:37:26.104+0000 7f865ae70640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 11 03:37:26 compute-0 ceph-osd[88594]: osd.1 12 set_numa_affinity not setting numa affinity
Oct 11 03:37:26 compute-0 ceph-osd[88594]: osd.1 12 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Oct 11 03:37:26 compute-0 ceph-osd[89722]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Oct 11 03:37:26 compute-0 ceph-osd[89722]: load: jerasure load: lrc 
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bdev(0x55f104f9bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bdev(0x55f104f9bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bdev(0x55f104f9bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bdev(0x55f104f9bc00 /var/lib/ceph/osd/ceph-2/block) close
Oct 11 03:37:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 11 03:37:26 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1724546563' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 03:37:26 compute-0 podman[89914]: 2025-10-11 03:37:26.489417325 +0000 UTC m=+0.070806533 container create e0bda2696c18468c8a9c887968728b5ecf2f2dc473c5079dacff470b60b2d64f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:37:26 compute-0 systemd[1]: Started libpod-conmon-e0bda2696c18468c8a9c887968728b5ecf2f2dc473c5079dacff470b60b2d64f.scope.
Oct 11 03:37:26 compute-0 podman[89914]: 2025-10-11 03:37:26.460236934 +0000 UTC m=+0.041626192 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bdev(0x55f104f9bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bdev(0x55f104f9bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bdev(0x55f104f9bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bdev(0x55f104f9bc00 /var/lib/ceph/osd/ceph-2/block) close
Oct 11 03:37:26 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:26 compute-0 podman[89914]: 2025-10-11 03:37:26.604013798 +0000 UTC m=+0.185403016 container init e0bda2696c18468c8a9c887968728b5ecf2f2dc473c5079dacff470b60b2d64f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jones, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:37:26 compute-0 podman[89914]: 2025-10-11 03:37:26.616025336 +0000 UTC m=+0.197414554 container start e0bda2696c18468c8a9c887968728b5ecf2f2dc473c5079dacff470b60b2d64f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jones, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:37:26 compute-0 podman[89914]: 2025-10-11 03:37:26.620707788 +0000 UTC m=+0.202097056 container attach e0bda2696c18468c8a9c887968728b5ecf2f2dc473c5079dacff470b60b2d64f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 03:37:26 compute-0 vibrant_jones[89930]: 167 167
Oct 11 03:37:26 compute-0 systemd[1]: libpod-e0bda2696c18468c8a9c887968728b5ecf2f2dc473c5079dacff470b60b2d64f.scope: Deactivated successfully.
Oct 11 03:37:26 compute-0 conmon[89930]: conmon e0bda2696c18468c8a9c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e0bda2696c18468c8a9c887968728b5ecf2f2dc473c5079dacff470b60b2d64f.scope/container/memory.events
Oct 11 03:37:26 compute-0 podman[89914]: 2025-10-11 03:37:26.624984458 +0000 UTC m=+0.206373676 container died e0bda2696c18468c8a9c887968728b5ecf2f2dc473c5079dacff470b60b2d64f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 03:37:26 compute-0 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1035388975; not ready for session (expect reconnect)
Oct 11 03:37:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 03:37:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 03:37:26 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 03:37:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4c6b67cb8b3cdc7232826f7fbe3842266d2df62d6c68c3bf301b4c77fbd2828-merged.mount: Deactivated successfully.
Oct 11 03:37:26 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v36: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 11 03:37:26 compute-0 podman[89914]: 2025-10-11 03:37:26.679241104 +0000 UTC m=+0.260630312 container remove e0bda2696c18468c8a9c887968728b5ecf2f2dc473c5079dacff470b60b2d64f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct 11 03:37:26 compute-0 systemd[1]: libpod-conmon-e0bda2696c18468c8a9c887968728b5ecf2f2dc473c5079dacff470b60b2d64f.scope: Deactivated successfully.
Oct 11 03:37:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:26 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1724546563' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 03:37:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 03:37:26 compute-0 ceph-mon[74273]: pgmap v36: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 11 03:37:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Oct 11 03:37:26 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1724546563' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 03:37:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Oct 11 03:37:26 compute-0 unruffled_beaver[89740]: pool 'vms' created
Oct 11 03:37:26 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/1035388975,v1:192.168.122.100:6807/1035388975] boot
Oct 11 03:37:26 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Oct 11 03:37:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 03:37:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 03:37:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 03:37:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:26 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 03:37:26 compute-0 ceph-osd[88594]: osd.1 13 state: booting -> active
Oct 11 03:37:26 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:37:26 compute-0 systemd[1]: libpod-d4e8adb6b6f7ae7dfb3757a61f976779a02a1fc9535c899822dad87a0eb05570.scope: Deactivated successfully.
Oct 11 03:37:26 compute-0 conmon[89740]: conmon d4e8adb6b6f7ae7dfb37 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d4e8adb6b6f7ae7dfb3757a61f976779a02a1fc9535c899822dad87a0eb05570.scope/container/memory.events
Oct 11 03:37:26 compute-0 podman[89707]: 2025-10-11 03:37:26.799960929 +0000 UTC m=+1.169037360 container died d4e8adb6b6f7ae7dfb3757a61f976779a02a1fc9535c899822dad87a0eb05570 (image=quay.io/ceph/ceph:v18, name=unruffled_beaver, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 03:37:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5700ffa7e9ab261d877a7d917d469317d1653de2befd7c1c2ca40595a047681-merged.mount: Deactivated successfully.
Oct 11 03:37:26 compute-0 ceph-osd[89722]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 11 03:37:26 compute-0 ceph-osd[89722]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bdev(0x55f104f9bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bdev(0x55f104f9bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bdev(0x55f104f9bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bdev(0x55f105186400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bdev(0x55f105186400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bdev(0x55f105186400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bluefs mount
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bluefs mount shared_bdev_used = 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 11 03:37:26 compute-0 podman[89707]: 2025-10-11 03:37:26.847741443 +0000 UTC m=+1.216817864 container remove d4e8adb6b6f7ae7dfb3757a61f976779a02a1fc9535c899822dad87a0eb05570 (image=quay.io/ceph/ceph:v18, name=unruffled_beaver, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: RocksDB version: 7.9.2
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Git sha 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: DB SUMMARY
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: DB Session ID:  NVO3UOMUDUFVY2NKG1Z6
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: CURRENT file:  CURRENT
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: IDENTITY file:  IDENTITY
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                         Options.error_if_exists: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                       Options.create_if_missing: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                         Options.paranoid_checks: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                                     Options.env: 0x55f104fedce0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                                Options.info_log: 0x55f1041e4bc0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.max_file_opening_threads: 16
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                              Options.statistics: (nil)
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                               Options.use_fsync: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                       Options.max_log_file_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                         Options.allow_fallocate: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                        Options.use_direct_reads: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.create_missing_column_families: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                              Options.db_log_dir: 
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                                 Options.wal_dir: db.wal
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.advise_random_on_open: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                    Options.write_buffer_manager: 0x55f105100460
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                            Options.rate_limiter: (nil)
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.unordered_write: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                               Options.row_cache: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                              Options.wal_filter: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.allow_ingest_behind: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.two_write_queues: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.manual_wal_flush: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.wal_compression: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.atomic_flush: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                 Options.log_readahead_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.allow_data_in_errors: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.db_host_id: __hostname__
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.max_background_jobs: 4
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.max_background_compactions: -1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.max_subcompactions: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                          Options.max_open_files: -1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                          Options.bytes_per_sync: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.max_background_flushes: -1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Compression algorithms supported:
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         kZSTD supported: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         kXpressCompression supported: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         kBZip2Compression supported: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         kLZ4Compression supported: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         kZlibCompression supported: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         kLZ4HCCompression supported: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         kSnappyCompression supported: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1041e5280)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f1041ccdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1041e5280)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f1041ccdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1041e5280)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f1041ccdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1041e5280)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f1041ccdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:26 compute-0 systemd[1]: libpod-conmon-d4e8adb6b6f7ae7dfb3757a61f976779a02a1fc9535c899822dad87a0eb05570.scope: Deactivated successfully.
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1041e5280)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f1041ccdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1041e5280)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f1041ccdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1041e5280)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f1041ccdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1041e5260)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f1041cc430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1041e5260)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f1041cc430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:26 compute-0 sudo[89674]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1041e5260)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f1041cc430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 2434321e-904e-481a-8531-1d5bfefc457e
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153846881622, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153846882094, "job": 1, "event": "recovery_finished"}
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Oct 11 03:37:26 compute-0 ceph-osd[89722]: freelist init
Oct 11 03:37:26 compute-0 ceph-osd[89722]: freelist _read_cfg
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 11 03:37:26 compute-0 ceph-osd[89722]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bluefs umount
Oct 11 03:37:26 compute-0 ceph-osd[89722]: bdev(0x55f105186400 /var/lib/ceph/osd/ceph-2/block) close
Oct 11 03:37:26 compute-0 podman[89968]: 2025-10-11 03:37:26.903961454 +0000 UTC m=+0.057937280 container create 6ac54afe2ac958fa1112279ab06f5d29b20845295418a72f4e9b9bcd2b57b1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:37:26 compute-0 systemd[1]: Started libpod-conmon-6ac54afe2ac958fa1112279ab06f5d29b20845295418a72f4e9b9bcd2b57b1f1.scope.
Oct 11 03:37:26 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 13 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [0] r=0 lpr=13 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:37:26 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402f49de78baf6cd377c6ff5de6da47b2737be795fe411f6001b0152c5ac316c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402f49de78baf6cd377c6ff5de6da47b2737be795fe411f6001b0152c5ac316c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:26 compute-0 podman[89968]: 2025-10-11 03:37:26.883755606 +0000 UTC m=+0.037731442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402f49de78baf6cd377c6ff5de6da47b2737be795fe411f6001b0152c5ac316c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402f49de78baf6cd377c6ff5de6da47b2737be795fe411f6001b0152c5ac316c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:27 compute-0 podman[89968]: 2025-10-11 03:37:27.000787808 +0000 UTC m=+0.154763624 container init 6ac54afe2ac958fa1112279ab06f5d29b20845295418a72f4e9b9bcd2b57b1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:37:27 compute-0 podman[89968]: 2025-10-11 03:37:27.01046113 +0000 UTC m=+0.164436916 container start 6ac54afe2ac958fa1112279ab06f5d29b20845295418a72f4e9b9bcd2b57b1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 03:37:27 compute-0 podman[89968]: 2025-10-11 03:37:27.013357511 +0000 UTC m=+0.167333347 container attach 6ac54afe2ac958fa1112279ab06f5d29b20845295418a72f4e9b9bcd2b57b1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 03:37:27 compute-0 sudo[90210]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bajieqtetybzkehsxnmlehpaqyakndog ; /usr/bin/python3'
Oct 11 03:37:27 compute-0 sudo[90210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:37:27 compute-0 ceph-osd[89722]: bdev(0x55f105186400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 11 03:37:27 compute-0 ceph-osd[89722]: bdev(0x55f105186400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 11 03:37:27 compute-0 ceph-osd[89722]: bdev(0x55f105186400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 03:37:27 compute-0 ceph-osd[89722]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 11 03:37:27 compute-0 ceph-osd[89722]: bluefs mount
Oct 11 03:37:27 compute-0 ceph-osd[89722]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: bluefs mount shared_bdev_used = 4718592
Oct 11 03:37:27 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: RocksDB version: 7.9.2
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Git sha 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: DB SUMMARY
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: DB Session ID:  NVO3UOMUDUFVY2NKG1Z7
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: CURRENT file:  CURRENT
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: IDENTITY file:  IDENTITY
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                         Options.error_if_exists: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                       Options.create_if_missing: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                         Options.paranoid_checks: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                                     Options.env: 0x55f1051aeb60
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                                Options.info_log: 0x55f1041e4960
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.max_file_opening_threads: 16
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                              Options.statistics: (nil)
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                               Options.use_fsync: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                       Options.max_log_file_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                         Options.allow_fallocate: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                        Options.use_direct_reads: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.create_missing_column_families: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                              Options.db_log_dir: 
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                                 Options.wal_dir: db.wal
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.advise_random_on_open: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                    Options.write_buffer_manager: 0x55f105100a00
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                            Options.rate_limiter: (nil)
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.unordered_write: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                               Options.row_cache: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                              Options.wal_filter: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.allow_ingest_behind: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.two_write_queues: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.manual_wal_flush: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.wal_compression: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.atomic_flush: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                 Options.log_readahead_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.allow_data_in_errors: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.db_host_id: __hostname__
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.max_background_jobs: 4
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.max_background_compactions: -1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.max_subcompactions: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                          Options.max_open_files: -1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                          Options.bytes_per_sync: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.max_background_flushes: -1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Compression algorithms supported:
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         kZSTD supported: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         kXpressCompression supported: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         kBZip2Compression supported: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         kLZ4Compression supported: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         kZlibCompression supported: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         kLZ4HCCompression supported: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         kSnappyCompression supported: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f104263160)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f1041ccdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f104263160)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f1041ccdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f104263160)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f1041ccdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f104263160)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f1041ccdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f104263160)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f1041ccdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f104263160)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f1041ccdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f104263160)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f1041ccdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1042631c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f1041cc430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1042631c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f1041cc430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:           Options.merge_operator: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1042631c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f1041cc430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.compression: LZ4
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.num_levels: 7
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                           Options.bloom_locality: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                               Options.ttl: 2592000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                       Options.enable_blob_files: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                           Options.min_blob_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 2434321e-904e-481a-8531-1d5bfefc457e
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153847186538, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153847193358, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153847, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2434321e-904e-481a-8531-1d5bfefc457e", "db_session_id": "NVO3UOMUDUFVY2NKG1Z7", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153847196874, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153847, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2434321e-904e-481a-8531-1d5bfefc457e", "db_session_id": "NVO3UOMUDUFVY2NKG1Z7", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153847200001, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153847, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2434321e-904e-481a-8531-1d5bfefc457e", "db_session_id": "NVO3UOMUDUFVY2NKG1Z7", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153847201396, "job": 1, "event": "recovery_finished"}
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55f1051ba000
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: DB pointer 0x55f1050f3a00
Oct 11 03:37:27 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 11 03:37:27 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Oct 11 03:37:27 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 03:37:27 compute-0 ceph-osd[89722]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041ccdd0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041ccdd0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041ccdd0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041ccdd0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041ccdd0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041ccdd0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041ccdd0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041cc430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041cc430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041cc430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041ccdd0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041ccdd0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 03:37:27 compute-0 ceph-osd[89722]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 11 03:37:27 compute-0 ceph-osd[89722]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 11 03:37:27 compute-0 ceph-osd[89722]: _get_class not permitted to load lua
Oct 11 03:37:27 compute-0 ceph-osd[89722]: _get_class not permitted to load sdk
Oct 11 03:37:27 compute-0 ceph-osd[89722]: _get_class not permitted to load test_remote_reads
Oct 11 03:37:27 compute-0 ceph-osd[89722]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 11 03:37:27 compute-0 ceph-osd[89722]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 11 03:37:27 compute-0 ceph-osd[89722]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 11 03:37:27 compute-0 ceph-osd[89722]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 11 03:37:27 compute-0 ceph-osd[89722]: osd.2 0 load_pgs
Oct 11 03:37:27 compute-0 ceph-osd[89722]: osd.2 0 load_pgs opened 0 pgs
Oct 11 03:37:27 compute-0 ceph-osd[89722]: osd.2 0 log_to_monitors true
Oct 11 03:37:27 compute-0 python3[90212]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:37:27 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-2[89710]: 2025-10-11T03:37:27.231+0000 7f18a7347740 -1 osd.2 0 log_to_monitors true
Oct 11 03:37:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Oct 11 03:37:27 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/875665671,v1:192.168.122.100:6811/875665671]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 11 03:37:27 compute-0 podman[90429]: 2025-10-11 03:37:27.298038568 +0000 UTC m=+0.045565793 container create ef6122b8139172167fefa6c24c20c5695c67d822887b96fdd4fba4b10844ff87 (image=quay.io/ceph/ceph:v18, name=bold_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 03:37:27 compute-0 systemd[1]: Started libpod-conmon-ef6122b8139172167fefa6c24c20c5695c67d822887b96fdd4fba4b10844ff87.scope.
Oct 11 03:37:27 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca29e50007eecba39f193a25d33011098b24bfd0be2c250cc4b6d1cc9df52666/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca29e50007eecba39f193a25d33011098b24bfd0be2c250cc4b6d1cc9df52666/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:27 compute-0 podman[90429]: 2025-10-11 03:37:27.278396386 +0000 UTC m=+0.025923611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:37:27 compute-0 podman[90429]: 2025-10-11 03:37:27.37702818 +0000 UTC m=+0.124555395 container init ef6122b8139172167fefa6c24c20c5695c67d822887b96fdd4fba4b10844ff87 (image=quay.io/ceph/ceph:v18, name=bold_hopper, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:37:27 compute-0 podman[90429]: 2025-10-11 03:37:27.384643184 +0000 UTC m=+0.132170419 container start ef6122b8139172167fefa6c24c20c5695c67d822887b96fdd4fba4b10844ff87 (image=quay.io/ceph/ceph:v18, name=bold_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 03:37:27 compute-0 podman[90429]: 2025-10-11 03:37:27.388355088 +0000 UTC m=+0.135882323 container attach ef6122b8139172167fefa6c24c20c5695c67d822887b96fdd4fba4b10844ff87 (image=quay.io/ceph/ceph:v18, name=bold_hopper, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:37:27 compute-0 ceph-mon[74273]: OSD bench result of 9405.552143 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 11 03:37:27 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1724546563' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 03:37:27 compute-0 ceph-mon[74273]: osd.1 [v2:192.168.122.100:6806/1035388975,v1:192.168.122.100:6807/1035388975] boot
Oct 11 03:37:27 compute-0 ceph-mon[74273]: osdmap e13: 3 total, 2 up, 3 in
Oct 11 03:37:27 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 03:37:27 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:27 compute-0 ceph-mon[74273]: from='osd.2 [v2:192.168.122.100:6810/875665671,v1:192.168.122.100:6811/875665671]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 11 03:37:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Oct 11 03:37:27 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/875665671,v1:192.168.122.100:6811/875665671]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 11 03:37:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Oct 11 03:37:27 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Oct 11 03:37:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct 11 03:37:27 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/875665671,v1:192.168.122.100:6811/875665671]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 11 03:37:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e14 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct 11 03:37:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 03:37:27 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:27 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 03:37:27 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=13/14 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:37:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 14 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [0] r=0 lpr=13 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:37:27 compute-0 ceph-mgr[74563]: [devicehealth INFO root] creating main.db for devicehealth
Oct 11 03:37:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 11 03:37:27 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2819948537' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 03:37:27 compute-0 ceph-mgr[74563]: [devicehealth INFO root] Check health
Oct 11 03:37:27 compute-0 ceph-mgr[74563]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Oct 11 03:37:27 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 11 03:37:27 compute-0 sudo[90502]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Oct 11 03:37:27 compute-0 sudo[90502]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 11 03:37:27 compute-0 sudo[90502]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Oct 11 03:37:27 compute-0 sudo[90502]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:27 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 11 03:37:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 11 03:37:27 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 11 03:37:28 compute-0 nice_noether[90182]: {
Oct 11 03:37:28 compute-0 nice_noether[90182]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 03:37:28 compute-0 nice_noether[90182]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:37:28 compute-0 nice_noether[90182]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 03:37:28 compute-0 nice_noether[90182]:         "osd_id": 1,
Oct 11 03:37:28 compute-0 nice_noether[90182]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:37:28 compute-0 nice_noether[90182]:         "type": "bluestore"
Oct 11 03:37:28 compute-0 nice_noether[90182]:     },
Oct 11 03:37:28 compute-0 nice_noether[90182]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 03:37:28 compute-0 nice_noether[90182]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:37:28 compute-0 nice_noether[90182]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 03:37:28 compute-0 nice_noether[90182]:         "osd_id": 2,
Oct 11 03:37:28 compute-0 nice_noether[90182]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:37:28 compute-0 nice_noether[90182]:         "type": "bluestore"
Oct 11 03:37:28 compute-0 nice_noether[90182]:     },
Oct 11 03:37:28 compute-0 nice_noether[90182]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 03:37:28 compute-0 nice_noether[90182]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:37:28 compute-0 nice_noether[90182]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 03:37:28 compute-0 nice_noether[90182]:         "osd_id": 0,
Oct 11 03:37:28 compute-0 nice_noether[90182]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:37:28 compute-0 nice_noether[90182]:         "type": "bluestore"
Oct 11 03:37:28 compute-0 nice_noether[90182]:     }
Oct 11 03:37:28 compute-0 nice_noether[90182]: }
Oct 11 03:37:28 compute-0 systemd[1]: libpod-6ac54afe2ac958fa1112279ab06f5d29b20845295418a72f4e9b9bcd2b57b1f1.scope: Deactivated successfully.
Oct 11 03:37:28 compute-0 podman[90514]: 2025-10-11 03:37:28.090759114 +0000 UTC m=+0.034526702 container died 6ac54afe2ac958fa1112279ab06f5d29b20845295418a72f4e9b9bcd2b57b1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 03:37:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-402f49de78baf6cd377c6ff5de6da47b2737be795fe411f6001b0152c5ac316c-merged.mount: Deactivated successfully.
Oct 11 03:37:28 compute-0 podman[90514]: 2025-10-11 03:37:28.140347128 +0000 UTC m=+0.084114696 container remove 6ac54afe2ac958fa1112279ab06f5d29b20845295418a72f4e9b9bcd2b57b1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 03:37:28 compute-0 systemd[1]: libpod-conmon-6ac54afe2ac958fa1112279ab06f5d29b20845295418a72f4e9b9bcd2b57b1f1.scope: Deactivated successfully.
Oct 11 03:37:28 compute-0 sudo[89819]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:37:28 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:37:28 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:28 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 11 03:37:28 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 11 03:37:28 compute-0 sudo[90527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:28 compute-0 sudo[90527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:28 compute-0 sudo[90527]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:28 compute-0 sudo[90552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 03:37:28 compute-0 sudo[90552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:28 compute-0 sudo[90552]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:28 compute-0 sudo[90577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:28 compute-0 sudo[90577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:28 compute-0 sudo[90577]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:28 compute-0 sudo[90602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:37:28 compute-0 sudo[90602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:28 compute-0 sudo[90602]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:28 compute-0 sudo[90627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:28 compute-0 sudo[90627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:28 compute-0 sudo[90627]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:28 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v39: 2 pgs: 2 creating+peering; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 11 03:37:28 compute-0 sudo[90652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 03:37:28 compute-0 sudo[90652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Oct 11 03:37:28 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/875665671,v1:192.168.122.100:6811/875665671]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 11 03:37:28 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2819948537' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 03:37:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Oct 11 03:37:28 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Oct 11 03:37:28 compute-0 ceph-osd[89722]: osd.2 0 done with init, starting boot process
Oct 11 03:37:28 compute-0 ceph-osd[89722]: osd.2 0 start_boot
Oct 11 03:37:28 compute-0 ceph-osd[89722]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 11 03:37:28 compute-0 ceph-osd[89722]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 11 03:37:28 compute-0 ceph-osd[89722]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 11 03:37:28 compute-0 ceph-osd[89722]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 11 03:37:28 compute-0 ceph-osd[89722]: osd.2 0  bench count 12288000 bsize 4 KiB
Oct 11 03:37:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 03:37:28 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:28 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 03:37:28 compute-0 ceph-mon[74273]: from='osd.2 [v2:192.168.122.100:6810/875665671,v1:192.168.122.100:6811/875665671]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 11 03:37:28 compute-0 ceph-mon[74273]: osdmap e14: 3 total, 2 up, 3 in
Oct 11 03:37:28 compute-0 ceph-mon[74273]: from='osd.2 [v2:192.168.122.100:6810/875665671,v1:192.168.122.100:6811/875665671]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 11 03:37:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:28 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2819948537' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 03:37:28 compute-0 ceph-mon[74273]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 11 03:37:28 compute-0 ceph-mon[74273]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 11 03:37:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 11 03:37:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:28 compute-0 bold_hopper[90445]: pool 'volumes' created
Oct 11 03:37:28 compute-0 ceph-mon[74273]: pgmap v39: 2 pgs: 2 creating+peering; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 11 03:37:28 compute-0 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/875665671; not ready for session (expect reconnect)
Oct 11 03:37:28 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 15 pg[3.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:37:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 03:37:28 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:28 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 03:37:28 compute-0 systemd[1]: libpod-ef6122b8139172167fefa6c24c20c5695c67d822887b96fdd4fba4b10844ff87.scope: Deactivated successfully.
Oct 11 03:37:28 compute-0 podman[90429]: 2025-10-11 03:37:28.839478122 +0000 UTC m=+1.587005367 container died ef6122b8139172167fefa6c24c20c5695c67d822887b96fdd4fba4b10844ff87 (image=quay.io/ceph/ceph:v18, name=bold_hopper, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:37:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca29e50007eecba39f193a25d33011098b24bfd0be2c250cc4b6d1cc9df52666-merged.mount: Deactivated successfully.
Oct 11 03:37:28 compute-0 podman[90429]: 2025-10-11 03:37:28.916235671 +0000 UTC m=+1.663762886 container remove ef6122b8139172167fefa6c24c20c5695c67d822887b96fdd4fba4b10844ff87 (image=quay.io/ceph/ceph:v18, name=bold_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 03:37:28 compute-0 systemd[1]: libpod-conmon-ef6122b8139172167fefa6c24c20c5695c67d822887b96fdd4fba4b10844ff87.scope: Deactivated successfully.
Oct 11 03:37:28 compute-0 sudo[90210]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:29 compute-0 sudo[90759]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yolampijzhblkzrfwvkyxolestmsfmfi ; /usr/bin/python3'
Oct 11 03:37:29 compute-0 sudo[90759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:37:29 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 15 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=15 pruub=14.643673897s) [] r=-1 lpr=15 pi=[13,15)/1 crt=0'0 mlcod 0'0 active pruub 26.860136032s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:37:29 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 15 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=15 pruub=14.643673897s) [] r=-1 lpr=15 pi=[13,15)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.860136032s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:37:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:37:29 compute-0 ceph-mon[74273]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 11 03:37:29 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.jhqlii(active, since 68s)
Oct 11 03:37:29 compute-0 python3[90769]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:37:29 compute-0 podman[90783]: 2025-10-11 03:37:29.384334325 +0000 UTC m=+0.171102112 container exec 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:37:29 compute-0 podman[90797]: 2025-10-11 03:37:29.424186966 +0000 UTC m=+0.100558369 container create a460b15344a4349387c0b8df7a68eed29722e70f6013aac55da9231857a99b56 (image=quay.io/ceph/ceph:v18, name=amazing_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:37:29 compute-0 podman[90797]: 2025-10-11 03:37:29.360557707 +0000 UTC m=+0.036929150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:37:29 compute-0 systemd[1]: Started libpod-conmon-a460b15344a4349387c0b8df7a68eed29722e70f6013aac55da9231857a99b56.scope.
Oct 11 03:37:29 compute-0 podman[90783]: 2025-10-11 03:37:29.475065807 +0000 UTC m=+0.261833644 container exec_died 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:37:29 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af0d241b4aebe7b4540da25675ddbe7ee07232aeda56fb8bffbe0c273744e245/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af0d241b4aebe7b4540da25675ddbe7ee07232aeda56fb8bffbe0c273744e245/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:29 compute-0 podman[90797]: 2025-10-11 03:37:29.541581008 +0000 UTC m=+0.217952451 container init a460b15344a4349387c0b8df7a68eed29722e70f6013aac55da9231857a99b56 (image=quay.io/ceph/ceph:v18, name=amazing_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:37:29 compute-0 podman[90797]: 2025-10-11 03:37:29.555103218 +0000 UTC m=+0.231474631 container start a460b15344a4349387c0b8df7a68eed29722e70f6013aac55da9231857a99b56 (image=quay.io/ceph/ceph:v18, name=amazing_chatterjee, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Oct 11 03:37:29 compute-0 podman[90797]: 2025-10-11 03:37:29.622390631 +0000 UTC m=+0.298762044 container attach a460b15344a4349387c0b8df7a68eed29722e70f6013aac55da9231857a99b56 (image=quay.io/ceph/ceph:v18, name=amazing_chatterjee, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:37:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Oct 11 03:37:29 compute-0 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/875665671; not ready for session (expect reconnect)
Oct 11 03:37:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 03:37:29 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:29 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 03:37:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Oct 11 03:37:29 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Oct 11 03:37:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 03:37:29 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:29 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 03:37:29 compute-0 ceph-mon[74273]: from='osd.2 [v2:192.168.122.100:6810/875665671,v1:192.168.122.100:6811/875665671]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 11 03:37:29 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2819948537' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 03:37:29 compute-0 ceph-mon[74273]: osdmap e15: 3 total, 2 up, 3 in
Oct 11 03:37:29 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:29 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:29 compute-0 ceph-mon[74273]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 11 03:37:29 compute-0 ceph-mon[74273]: mgrmap e9: compute-0.jhqlii(active, since 68s)
Oct 11 03:37:30 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 16 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:37:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 11 03:37:30 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3052002623' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 03:37:30 compute-0 sudo[90652]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:37:30 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:37:30 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:30 compute-0 sudo[90945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:30 compute-0 sudo[90945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:30 compute-0 sudo[90945]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:30 compute-0 sudo[90970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:37:30 compute-0 sudo[90970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:30 compute-0 sudo[90970]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:30 compute-0 sudo[90995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:30 compute-0 sudo[90995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:30 compute-0 sudo[90995]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:30 compute-0 sudo[91020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- inventory --format=json-pretty --filter-for-batch
Oct 11 03:37:30 compute-0 sudo[91020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:30 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v42: 3 pgs: 1 unknown, 2 creating+peering; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 11 03:37:30 compute-0 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/875665671; not ready for session (expect reconnect)
Oct 11 03:37:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 03:37:30 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:30 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 03:37:30 compute-0 podman[91085]: 2025-10-11 03:37:30.923689761 +0000 UTC m=+0.061890222 container create b67e20c1e97326bc496582bb1fdd8f107c8caff4b7072455bbb2ae50ed31c0d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_moser, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:37:30 compute-0 systemd[1]: Started libpod-conmon-b67e20c1e97326bc496582bb1fdd8f107c8caff4b7072455bbb2ae50ed31c0d4.scope.
Oct 11 03:37:30 compute-0 podman[91085]: 2025-10-11 03:37:30.894344995 +0000 UTC m=+0.032545526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Oct 11 03:37:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:31 compute-0 ceph-mon[74273]: purged_snaps scrub starts
Oct 11 03:37:31 compute-0 ceph-mon[74273]: purged_snaps scrub ok
Oct 11 03:37:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:31 compute-0 ceph-mon[74273]: osdmap e16: 3 total, 2 up, 3 in
Oct 11 03:37:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:31 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3052002623' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 03:37:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:31 compute-0 ceph-mon[74273]: pgmap v42: 3 pgs: 1 unknown, 2 creating+peering; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 11 03:37:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3052002623' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 03:37:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Oct 11 03:37:31 compute-0 amazing_chatterjee[90818]: pool 'backups' created
Oct 11 03:37:31 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Oct 11 03:37:31 compute-0 podman[91085]: 2025-10-11 03:37:31.047294397 +0000 UTC m=+0.185494848 container init b67e20c1e97326bc496582bb1fdd8f107c8caff4b7072455bbb2ae50ed31c0d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 03:37:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 03:37:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:31 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 17 pg[4.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:37:31 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 03:37:31 compute-0 podman[91085]: 2025-10-11 03:37:31.059040228 +0000 UTC m=+0.197240699 container start b67e20c1e97326bc496582bb1fdd8f107c8caff4b7072455bbb2ae50ed31c0d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_moser, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 03:37:31 compute-0 adoring_moser[91101]: 167 167
Oct 11 03:37:31 compute-0 systemd[1]: libpod-b67e20c1e97326bc496582bb1fdd8f107c8caff4b7072455bbb2ae50ed31c0d4.scope: Deactivated successfully.
Oct 11 03:37:31 compute-0 podman[91085]: 2025-10-11 03:37:31.066987521 +0000 UTC m=+0.205188032 container attach b67e20c1e97326bc496582bb1fdd8f107c8caff4b7072455bbb2ae50ed31c0d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_moser, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 03:37:31 compute-0 podman[91085]: 2025-10-11 03:37:31.0690958 +0000 UTC m=+0.207296241 container died b67e20c1e97326bc496582bb1fdd8f107c8caff4b7072455bbb2ae50ed31c0d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_moser, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 03:37:31 compute-0 systemd[1]: libpod-a460b15344a4349387c0b8df7a68eed29722e70f6013aac55da9231857a99b56.scope: Deactivated successfully.
Oct 11 03:37:31 compute-0 podman[90797]: 2025-10-11 03:37:31.077781385 +0000 UTC m=+1.754152828 container died a460b15344a4349387c0b8df7a68eed29722e70f6013aac55da9231857a99b56 (image=quay.io/ceph/ceph:v18, name=amazing_chatterjee, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:37:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f2adc9c78c1c0cb6245527f046ecb0a130b1d59974c3da6e6c0ce48ce8bd493-merged.mount: Deactivated successfully.
Oct 11 03:37:31 compute-0 podman[91085]: 2025-10-11 03:37:31.150425758 +0000 UTC m=+0.288626229 container remove b67e20c1e97326bc496582bb1fdd8f107c8caff4b7072455bbb2ae50ed31c0d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_moser, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 03:37:31 compute-0 systemd[1]: libpod-conmon-b67e20c1e97326bc496582bb1fdd8f107c8caff4b7072455bbb2ae50ed31c0d4.scope: Deactivated successfully.
Oct 11 03:37:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-af0d241b4aebe7b4540da25675ddbe7ee07232aeda56fb8bffbe0c273744e245-merged.mount: Deactivated successfully.
Oct 11 03:37:31 compute-0 podman[90797]: 2025-10-11 03:37:31.210711123 +0000 UTC m=+1.887082526 container remove a460b15344a4349387c0b8df7a68eed29722e70f6013aac55da9231857a99b56 (image=quay.io/ceph/ceph:v18, name=amazing_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:37:31 compute-0 systemd[1]: libpod-conmon-a460b15344a4349387c0b8df7a68eed29722e70f6013aac55da9231857a99b56.scope: Deactivated successfully.
Oct 11 03:37:31 compute-0 sudo[90759]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:31 compute-0 podman[91138]: 2025-10-11 03:37:31.377902026 +0000 UTC m=+0.088583413 container create b715333ff8e9e92943e8e1660ae952202653f391466a020cbd05c37b15807e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_villani, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:37:31 compute-0 sudo[91174]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmrfjqiypsdbmyvabgyvwcyiisuewyqn ; /usr/bin/python3'
Oct 11 03:37:31 compute-0 sudo[91174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:37:31 compute-0 podman[91138]: 2025-10-11 03:37:31.33574159 +0000 UTC m=+0.046422987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:31 compute-0 systemd[1]: Started libpod-conmon-b715333ff8e9e92943e8e1660ae952202653f391466a020cbd05c37b15807e1e.scope.
Oct 11 03:37:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9be710d94d59c1e86fc59284215cd1df2c9f1b13ea0644726af43a76d886e9c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9be710d94d59c1e86fc59284215cd1df2c9f1b13ea0644726af43a76d886e9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9be710d94d59c1e86fc59284215cd1df2c9f1b13ea0644726af43a76d886e9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9be710d94d59c1e86fc59284215cd1df2c9f1b13ea0644726af43a76d886e9c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:31 compute-0 podman[91138]: 2025-10-11 03:37:31.512422829 +0000 UTC m=+0.223104256 container init b715333ff8e9e92943e8e1660ae952202653f391466a020cbd05c37b15807e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_villani, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:37:31 compute-0 podman[91138]: 2025-10-11 03:37:31.524652803 +0000 UTC m=+0.235334150 container start b715333ff8e9e92943e8e1660ae952202653f391466a020cbd05c37b15807e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 03:37:31 compute-0 podman[91138]: 2025-10-11 03:37:31.528514712 +0000 UTC m=+0.239196099 container attach b715333ff8e9e92943e8e1660ae952202653f391466a020cbd05c37b15807e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_villani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 03:37:31 compute-0 python3[91176]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:37:31 compute-0 podman[91184]: 2025-10-11 03:37:31.651656075 +0000 UTC m=+0.047237469 container create 79835044eb48064770621bb93de5c57c55673fcbfd84ce53c59661900334565d (image=quay.io/ceph/ceph:v18, name=eager_shaw, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 03:37:31 compute-0 systemd[1]: Started libpod-conmon-79835044eb48064770621bb93de5c57c55673fcbfd84ce53c59661900334565d.scope.
Oct 11 03:37:31 compute-0 podman[91184]: 2025-10-11 03:37:31.633543696 +0000 UTC m=+0.029125100 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:37:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d054ef249a3aeaaf86e5fcf89a16d20d7078db8f1adc104b23a39c372d8db85b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d054ef249a3aeaaf86e5fcf89a16d20d7078db8f1adc104b23a39c372d8db85b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:31 compute-0 podman[91184]: 2025-10-11 03:37:31.769666414 +0000 UTC m=+0.165247898 container init 79835044eb48064770621bb93de5c57c55673fcbfd84ce53c59661900334565d (image=quay.io/ceph/ceph:v18, name=eager_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 03:37:31 compute-0 podman[91184]: 2025-10-11 03:37:31.777307969 +0000 UTC m=+0.172889353 container start 79835044eb48064770621bb93de5c57c55673fcbfd84ce53c59661900334565d (image=quay.io/ceph/ceph:v18, name=eager_shaw, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:37:31 compute-0 podman[91184]: 2025-10-11 03:37:31.784850221 +0000 UTC m=+0.180431655 container attach 79835044eb48064770621bb93de5c57c55673fcbfd84ce53c59661900334565d (image=quay.io/ceph/ceph:v18, name=eager_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:37:31 compute-0 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/875665671; not ready for session (expect reconnect)
Oct 11 03:37:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 03:37:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:31 compute-0 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 03:37:31 compute-0 ceph-osd[89722]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 36.286 iops: 9289.144 elapsed_sec: 0.323
Oct 11 03:37:31 compute-0 ceph-osd[89722]: log_channel(cluster) log [WRN] : OSD bench result of 9289.143552 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 11 03:37:31 compute-0 ceph-osd[89722]: osd.2 0 waiting for initial osdmap
Oct 11 03:37:31 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-2[89710]: 2025-10-11T03:37:31.984+0000 7f18a3ade640 -1 osd.2 0 waiting for initial osdmap
Oct 11 03:37:31 compute-0 ceph-osd[89722]: osd.2 17 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 11 03:37:31 compute-0 ceph-osd[89722]: osd.2 17 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct 11 03:37:31 compute-0 ceph-osd[89722]: osd.2 17 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 11 03:37:31 compute-0 ceph-osd[89722]: osd.2 17 check_osdmap_features require_osd_release unknown -> reef
Oct 11 03:37:32 compute-0 ceph-osd[89722]: osd.2 17 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 11 03:37:32 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-osd-2[89710]: 2025-10-11T03:37:32.014+0000 7f189e8ef640 -1 osd.2 17 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 11 03:37:32 compute-0 ceph-osd[89722]: osd.2 17 set_numa_affinity not setting numa affinity
Oct 11 03:37:32 compute-0 ceph-osd[89722]: osd.2 17 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Oct 11 03:37:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Oct 11 03:37:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Oct 11 03:37:32 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/875665671,v1:192.168.122.100:6811/875665671] boot
Oct 11 03:37:32 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Oct 11 03:37:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 03:37:32 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:32 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3052002623' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 03:37:32 compute-0 ceph-mon[74273]: osdmap e17: 3 total, 2 up, 3 in
Oct 11 03:37:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:32 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 18 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=18 pruub=11.739219666s) [2] r=-1 lpr=18 pi=[13,18)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.860136032s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:37:32 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 18 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=18 pruub=11.739170074s) [2] r=-1 lpr=18 pi=[13,18)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.860136032s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:37:32 compute-0 ceph-osd[89722]: osd.2 18 state: booting -> active
Oct 11 03:37:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=18) [2] r=0 lpr=18 pi=[13,18)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:37:32 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 18 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:37:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 11 03:37:32 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4011250601' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 03:37:32 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v45: 4 pgs: 3 active+clean, 1 creating+peering; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 11 03:37:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Oct 11 03:37:33 compute-0 ceph-mon[74273]: OSD bench result of 9289.143552 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 11 03:37:33 compute-0 ceph-mon[74273]: osd.2 [v2:192.168.122.100:6810/875665671,v1:192.168.122.100:6811/875665671] boot
Oct 11 03:37:33 compute-0 ceph-mon[74273]: osdmap e18: 3 total, 3 up, 3 in
Oct 11 03:37:33 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 03:37:33 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4011250601' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 03:37:33 compute-0 ceph-mon[74273]: pgmap v45: 4 pgs: 3 active+clean, 1 creating+peering; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 11 03:37:33 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4011250601' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 03:37:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Oct 11 03:37:33 compute-0 eager_shaw[91199]: pool 'images' created
Oct 11 03:37:33 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Oct 11 03:37:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 19 pg[5.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:37:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=18/19 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=18) [2] r=0 lpr=18 pi=[13,18)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:37:33 compute-0 systemd[1]: libpod-79835044eb48064770621bb93de5c57c55673fcbfd84ce53c59661900334565d.scope: Deactivated successfully.
Oct 11 03:37:33 compute-0 happy_villani[91179]: [
Oct 11 03:37:33 compute-0 happy_villani[91179]:     {
Oct 11 03:37:33 compute-0 happy_villani[91179]:         "available": false,
Oct 11 03:37:33 compute-0 happy_villani[91179]:         "ceph_device": false,
Oct 11 03:37:33 compute-0 happy_villani[91179]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 11 03:37:33 compute-0 happy_villani[91179]:         "lsm_data": {},
Oct 11 03:37:33 compute-0 happy_villani[91179]:         "lvs": [],
Oct 11 03:37:33 compute-0 happy_villani[91179]:         "path": "/dev/sr0",
Oct 11 03:37:33 compute-0 happy_villani[91179]:         "rejected_reasons": [
Oct 11 03:37:33 compute-0 happy_villani[91179]:             "Has a FileSystem",
Oct 11 03:37:33 compute-0 happy_villani[91179]:             "Insufficient space (<5GB)"
Oct 11 03:37:33 compute-0 happy_villani[91179]:         ],
Oct 11 03:37:33 compute-0 happy_villani[91179]:         "sys_api": {
Oct 11 03:37:33 compute-0 happy_villani[91179]:             "actuators": null,
Oct 11 03:37:33 compute-0 happy_villani[91179]:             "device_nodes": "sr0",
Oct 11 03:37:33 compute-0 happy_villani[91179]:             "devname": "sr0",
Oct 11 03:37:33 compute-0 happy_villani[91179]:             "human_readable_size": "482.00 KB",
Oct 11 03:37:33 compute-0 happy_villani[91179]:             "id_bus": "ata",
Oct 11 03:37:33 compute-0 happy_villani[91179]:             "model": "QEMU DVD-ROM",
Oct 11 03:37:33 compute-0 happy_villani[91179]:             "nr_requests": "2",
Oct 11 03:37:33 compute-0 happy_villani[91179]:             "parent": "/dev/sr0",
Oct 11 03:37:33 compute-0 happy_villani[91179]:             "partitions": {},
Oct 11 03:37:33 compute-0 happy_villani[91179]:             "path": "/dev/sr0",
Oct 11 03:37:33 compute-0 happy_villani[91179]:             "removable": "1",
Oct 11 03:37:33 compute-0 happy_villani[91179]:             "rev": "2.5+",
Oct 11 03:37:33 compute-0 happy_villani[91179]:             "ro": "0",
Oct 11 03:37:33 compute-0 happy_villani[91179]:             "rotational": "0",
Oct 11 03:37:33 compute-0 happy_villani[91179]:             "sas_address": "",
Oct 11 03:37:33 compute-0 happy_villani[91179]:             "sas_device_handle": "",
Oct 11 03:37:33 compute-0 happy_villani[91179]:             "scheduler_mode": "mq-deadline",
Oct 11 03:37:33 compute-0 happy_villani[91179]:             "sectors": 0,
Oct 11 03:37:33 compute-0 happy_villani[91179]:             "sectorsize": "2048",
Oct 11 03:37:33 compute-0 happy_villani[91179]:             "size": 493568.0,
Oct 11 03:37:33 compute-0 happy_villani[91179]:             "support_discard": "2048",
Oct 11 03:37:33 compute-0 happy_villani[91179]:             "type": "disk",
Oct 11 03:37:33 compute-0 happy_villani[91179]:             "vendor": "QEMU"
Oct 11 03:37:33 compute-0 happy_villani[91179]:         }
Oct 11 03:37:33 compute-0 happy_villani[91179]:     }
Oct 11 03:37:33 compute-0 happy_villani[91179]: ]
Oct 11 03:37:33 compute-0 podman[93036]: 2025-10-11 03:37:33.170779021 +0000 UTC m=+0.039178473 container died 79835044eb48064770621bb93de5c57c55673fcbfd84ce53c59661900334565d (image=quay.io/ceph/ceph:v18, name=eager_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 03:37:33 compute-0 systemd[1]: libpod-b715333ff8e9e92943e8e1660ae952202653f391466a020cbd05c37b15807e1e.scope: Deactivated successfully.
Oct 11 03:37:33 compute-0 podman[91138]: 2025-10-11 03:37:33.184726693 +0000 UTC m=+1.895408040 container died b715333ff8e9e92943e8e1660ae952202653f391466a020cbd05c37b15807e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:37:33 compute-0 systemd[1]: libpod-b715333ff8e9e92943e8e1660ae952202653f391466a020cbd05c37b15807e1e.scope: Consumed 1.709s CPU time.
Oct 11 03:37:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-d054ef249a3aeaaf86e5fcf89a16d20d7078db8f1adc104b23a39c372d8db85b-merged.mount: Deactivated successfully.
Oct 11 03:37:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9be710d94d59c1e86fc59284215cd1df2c9f1b13ea0644726af43a76d886e9c-merged.mount: Deactivated successfully.
Oct 11 03:37:33 compute-0 podman[93036]: 2025-10-11 03:37:33.244969357 +0000 UTC m=+0.113368789 container remove 79835044eb48064770621bb93de5c57c55673fcbfd84ce53c59661900334565d (image=quay.io/ceph/ceph:v18, name=eager_shaw, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 03:37:33 compute-0 systemd[1]: libpod-conmon-79835044eb48064770621bb93de5c57c55673fcbfd84ce53c59661900334565d.scope: Deactivated successfully.
Oct 11 03:37:33 compute-0 podman[91138]: 2025-10-11 03:37:33.265231157 +0000 UTC m=+1.975912524 container remove b715333ff8e9e92943e8e1660ae952202653f391466a020cbd05c37b15807e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_villani, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:37:33 compute-0 systemd[1]: libpod-conmon-b715333ff8e9e92943e8e1660ae952202653f391466a020cbd05c37b15807e1e.scope: Deactivated successfully.
Oct 11 03:37:33 compute-0 sudo[91174]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:33 compute-0 sudo[91020]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:37:33 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:37:33 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Oct 11 03:37:33 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 11 03:37:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Oct 11 03:37:33 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 11 03:37:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Oct 11 03:37:33 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 11 03:37:33 compute-0 ceph-mgr[74563]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43697k
Oct 11 03:37:33 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43697k
Oct 11 03:37:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Oct 11 03:37:33 compute-0 ceph-mgr[74563]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44745932: error parsing value: Value '44745932' is below minimum 939524096
Oct 11 03:37:33 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44745932: error parsing value: Value '44745932' is below minimum 939524096
Oct 11 03:37:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:37:33 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:37:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 03:37:33 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:37:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 03:37:33 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:33 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev acdfe811-c3dd-4635-83a6-0a1f9a54758f does not exist
Oct 11 03:37:33 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev c27471b5-a661-44a2-91e1-ffb68ddaef07 does not exist
Oct 11 03:37:33 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev e9cd8767-8c9b-48a5-b0b7-add18284f517 does not exist
Oct 11 03:37:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 03:37:33 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:37:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 03:37:33 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:37:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:37:33 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:37:33 compute-0 sudo[93220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:33 compute-0 sudo[93220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:33 compute-0 sudo[93220]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:33 compute-0 sudo[93267]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaueacikacpkfncjdslakchndhgmnvfo ; /usr/bin/python3'
Oct 11 03:37:33 compute-0 sudo[93267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:37:33 compute-0 sudo[93271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:37:33 compute-0 sudo[93271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:33 compute-0 sudo[93271]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:33 compute-0 sudo[93296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:33 compute-0 sudo[93296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:33 compute-0 sudo[93296]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:33 compute-0 python3[93270]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:37:33 compute-0 podman[93321]: 2025-10-11 03:37:33.618621506 +0000 UTC m=+0.038393170 container create eb8dce2ffbba18281d1f4b7cd3a8ee74b7e9a8a037005d3263603a8d52eee421 (image=quay.io/ceph/ceph:v18, name=infallible_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Oct 11 03:37:33 compute-0 sudo[93322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 03:37:33 compute-0 sudo[93322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:33 compute-0 systemd[1]: Started libpod-conmon-eb8dce2ffbba18281d1f4b7cd3a8ee74b7e9a8a037005d3263603a8d52eee421.scope.
Oct 11 03:37:33 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309eb1509e42a10b244cb8a4ee5c07efda38483f885771f5c2182fcbedc0578e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309eb1509e42a10b244cb8a4ee5c07efda38483f885771f5c2182fcbedc0578e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:33 compute-0 podman[93321]: 2025-10-11 03:37:33.689620913 +0000 UTC m=+0.109392587 container init eb8dce2ffbba18281d1f4b7cd3a8ee74b7e9a8a037005d3263603a8d52eee421 (image=quay.io/ceph/ceph:v18, name=infallible_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:37:33 compute-0 podman[93321]: 2025-10-11 03:37:33.602947006 +0000 UTC m=+0.022718680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:37:33 compute-0 podman[93321]: 2025-10-11 03:37:33.712333122 +0000 UTC m=+0.132104786 container start eb8dce2ffbba18281d1f4b7cd3a8ee74b7e9a8a037005d3263603a8d52eee421 (image=quay.io/ceph/ceph:v18, name=infallible_leakey, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 03:37:33 compute-0 podman[93321]: 2025-10-11 03:37:33.730131373 +0000 UTC m=+0.149903037 container attach eb8dce2ffbba18281d1f4b7cd3a8ee74b7e9a8a037005d3263603a8d52eee421 (image=quay.io/ceph/ceph:v18, name=infallible_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:37:34 compute-0 podman[93406]: 2025-10-11 03:37:34.022329291 +0000 UTC m=+0.065285857 container create ae6c85fffdb2e78e040cfb4e1fed01a594cad6d0386cf6dfc6652fa071922c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 03:37:34 compute-0 systemd[1]: Started libpod-conmon-ae6c85fffdb2e78e040cfb4e1fed01a594cad6d0386cf6dfc6652fa071922c16.scope.
Oct 11 03:37:34 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Oct 11 03:37:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Oct 11 03:37:34 compute-0 podman[93406]: 2025-10-11 03:37:33.997088841 +0000 UTC m=+0.040045417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:34 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Oct 11 03:37:34 compute-0 podman[93406]: 2025-10-11 03:37:34.093519593 +0000 UTC m=+0.136476139 container init ae6c85fffdb2e78e040cfb4e1fed01a594cad6d0386cf6dfc6652fa071922c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 03:37:34 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4011250601' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 03:37:34 compute-0 ceph-mon[74273]: osdmap e19: 3 total, 3 up, 3 in
Oct 11 03:37:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 11 03:37:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 11 03:37:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 11 03:37:34 compute-0 ceph-mon[74273]: Adjusting osd_memory_target on compute-0 to 43697k
Oct 11 03:37:34 compute-0 ceph-mon[74273]: Unable to set osd_memory_target on compute-0 to 44745932: error parsing value: Value '44745932' is below minimum 939524096
Oct 11 03:37:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:37:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:37:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:37:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:37:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:37:34 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 20 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:37:34 compute-0 podman[93406]: 2025-10-11 03:37:34.103928736 +0000 UTC m=+0.146885272 container start ae6c85fffdb2e78e040cfb4e1fed01a594cad6d0386cf6dfc6652fa071922c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:37:34 compute-0 podman[93406]: 2025-10-11 03:37:34.106859548 +0000 UTC m=+0.149816104 container attach ae6c85fffdb2e78e040cfb4e1fed01a594cad6d0386cf6dfc6652fa071922c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 03:37:34 compute-0 hardcore_mclean[93442]: 167 167
Oct 11 03:37:34 compute-0 systemd[1]: libpod-ae6c85fffdb2e78e040cfb4e1fed01a594cad6d0386cf6dfc6652fa071922c16.scope: Deactivated successfully.
Oct 11 03:37:34 compute-0 conmon[93442]: conmon ae6c85fffdb2e78e040c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ae6c85fffdb2e78e040cfb4e1fed01a594cad6d0386cf6dfc6652fa071922c16.scope/container/memory.events
Oct 11 03:37:34 compute-0 podman[93406]: 2025-10-11 03:37:34.108594387 +0000 UTC m=+0.151550953 container died ae6c85fffdb2e78e040cfb4e1fed01a594cad6d0386cf6dfc6652fa071922c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mclean, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 03:37:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-c36449995cfc12a567aac6148f925b7d94b3cc822159427c1d5304f75a85fd42-merged.mount: Deactivated successfully.
Oct 11 03:37:34 compute-0 podman[93406]: 2025-10-11 03:37:34.147531362 +0000 UTC m=+0.190487928 container remove ae6c85fffdb2e78e040cfb4e1fed01a594cad6d0386cf6dfc6652fa071922c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mclean, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:37:34 compute-0 systemd[1]: libpod-conmon-ae6c85fffdb2e78e040cfb4e1fed01a594cad6d0386cf6dfc6652fa071922c16.scope: Deactivated successfully.
Oct 11 03:37:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:37:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 11 03:37:34 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/490029447' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 03:37:34 compute-0 podman[93468]: 2025-10-11 03:37:34.327806043 +0000 UTC m=+0.052680293 container create 7ec001d4e1afbb56fd42d40cd4d86db92bbee88f1105c1ac1b038aa7b18ee17a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:37:34 compute-0 systemd[1]: Started libpod-conmon-7ec001d4e1afbb56fd42d40cd4d86db92bbee88f1105c1ac1b038aa7b18ee17a.scope.
Oct 11 03:37:34 compute-0 podman[93468]: 2025-10-11 03:37:34.304190638 +0000 UTC m=+0.029064888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:34 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/506d8a899328c4df4ddc4e1bfc5556bc7da01150686d041cdf52fcc44ea1f5cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/506d8a899328c4df4ddc4e1bfc5556bc7da01150686d041cdf52fcc44ea1f5cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/506d8a899328c4df4ddc4e1bfc5556bc7da01150686d041cdf52fcc44ea1f5cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/506d8a899328c4df4ddc4e1bfc5556bc7da01150686d041cdf52fcc44ea1f5cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/506d8a899328c4df4ddc4e1bfc5556bc7da01150686d041cdf52fcc44ea1f5cd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:34 compute-0 podman[93468]: 2025-10-11 03:37:34.423428462 +0000 UTC m=+0.148302762 container init 7ec001d4e1afbb56fd42d40cd4d86db92bbee88f1105c1ac1b038aa7b18ee17a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mclean, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 03:37:34 compute-0 podman[93468]: 2025-10-11 03:37:34.435991215 +0000 UTC m=+0.160865455 container start 7ec001d4e1afbb56fd42d40cd4d86db92bbee88f1105c1ac1b038aa7b18ee17a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mclean, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:37:34 compute-0 podman[93468]: 2025-10-11 03:37:34.439947167 +0000 UTC m=+0.164821407 container attach 7ec001d4e1afbb56fd42d40cd4d86db92bbee88f1105c1ac1b038aa7b18ee17a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:37:34 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v48: 5 pgs: 1 unknown, 1 peering, 3 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:37:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Oct 11 03:37:35 compute-0 ceph-mon[74273]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 11 03:37:35 compute-0 ceph-mon[74273]: osdmap e20: 3 total, 3 up, 3 in
Oct 11 03:37:35 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/490029447' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 03:37:35 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/490029447' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 03:37:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Oct 11 03:37:35 compute-0 infallible_leakey[93362]: pool 'cephfs.cephfs.meta' created
Oct 11 03:37:35 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Oct 11 03:37:35 compute-0 systemd[1]: libpod-eb8dce2ffbba18281d1f4b7cd3a8ee74b7e9a8a037005d3263603a8d52eee421.scope: Deactivated successfully.
Oct 11 03:37:35 compute-0 podman[93321]: 2025-10-11 03:37:35.133434341 +0000 UTC m=+1.553206005 container died eb8dce2ffbba18281d1f4b7cd3a8ee74b7e9a8a037005d3263603a8d52eee421 (image=quay.io/ceph/ceph:v18, name=infallible_leakey, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 03:37:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-309eb1509e42a10b244cb8a4ee5c07efda38483f885771f5c2182fcbedc0578e-merged.mount: Deactivated successfully.
Oct 11 03:37:35 compute-0 podman[93321]: 2025-10-11 03:37:35.181071961 +0000 UTC m=+1.600843615 container remove eb8dce2ffbba18281d1f4b7cd3a8ee74b7e9a8a037005d3263603a8d52eee421 (image=quay.io/ceph/ceph:v18, name=infallible_leakey, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 03:37:35 compute-0 systemd[1]: libpod-conmon-eb8dce2ffbba18281d1f4b7cd3a8ee74b7e9a8a037005d3263603a8d52eee421.scope: Deactivated successfully.
Oct 11 03:37:35 compute-0 sudo[93267]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:35 compute-0 sudo[93545]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvvlhoqiogfybtoyokkdbomfyysudlin ; /usr/bin/python3'
Oct 11 03:37:35 compute-0 sudo[93545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:37:35 compute-0 cranky_mclean[93485]: --> passed data devices: 0 physical, 3 LVM
Oct 11 03:37:35 compute-0 cranky_mclean[93485]: --> relative data size: 1.0
Oct 11 03:37:35 compute-0 cranky_mclean[93485]: --> All data devices are unavailable
Oct 11 03:37:35 compute-0 python3[93550]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:37:35 compute-0 systemd[1]: libpod-7ec001d4e1afbb56fd42d40cd4d86db92bbee88f1105c1ac1b038aa7b18ee17a.scope: Deactivated successfully.
Oct 11 03:37:35 compute-0 systemd[1]: libpod-7ec001d4e1afbb56fd42d40cd4d86db92bbee88f1105c1ac1b038aa7b18ee17a.scope: Consumed 1.042s CPU time.
Oct 11 03:37:35 compute-0 podman[93468]: 2025-10-11 03:37:35.548417323 +0000 UTC m=+1.273291533 container died 7ec001d4e1afbb56fd42d40cd4d86db92bbee88f1105c1ac1b038aa7b18ee17a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mclean, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:37:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-506d8a899328c4df4ddc4e1bfc5556bc7da01150686d041cdf52fcc44ea1f5cd-merged.mount: Deactivated successfully.
Oct 11 03:37:35 compute-0 podman[93555]: 2025-10-11 03:37:35.587606025 +0000 UTC m=+0.054318848 container create 830943880a49f766455fd207342d33429efd24d434314d2ad4334e16af045c48 (image=quay.io/ceph/ceph:v18, name=kind_blackburn, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:37:35 compute-0 podman[93468]: 2025-10-11 03:37:35.616079896 +0000 UTC m=+1.340954126 container remove 7ec001d4e1afbb56fd42d40cd4d86db92bbee88f1105c1ac1b038aa7b18ee17a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 03:37:35 compute-0 systemd[1]: libpod-conmon-7ec001d4e1afbb56fd42d40cd4d86db92bbee88f1105c1ac1b038aa7b18ee17a.scope: Deactivated successfully.
Oct 11 03:37:35 compute-0 systemd[1]: Started libpod-conmon-830943880a49f766455fd207342d33429efd24d434314d2ad4334e16af045c48.scope.
Oct 11 03:37:35 compute-0 sudo[93322]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:35 compute-0 podman[93555]: 2025-10-11 03:37:35.567475869 +0000 UTC m=+0.034188732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:37:35 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac5edc7021783d537719c18d7ac6658e08f65905e85de55863fcc9e0dd6c4ba8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac5edc7021783d537719c18d7ac6658e08f65905e85de55863fcc9e0dd6c4ba8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:35 compute-0 podman[93555]: 2025-10-11 03:37:35.682637618 +0000 UTC m=+0.149350481 container init 830943880a49f766455fd207342d33429efd24d434314d2ad4334e16af045c48 (image=quay.io/ceph/ceph:v18, name=kind_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 03:37:35 compute-0 podman[93555]: 2025-10-11 03:37:35.694298596 +0000 UTC m=+0.161011429 container start 830943880a49f766455fd207342d33429efd24d434314d2ad4334e16af045c48 (image=quay.io/ceph/ceph:v18, name=kind_blackburn, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 03:37:35 compute-0 podman[93555]: 2025-10-11 03:37:35.698122214 +0000 UTC m=+0.164835087 container attach 830943880a49f766455fd207342d33429efd24d434314d2ad4334e16af045c48 (image=quay.io/ceph/ceph:v18, name=kind_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:37:35 compute-0 sudo[93588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:35 compute-0 sudo[93588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:35 compute-0 sudo[93588]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:35 compute-0 sudo[93614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:37:35 compute-0 sudo[93614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:35 compute-0 sudo[93614]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:35 compute-0 sudo[93639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:35 compute-0 sudo[93639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:35 compute-0 sudo[93639]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:36 compute-0 sudo[93664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 03:37:36 compute-0 sudo[93664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:36 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 21 pg[6.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:37:36 compute-0 ceph-mon[74273]: pgmap v48: 5 pgs: 1 unknown, 1 peering, 3 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:37:36 compute-0 ceph-mon[74273]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 11 03:37:36 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/490029447' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 03:37:36 compute-0 ceph-mon[74273]: osdmap e21: 3 total, 3 up, 3 in
Oct 11 03:37:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Oct 11 03:37:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Oct 11 03:37:36 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Oct 11 03:37:36 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:37:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 11 03:37:36 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1101148907' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 03:37:36 compute-0 podman[93751]: 2025-10-11 03:37:36.376864484 +0000 UTC m=+0.062722235 container create 1f99e1fc2b225fe9412052c5b72742d7653a52ed92a3124ba5e348c7ed8b2ed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_haibt, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 03:37:36 compute-0 systemd[1]: Started libpod-conmon-1f99e1fc2b225fe9412052c5b72742d7653a52ed92a3124ba5e348c7ed8b2ed2.scope.
Oct 11 03:37:36 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:36 compute-0 podman[93751]: 2025-10-11 03:37:36.443850808 +0000 UTC m=+0.129708619 container init 1f99e1fc2b225fe9412052c5b72742d7653a52ed92a3124ba5e348c7ed8b2ed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_haibt, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:37:36 compute-0 podman[93751]: 2025-10-11 03:37:36.351804369 +0000 UTC m=+0.037662190 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:36 compute-0 podman[93751]: 2025-10-11 03:37:36.455032682 +0000 UTC m=+0.140890463 container start 1f99e1fc2b225fe9412052c5b72742d7653a52ed92a3124ba5e348c7ed8b2ed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_haibt, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 03:37:36 compute-0 podman[93751]: 2025-10-11 03:37:36.458827989 +0000 UTC m=+0.144685830 container attach 1f99e1fc2b225fe9412052c5b72742d7653a52ed92a3124ba5e348c7ed8b2ed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_haibt, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 03:37:36 compute-0 keen_haibt[93767]: 167 167
Oct 11 03:37:36 compute-0 systemd[1]: libpod-1f99e1fc2b225fe9412052c5b72742d7653a52ed92a3124ba5e348c7ed8b2ed2.scope: Deactivated successfully.
Oct 11 03:37:36 compute-0 podman[93751]: 2025-10-11 03:37:36.461104933 +0000 UTC m=+0.146962744 container died 1f99e1fc2b225fe9412052c5b72742d7653a52ed92a3124ba5e348c7ed8b2ed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:37:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-15cd26c1bf9b1ec78c81078f506ecf5e4fa67c27448fb5d5cfb0fde1efca3c8e-merged.mount: Deactivated successfully.
Oct 11 03:37:36 compute-0 podman[93751]: 2025-10-11 03:37:36.512397015 +0000 UTC m=+0.198254766 container remove 1f99e1fc2b225fe9412052c5b72742d7653a52ed92a3124ba5e348c7ed8b2ed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 03:37:36 compute-0 systemd[1]: libpod-conmon-1f99e1fc2b225fe9412052c5b72742d7653a52ed92a3124ba5e348c7ed8b2ed2.scope: Deactivated successfully.
Oct 11 03:37:36 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v51: 6 pgs: 2 unknown, 1 peering, 3 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:37:36 compute-0 podman[93791]: 2025-10-11 03:37:36.73124101 +0000 UTC m=+0.063561919 container create ad59e0b02bcc4b531826dc14facd4e59e2a16e884c4041b90e02e841c39d7631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:37:36 compute-0 systemd[1]: Started libpod-conmon-ad59e0b02bcc4b531826dc14facd4e59e2a16e884c4041b90e02e841c39d7631.scope.
Oct 11 03:37:36 compute-0 podman[93791]: 2025-10-11 03:37:36.709416996 +0000 UTC m=+0.041737995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:36 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be953a40ced94bcf9bf502ac3d5876517a1696dba92744f8731f7fae12395764/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be953a40ced94bcf9bf502ac3d5876517a1696dba92744f8731f7fae12395764/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be953a40ced94bcf9bf502ac3d5876517a1696dba92744f8731f7fae12395764/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be953a40ced94bcf9bf502ac3d5876517a1696dba92744f8731f7fae12395764/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:36 compute-0 podman[93791]: 2025-10-11 03:37:36.829856043 +0000 UTC m=+0.162177032 container init ad59e0b02bcc4b531826dc14facd4e59e2a16e884c4041b90e02e841c39d7631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_noether, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:37:36 compute-0 podman[93791]: 2025-10-11 03:37:36.844352831 +0000 UTC m=+0.176673770 container start ad59e0b02bcc4b531826dc14facd4e59e2a16e884c4041b90e02e841c39d7631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_noether, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 03:37:36 compute-0 podman[93791]: 2025-10-11 03:37:36.848821867 +0000 UTC m=+0.181142796 container attach ad59e0b02bcc4b531826dc14facd4e59e2a16e884c4041b90e02e841c39d7631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 03:37:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Oct 11 03:37:37 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1101148907' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 03:37:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Oct 11 03:37:37 compute-0 kind_blackburn[93585]: pool 'cephfs.cephfs.data' created
Oct 11 03:37:37 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Oct 11 03:37:37 compute-0 ceph-mon[74273]: osdmap e22: 3 total, 3 up, 3 in
Oct 11 03:37:37 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1101148907' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 03:37:37 compute-0 systemd[1]: libpod-830943880a49f766455fd207342d33429efd24d434314d2ad4334e16af045c48.scope: Deactivated successfully.
Oct 11 03:37:37 compute-0 podman[93555]: 2025-10-11 03:37:37.159457064 +0000 UTC m=+1.626169917 container died 830943880a49f766455fd207342d33429efd24d434314d2ad4334e16af045c48 (image=quay.io/ceph/ceph:v18, name=kind_blackburn, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 03:37:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac5edc7021783d537719c18d7ac6658e08f65905e85de55863fcc9e0dd6c4ba8-merged.mount: Deactivated successfully.
Oct 11 03:37:37 compute-0 podman[93555]: 2025-10-11 03:37:37.217599339 +0000 UTC m=+1.684312192 container remove 830943880a49f766455fd207342d33429efd24d434314d2ad4334e16af045c48 (image=quay.io/ceph/ceph:v18, name=kind_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 03:37:37 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 23 pg[7.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [1] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:37:37 compute-0 systemd[1]: libpod-conmon-830943880a49f766455fd207342d33429efd24d434314d2ad4334e16af045c48.scope: Deactivated successfully.
Oct 11 03:37:37 compute-0 sudo[93545]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:37 compute-0 sudo[93847]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxaywubwzretzsujucranlmeguacppiw ; /usr/bin/python3'
Oct 11 03:37:37 compute-0 sudo[93847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:37:37 compute-0 sleepy_noether[93808]: {
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:     "0": [
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:         {
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "devices": [
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "/dev/loop3"
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             ],
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "lv_name": "ceph_lv0",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "lv_size": "21470642176",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "name": "ceph_lv0",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "tags": {
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.cluster_name": "ceph",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.crush_device_class": "",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.encrypted": "0",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.osd_id": "0",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.type": "block",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.vdo": "0"
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             },
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "type": "block",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "vg_name": "ceph_vg0"
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:         }
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:     ],
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:     "1": [
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:         {
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "devices": [
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "/dev/loop4"
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             ],
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "lv_name": "ceph_lv1",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "lv_size": "21470642176",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "name": "ceph_lv1",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "tags": {
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.cluster_name": "ceph",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.crush_device_class": "",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.encrypted": "0",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.osd_id": "1",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.type": "block",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.vdo": "0"
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             },
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "type": "block",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "vg_name": "ceph_vg1"
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:         }
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:     ],
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:     "2": [
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:         {
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "devices": [
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "/dev/loop5"
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             ],
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "lv_name": "ceph_lv2",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "lv_size": "21470642176",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "name": "ceph_lv2",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "tags": {
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.cluster_name": "ceph",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.crush_device_class": "",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.encrypted": "0",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.osd_id": "2",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.type": "block",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:                 "ceph.vdo": "0"
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             },
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "type": "block",
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:             "vg_name": "ceph_vg2"
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:         }
Oct 11 03:37:37 compute-0 sleepy_noether[93808]:     ]
Oct 11 03:37:37 compute-0 sleepy_noether[93808]: }
Oct 11 03:37:37 compute-0 python3[93849]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:37:37 compute-0 systemd[1]: libpod-ad59e0b02bcc4b531826dc14facd4e59e2a16e884c4041b90e02e841c39d7631.scope: Deactivated successfully.
Oct 11 03:37:37 compute-0 podman[93791]: 2025-10-11 03:37:37.721014358 +0000 UTC m=+1.053335297 container died ad59e0b02bcc4b531826dc14facd4e59e2a16e884c4041b90e02e841c39d7631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_noether, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:37:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-be953a40ced94bcf9bf502ac3d5876517a1696dba92744f8731f7fae12395764-merged.mount: Deactivated successfully.
Oct 11 03:37:37 compute-0 podman[93854]: 2025-10-11 03:37:37.798632101 +0000 UTC m=+0.079452326 container create 649fba6b17a5e16a8fffa0c6bef7cbe0ba054ad61dca394992382b46727c3762 (image=quay.io/ceph/ceph:v18, name=vigorous_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 03:37:37 compute-0 podman[93791]: 2025-10-11 03:37:37.806006768 +0000 UTC m=+1.138327677 container remove ad59e0b02bcc4b531826dc14facd4e59e2a16e884c4041b90e02e841c39d7631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_noether, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 03:37:37 compute-0 systemd[1]: libpod-conmon-ad59e0b02bcc4b531826dc14facd4e59e2a16e884c4041b90e02e841c39d7631.scope: Deactivated successfully.
Oct 11 03:37:37 compute-0 podman[93854]: 2025-10-11 03:37:37.751974279 +0000 UTC m=+0.032794534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:37:37 compute-0 sudo[93664]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:37 compute-0 systemd[1]: Started libpod-conmon-649fba6b17a5e16a8fffa0c6bef7cbe0ba054ad61dca394992382b46727c3762.scope.
Oct 11 03:37:37 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90b30745556c3e7c8f5adc4ca00b927066f7f007e98c68aa8a7376317925c7f4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90b30745556c3e7c8f5adc4ca00b927066f7f007e98c68aa8a7376317925c7f4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:37 compute-0 podman[93854]: 2025-10-11 03:37:37.913466671 +0000 UTC m=+0.194286926 container init 649fba6b17a5e16a8fffa0c6bef7cbe0ba054ad61dca394992382b46727c3762 (image=quay.io/ceph/ceph:v18, name=vigorous_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 03:37:37 compute-0 podman[93854]: 2025-10-11 03:37:37.921918468 +0000 UTC m=+0.202738693 container start 649fba6b17a5e16a8fffa0c6bef7cbe0ba054ad61dca394992382b46727c3762 (image=quay.io/ceph/ceph:v18, name=vigorous_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 03:37:37 compute-0 podman[93854]: 2025-10-11 03:37:37.924969194 +0000 UTC m=+0.205789419 container attach 649fba6b17a5e16a8fffa0c6bef7cbe0ba054ad61dca394992382b46727c3762 (image=quay.io/ceph/ceph:v18, name=vigorous_chandrasekhar, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:37:37 compute-0 sudo[93886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:37 compute-0 sudo[93886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:37 compute-0 sudo[93886]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:38 compute-0 sudo[93912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:37:38 compute-0 sudo[93912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:38 compute-0 sudo[93912]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:38 compute-0 sudo[93937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:38 compute-0 sudo[93937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:38 compute-0 sudo[93937]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:38 compute-0 sudo[93962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 03:37:38 compute-0 sudo[93962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Oct 11 03:37:38 compute-0 ceph-mon[74273]: pgmap v51: 6 pgs: 2 unknown, 1 peering, 3 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:37:38 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1101148907' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 03:37:38 compute-0 ceph-mon[74273]: osdmap e23: 3 total, 3 up, 3 in
Oct 11 03:37:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Oct 11 03:37:38 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Oct 11 03:37:38 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 24 pg[7.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [1] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:37:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Oct 11 03:37:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2741369923' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 11 03:37:38 compute-0 podman[94047]: 2025-10-11 03:37:38.561641301 +0000 UTC m=+0.059742901 container create c55844e8dd3352fe0be254f4e26a93e375c1dc9441f49e740f872adef0ccfff4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_einstein, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:37:38 compute-0 systemd[1]: Started libpod-conmon-c55844e8dd3352fe0be254f4e26a93e375c1dc9441f49e740f872adef0ccfff4.scope.
Oct 11 03:37:38 compute-0 podman[94047]: 2025-10-11 03:37:38.532033638 +0000 UTC m=+0.030135318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:38 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:38 compute-0 podman[94047]: 2025-10-11 03:37:38.643114092 +0000 UTC m=+0.141215702 container init c55844e8dd3352fe0be254f4e26a93e375c1dc9441f49e740f872adef0ccfff4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_einstein, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 03:37:38 compute-0 podman[94047]: 2025-10-11 03:37:38.655410558 +0000 UTC m=+0.153512158 container start c55844e8dd3352fe0be254f4e26a93e375c1dc9441f49e740f872adef0ccfff4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_einstein, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 03:37:38 compute-0 wonderful_einstein[94064]: 167 167
Oct 11 03:37:38 compute-0 systemd[1]: libpod-c55844e8dd3352fe0be254f4e26a93e375c1dc9441f49e740f872adef0ccfff4.scope: Deactivated successfully.
Oct 11 03:37:38 compute-0 podman[94047]: 2025-10-11 03:37:38.661733576 +0000 UTC m=+0.159835216 container attach c55844e8dd3352fe0be254f4e26a93e375c1dc9441f49e740f872adef0ccfff4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_einstein, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 03:37:38 compute-0 podman[94047]: 2025-10-11 03:37:38.6622355 +0000 UTC m=+0.160337120 container died c55844e8dd3352fe0be254f4e26a93e375c1dc9441f49e740f872adef0ccfff4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:37:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-987ad471792fe90dacc825bcc84c4376f7755d153ff83913bc8c50d418ac99c8-merged.mount: Deactivated successfully.
Oct 11 03:37:38 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v54: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:37:38 compute-0 podman[94047]: 2025-10-11 03:37:38.707564295 +0000 UTC m=+0.205665895 container remove c55844e8dd3352fe0be254f4e26a93e375c1dc9441f49e740f872adef0ccfff4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:37:38 compute-0 systemd[1]: libpod-conmon-c55844e8dd3352fe0be254f4e26a93e375c1dc9441f49e740f872adef0ccfff4.scope: Deactivated successfully.
Oct 11 03:37:38 compute-0 podman[94086]: 2025-10-11 03:37:38.956565038 +0000 UTC m=+0.072365786 container create eb154ad57b13c5fe95b59eb72ad7da427f42368b3a908a92a85948e52b4aa175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_swanson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 03:37:38 compute-0 systemd[1]: Started libpod-conmon-eb154ad57b13c5fe95b59eb72ad7da427f42368b3a908a92a85948e52b4aa175.scope.
Oct 11 03:37:39 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6af9d93f350dad12e4a273dea4a0a3801a9e564cf9aa724f64c777b26ac7255c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6af9d93f350dad12e4a273dea4a0a3801a9e564cf9aa724f64c777b26ac7255c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6af9d93f350dad12e4a273dea4a0a3801a9e564cf9aa724f64c777b26ac7255c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:39 compute-0 podman[94086]: 2025-10-11 03:37:38.928938421 +0000 UTC m=+0.044739209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6af9d93f350dad12e4a273dea4a0a3801a9e564cf9aa724f64c777b26ac7255c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:39 compute-0 podman[94086]: 2025-10-11 03:37:39.038913505 +0000 UTC m=+0.154714233 container init eb154ad57b13c5fe95b59eb72ad7da427f42368b3a908a92a85948e52b4aa175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_swanson, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:37:39 compute-0 podman[94086]: 2025-10-11 03:37:39.046953591 +0000 UTC m=+0.162754339 container start eb154ad57b13c5fe95b59eb72ad7da427f42368b3a908a92a85948e52b4aa175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 03:37:39 compute-0 podman[94086]: 2025-10-11 03:37:39.051512979 +0000 UTC m=+0.167313787 container attach eb154ad57b13c5fe95b59eb72ad7da427f42368b3a908a92a85948e52b4aa175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:37:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Oct 11 03:37:39 compute-0 ceph-mon[74273]: osdmap e24: 3 total, 3 up, 3 in
Oct 11 03:37:39 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2741369923' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 11 03:37:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2741369923' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 11 03:37:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Oct 11 03:37:39 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Oct 11 03:37:39 compute-0 vigorous_chandrasekhar[93883]: enabled application 'rbd' on pool 'vms'
Oct 11 03:37:39 compute-0 systemd[1]: libpod-649fba6b17a5e16a8fffa0c6bef7cbe0ba054ad61dca394992382b46727c3762.scope: Deactivated successfully.
Oct 11 03:37:39 compute-0 conmon[93883]: conmon 649fba6b17a5e16a8fff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-649fba6b17a5e16a8fffa0c6bef7cbe0ba054ad61dca394992382b46727c3762.scope/container/memory.events
Oct 11 03:37:39 compute-0 podman[93854]: 2025-10-11 03:37:39.249302392 +0000 UTC m=+1.530122647 container died 649fba6b17a5e16a8fffa0c6bef7cbe0ba054ad61dca394992382b46727c3762 (image=quay.io/ceph/ceph:v18, name=vigorous_chandrasekhar, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:37:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-90b30745556c3e7c8f5adc4ca00b927066f7f007e98c68aa8a7376317925c7f4-merged.mount: Deactivated successfully.
Oct 11 03:37:39 compute-0 podman[93854]: 2025-10-11 03:37:39.305599045 +0000 UTC m=+1.586419260 container remove 649fba6b17a5e16a8fffa0c6bef7cbe0ba054ad61dca394992382b46727c3762 (image=quay.io/ceph/ceph:v18, name=vigorous_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:37:39 compute-0 systemd[1]: libpod-conmon-649fba6b17a5e16a8fffa0c6bef7cbe0ba054ad61dca394992382b46727c3762.scope: Deactivated successfully.
Oct 11 03:37:39 compute-0 sudo[93847]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:39 compute-0 sudo[94144]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojpgysbannbtwwwyqimshvseljmytxdw ; /usr/bin/python3'
Oct 11 03:37:39 compute-0 sudo[94144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:37:39 compute-0 python3[94146]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:37:39 compute-0 podman[94147]: 2025-10-11 03:37:39.715270097 +0000 UTC m=+0.064445913 container create cd5d0cdca320b3c956d089da78e5ca66e8cc2a484c987bc1876ad1003f8a1554 (image=quay.io/ceph/ceph:v18, name=youthful_satoshi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:37:39 compute-0 systemd[1]: Started libpod-conmon-cd5d0cdca320b3c956d089da78e5ca66e8cc2a484c987bc1876ad1003f8a1554.scope.
Oct 11 03:37:39 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7c7a3f84bc58f94fa91b007e84031d42d5d343cb56ee99289b977efc0d795ab/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7c7a3f84bc58f94fa91b007e84031d42d5d343cb56ee99289b977efc0d795ab/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:39 compute-0 podman[94147]: 2025-10-11 03:37:39.780617165 +0000 UTC m=+0.129793011 container init cd5d0cdca320b3c956d089da78e5ca66e8cc2a484c987bc1876ad1003f8a1554 (image=quay.io/ceph/ceph:v18, name=youthful_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:37:39 compute-0 podman[94147]: 2025-10-11 03:37:39.689813422 +0000 UTC m=+0.038989268 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:37:39 compute-0 podman[94147]: 2025-10-11 03:37:39.787207491 +0000 UTC m=+0.136383307 container start cd5d0cdca320b3c956d089da78e5ca66e8cc2a484c987bc1876ad1003f8a1554 (image=quay.io/ceph/ceph:v18, name=youthful_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 03:37:39 compute-0 podman[94147]: 2025-10-11 03:37:39.791347447 +0000 UTC m=+0.140523303 container attach cd5d0cdca320b3c956d089da78e5ca66e8cc2a484c987bc1876ad1003f8a1554 (image=quay.io/ceph/ceph:v18, name=youthful_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 03:37:40 compute-0 kind_swanson[94103]: {
Oct 11 03:37:40 compute-0 kind_swanson[94103]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 03:37:40 compute-0 kind_swanson[94103]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:37:40 compute-0 kind_swanson[94103]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 03:37:40 compute-0 kind_swanson[94103]:         "osd_id": 1,
Oct 11 03:37:40 compute-0 kind_swanson[94103]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:37:40 compute-0 kind_swanson[94103]:         "type": "bluestore"
Oct 11 03:37:40 compute-0 kind_swanson[94103]:     },
Oct 11 03:37:40 compute-0 kind_swanson[94103]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 03:37:40 compute-0 kind_swanson[94103]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:37:40 compute-0 kind_swanson[94103]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 03:37:40 compute-0 kind_swanson[94103]:         "osd_id": 2,
Oct 11 03:37:40 compute-0 kind_swanson[94103]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:37:40 compute-0 kind_swanson[94103]:         "type": "bluestore"
Oct 11 03:37:40 compute-0 kind_swanson[94103]:     },
Oct 11 03:37:40 compute-0 kind_swanson[94103]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 03:37:40 compute-0 kind_swanson[94103]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:37:40 compute-0 kind_swanson[94103]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 03:37:40 compute-0 kind_swanson[94103]:         "osd_id": 0,
Oct 11 03:37:40 compute-0 kind_swanson[94103]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:37:40 compute-0 kind_swanson[94103]:         "type": "bluestore"
Oct 11 03:37:40 compute-0 kind_swanson[94103]:     }
Oct 11 03:37:40 compute-0 kind_swanson[94103]: }
Oct 11 03:37:40 compute-0 systemd[1]: libpod-eb154ad57b13c5fe95b59eb72ad7da427f42368b3a908a92a85948e52b4aa175.scope: Deactivated successfully.
Oct 11 03:37:40 compute-0 podman[94086]: 2025-10-11 03:37:40.110567184 +0000 UTC m=+1.226367902 container died eb154ad57b13c5fe95b59eb72ad7da427f42368b3a908a92a85948e52b4aa175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_swanson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 03:37:40 compute-0 systemd[1]: libpod-eb154ad57b13c5fe95b59eb72ad7da427f42368b3a908a92a85948e52b4aa175.scope: Consumed 1.062s CPU time.
Oct 11 03:37:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-6af9d93f350dad12e4a273dea4a0a3801a9e564cf9aa724f64c777b26ac7255c-merged.mount: Deactivated successfully.
Oct 11 03:37:40 compute-0 podman[94086]: 2025-10-11 03:37:40.190330828 +0000 UTC m=+1.306131546 container remove eb154ad57b13c5fe95b59eb72ad7da427f42368b3a908a92a85948e52b4aa175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_swanson, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:37:40 compute-0 systemd[1]: libpod-conmon-eb154ad57b13c5fe95b59eb72ad7da427f42368b3a908a92a85948e52b4aa175.scope: Deactivated successfully.
Oct 11 03:37:40 compute-0 ceph-mon[74273]: pgmap v54: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:37:40 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2741369923' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 11 03:37:40 compute-0 ceph-mon[74273]: osdmap e25: 3 total, 3 up, 3 in
Oct 11 03:37:40 compute-0 sudo[93962]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:37:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:37:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:40 compute-0 sudo[94227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:40 compute-0 sudo[94227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:40 compute-0 sudo[94227]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Oct 11 03:37:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2898238909' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 11 03:37:40 compute-0 sudo[94253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 03:37:40 compute-0 sudo[94253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:40 compute-0 sudo[94253]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Oct 11 03:37:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Oct 11 03:37:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Oct 11 03:37:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Oct 11 03:37:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:40 compute-0 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Oct 11 03:37:40 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Oct 11 03:37:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Oct 11 03:37:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 11 03:37:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Oct 11 03:37:40 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 11 03:37:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:37:40 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:37:40 compute-0 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Oct 11 03:37:40 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Oct 11 03:37:40 compute-0 sudo[94278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:40 compute-0 sudo[94278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:40 compute-0 sudo[94278]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:40 compute-0 sudo[94303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:37:40 compute-0 sudo[94303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:40 compute-0 sudo[94303]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:40 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v56: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:37:40 compute-0 sudo[94328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:40 compute-0 sudo[94328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:40 compute-0 sudo[94328]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:40 compute-0 sudo[94353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162
Oct 11 03:37:40 compute-0 sudo[94353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:41 compute-0 podman[94395]: 2025-10-11 03:37:41.156096431 +0000 UTC m=+0.047073225 container create 78b9265ea923aa90ece09feba9cea7302a75843e8ec6ae4ad1e8b9ea29ea0e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bardeen, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 03:37:41 compute-0 systemd[1]: Started libpod-conmon-78b9265ea923aa90ece09feba9cea7302a75843e8ec6ae4ad1e8b9ea29ea0e63.scope.
Oct 11 03:37:41 compute-0 podman[94395]: 2025-10-11 03:37:41.132384564 +0000 UTC m=+0.023361338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:41 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:41 compute-0 podman[94395]: 2025-10-11 03:37:41.243618682 +0000 UTC m=+0.134595526 container init 78b9265ea923aa90ece09feba9cea7302a75843e8ec6ae4ad1e8b9ea29ea0e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bardeen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:37:41 compute-0 podman[94395]: 2025-10-11 03:37:41.255135336 +0000 UTC m=+0.146112110 container start 78b9265ea923aa90ece09feba9cea7302a75843e8ec6ae4ad1e8b9ea29ea0e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bardeen, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 03:37:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:41 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2898238909' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 11 03:37:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:41 compute-0 ceph-mon[74273]: Reconfiguring mon.compute-0 (unknown last config time)...
Oct 11 03:37:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 11 03:37:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 11 03:37:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:37:41 compute-0 ceph-mon[74273]: Reconfiguring daemon mon.compute-0 on compute-0
Oct 11 03:37:41 compute-0 nifty_bardeen[94411]: 167 167
Oct 11 03:37:41 compute-0 podman[94395]: 2025-10-11 03:37:41.260649951 +0000 UTC m=+0.151626755 container attach 78b9265ea923aa90ece09feba9cea7302a75843e8ec6ae4ad1e8b9ea29ea0e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bardeen, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:37:41 compute-0 systemd[1]: libpod-78b9265ea923aa90ece09feba9cea7302a75843e8ec6ae4ad1e8b9ea29ea0e63.scope: Deactivated successfully.
Oct 11 03:37:41 compute-0 conmon[94411]: conmon 78b9265ea923aa90ece0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-78b9265ea923aa90ece09feba9cea7302a75843e8ec6ae4ad1e8b9ea29ea0e63.scope/container/memory.events
Oct 11 03:37:41 compute-0 podman[94395]: 2025-10-11 03:37:41.263611385 +0000 UTC m=+0.154588229 container died 78b9265ea923aa90ece09feba9cea7302a75843e8ec6ae4ad1e8b9ea29ea0e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:37:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Oct 11 03:37:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2898238909' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 11 03:37:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Oct 11 03:37:41 compute-0 youthful_satoshi[94169]: enabled application 'rbd' on pool 'volumes'
Oct 11 03:37:41 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Oct 11 03:37:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1619980daffcefa237ac0040164f73613fc0f39e176482636cf0cb5c8e37536-merged.mount: Deactivated successfully.
Oct 11 03:37:41 compute-0 systemd[1]: libpod-cd5d0cdca320b3c956d089da78e5ca66e8cc2a484c987bc1876ad1003f8a1554.scope: Deactivated successfully.
Oct 11 03:37:41 compute-0 podman[94147]: 2025-10-11 03:37:41.306366177 +0000 UTC m=+1.655542023 container died cd5d0cdca320b3c956d089da78e5ca66e8cc2a484c987bc1876ad1003f8a1554 (image=quay.io/ceph/ceph:v18, name=youthful_satoshi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:37:41 compute-0 podman[94395]: 2025-10-11 03:37:41.321897754 +0000 UTC m=+0.212874558 container remove 78b9265ea923aa90ece09feba9cea7302a75843e8ec6ae4ad1e8b9ea29ea0e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bardeen, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 03:37:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7c7a3f84bc58f94fa91b007e84031d42d5d343cb56ee99289b977efc0d795ab-merged.mount: Deactivated successfully.
Oct 11 03:37:41 compute-0 systemd[1]: libpod-conmon-78b9265ea923aa90ece09feba9cea7302a75843e8ec6ae4ad1e8b9ea29ea0e63.scope: Deactivated successfully.
Oct 11 03:37:41 compute-0 podman[94147]: 2025-10-11 03:37:41.371489269 +0000 UTC m=+1.720665115 container remove cd5d0cdca320b3c956d089da78e5ca66e8cc2a484c987bc1876ad1003f8a1554 (image=quay.io/ceph/ceph:v18, name=youthful_satoshi, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:37:41 compute-0 systemd[1]: libpod-conmon-cd5d0cdca320b3c956d089da78e5ca66e8cc2a484c987bc1876ad1003f8a1554.scope: Deactivated successfully.
Oct 11 03:37:41 compute-0 sudo[94353]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:41 compute-0 sudo[94144]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:37:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:37:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:41 compute-0 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.jhqlii (unknown last config time)...
Oct 11 03:37:41 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.jhqlii (unknown last config time)...
Oct 11 03:37:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.jhqlii", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct 11 03:37:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.jhqlii", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 11 03:37:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 11 03:37:41 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 03:37:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:37:41 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:37:41 compute-0 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.jhqlii on compute-0
Oct 11 03:37:41 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.jhqlii on compute-0
Oct 11 03:37:41 compute-0 ceph-mon[74273]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 11 03:37:41 compute-0 sudo[94442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:41 compute-0 sudo[94442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:41 compute-0 sudo[94442]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:41 compute-0 sudo[94495]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejtiwgrxcycimbfhatytolnrofxtwrpx ; /usr/bin/python3'
Oct 11 03:37:41 compute-0 sudo[94495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:37:41 compute-0 sudo[94488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:37:41 compute-0 sudo[94488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:41 compute-0 sudo[94488]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:41 compute-0 sudo[94518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:41 compute-0 sudo[94518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:41 compute-0 sudo[94518]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:41 compute-0 python3[94513]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:37:41 compute-0 sudo[94543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162
Oct 11 03:37:41 compute-0 sudo[94543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:41 compute-0 podman[94556]: 2025-10-11 03:37:41.822991607 +0000 UTC m=+0.055846111 container create 16b40cd1e0f417f08ff9515bbb7a6c19f75b47aa847a4d2bc2a0f2b37e8a360d (image=quay.io/ceph/ceph:v18, name=objective_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:37:41 compute-0 systemd[1]: Started libpod-conmon-16b40cd1e0f417f08ff9515bbb7a6c19f75b47aa847a4d2bc2a0f2b37e8a360d.scope.
Oct 11 03:37:41 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d2aa3da369b144019802b4e0c1590d9509e46bf4679444d8a10d4e4dd2663e3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d2aa3da369b144019802b4e0c1590d9509e46bf4679444d8a10d4e4dd2663e3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:41 compute-0 podman[94556]: 2025-10-11 03:37:41.803764077 +0000 UTC m=+0.036618601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:37:41 compute-0 podman[94556]: 2025-10-11 03:37:41.910663483 +0000 UTC m=+0.143518047 container init 16b40cd1e0f417f08ff9515bbb7a6c19f75b47aa847a4d2bc2a0f2b37e8a360d (image=quay.io/ceph/ceph:v18, name=objective_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 03:37:41 compute-0 podman[94556]: 2025-10-11 03:37:41.921056256 +0000 UTC m=+0.153910790 container start 16b40cd1e0f417f08ff9515bbb7a6c19f75b47aa847a4d2bc2a0f2b37e8a360d (image=quay.io/ceph/ceph:v18, name=objective_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct 11 03:37:41 compute-0 podman[94556]: 2025-10-11 03:37:41.924997036 +0000 UTC m=+0.157851630 container attach 16b40cd1e0f417f08ff9515bbb7a6c19f75b47aa847a4d2bc2a0f2b37e8a360d (image=quay.io/ceph/ceph:v18, name=objective_stonebraker, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 03:37:42 compute-0 podman[94603]: 2025-10-11 03:37:42.110549595 +0000 UTC m=+0.067842249 container create baa84b95f33c1cfca3e10392cc14ce6b9b81999d036fdbf0f783b7e8e492faf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lederberg, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 03:37:42 compute-0 systemd[1]: Started libpod-conmon-baa84b95f33c1cfca3e10392cc14ce6b9b81999d036fdbf0f783b7e8e492faf0.scope.
Oct 11 03:37:42 compute-0 podman[94603]: 2025-10-11 03:37:42.081650212 +0000 UTC m=+0.038942906 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:42 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:42 compute-0 podman[94603]: 2025-10-11 03:37:42.195010701 +0000 UTC m=+0.152303385 container init baa84b95f33c1cfca3e10392cc14ce6b9b81999d036fdbf0f783b7e8e492faf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 03:37:42 compute-0 podman[94603]: 2025-10-11 03:37:42.205019092 +0000 UTC m=+0.162311746 container start baa84b95f33c1cfca3e10392cc14ce6b9b81999d036fdbf0f783b7e8e492faf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lederberg, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:37:42 compute-0 modest_lederberg[94620]: 167 167
Oct 11 03:37:42 compute-0 podman[94603]: 2025-10-11 03:37:42.209374645 +0000 UTC m=+0.166667289 container attach baa84b95f33c1cfca3e10392cc14ce6b9b81999d036fdbf0f783b7e8e492faf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:37:42 compute-0 systemd[1]: libpod-baa84b95f33c1cfca3e10392cc14ce6b9b81999d036fdbf0f783b7e8e492faf0.scope: Deactivated successfully.
Oct 11 03:37:42 compute-0 podman[94603]: 2025-10-11 03:37:42.210454995 +0000 UTC m=+0.167747649 container died baa84b95f33c1cfca3e10392cc14ce6b9b81999d036fdbf0f783b7e8e492faf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lederberg, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 03:37:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b22fbf2c0a9996cfd4581c38bb4a8e57640f088d22a7a68cb99a64850a47e6af-merged.mount: Deactivated successfully.
Oct 11 03:37:42 compute-0 podman[94603]: 2025-10-11 03:37:42.259109214 +0000 UTC m=+0.216401868 container remove baa84b95f33c1cfca3e10392cc14ce6b9b81999d036fdbf0f783b7e8e492faf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 03:37:42 compute-0 ceph-mon[74273]: pgmap v56: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:37:42 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2898238909' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 11 03:37:42 compute-0 ceph-mon[74273]: osdmap e26: 3 total, 3 up, 3 in
Oct 11 03:37:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:42 compute-0 ceph-mon[74273]: Reconfiguring mgr.compute-0.jhqlii (unknown last config time)...
Oct 11 03:37:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.jhqlii", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 11 03:37:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 03:37:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:37:42 compute-0 ceph-mon[74273]: Reconfiguring daemon mgr.compute-0.jhqlii on compute-0
Oct 11 03:37:42 compute-0 ceph-mon[74273]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 11 03:37:42 compute-0 systemd[1]: libpod-conmon-baa84b95f33c1cfca3e10392cc14ce6b9b81999d036fdbf0f783b7e8e492faf0.scope: Deactivated successfully.
Oct 11 03:37:42 compute-0 sudo[94543]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:37:42 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:37:42 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:42 compute-0 sudo[94657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:42 compute-0 sudo[94657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:42 compute-0 sudo[94657]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:42 compute-0 sudo[94682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:37:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Oct 11 03:37:42 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3726479181' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 11 03:37:42 compute-0 sudo[94682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:42 compute-0 sudo[94682]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:42 compute-0 sudo[94708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:42 compute-0 sudo[94708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:42 compute-0 sudo[94708]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:42 compute-0 sudo[94733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 03:37:42 compute-0 sudo[94733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:42 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v58: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:37:43 compute-0 podman[94831]: 2025-10-11 03:37:43.304835075 +0000 UTC m=+0.095542928 container exec 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 03:37:43 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:43 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:43 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3726479181' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 11 03:37:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Oct 11 03:37:43 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3726479181' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 11 03:37:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Oct 11 03:37:43 compute-0 objective_stonebraker[94583]: enabled application 'rbd' on pool 'backups'
Oct 11 03:37:43 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Oct 11 03:37:43 compute-0 systemd[1]: libpod-16b40cd1e0f417f08ff9515bbb7a6c19f75b47aa847a4d2bc2a0f2b37e8a360d.scope: Deactivated successfully.
Oct 11 03:37:43 compute-0 podman[94556]: 2025-10-11 03:37:43.380553115 +0000 UTC m=+1.613407609 container died 16b40cd1e0f417f08ff9515bbb7a6c19f75b47aa847a4d2bc2a0f2b37e8a360d (image=quay.io/ceph/ceph:v18, name=objective_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:37:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d2aa3da369b144019802b4e0c1590d9509e46bf4679444d8a10d4e4dd2663e3-merged.mount: Deactivated successfully.
Oct 11 03:37:43 compute-0 podman[94556]: 2025-10-11 03:37:43.438858365 +0000 UTC m=+1.671712899 container remove 16b40cd1e0f417f08ff9515bbb7a6c19f75b47aa847a4d2bc2a0f2b37e8a360d (image=quay.io/ceph/ceph:v18, name=objective_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 03:37:43 compute-0 systemd[1]: libpod-conmon-16b40cd1e0f417f08ff9515bbb7a6c19f75b47aa847a4d2bc2a0f2b37e8a360d.scope: Deactivated successfully.
Oct 11 03:37:43 compute-0 podman[94831]: 2025-10-11 03:37:43.46073064 +0000 UTC m=+0.251438463 container exec_died 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 03:37:43 compute-0 sudo[94495]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:43 compute-0 sudo[94924]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxcrtjsopormkbuhtktjzrpyifruvtbe ; /usr/bin/python3'
Oct 11 03:37:43 compute-0 sudo[94924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:37:43 compute-0 python3[94933]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:37:43 compute-0 podman[94955]: 2025-10-11 03:37:43.894323714 +0000 UTC m=+0.062913480 container create a50f1d00480b0fc0a06f7a4d4a5be09177cf0c098ca4d67cf53243e9828ac972 (image=quay.io/ceph/ceph:v18, name=funny_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 03:37:43 compute-0 systemd[1]: Started libpod-conmon-a50f1d00480b0fc0a06f7a4d4a5be09177cf0c098ca4d67cf53243e9828ac972.scope.
Oct 11 03:37:43 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b310ca2935acc09facd66037c8f02d407878d50188531151c1870d9d846fa55a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b310ca2935acc09facd66037c8f02d407878d50188531151c1870d9d846fa55a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:43 compute-0 podman[94955]: 2025-10-11 03:37:43.873523399 +0000 UTC m=+0.042113145 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:37:43 compute-0 podman[94955]: 2025-10-11 03:37:43.977890794 +0000 UTC m=+0.146480601 container init a50f1d00480b0fc0a06f7a4d4a5be09177cf0c098ca4d67cf53243e9828ac972 (image=quay.io/ceph/ceph:v18, name=funny_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 03:37:43 compute-0 podman[94955]: 2025-10-11 03:37:43.987134444 +0000 UTC m=+0.155724170 container start a50f1d00480b0fc0a06f7a4d4a5be09177cf0c098ca4d67cf53243e9828ac972 (image=quay.io/ceph/ceph:v18, name=funny_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:37:43 compute-0 podman[94955]: 2025-10-11 03:37:43.990691354 +0000 UTC m=+0.159281130 container attach a50f1d00480b0fc0a06f7a4d4a5be09177cf0c098ca4d67cf53243e9828ac972 (image=quay.io/ceph/ceph:v18, name=funny_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 03:37:44 compute-0 sudo[94733]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:37:44 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:37:44 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:44 compute-0 sudo[95013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:44 compute-0 sudo[95013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:44 compute-0 sudo[95013]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:37:44 compute-0 sudo[95038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:37:44 compute-0 sudo[95038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:44 compute-0 sudo[95038]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:44 compute-0 ceph-mon[74273]: pgmap v58: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:37:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3726479181' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 11 03:37:44 compute-0 ceph-mon[74273]: osdmap e27: 3 total, 3 up, 3 in
Oct 11 03:37:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:44 compute-0 sudo[95065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:44 compute-0 sudo[95065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:44 compute-0 sudo[95065]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:44 compute-0 sudo[95107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 03:37:44 compute-0 sudo[95107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Oct 11 03:37:44 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/728581244' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 11 03:37:44 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v60: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:37:45 compute-0 sudo[95107]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:37:45 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:37:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 03:37:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:37:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 03:37:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:45 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 8ed33b66-01c7-425e-bb7d-977ea5fe5379 does not exist
Oct 11 03:37:45 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev d45bcb7f-0a90-41c5-841f-60a3bd62ab33 does not exist
Oct 11 03:37:45 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 34152f90-50a3-40ea-905e-5fb1627be081 does not exist
Oct 11 03:37:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 03:37:45 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:37:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 03:37:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:37:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:37:45 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:37:45 compute-0 sudo[95164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:45 compute-0 sudo[95164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:45 compute-0 sudo[95164]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:45 compute-0 sudo[95189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:37:45 compute-0 sudo[95189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:45 compute-0 sudo[95189]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Oct 11 03:37:45 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/728581244' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 11 03:37:45 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:37:45 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:37:45 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:45 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:37:45 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:37:45 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:37:45 compute-0 sudo[95214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:45 compute-0 sudo[95214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/728581244' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 11 03:37:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Oct 11 03:37:45 compute-0 funny_panini[94990]: enabled application 'rbd' on pool 'images'
Oct 11 03:37:45 compute-0 sudo[95214]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:45 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Oct 11 03:37:45 compute-0 systemd[1]: libpod-a50f1d00480b0fc0a06f7a4d4a5be09177cf0c098ca4d67cf53243e9828ac972.scope: Deactivated successfully.
Oct 11 03:37:45 compute-0 podman[94955]: 2025-10-11 03:37:45.398211652 +0000 UTC m=+1.566801378 container died a50f1d00480b0fc0a06f7a4d4a5be09177cf0c098ca4d67cf53243e9828ac972 (image=quay.io/ceph/ceph:v18, name=funny_panini, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:37:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-b310ca2935acc09facd66037c8f02d407878d50188531151c1870d9d846fa55a-merged.mount: Deactivated successfully.
Oct 11 03:37:45 compute-0 podman[94955]: 2025-10-11 03:37:45.4603648 +0000 UTC m=+1.628954526 container remove a50f1d00480b0fc0a06f7a4d4a5be09177cf0c098ca4d67cf53243e9828ac972 (image=quay.io/ceph/ceph:v18, name=funny_panini, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 03:37:45 compute-0 sudo[95240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 03:37:45 compute-0 sudo[95240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:45 compute-0 systemd[1]: libpod-conmon-a50f1d00480b0fc0a06f7a4d4a5be09177cf0c098ca4d67cf53243e9828ac972.scope: Deactivated successfully.
Oct 11 03:37:45 compute-0 sudo[94924]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:45 compute-0 sudo[95300]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hikkknotzaalybcnzzcbifvvdxzakhbi ; /usr/bin/python3'
Oct 11 03:37:45 compute-0 sudo[95300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:37:45 compute-0 python3[95304]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:37:45 compute-0 podman[95331]: 2025-10-11 03:37:45.836586841 +0000 UTC m=+0.059976457 container create b2171f1f6638e6f79db74d7380ed8645910b96400adbf83b46a200e9c8d4a425 (image=quay.io/ceph/ceph:v18, name=silly_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 03:37:45 compute-0 systemd[1]: Started libpod-conmon-b2171f1f6638e6f79db74d7380ed8645910b96400adbf83b46a200e9c8d4a425.scope.
Oct 11 03:37:45 compute-0 podman[95354]: 2025-10-11 03:37:45.894287204 +0000 UTC m=+0.068557659 container create d89fb61d375d16c15930fc5edb9a024bf380e41f10978fed6d2b269f88498e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 03:37:45 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6340af6ddbc73e2aa0e5c2de90a6ad040883ed2d06149b10a015b35ea11bbca6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6340af6ddbc73e2aa0e5c2de90a6ad040883ed2d06149b10a015b35ea11bbca6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:45 compute-0 podman[95331]: 2025-10-11 03:37:45.816579919 +0000 UTC m=+0.039969585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:37:45 compute-0 systemd[1]: Started libpod-conmon-d89fb61d375d16c15930fc5edb9a024bf380e41f10978fed6d2b269f88498e2b.scope.
Oct 11 03:37:45 compute-0 podman[95331]: 2025-10-11 03:37:45.92363468 +0000 UTC m=+0.147024316 container init b2171f1f6638e6f79db74d7380ed8645910b96400adbf83b46a200e9c8d4a425 (image=quay.io/ceph/ceph:v18, name=silly_poincare, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:37:45 compute-0 podman[95331]: 2025-10-11 03:37:45.93109139 +0000 UTC m=+0.154481046 container start b2171f1f6638e6f79db74d7380ed8645910b96400adbf83b46a200e9c8d4a425 (image=quay.io/ceph/ceph:v18, name=silly_poincare, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 03:37:45 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:45 compute-0 podman[95331]: 2025-10-11 03:37:45.934980209 +0000 UTC m=+0.158369845 container attach b2171f1f6638e6f79db74d7380ed8645910b96400adbf83b46a200e9c8d4a425 (image=quay.io/ceph/ceph:v18, name=silly_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 03:37:45 compute-0 podman[95354]: 2025-10-11 03:37:45.946474752 +0000 UTC m=+0.120745307 container init d89fb61d375d16c15930fc5edb9a024bf380e41f10978fed6d2b269f88498e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wing, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 03:37:45 compute-0 podman[95354]: 2025-10-11 03:37:45.953213132 +0000 UTC m=+0.127483627 container start d89fb61d375d16c15930fc5edb9a024bf380e41f10978fed6d2b269f88498e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wing, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct 11 03:37:45 compute-0 sharp_wing[95377]: 167 167
Oct 11 03:37:45 compute-0 systemd[1]: libpod-d89fb61d375d16c15930fc5edb9a024bf380e41f10978fed6d2b269f88498e2b.scope: Deactivated successfully.
Oct 11 03:37:45 compute-0 podman[95354]: 2025-10-11 03:37:45.957211954 +0000 UTC m=+0.131482459 container attach d89fb61d375d16c15930fc5edb9a024bf380e41f10978fed6d2b269f88498e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:37:45 compute-0 podman[95354]: 2025-10-11 03:37:45.958444739 +0000 UTC m=+0.132715244 container died d89fb61d375d16c15930fc5edb9a024bf380e41f10978fed6d2b269f88498e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:37:45 compute-0 podman[95354]: 2025-10-11 03:37:45.872515242 +0000 UTC m=+0.046785727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-72511c4364fcf1b8374baf0164335e916296fb92c17a29abd52eb71fe531b8f0-merged.mount: Deactivated successfully.
Oct 11 03:37:46 compute-0 podman[95354]: 2025-10-11 03:37:46.003190417 +0000 UTC m=+0.177460882 container remove d89fb61d375d16c15930fc5edb9a024bf380e41f10978fed6d2b269f88498e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:37:46 compute-0 systemd[1]: libpod-conmon-d89fb61d375d16c15930fc5edb9a024bf380e41f10978fed6d2b269f88498e2b.scope: Deactivated successfully.
Oct 11 03:37:46 compute-0 podman[95401]: 2025-10-11 03:37:46.211207758 +0000 UTC m=+0.066441640 container create 64df9bf6ce5c5abff3f18b5d99716ef21efc1d3634bd1261d7d7768e3a5050ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_curran, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 03:37:46 compute-0 systemd[1]: Started libpod-conmon-64df9bf6ce5c5abff3f18b5d99716ef21efc1d3634bd1261d7d7768e3a5050ff.scope.
Oct 11 03:37:46 compute-0 podman[95401]: 2025-10-11 03:37:46.186964556 +0000 UTC m=+0.042198488 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:46 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54fde047e8e02710abdc7ab2bc5cfdb0f35864571a1b8912c7b3b881b1914986/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54fde047e8e02710abdc7ab2bc5cfdb0f35864571a1b8912c7b3b881b1914986/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54fde047e8e02710abdc7ab2bc5cfdb0f35864571a1b8912c7b3b881b1914986/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54fde047e8e02710abdc7ab2bc5cfdb0f35864571a1b8912c7b3b881b1914986/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54fde047e8e02710abdc7ab2bc5cfdb0f35864571a1b8912c7b3b881b1914986/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:46 compute-0 podman[95401]: 2025-10-11 03:37:46.312677312 +0000 UTC m=+0.167911174 container init 64df9bf6ce5c5abff3f18b5d99716ef21efc1d3634bd1261d7d7768e3a5050ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_curran, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:37:46 compute-0 podman[95401]: 2025-10-11 03:37:46.331704347 +0000 UTC m=+0.186938189 container start 64df9bf6ce5c5abff3f18b5d99716ef21efc1d3634bd1261d7d7768e3a5050ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_curran, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 03:37:46 compute-0 podman[95401]: 2025-10-11 03:37:46.335863724 +0000 UTC m=+0.191097616 container attach 64df9bf6ce5c5abff3f18b5d99716ef21efc1d3634bd1261d7d7768e3a5050ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 03:37:46 compute-0 ceph-mon[74273]: pgmap v60: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:37:46 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/728581244' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 11 03:37:46 compute-0 ceph-mon[74273]: osdmap e28: 3 total, 3 up, 3 in
Oct 11 03:37:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Oct 11 03:37:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2201615117' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 11 03:37:46 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v62: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:37:47 compute-0 ceph-mon[74273]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 11 03:37:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Oct 11 03:37:47 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2201615117' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 11 03:37:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2201615117' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 11 03:37:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Oct 11 03:37:47 compute-0 silly_poincare[95370]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Oct 11 03:37:47 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Oct 11 03:37:47 compute-0 systemd[1]: libpod-b2171f1f6638e6f79db74d7380ed8645910b96400adbf83b46a200e9c8d4a425.scope: Deactivated successfully.
Oct 11 03:37:47 compute-0 podman[95331]: 2025-10-11 03:37:47.431350944 +0000 UTC m=+1.654740580 container died b2171f1f6638e6f79db74d7380ed8645910b96400adbf83b46a200e9c8d4a425 (image=quay.io/ceph/ceph:v18, name=silly_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 03:37:47 compute-0 practical_curran[95437]: --> passed data devices: 0 physical, 3 LVM
Oct 11 03:37:47 compute-0 practical_curran[95437]: --> relative data size: 1.0
Oct 11 03:37:47 compute-0 practical_curran[95437]: --> All data devices are unavailable
Oct 11 03:37:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-6340af6ddbc73e2aa0e5c2de90a6ad040883ed2d06149b10a015b35ea11bbca6-merged.mount: Deactivated successfully.
Oct 11 03:37:47 compute-0 systemd[1]: libpod-64df9bf6ce5c5abff3f18b5d99716ef21efc1d3634bd1261d7d7768e3a5050ff.scope: Deactivated successfully.
Oct 11 03:37:47 compute-0 systemd[1]: libpod-64df9bf6ce5c5abff3f18b5d99716ef21efc1d3634bd1261d7d7768e3a5050ff.scope: Consumed 1.107s CPU time.
Oct 11 03:37:47 compute-0 conmon[95437]: conmon 64df9bf6ce5c5abff3f1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-64df9bf6ce5c5abff3f18b5d99716ef21efc1d3634bd1261d7d7768e3a5050ff.scope/container/memory.events
Oct 11 03:37:47 compute-0 podman[95401]: 2025-10-11 03:37:47.480030213 +0000 UTC m=+1.335264065 container died 64df9bf6ce5c5abff3f18b5d99716ef21efc1d3634bd1261d7d7768e3a5050ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 03:37:47 compute-0 podman[95331]: 2025-10-11 03:37:47.504202583 +0000 UTC m=+1.727592229 container remove b2171f1f6638e6f79db74d7380ed8645910b96400adbf83b46a200e9c8d4a425 (image=quay.io/ceph/ceph:v18, name=silly_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 03:37:47 compute-0 systemd[1]: libpod-conmon-b2171f1f6638e6f79db74d7380ed8645910b96400adbf83b46a200e9c8d4a425.scope: Deactivated successfully.
Oct 11 03:37:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-54fde047e8e02710abdc7ab2bc5cfdb0f35864571a1b8912c7b3b881b1914986-merged.mount: Deactivated successfully.
Oct 11 03:37:47 compute-0 sudo[95300]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:47 compute-0 podman[95401]: 2025-10-11 03:37:47.567197825 +0000 UTC m=+1.422431707 container remove 64df9bf6ce5c5abff3f18b5d99716ef21efc1d3634bd1261d7d7768e3a5050ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_curran, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 03:37:47 compute-0 systemd[1]: libpod-conmon-64df9bf6ce5c5abff3f18b5d99716ef21efc1d3634bd1261d7d7768e3a5050ff.scope: Deactivated successfully.
Oct 11 03:37:47 compute-0 sudo[95240]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:47 compute-0 sudo[95493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:47 compute-0 sudo[95493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:47 compute-0 sudo[95493]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:47 compute-0 sudo[95540]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feyfddgtjflgsyjpzcgapnifalfaluwr ; /usr/bin/python3'
Oct 11 03:37:47 compute-0 sudo[95540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:37:47 compute-0 sudo[95543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:37:47 compute-0 sudo[95543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:47 compute-0 sudo[95543]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:47 compute-0 python3[95544]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:37:47 compute-0 sudo[95569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:47 compute-0 sudo[95569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:47 compute-0 sudo[95569]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:47 compute-0 podman[95592]: 2025-10-11 03:37:47.922693693 +0000 UTC m=+0.061761218 container create fad10acc94b37b3d261d8ae502b690b9bff2d715e4bd74588d74fd378e8baa97 (image=quay.io/ceph/ceph:v18, name=heuristic_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 03:37:47 compute-0 sudo[95601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 03:37:47 compute-0 sudo[95601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:47 compute-0 systemd[1]: Started libpod-conmon-fad10acc94b37b3d261d8ae502b690b9bff2d715e4bd74588d74fd378e8baa97.scope.
Oct 11 03:37:47 compute-0 podman[95592]: 2025-10-11 03:37:47.891473075 +0000 UTC m=+0.030540630 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:37:48 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bd82cef77d915667698e5af6db5bcc208c419e741404c0afda609e0b73bb97e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bd82cef77d915667698e5af6db5bcc208c419e741404c0afda609e0b73bb97e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:48 compute-0 podman[95592]: 2025-10-11 03:37:48.028568201 +0000 UTC m=+0.167635816 container init fad10acc94b37b3d261d8ae502b690b9bff2d715e4bd74588d74fd378e8baa97 (image=quay.io/ceph/ceph:v18, name=heuristic_albattani, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 03:37:48 compute-0 podman[95592]: 2025-10-11 03:37:48.038880071 +0000 UTC m=+0.177947596 container start fad10acc94b37b3d261d8ae502b690b9bff2d715e4bd74588d74fd378e8baa97 (image=quay.io/ceph/ceph:v18, name=heuristic_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:37:48 compute-0 podman[95592]: 2025-10-11 03:37:48.042259766 +0000 UTC m=+0.181327401 container attach fad10acc94b37b3d261d8ae502b690b9bff2d715e4bd74588d74fd378e8baa97 (image=quay.io/ceph/ceph:v18, name=heuristic_albattani, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 03:37:48 compute-0 podman[95680]: 2025-10-11 03:37:48.308373791 +0000 UTC m=+0.040903171 container create edd8bd5a36e2f2a3fa8da2a7ed0b537453692c96028de97f732b119b5c55dbdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bouman, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 03:37:48 compute-0 systemd[1]: Started libpod-conmon-edd8bd5a36e2f2a3fa8da2a7ed0b537453692c96028de97f732b119b5c55dbdc.scope.
Oct 11 03:37:48 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:48 compute-0 podman[95680]: 2025-10-11 03:37:48.369619394 +0000 UTC m=+0.102148794 container init edd8bd5a36e2f2a3fa8da2a7ed0b537453692c96028de97f732b119b5c55dbdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 03:37:48 compute-0 podman[95680]: 2025-10-11 03:37:48.374507801 +0000 UTC m=+0.107037191 container start edd8bd5a36e2f2a3fa8da2a7ed0b537453692c96028de97f732b119b5c55dbdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:37:48 compute-0 podman[95680]: 2025-10-11 03:37:48.377988799 +0000 UTC m=+0.110518239 container attach edd8bd5a36e2f2a3fa8da2a7ed0b537453692c96028de97f732b119b5c55dbdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bouman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:37:48 compute-0 nifty_bouman[95699]: 167 167
Oct 11 03:37:48 compute-0 ceph-mon[74273]: pgmap v62: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:37:48 compute-0 ceph-mon[74273]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 11 03:37:48 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2201615117' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 11 03:37:48 compute-0 ceph-mon[74273]: osdmap e29: 3 total, 3 up, 3 in
Oct 11 03:37:48 compute-0 podman[95680]: 2025-10-11 03:37:48.292446963 +0000 UTC m=+0.024976363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:48 compute-0 systemd[1]: libpod-edd8bd5a36e2f2a3fa8da2a7ed0b537453692c96028de97f732b119b5c55dbdc.scope: Deactivated successfully.
Oct 11 03:37:48 compute-0 podman[95721]: 2025-10-11 03:37:48.466668763 +0000 UTC m=+0.048726431 container died edd8bd5a36e2f2a3fa8da2a7ed0b537453692c96028de97f732b119b5c55dbdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 03:37:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3a734e7d51f0e1d562563014745cef859d6700692edcb723cf8c9d4892e2bd7-merged.mount: Deactivated successfully.
Oct 11 03:37:48 compute-0 podman[95721]: 2025-10-11 03:37:48.514055136 +0000 UTC m=+0.096112744 container remove edd8bd5a36e2f2a3fa8da2a7ed0b537453692c96028de97f732b119b5c55dbdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bouman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 03:37:48 compute-0 systemd[1]: libpod-conmon-edd8bd5a36e2f2a3fa8da2a7ed0b537453692c96028de97f732b119b5c55dbdc.scope: Deactivated successfully.
Oct 11 03:37:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Oct 11 03:37:48 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3714955844' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 11 03:37:48 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:37:48 compute-0 podman[95744]: 2025-10-11 03:37:48.762840143 +0000 UTC m=+0.078171079 container create 56ab08c839556e8b21e6b3d3f8c5aea17d09dfe1551e812ddaf743ab020d3baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_borg, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 03:37:48 compute-0 systemd[1]: Started libpod-conmon-56ab08c839556e8b21e6b3d3f8c5aea17d09dfe1551e812ddaf743ab020d3baa.scope.
Oct 11 03:37:48 compute-0 podman[95744]: 2025-10-11 03:37:48.730058801 +0000 UTC m=+0.045389817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:48 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/291c6d521f5882ae6f72d77471f1226e507e63f760b58202f8607a17096234fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/291c6d521f5882ae6f72d77471f1226e507e63f760b58202f8607a17096234fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/291c6d521f5882ae6f72d77471f1226e507e63f760b58202f8607a17096234fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/291c6d521f5882ae6f72d77471f1226e507e63f760b58202f8607a17096234fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:48 compute-0 podman[95744]: 2025-10-11 03:37:48.850637842 +0000 UTC m=+0.165968828 container init 56ab08c839556e8b21e6b3d3f8c5aea17d09dfe1551e812ddaf743ab020d3baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_borg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct 11 03:37:48 compute-0 podman[95744]: 2025-10-11 03:37:48.865742067 +0000 UTC m=+0.181073023 container start 56ab08c839556e8b21e6b3d3f8c5aea17d09dfe1551e812ddaf743ab020d3baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_borg, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 03:37:48 compute-0 podman[95744]: 2025-10-11 03:37:48.870241114 +0000 UTC m=+0.185572140 container attach 56ab08c839556e8b21e6b3d3f8c5aea17d09dfe1551e812ddaf743ab020d3baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_borg, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 03:37:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:37:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Oct 11 03:37:49 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3714955844' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 11 03:37:49 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3714955844' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 11 03:37:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Oct 11 03:37:49 compute-0 heuristic_albattani[95634]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Oct 11 03:37:49 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Oct 11 03:37:49 compute-0 systemd[1]: libpod-fad10acc94b37b3d261d8ae502b690b9bff2d715e4bd74588d74fd378e8baa97.scope: Deactivated successfully.
Oct 11 03:37:49 compute-0 podman[95592]: 2025-10-11 03:37:49.420612173 +0000 UTC m=+1.559679728 container died fad10acc94b37b3d261d8ae502b690b9bff2d715e4bd74588d74fd378e8baa97 (image=quay.io/ceph/ceph:v18, name=heuristic_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:37:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bd82cef77d915667698e5af6db5bcc208c419e741404c0afda609e0b73bb97e-merged.mount: Deactivated successfully.
Oct 11 03:37:49 compute-0 podman[95592]: 2025-10-11 03:37:49.474263542 +0000 UTC m=+1.613331057 container remove fad10acc94b37b3d261d8ae502b690b9bff2d715e4bd74588d74fd378e8baa97 (image=quay.io/ceph/ceph:v18, name=heuristic_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 03:37:49 compute-0 systemd[1]: libpod-conmon-fad10acc94b37b3d261d8ae502b690b9bff2d715e4bd74588d74fd378e8baa97.scope: Deactivated successfully.
Oct 11 03:37:49 compute-0 sudo[95540]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:49 compute-0 happy_borg[95761]: {
Oct 11 03:37:49 compute-0 happy_borg[95761]:     "0": [
Oct 11 03:37:49 compute-0 happy_borg[95761]:         {
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "devices": [
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "/dev/loop3"
Oct 11 03:37:49 compute-0 happy_borg[95761]:             ],
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "lv_name": "ceph_lv0",
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "lv_size": "21470642176",
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "name": "ceph_lv0",
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "tags": {
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.cluster_name": "ceph",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.crush_device_class": "",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.encrypted": "0",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.osd_id": "0",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.type": "block",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.vdo": "0"
Oct 11 03:37:49 compute-0 happy_borg[95761]:             },
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "type": "block",
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "vg_name": "ceph_vg0"
Oct 11 03:37:49 compute-0 happy_borg[95761]:         }
Oct 11 03:37:49 compute-0 happy_borg[95761]:     ],
Oct 11 03:37:49 compute-0 happy_borg[95761]:     "1": [
Oct 11 03:37:49 compute-0 happy_borg[95761]:         {
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "devices": [
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "/dev/loop4"
Oct 11 03:37:49 compute-0 happy_borg[95761]:             ],
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "lv_name": "ceph_lv1",
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "lv_size": "21470642176",
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "name": "ceph_lv1",
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "tags": {
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.cluster_name": "ceph",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.crush_device_class": "",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.encrypted": "0",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.osd_id": "1",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.type": "block",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.vdo": "0"
Oct 11 03:37:49 compute-0 happy_borg[95761]:             },
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "type": "block",
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "vg_name": "ceph_vg1"
Oct 11 03:37:49 compute-0 happy_borg[95761]:         }
Oct 11 03:37:49 compute-0 happy_borg[95761]:     ],
Oct 11 03:37:49 compute-0 happy_borg[95761]:     "2": [
Oct 11 03:37:49 compute-0 happy_borg[95761]:         {
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "devices": [
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "/dev/loop5"
Oct 11 03:37:49 compute-0 happy_borg[95761]:             ],
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "lv_name": "ceph_lv2",
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "lv_size": "21470642176",
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "name": "ceph_lv2",
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "tags": {
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.cluster_name": "ceph",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.crush_device_class": "",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.encrypted": "0",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.osd_id": "2",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.type": "block",
Oct 11 03:37:49 compute-0 happy_borg[95761]:                 "ceph.vdo": "0"
Oct 11 03:37:49 compute-0 happy_borg[95761]:             },
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "type": "block",
Oct 11 03:37:49 compute-0 happy_borg[95761]:             "vg_name": "ceph_vg2"
Oct 11 03:37:49 compute-0 happy_borg[95761]:         }
Oct 11 03:37:49 compute-0 happy_borg[95761]:     ]
Oct 11 03:37:49 compute-0 happy_borg[95761]: }
Oct 11 03:37:49 compute-0 systemd[1]: libpod-56ab08c839556e8b21e6b3d3f8c5aea17d09dfe1551e812ddaf743ab020d3baa.scope: Deactivated successfully.
Oct 11 03:37:49 compute-0 podman[95744]: 2025-10-11 03:37:49.655260733 +0000 UTC m=+0.970591669 container died 56ab08c839556e8b21e6b3d3f8c5aea17d09dfe1551e812ddaf743ab020d3baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Oct 11 03:37:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-291c6d521f5882ae6f72d77471f1226e507e63f760b58202f8607a17096234fd-merged.mount: Deactivated successfully.
Oct 11 03:37:49 compute-0 podman[95744]: 2025-10-11 03:37:49.726244199 +0000 UTC m=+1.041575135 container remove 56ab08c839556e8b21e6b3d3f8c5aea17d09dfe1551e812ddaf743ab020d3baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 03:37:49 compute-0 systemd[1]: libpod-conmon-56ab08c839556e8b21e6b3d3f8c5aea17d09dfe1551e812ddaf743ab020d3baa.scope: Deactivated successfully.
Oct 11 03:37:49 compute-0 sudo[95601]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:49 compute-0 sudo[95795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:49 compute-0 sudo[95795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:49 compute-0 sudo[95795]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:49 compute-0 sudo[95820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:37:49 compute-0 sudo[95820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:49 compute-0 sudo[95820]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:50 compute-0 sudo[95845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:50 compute-0 sudo[95845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:50 compute-0 sudo[95845]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:50 compute-0 sudo[95870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 03:37:50 compute-0 sudo[95870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:50 compute-0 ceph-mon[74273]: pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:37:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3714955844' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 11 03:37:50 compute-0 ceph-mon[74273]: osdmap e30: 3 total, 3 up, 3 in
Oct 11 03:37:50 compute-0 podman[96010]: 2025-10-11 03:37:50.463883176 +0000 UTC m=+0.045714017 container create 34ff11ed9e8054f7f130a9528daaf9635dc25029440e31ccace8b1b258fa401f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_raman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 03:37:50 compute-0 systemd[1]: Started libpod-conmon-34ff11ed9e8054f7f130a9528daaf9635dc25029440e31ccace8b1b258fa401f.scope.
Oct 11 03:37:50 compute-0 python3[95997]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:37:50 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:50 compute-0 podman[96010]: 2025-10-11 03:37:50.445295003 +0000 UTC m=+0.027125854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:50 compute-0 podman[96010]: 2025-10-11 03:37:50.556737448 +0000 UTC m=+0.138568289 container init 34ff11ed9e8054f7f130a9528daaf9635dc25029440e31ccace8b1b258fa401f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_raman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct 11 03:37:50 compute-0 podman[96010]: 2025-10-11 03:37:50.570713731 +0000 UTC m=+0.152544592 container start 34ff11ed9e8054f7f130a9528daaf9635dc25029440e31ccace8b1b258fa401f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:37:50 compute-0 podman[96010]: 2025-10-11 03:37:50.575488445 +0000 UTC m=+0.157319316 container attach 34ff11ed9e8054f7f130a9528daaf9635dc25029440e31ccace8b1b258fa401f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_raman, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:37:50 compute-0 youthful_raman[96026]: 167 167
Oct 11 03:37:50 compute-0 systemd[1]: libpod-34ff11ed9e8054f7f130a9528daaf9635dc25029440e31ccace8b1b258fa401f.scope: Deactivated successfully.
Oct 11 03:37:50 compute-0 podman[96010]: 2025-10-11 03:37:50.577107541 +0000 UTC m=+0.158938402 container died 34ff11ed9e8054f7f130a9528daaf9635dc25029440e31ccace8b1b258fa401f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_raman, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 03:37:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-012e6314fd8980321c70bde2fd1516250e4c276deea7f515e2f946ac7779fbc4-merged.mount: Deactivated successfully.
Oct 11 03:37:50 compute-0 podman[96010]: 2025-10-11 03:37:50.624573776 +0000 UTC m=+0.206404607 container remove 34ff11ed9e8054f7f130a9528daaf9635dc25029440e31ccace8b1b258fa401f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_raman, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 03:37:50 compute-0 systemd[1]: libpod-conmon-34ff11ed9e8054f7f130a9528daaf9635dc25029440e31ccace8b1b258fa401f.scope: Deactivated successfully.
Oct 11 03:37:50 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:37:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:37:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:37:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:37:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:37:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:37:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:37:50 compute-0 podman[96121]: 2025-10-11 03:37:50.827314757 +0000 UTC m=+0.058986309 container create 03bfd18ec4d699ba88bec26a855f2c8181c5ddaa5614f8abde54aa0a88d5e683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:37:50 compute-0 systemd[1]: Started libpod-conmon-03bfd18ec4d699ba88bec26a855f2c8181c5ddaa5614f8abde54aa0a88d5e683.scope.
Oct 11 03:37:50 compute-0 podman[96121]: 2025-10-11 03:37:50.802411156 +0000 UTC m=+0.034082758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:50 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76796b1a5597f662d2e3c6845b9df77a677d889a930a6249490473e46448128b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:50 compute-0 python3[96120]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760153870.1654003-33108-119635093728092/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:37:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76796b1a5597f662d2e3c6845b9df77a677d889a930a6249490473e46448128b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76796b1a5597f662d2e3c6845b9df77a677d889a930a6249490473e46448128b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76796b1a5597f662d2e3c6845b9df77a677d889a930a6249490473e46448128b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:50 compute-0 podman[96121]: 2025-10-11 03:37:50.923833291 +0000 UTC m=+0.155504903 container init 03bfd18ec4d699ba88bec26a855f2c8181c5ddaa5614f8abde54aa0a88d5e683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 03:37:50 compute-0 podman[96121]: 2025-10-11 03:37:50.929534782 +0000 UTC m=+0.161206294 container start 03bfd18ec4d699ba88bec26a855f2c8181c5ddaa5614f8abde54aa0a88d5e683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mayer, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:37:50 compute-0 podman[96121]: 2025-10-11 03:37:50.934026408 +0000 UTC m=+0.165697960 container attach 03bfd18ec4d699ba88bec26a855f2c8181c5ddaa5614f8abde54aa0a88d5e683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 11 03:37:51 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 11 03:37:51 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 11 03:37:51 compute-0 ceph-mon[74273]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 11 03:37:51 compute-0 ceph-mon[74273]: Cluster is now healthy
Oct 11 03:37:51 compute-0 sudo[96242]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ummmudmpiboahcgahfgkbllfiimryzda ; /usr/bin/python3'
Oct 11 03:37:51 compute-0 sudo[96242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:37:51 compute-0 python3[96244]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:37:51 compute-0 sudo[96242]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:51 compute-0 priceless_mayer[96138]: {
Oct 11 03:37:51 compute-0 priceless_mayer[96138]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 03:37:51 compute-0 priceless_mayer[96138]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:37:51 compute-0 priceless_mayer[96138]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 03:37:51 compute-0 priceless_mayer[96138]:         "osd_id": 1,
Oct 11 03:37:51 compute-0 priceless_mayer[96138]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:37:51 compute-0 priceless_mayer[96138]:         "type": "bluestore"
Oct 11 03:37:51 compute-0 priceless_mayer[96138]:     },
Oct 11 03:37:51 compute-0 priceless_mayer[96138]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 03:37:51 compute-0 priceless_mayer[96138]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:37:51 compute-0 priceless_mayer[96138]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 03:37:51 compute-0 priceless_mayer[96138]:         "osd_id": 2,
Oct 11 03:37:51 compute-0 priceless_mayer[96138]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:37:51 compute-0 priceless_mayer[96138]:         "type": "bluestore"
Oct 11 03:37:51 compute-0 priceless_mayer[96138]:     },
Oct 11 03:37:51 compute-0 priceless_mayer[96138]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 03:37:51 compute-0 priceless_mayer[96138]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:37:51 compute-0 priceless_mayer[96138]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 03:37:51 compute-0 priceless_mayer[96138]:         "osd_id": 0,
Oct 11 03:37:51 compute-0 priceless_mayer[96138]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:37:51 compute-0 priceless_mayer[96138]:         "type": "bluestore"
Oct 11 03:37:51 compute-0 priceless_mayer[96138]:     }
Oct 11 03:37:51 compute-0 priceless_mayer[96138]: }
Oct 11 03:37:51 compute-0 systemd[1]: libpod-03bfd18ec4d699ba88bec26a855f2c8181c5ddaa5614f8abde54aa0a88d5e683.scope: Deactivated successfully.
Oct 11 03:37:51 compute-0 systemd[1]: libpod-03bfd18ec4d699ba88bec26a855f2c8181c5ddaa5614f8abde54aa0a88d5e683.scope: Consumed 1.024s CPU time.
Oct 11 03:37:51 compute-0 podman[96121]: 2025-10-11 03:37:51.94739917 +0000 UTC m=+1.179070682 container died 03bfd18ec4d699ba88bec26a855f2c8181c5ddaa5614f8abde54aa0a88d5e683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:37:51 compute-0 sudo[96346]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkrcfmphhpcidnxtzyairwbkzurpyfja ; /usr/bin/python3'
Oct 11 03:37:51 compute-0 sudo[96346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:37:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-76796b1a5597f662d2e3c6845b9df77a677d889a930a6249490473e46448128b-merged.mount: Deactivated successfully.
Oct 11 03:37:52 compute-0 podman[96121]: 2025-10-11 03:37:52.008681483 +0000 UTC m=+1.240352995 container remove 03bfd18ec4d699ba88bec26a855f2c8181c5ddaa5614f8abde54aa0a88d5e683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mayer, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:37:52 compute-0 systemd[1]: libpod-conmon-03bfd18ec4d699ba88bec26a855f2c8181c5ddaa5614f8abde54aa0a88d5e683.scope: Deactivated successfully.
Oct 11 03:37:52 compute-0 sudo[95870]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:37:52 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:37:52 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:52 compute-0 python3[96357]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760153871.278963-33122-83803056695939/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=5ba43dee4b8399b7dea7cb74d765ad90f714d9b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:37:52 compute-0 sudo[96359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:52 compute-0 sudo[96359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:52 compute-0 sudo[96346]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:52 compute-0 sudo[96359]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:52 compute-0 sudo[96384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 03:37:52 compute-0 sudo[96384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:52 compute-0 sudo[96384]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:52 compute-0 sudo[96456]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dekqwarydtddqzwkdyrryngbetfwqxjz ; /usr/bin/python3'
Oct 11 03:37:52 compute-0 sudo[96456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:37:52 compute-0 ceph-mon[74273]: pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:37:52 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:52 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:52 compute-0 python3[96458]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:37:52 compute-0 podman[96459]: 2025-10-11 03:37:52.691663312 +0000 UTC m=+0.063605170 container create 48343872c302a9d847233291e9789be576bcdcd460f5b43ab845a46d0069b2aa (image=quay.io/ceph/ceph:v18, name=youthful_wu, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:37:52 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:37:52 compute-0 systemd[1]: Started libpod-conmon-48343872c302a9d847233291e9789be576bcdcd460f5b43ab845a46d0069b2aa.scope.
Oct 11 03:37:52 compute-0 podman[96459]: 2025-10-11 03:37:52.6677592 +0000 UTC m=+0.039701098 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:37:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3452a462bf01571dfdef714eff27dd59590e9c12c54550520404cfea2036961/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3452a462bf01571dfdef714eff27dd59590e9c12c54550520404cfea2036961/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3452a462bf01571dfdef714eff27dd59590e9c12c54550520404cfea2036961/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:52 compute-0 podman[96459]: 2025-10-11 03:37:52.798539079 +0000 UTC m=+0.170480947 container init 48343872c302a9d847233291e9789be576bcdcd460f5b43ab845a46d0069b2aa (image=quay.io/ceph/ceph:v18, name=youthful_wu, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 03:37:52 compute-0 podman[96459]: 2025-10-11 03:37:52.810207177 +0000 UTC m=+0.182149075 container start 48343872c302a9d847233291e9789be576bcdcd460f5b43ab845a46d0069b2aa (image=quay.io/ceph/ceph:v18, name=youthful_wu, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 03:37:52 compute-0 podman[96459]: 2025-10-11 03:37:52.814601611 +0000 UTC m=+0.186543529 container attach 48343872c302a9d847233291e9789be576bcdcd460f5b43ab845a46d0069b2aa (image=quay.io/ceph/ceph:v18, name=youthful_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 03:37:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 11 03:37:53 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1884455704' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 11 03:37:53 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1884455704' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 11 03:37:53 compute-0 youthful_wu[96474]: 
Oct 11 03:37:53 compute-0 youthful_wu[96474]: [global]
Oct 11 03:37:53 compute-0 youthful_wu[96474]:         fsid = 23b68101-59a9-532f-ab6b-9acf78fb2162
Oct 11 03:37:53 compute-0 youthful_wu[96474]:         mon_host = 192.168.122.100
Oct 11 03:37:53 compute-0 systemd[1]: libpod-48343872c302a9d847233291e9789be576bcdcd460f5b43ab845a46d0069b2aa.scope: Deactivated successfully.
Oct 11 03:37:53 compute-0 podman[96459]: 2025-10-11 03:37:53.379401736 +0000 UTC m=+0.751343564 container died 48343872c302a9d847233291e9789be576bcdcd460f5b43ab845a46d0069b2aa (image=quay.io/ceph/ceph:v18, name=youthful_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 03:37:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3452a462bf01571dfdef714eff27dd59590e9c12c54550520404cfea2036961-merged.mount: Deactivated successfully.
Oct 11 03:37:53 compute-0 podman[96459]: 2025-10-11 03:37:53.433450236 +0000 UTC m=+0.805392094 container remove 48343872c302a9d847233291e9789be576bcdcd460f5b43ab845a46d0069b2aa (image=quay.io/ceph/ceph:v18, name=youthful_wu, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:37:53 compute-0 sudo[96499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:53 compute-0 sudo[96499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:53 compute-0 sudo[96499]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:53 compute-0 systemd[1]: libpod-conmon-48343872c302a9d847233291e9789be576bcdcd460f5b43ab845a46d0069b2aa.scope: Deactivated successfully.
Oct 11 03:37:53 compute-0 sudo[96456]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:53 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1884455704' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 11 03:37:53 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1884455704' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 11 03:37:53 compute-0 sudo[96536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:37:53 compute-0 sudo[96536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:53 compute-0 sudo[96536]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:53 compute-0 sudo[96561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:53 compute-0 sudo[96561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:53 compute-0 sudo[96561]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:53 compute-0 sudo[96608]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptjmonthzqgwqmuthwxjhmaqcpmjyfdl ; /usr/bin/python3'
Oct 11 03:37:53 compute-0 sudo[96608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:37:53 compute-0 sudo[96610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 03:37:53 compute-0 sudo[96610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:53 compute-0 python3[96616]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:37:53 compute-0 podman[96637]: 2025-10-11 03:37:53.872272398 +0000 UTC m=+0.064140265 container create 4652fed9c341224ea365878f7f5dae8dd1b511726ac1184687576ae4037b8730 (image=quay.io/ceph/ceph:v18, name=serene_antonelli, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:37:53 compute-0 systemd[1]: Started libpod-conmon-4652fed9c341224ea365878f7f5dae8dd1b511726ac1184687576ae4037b8730.scope.
Oct 11 03:37:53 compute-0 podman[96637]: 2025-10-11 03:37:53.848085848 +0000 UTC m=+0.039953695 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:37:53 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/753146ac3b23f3a8b5631d21b603c444174e5c7b67f9f57b087dbcf251512632/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/753146ac3b23f3a8b5631d21b603c444174e5c7b67f9f57b087dbcf251512632/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/753146ac3b23f3a8b5631d21b603c444174e5c7b67f9f57b087dbcf251512632/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:53 compute-0 podman[96637]: 2025-10-11 03:37:53.970923183 +0000 UTC m=+0.162791060 container init 4652fed9c341224ea365878f7f5dae8dd1b511726ac1184687576ae4037b8730 (image=quay.io/ceph/ceph:v18, name=serene_antonelli, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:37:53 compute-0 podman[96637]: 2025-10-11 03:37:53.978740793 +0000 UTC m=+0.170608630 container start 4652fed9c341224ea365878f7f5dae8dd1b511726ac1184687576ae4037b8730 (image=quay.io/ceph/ceph:v18, name=serene_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:37:53 compute-0 podman[96637]: 2025-10-11 03:37:53.981879121 +0000 UTC m=+0.173746958 container attach 4652fed9c341224ea365878f7f5dae8dd1b511726ac1184687576ae4037b8730 (image=quay.io/ceph/ceph:v18, name=serene_antonelli, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 03:37:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:37:54 compute-0 podman[96729]: 2025-10-11 03:37:54.33277044 +0000 UTC m=+0.095138257 container exec 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 03:37:54 compute-0 podman[96729]: 2025-10-11 03:37:54.423377637 +0000 UTC m=+0.185745454 container exec_died 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Oct 11 03:37:54 compute-0 ceph-mon[74273]: pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:37:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Oct 11 03:37:54 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1729856311' entity='client.admin' 
Oct 11 03:37:54 compute-0 serene_antonelli[96666]: set ssl_option
Oct 11 03:37:54 compute-0 systemd[1]: libpod-4652fed9c341224ea365878f7f5dae8dd1b511726ac1184687576ae4037b8730.scope: Deactivated successfully.
Oct 11 03:37:54 compute-0 podman[96637]: 2025-10-11 03:37:54.657329667 +0000 UTC m=+0.849197524 container died 4652fed9c341224ea365878f7f5dae8dd1b511726ac1184687576ae4037b8730 (image=quay.io/ceph/ceph:v18, name=serene_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 03:37:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-753146ac3b23f3a8b5631d21b603c444174e5c7b67f9f57b087dbcf251512632-merged.mount: Deactivated successfully.
Oct 11 03:37:54 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:37:54 compute-0 podman[96637]: 2025-10-11 03:37:54.708370293 +0000 UTC m=+0.900238130 container remove 4652fed9c341224ea365878f7f5dae8dd1b511726ac1184687576ae4037b8730 (image=quay.io/ceph/ceph:v18, name=serene_antonelli, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:37:54 compute-0 systemd[1]: libpod-conmon-4652fed9c341224ea365878f7f5dae8dd1b511726ac1184687576ae4037b8730.scope: Deactivated successfully.
Oct 11 03:37:54 compute-0 sudo[96608]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:54 compute-0 sudo[96901]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrkbmxwqlswkrejjvgeffbiqzyytwtec ; /usr/bin/python3'
Oct 11 03:37:54 compute-0 sudo[96901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:37:54 compute-0 sudo[96610]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:37:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:37:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:37:55 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:37:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 03:37:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:37:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 03:37:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:55 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 18fe6bf2-5d7f-4424-8e48-0466648923e7 does not exist
Oct 11 03:37:55 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev ad3ffb3d-519b-4a7c-a97f-200523513aa1 does not exist
Oct 11 03:37:55 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 3ff5881a-4c38-4c46-a466-877ec68c594b does not exist
Oct 11 03:37:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 03:37:55 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:37:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 03:37:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:37:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:37:55 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:37:55 compute-0 sudo[96912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:55 compute-0 sudo[96912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:55 compute-0 python3[96911]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:37:55 compute-0 sudo[96912]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:55 compute-0 podman[96937]: 2025-10-11 03:37:55.211554335 +0000 UTC m=+0.064756682 container create 9d585dc4e9dca9364fed91c1ed54788d9775fcad8bff3b337b990c0f5091a412 (image=quay.io/ceph/ceph:v18, name=tender_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 03:37:55 compute-0 sudo[96938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:37:55 compute-0 sudo[96938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:55 compute-0 sudo[96938]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:55 compute-0 systemd[1]: Started libpod-conmon-9d585dc4e9dca9364fed91c1ed54788d9775fcad8bff3b337b990c0f5091a412.scope.
Oct 11 03:37:55 compute-0 podman[96937]: 2025-10-11 03:37:55.190967506 +0000 UTC m=+0.044169823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:37:55 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:55 compute-0 sudo[96975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98a570f6f53e25bd808af4ff69790ceb160cb35ef189a3fd693db8d1b3a82cb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98a570f6f53e25bd808af4ff69790ceb160cb35ef189a3fd693db8d1b3a82cb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98a570f6f53e25bd808af4ff69790ceb160cb35ef189a3fd693db8d1b3a82cb/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:55 compute-0 sudo[96975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:55 compute-0 sudo[96975]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:55 compute-0 podman[96937]: 2025-10-11 03:37:55.325947373 +0000 UTC m=+0.179149710 container init 9d585dc4e9dca9364fed91c1ed54788d9775fcad8bff3b337b990c0f5091a412 (image=quay.io/ceph/ceph:v18, name=tender_swirles, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:37:55 compute-0 podman[96937]: 2025-10-11 03:37:55.335618155 +0000 UTC m=+0.188820492 container start 9d585dc4e9dca9364fed91c1ed54788d9775fcad8bff3b337b990c0f5091a412 (image=quay.io/ceph/ceph:v18, name=tender_swirles, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:37:55 compute-0 podman[96937]: 2025-10-11 03:37:55.340139962 +0000 UTC m=+0.193342349 container attach 9d585dc4e9dca9364fed91c1ed54788d9775fcad8bff3b337b990c0f5091a412 (image=quay.io/ceph/ceph:v18, name=tender_swirles, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:37:55 compute-0 sudo[97007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 03:37:55 compute-0 sudo[97007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:55 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1729856311' entity='client.admin' 
Oct 11 03:37:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:37:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:37:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:37:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:37:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:37:55 compute-0 podman[97093]: 2025-10-11 03:37:55.806178159 +0000 UTC m=+0.052722403 container create c42edc85d4ad5e0db6e52c7a0fd8a6715f52a1ab6890e14783c19aa02bfef77f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_solomon, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 03:37:55 compute-0 systemd[1]: Started libpod-conmon-c42edc85d4ad5e0db6e52c7a0fd8a6715f52a1ab6890e14783c19aa02bfef77f.scope.
Oct 11 03:37:55 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:55 compute-0 podman[97093]: 2025-10-11 03:37:55.787827783 +0000 UTC m=+0.034372067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:55 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:37:55 compute-0 ceph-mgr[74563]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Oct 11 03:37:55 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Oct 11 03:37:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 11 03:37:55 compute-0 podman[97093]: 2025-10-11 03:37:55.893515646 +0000 UTC m=+0.140059940 container init c42edc85d4ad5e0db6e52c7a0fd8a6715f52a1ab6890e14783c19aa02bfef77f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_solomon, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 03:37:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:55 compute-0 podman[97093]: 2025-10-11 03:37:55.898509886 +0000 UTC m=+0.145054140 container start c42edc85d4ad5e0db6e52c7a0fd8a6715f52a1ab6890e14783c19aa02bfef77f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_solomon, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:37:55 compute-0 tender_swirles[96991]: Scheduled rgw.rgw update...
Oct 11 03:37:55 compute-0 cranky_solomon[97110]: 167 167
Oct 11 03:37:55 compute-0 systemd[1]: libpod-c42edc85d4ad5e0db6e52c7a0fd8a6715f52a1ab6890e14783c19aa02bfef77f.scope: Deactivated successfully.
Oct 11 03:37:55 compute-0 conmon[97110]: conmon c42edc85d4ad5e0db6e5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c42edc85d4ad5e0db6e52c7a0fd8a6715f52a1ab6890e14783c19aa02bfef77f.scope/container/memory.events
Oct 11 03:37:55 compute-0 podman[97093]: 2025-10-11 03:37:55.910190305 +0000 UTC m=+0.156734559 container attach c42edc85d4ad5e0db6e52c7a0fd8a6715f52a1ab6890e14783c19aa02bfef77f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_solomon, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Oct 11 03:37:55 compute-0 podman[97093]: 2025-10-11 03:37:55.910599536 +0000 UTC m=+0.157143790 container died c42edc85d4ad5e0db6e52c7a0fd8a6715f52a1ab6890e14783c19aa02bfef77f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_solomon, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:37:55 compute-0 systemd[1]: libpod-9d585dc4e9dca9364fed91c1ed54788d9775fcad8bff3b337b990c0f5091a412.scope: Deactivated successfully.
Oct 11 03:37:55 compute-0 podman[96937]: 2025-10-11 03:37:55.917159421 +0000 UTC m=+0.770361738 container died 9d585dc4e9dca9364fed91c1ed54788d9775fcad8bff3b337b990c0f5091a412 (image=quay.io/ceph/ceph:v18, name=tender_swirles, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:37:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c98a570f6f53e25bd808af4ff69790ceb160cb35ef189a3fd693db8d1b3a82cb-merged.mount: Deactivated successfully.
Oct 11 03:37:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-060ae754f8e7552ca8f160c75a822ffce489b31c895b82e1d01493d3715392f3-merged.mount: Deactivated successfully.
Oct 11 03:37:55 compute-0 podman[97093]: 2025-10-11 03:37:55.951829336 +0000 UTC m=+0.198373580 container remove c42edc85d4ad5e0db6e52c7a0fd8a6715f52a1ab6890e14783c19aa02bfef77f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_solomon, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:37:55 compute-0 podman[96937]: 2025-10-11 03:37:55.977052255 +0000 UTC m=+0.830254552 container remove 9d585dc4e9dca9364fed91c1ed54788d9775fcad8bff3b337b990c0f5091a412 (image=quay.io/ceph/ceph:v18, name=tender_swirles, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:37:55 compute-0 systemd[1]: libpod-conmon-c42edc85d4ad5e0db6e52c7a0fd8a6715f52a1ab6890e14783c19aa02bfef77f.scope: Deactivated successfully.
Oct 11 03:37:55 compute-0 systemd[1]: libpod-conmon-9d585dc4e9dca9364fed91c1ed54788d9775fcad8bff3b337b990c0f5091a412.scope: Deactivated successfully.
Oct 11 03:37:55 compute-0 sudo[96901]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:56 compute-0 podman[97149]: 2025-10-11 03:37:56.14681692 +0000 UTC m=+0.051029556 container create 1ab00f42f10af41379e89cbc0ca897e09997e78bcfb3a7ddc8e304fd63bd3a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 03:37:56 compute-0 systemd[1]: Started libpod-conmon-1ab00f42f10af41379e89cbc0ca897e09997e78bcfb3a7ddc8e304fd63bd3a2a.scope.
Oct 11 03:37:56 compute-0 podman[97149]: 2025-10-11 03:37:56.129573865 +0000 UTC m=+0.033786471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:56 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c647d3b714b63fa7d9bc35ee69ebd3be8487c765bf9f238a01f3ccd7337366b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c647d3b714b63fa7d9bc35ee69ebd3be8487c765bf9f238a01f3ccd7337366b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c647d3b714b63fa7d9bc35ee69ebd3be8487c765bf9f238a01f3ccd7337366b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c647d3b714b63fa7d9bc35ee69ebd3be8487c765bf9f238a01f3ccd7337366b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c647d3b714b63fa7d9bc35ee69ebd3be8487c765bf9f238a01f3ccd7337366b5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:56 compute-0 podman[97149]: 2025-10-11 03:37:56.26021794 +0000 UTC m=+0.164430556 container init 1ab00f42f10af41379e89cbc0ca897e09997e78bcfb3a7ddc8e304fd63bd3a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_rubin, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 03:37:56 compute-0 podman[97149]: 2025-10-11 03:37:56.273298318 +0000 UTC m=+0.177510954 container start 1ab00f42f10af41379e89cbc0ca897e09997e78bcfb3a7ddc8e304fd63bd3a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 03:37:56 compute-0 podman[97149]: 2025-10-11 03:37:56.277356932 +0000 UTC m=+0.181569628 container attach 1ab00f42f10af41379e89cbc0ca897e09997e78bcfb3a7ddc8e304fd63bd3a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_rubin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 03:37:56 compute-0 ceph-mon[74273]: pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:37:56 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:56 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:37:57 compute-0 python3[97253]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:37:57 compute-0 beautiful_rubin[97165]: --> passed data devices: 0 physical, 3 LVM
Oct 11 03:37:57 compute-0 beautiful_rubin[97165]: --> relative data size: 1.0
Oct 11 03:37:57 compute-0 beautiful_rubin[97165]: --> All data devices are unavailable
Oct 11 03:37:57 compute-0 systemd[1]: libpod-1ab00f42f10af41379e89cbc0ca897e09997e78bcfb3a7ddc8e304fd63bd3a2a.scope: Deactivated successfully.
Oct 11 03:37:57 compute-0 podman[97149]: 2025-10-11 03:37:57.45700582 +0000 UTC m=+1.361218426 container died 1ab00f42f10af41379e89cbc0ca897e09997e78bcfb3a7ddc8e304fd63bd3a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:37:57 compute-0 systemd[1]: libpod-1ab00f42f10af41379e89cbc0ca897e09997e78bcfb3a7ddc8e304fd63bd3a2a.scope: Consumed 1.130s CPU time.
Oct 11 03:37:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-c647d3b714b63fa7d9bc35ee69ebd3be8487c765bf9f238a01f3ccd7337366b5-merged.mount: Deactivated successfully.
Oct 11 03:37:57 compute-0 podman[97149]: 2025-10-11 03:37:57.516337289 +0000 UTC m=+1.420549895 container remove 1ab00f42f10af41379e89cbc0ca897e09997e78bcfb3a7ddc8e304fd63bd3a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 03:37:57 compute-0 systemd[1]: libpod-conmon-1ab00f42f10af41379e89cbc0ca897e09997e78bcfb3a7ddc8e304fd63bd3a2a.scope: Deactivated successfully.
Oct 11 03:37:57 compute-0 sudo[97007]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:57 compute-0 sudo[97351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:57 compute-0 sudo[97351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:57 compute-0 sudo[97351]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:57 compute-0 ceph-mon[74273]: from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:37:57 compute-0 ceph-mon[74273]: Saving service rgw.rgw spec with placement compute-0
Oct 11 03:37:57 compute-0 python3[97350]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760153876.874006-33163-40594443639066/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:37:57 compute-0 sudo[97376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:37:57 compute-0 sudo[97376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:57 compute-0 sudo[97376]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:57 compute-0 sudo[97424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:57 compute-0 sudo[97424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:57 compute-0 sudo[97424]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:57 compute-0 sudo[97450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 03:37:57 compute-0 sudo[97450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:58 compute-0 sudo[97500]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjriicbrxugsyykjgecjdglelryfhpjn ; /usr/bin/python3'
Oct 11 03:37:58 compute-0 sudo[97500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:37:58 compute-0 python3[97509]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:37:58 compute-0 podman[97541]: 2025-10-11 03:37:58.263683397 +0000 UTC m=+0.034759498 container create db5864fad89970aadf290436151104936de34d7023d3ce0c086ef0ae80922003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 03:37:58 compute-0 podman[97548]: 2025-10-11 03:37:58.298544278 +0000 UTC m=+0.054678499 container create 6f8128ca31d34d1846010d8f9b8a06fc71271adcf82621eb813ce550f3834bef (image=quay.io/ceph/ceph:v18, name=zealous_euler, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:37:58 compute-0 systemd[1]: Started libpod-conmon-db5864fad89970aadf290436151104936de34d7023d3ce0c086ef0ae80922003.scope.
Oct 11 03:37:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:58 compute-0 systemd[1]: Started libpod-conmon-6f8128ca31d34d1846010d8f9b8a06fc71271adcf82621eb813ce550f3834bef.scope.
Oct 11 03:37:58 compute-0 podman[97541]: 2025-10-11 03:37:58.338902563 +0000 UTC m=+0.109978694 container init db5864fad89970aadf290436151104936de34d7023d3ce0c086ef0ae80922003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_allen, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 03:37:58 compute-0 podman[97541]: 2025-10-11 03:37:58.249031205 +0000 UTC m=+0.020107336 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:58 compute-0 podman[97541]: 2025-10-11 03:37:58.348999387 +0000 UTC m=+0.120075498 container start db5864fad89970aadf290436151104936de34d7023d3ce0c086ef0ae80922003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_allen, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:37:58 compute-0 podman[97541]: 2025-10-11 03:37:58.353218026 +0000 UTC m=+0.124294157 container attach db5864fad89970aadf290436151104936de34d7023d3ce0c086ef0ae80922003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:37:58 compute-0 sharp_allen[97570]: 167 167
Oct 11 03:37:58 compute-0 podman[97541]: 2025-10-11 03:37:58.354634595 +0000 UTC m=+0.125710706 container died db5864fad89970aadf290436151104936de34d7023d3ce0c086ef0ae80922003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_allen, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 03:37:58 compute-0 podman[97548]: 2025-10-11 03:37:58.266727623 +0000 UTC m=+0.022861824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:37:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:58 compute-0 systemd[1]: libpod-db5864fad89970aadf290436151104936de34d7023d3ce0c086ef0ae80922003.scope: Deactivated successfully.
Oct 11 03:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d81adaf7e6675753c60cd4c1851fee9eab4dd67e9c68f070da4c4fa665c62f44/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d81adaf7e6675753c60cd4c1851fee9eab4dd67e9c68f070da4c4fa665c62f44/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d81adaf7e6675753c60cd4c1851fee9eab4dd67e9c68f070da4c4fa665c62f44/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-9efb0d8c5e0f470984674c01851ac39751870ae8f7d5dc8630cc3184fa165852-merged.mount: Deactivated successfully.
Oct 11 03:37:58 compute-0 podman[97548]: 2025-10-11 03:37:58.393525619 +0000 UTC m=+0.149659850 container init 6f8128ca31d34d1846010d8f9b8a06fc71271adcf82621eb813ce550f3834bef (image=quay.io/ceph/ceph:v18, name=zealous_euler, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:37:58 compute-0 podman[97548]: 2025-10-11 03:37:58.400425213 +0000 UTC m=+0.156559434 container start 6f8128ca31d34d1846010d8f9b8a06fc71271adcf82621eb813ce550f3834bef (image=quay.io/ceph/ceph:v18, name=zealous_euler, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:37:58 compute-0 podman[97541]: 2025-10-11 03:37:58.400950268 +0000 UTC m=+0.172026369 container remove db5864fad89970aadf290436151104936de34d7023d3ce0c086ef0ae80922003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_allen, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:37:58 compute-0 podman[97548]: 2025-10-11 03:37:58.406820463 +0000 UTC m=+0.162954664 container attach 6f8128ca31d34d1846010d8f9b8a06fc71271adcf82621eb813ce550f3834bef (image=quay.io/ceph/ceph:v18, name=zealous_euler, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:37:58 compute-0 systemd[1]: libpod-conmon-db5864fad89970aadf290436151104936de34d7023d3ce0c086ef0ae80922003.scope: Deactivated successfully.
Oct 11 03:37:58 compute-0 podman[97599]: 2025-10-11 03:37:58.640692741 +0000 UTC m=+0.067237442 container create 111e637317a0c228dece93eb7b0b9314ca98b4edd19ce1a04ce01a4b6c57d382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 03:37:58 compute-0 ceph-mon[74273]: pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:37:58 compute-0 systemd[1]: Started libpod-conmon-111e637317a0c228dece93eb7b0b9314ca98b4edd19ce1a04ce01a4b6c57d382.scope.
Oct 11 03:37:58 compute-0 podman[97599]: 2025-10-11 03:37:58.613029563 +0000 UTC m=+0.039574284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:37:58 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:37:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41cc9175b920b3ec4287b4729f860c67684001cc295822988293716432eb9e09/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41cc9175b920b3ec4287b4729f860c67684001cc295822988293716432eb9e09/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41cc9175b920b3ec4287b4729f860c67684001cc295822988293716432eb9e09/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41cc9175b920b3ec4287b4729f860c67684001cc295822988293716432eb9e09/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:58 compute-0 podman[97599]: 2025-10-11 03:37:58.743487232 +0000 UTC m=+0.170031933 container init 111e637317a0c228dece93eb7b0b9314ca98b4edd19ce1a04ce01a4b6c57d382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 03:37:58 compute-0 podman[97599]: 2025-10-11 03:37:58.75160288 +0000 UTC m=+0.178147551 container start 111e637317a0c228dece93eb7b0b9314ca98b4edd19ce1a04ce01a4b6c57d382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_volhard, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 03:37:58 compute-0 podman[97599]: 2025-10-11 03:37:58.756029905 +0000 UTC m=+0.182574666 container attach 111e637317a0c228dece93eb7b0b9314ca98b4edd19ce1a04ce01a4b6c57d382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:37:58 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:37:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Oct 11 03:37:58 compute-0 ceph-mgr[74563]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct 11 03:37:58 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 11 03:37:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Oct 11 03:37:58 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 11 03:37:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Oct 11 03:37:58 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 11 03:37:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Oct 11 03:37:58 compute-0 ceph-mon[74273]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 11 03:37:58 compute-0 ceph-mon[74273]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 11 03:37:58 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0[74269]: 2025-10-11T03:37:58.961+0000 7f251ff25640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 11 03:37:58 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 11 03:37:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).mds e2 new map
Oct 11 03:37:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-11T03:37:58.962470+0000
                                           modified        2025-10-11T03:37:58.962524+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
Oct 11 03:37:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Oct 11 03:37:58 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Oct 11 03:37:58 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Oct 11 03:37:58 compute-0 ceph-mgr[74563]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Oct 11 03:37:58 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Oct 11 03:37:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 11 03:37:58 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:58 compute-0 ceph-mgr[74563]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct 11 03:37:59 compute-0 systemd[1]: libpod-6f8128ca31d34d1846010d8f9b8a06fc71271adcf82621eb813ce550f3834bef.scope: Deactivated successfully.
Oct 11 03:37:59 compute-0 podman[97548]: 2025-10-11 03:37:59.006129499 +0000 UTC m=+0.762263670 container died 6f8128ca31d34d1846010d8f9b8a06fc71271adcf82621eb813ce550f3834bef (image=quay.io/ceph/ceph:v18, name=zealous_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 03:37:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-d81adaf7e6675753c60cd4c1851fee9eab4dd67e9c68f070da4c4fa665c62f44-merged.mount: Deactivated successfully.
Oct 11 03:37:59 compute-0 podman[97548]: 2025-10-11 03:37:59.044714214 +0000 UTC m=+0.800848395 container remove 6f8128ca31d34d1846010d8f9b8a06fc71271adcf82621eb813ce550f3834bef (image=quay.io/ceph/ceph:v18, name=zealous_euler, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:37:59 compute-0 systemd[1]: libpod-conmon-6f8128ca31d34d1846010d8f9b8a06fc71271adcf82621eb813ce550f3834bef.scope: Deactivated successfully.
Oct 11 03:37:59 compute-0 sudo[97500]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:37:59 compute-0 sudo[97676]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvyvydoxvanjarcmyjsvdzmnauawbqrk ; /usr/bin/python3'
Oct 11 03:37:59 compute-0 sudo[97676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:37:59 compute-0 python3[97678]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:37:59 compute-0 podman[97681]: 2025-10-11 03:37:59.504212768 +0000 UTC m=+0.048963258 container create 525369c459b0cb612c921d02a1e72434f4050eae1e96a9fa93226dbe1e64fb29 (image=quay.io/ceph/ceph:v18, name=naughty_chandrasekhar, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 03:37:59 compute-0 systemd[1]: Started libpod-conmon-525369c459b0cb612c921d02a1e72434f4050eae1e96a9fa93226dbe1e64fb29.scope.
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]: {
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:     "0": [
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:         {
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "devices": [
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "/dev/loop3"
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             ],
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "lv_name": "ceph_lv0",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "lv_size": "21470642176",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "name": "ceph_lv0",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "tags": {
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.cluster_name": "ceph",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.crush_device_class": "",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.encrypted": "0",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.osd_id": "0",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.type": "block",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.vdo": "0"
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             },
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "type": "block",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "vg_name": "ceph_vg0"
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:         }
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:     ],
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:     "1": [
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:         {
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "devices": [
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "/dev/loop4"
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             ],
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "lv_name": "ceph_lv1",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "lv_size": "21470642176",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "name": "ceph_lv1",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "tags": {
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.cluster_name": "ceph",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.crush_device_class": "",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.encrypted": "0",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.osd_id": "1",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.type": "block",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.vdo": "0"
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             },
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "type": "block",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "vg_name": "ceph_vg1"
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:         }
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:     ],
Oct 11 03:37:59 compute-0 podman[97681]: 2025-10-11 03:37:59.482778325 +0000 UTC m=+0.027528795 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:     "2": [
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:         {
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "devices": [
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "/dev/loop5"
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             ],
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "lv_name": "ceph_lv2",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "lv_size": "21470642176",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "name": "ceph_lv2",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "tags": {
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.cluster_name": "ceph",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.crush_device_class": "",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.encrypted": "0",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.osd_id": "2",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.type": "block",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:                 "ceph.vdo": "0"
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             },
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "type": "block",
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:             "vg_name": "ceph_vg2"
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:         }
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]:     ]
Oct 11 03:37:59 compute-0 wonderful_volhard[97617]: }
Oct 11 03:37:59 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cb872911db50e3e8956d95dbcfa8a9569e37664f824eb038a5f996a2d3faa0b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cb872911db50e3e8956d95dbcfa8a9569e37664f824eb038a5f996a2d3faa0b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cb872911db50e3e8956d95dbcfa8a9569e37664f824eb038a5f996a2d3faa0b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 03:37:59 compute-0 podman[97681]: 2025-10-11 03:37:59.608011848 +0000 UTC m=+0.152762298 container init 525369c459b0cb612c921d02a1e72434f4050eae1e96a9fa93226dbe1e64fb29 (image=quay.io/ceph/ceph:v18, name=naughty_chandrasekhar, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 03:37:59 compute-0 systemd[1]: libpod-111e637317a0c228dece93eb7b0b9314ca98b4edd19ce1a04ce01a4b6c57d382.scope: Deactivated successfully.
Oct 11 03:37:59 compute-0 podman[97599]: 2025-10-11 03:37:59.611937528 +0000 UTC m=+1.038482199 container died 111e637317a0c228dece93eb7b0b9314ca98b4edd19ce1a04ce01a4b6c57d382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_volhard, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 03:37:59 compute-0 podman[97681]: 2025-10-11 03:37:59.617278428 +0000 UTC m=+0.162028888 container start 525369c459b0cb612c921d02a1e72434f4050eae1e96a9fa93226dbe1e64fb29 (image=quay.io/ceph/ceph:v18, name=naughty_chandrasekhar, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:37:59 compute-0 podman[97681]: 2025-10-11 03:37:59.624052369 +0000 UTC m=+0.168802829 container attach 525369c459b0cb612c921d02a1e72434f4050eae1e96a9fa93226dbe1e64fb29 (image=quay.io/ceph/ceph:v18, name=naughty_chandrasekhar, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 03:37:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-41cc9175b920b3ec4287b4729f860c67684001cc295822988293716432eb9e09-merged.mount: Deactivated successfully.
Oct 11 03:37:59 compute-0 podman[97599]: 2025-10-11 03:37:59.679093547 +0000 UTC m=+1.105638228 container remove 111e637317a0c228dece93eb7b0b9314ca98b4edd19ce1a04ce01a4b6c57d382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_volhard, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 03:37:59 compute-0 systemd[1]: libpod-conmon-111e637317a0c228dece93eb7b0b9314ca98b4edd19ce1a04ce01a4b6c57d382.scope: Deactivated successfully.
Oct 11 03:37:59 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 11 03:37:59 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 11 03:37:59 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 11 03:37:59 compute-0 ceph-mon[74273]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 11 03:37:59 compute-0 ceph-mon[74273]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 11 03:37:59 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 11 03:37:59 compute-0 ceph-mon[74273]: osdmap e31: 3 total, 3 up, 3 in
Oct 11 03:37:59 compute-0 ceph-mon[74273]: fsmap cephfs:0
Oct 11 03:37:59 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:37:59 compute-0 sudo[97450]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:59 compute-0 sudo[97716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:59 compute-0 sudo[97716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:59 compute-0 sudo[97716]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:59 compute-0 sudo[97741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:37:59 compute-0 sudo[97741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:59 compute-0 sudo[97741]: pam_unix(sudo:session): session closed for user root
Oct 11 03:37:59 compute-0 sudo[97766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:37:59 compute-0 sudo[97766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:37:59 compute-0 sudo[97766]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:00 compute-0 sudo[97805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 03:38:00 compute-0 sudo[97805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:00 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:38:00 compute-0 ceph-mgr[74563]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Oct 11 03:38:00 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Oct 11 03:38:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 11 03:38:00 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:00 compute-0 naughty_chandrasekhar[97698]: Scheduled mds.cephfs update...
Oct 11 03:38:00 compute-0 systemd[1]: libpod-525369c459b0cb612c921d02a1e72434f4050eae1e96a9fa93226dbe1e64fb29.scope: Deactivated successfully.
Oct 11 03:38:00 compute-0 podman[97681]: 2025-10-11 03:38:00.200389629 +0000 UTC m=+0.745140079 container died 525369c459b0cb612c921d02a1e72434f4050eae1e96a9fa93226dbe1e64fb29 (image=quay.io/ceph/ceph:v18, name=naughty_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:38:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cb872911db50e3e8956d95dbcfa8a9569e37664f824eb038a5f996a2d3faa0b-merged.mount: Deactivated successfully.
Oct 11 03:38:00 compute-0 podman[97681]: 2025-10-11 03:38:00.244791257 +0000 UTC m=+0.789541717 container remove 525369c459b0cb612c921d02a1e72434f4050eae1e96a9fa93226dbe1e64fb29 (image=quay.io/ceph/ceph:v18, name=naughty_chandrasekhar, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 03:38:00 compute-0 sudo[97676]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:00 compute-0 systemd[1]: libpod-conmon-525369c459b0cb612c921d02a1e72434f4050eae1e96a9fa93226dbe1e64fb29.scope: Deactivated successfully.
Oct 11 03:38:00 compute-0 podman[97890]: 2025-10-11 03:38:00.384930449 +0000 UTC m=+0.037169317 container create c52eb6761536f8af871492ff7828fab418475173089e692f36497d7e2e76e35c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 03:38:00 compute-0 systemd[1]: Started libpod-conmon-c52eb6761536f8af871492ff7828fab418475173089e692f36497d7e2e76e35c.scope.
Oct 11 03:38:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:00 compute-0 podman[97890]: 2025-10-11 03:38:00.450629067 +0000 UTC m=+0.102867985 container init c52eb6761536f8af871492ff7828fab418475173089e692f36497d7e2e76e35c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_elgamal, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:38:00 compute-0 podman[97890]: 2025-10-11 03:38:00.456355498 +0000 UTC m=+0.108594366 container start c52eb6761536f8af871492ff7828fab418475173089e692f36497d7e2e76e35c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_elgamal, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 03:38:00 compute-0 suspicious_elgamal[97905]: 167 167
Oct 11 03:38:00 compute-0 podman[97890]: 2025-10-11 03:38:00.460377991 +0000 UTC m=+0.112616859 container attach c52eb6761536f8af871492ff7828fab418475173089e692f36497d7e2e76e35c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_elgamal, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:38:00 compute-0 podman[97890]: 2025-10-11 03:38:00.461990806 +0000 UTC m=+0.114229634 container died c52eb6761536f8af871492ff7828fab418475173089e692f36497d7e2e76e35c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_elgamal, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:38:00 compute-0 systemd[1]: libpod-c52eb6761536f8af871492ff7828fab418475173089e692f36497d7e2e76e35c.scope: Deactivated successfully.
Oct 11 03:38:00 compute-0 podman[97890]: 2025-10-11 03:38:00.368697552 +0000 UTC m=+0.020936400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:38:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-38fe2618a56008e1246d85c5ce9ef9f5d3397a67a471ab940e1cb16b99940280-merged.mount: Deactivated successfully.
Oct 11 03:38:00 compute-0 podman[97890]: 2025-10-11 03:38:00.497140905 +0000 UTC m=+0.149379733 container remove c52eb6761536f8af871492ff7828fab418475173089e692f36497d7e2e76e35c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:38:00 compute-0 systemd[1]: libpod-conmon-c52eb6761536f8af871492ff7828fab418475173089e692f36497d7e2e76e35c.scope: Deactivated successfully.
Oct 11 03:38:00 compute-0 podman[97927]: 2025-10-11 03:38:00.670442869 +0000 UTC m=+0.050519352 container create 4a1783f6a20d7af864f579e287de41f25196721deadd2413a61ad2e94e640273 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:38:00 compute-0 ceph-mon[74273]: pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:00 compute-0 ceph-mon[74273]: from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:38:00 compute-0 ceph-mon[74273]: Saving service mds.cephfs spec with placement compute-0
Oct 11 03:38:00 compute-0 ceph-mon[74273]: from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 03:38:00 compute-0 ceph-mon[74273]: Saving service mds.cephfs spec with placement compute-0
Oct 11 03:38:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:00 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:00 compute-0 systemd[1]: Started libpod-conmon-4a1783f6a20d7af864f579e287de41f25196721deadd2413a61ad2e94e640273.scope.
Oct 11 03:38:00 compute-0 podman[97927]: 2025-10-11 03:38:00.646710112 +0000 UTC m=+0.026786585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:38:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03c195a4e8099807edc165f0681f79fff34ec039431ba5750fb7ba76e88a2da9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03c195a4e8099807edc165f0681f79fff34ec039431ba5750fb7ba76e88a2da9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03c195a4e8099807edc165f0681f79fff34ec039431ba5750fb7ba76e88a2da9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03c195a4e8099807edc165f0681f79fff34ec039431ba5750fb7ba76e88a2da9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:00 compute-0 podman[97927]: 2025-10-11 03:38:00.76896832 +0000 UTC m=+0.149044813 container init 4a1783f6a20d7af864f579e287de41f25196721deadd2413a61ad2e94e640273 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leavitt, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:38:00 compute-0 podman[97927]: 2025-10-11 03:38:00.783081107 +0000 UTC m=+0.163157570 container start 4a1783f6a20d7af864f579e287de41f25196721deadd2413a61ad2e94e640273 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 03:38:00 compute-0 podman[97927]: 2025-10-11 03:38:00.787132551 +0000 UTC m=+0.167209054 container attach 4a1783f6a20d7af864f579e287de41f25196721deadd2413a61ad2e94e640273 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leavitt, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:38:00 compute-0 sudo[98023]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eykpcdmdvmelidtattbcslbsldwlgfiq ; /usr/bin/python3'
Oct 11 03:38:00 compute-0 sudo[98023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:38:01 compute-0 python3[98025]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:38:01 compute-0 sudo[98023]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:01 compute-0 sudo[98096]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khisufvidydqutlaandznbmaujesnhxr ; /usr/bin/python3'
Oct 11 03:38:01 compute-0 sudo[98096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:38:01 compute-0 python3[98098]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760153880.6880875-33193-141664848516825/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=5ffab1b62c7b96c69504627db7d5c17b04f06e25 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:38:01 compute-0 sudo[98096]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:01 compute-0 ceph-mon[74273]: pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:01 compute-0 sudo[98172]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktlskmdcommweabznhygyehuzydrpmyn ; /usr/bin/python3'
Oct 11 03:38:01 compute-0 sudo[98172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:38:01 compute-0 amazing_leavitt[97973]: {
Oct 11 03:38:01 compute-0 amazing_leavitt[97973]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 03:38:01 compute-0 amazing_leavitt[97973]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:38:01 compute-0 amazing_leavitt[97973]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 03:38:01 compute-0 amazing_leavitt[97973]:         "osd_id": 1,
Oct 11 03:38:01 compute-0 amazing_leavitt[97973]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:38:01 compute-0 amazing_leavitt[97973]:         "type": "bluestore"
Oct 11 03:38:01 compute-0 amazing_leavitt[97973]:     },
Oct 11 03:38:01 compute-0 amazing_leavitt[97973]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 03:38:01 compute-0 amazing_leavitt[97973]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:38:01 compute-0 amazing_leavitt[97973]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 03:38:01 compute-0 amazing_leavitt[97973]:         "osd_id": 2,
Oct 11 03:38:01 compute-0 amazing_leavitt[97973]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:38:01 compute-0 amazing_leavitt[97973]:         "type": "bluestore"
Oct 11 03:38:01 compute-0 amazing_leavitt[97973]:     },
Oct 11 03:38:01 compute-0 amazing_leavitt[97973]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 03:38:01 compute-0 amazing_leavitt[97973]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:38:01 compute-0 amazing_leavitt[97973]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 03:38:01 compute-0 amazing_leavitt[97973]:         "osd_id": 0,
Oct 11 03:38:01 compute-0 amazing_leavitt[97973]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:38:01 compute-0 amazing_leavitt[97973]:         "type": "bluestore"
Oct 11 03:38:01 compute-0 amazing_leavitt[97973]:     }
Oct 11 03:38:01 compute-0 amazing_leavitt[97973]: }
Oct 11 03:38:01 compute-0 systemd[1]: libpod-4a1783f6a20d7af864f579e287de41f25196721deadd2413a61ad2e94e640273.scope: Deactivated successfully.
Oct 11 03:38:01 compute-0 podman[97927]: 2025-10-11 03:38:01.829330663 +0000 UTC m=+1.209407146 container died 4a1783f6a20d7af864f579e287de41f25196721deadd2413a61ad2e94e640273 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:38:01 compute-0 systemd[1]: libpod-4a1783f6a20d7af864f579e287de41f25196721deadd2413a61ad2e94e640273.scope: Consumed 1.051s CPU time.
Oct 11 03:38:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-03c195a4e8099807edc165f0681f79fff34ec039431ba5750fb7ba76e88a2da9-merged.mount: Deactivated successfully.
Oct 11 03:38:01 compute-0 podman[97927]: 2025-10-11 03:38:01.896936694 +0000 UTC m=+1.277013157 container remove 4a1783f6a20d7af864f579e287de41f25196721deadd2413a61ad2e94e640273 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 03:38:01 compute-0 systemd[1]: libpod-conmon-4a1783f6a20d7af864f579e287de41f25196721deadd2413a61ad2e94e640273.scope: Deactivated successfully.
Oct 11 03:38:01 compute-0 python3[98176]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:38:01 compute-0 sudo[97805]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:38:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:38:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:02 compute-0 podman[98189]: 2025-10-11 03:38:02.010164549 +0000 UTC m=+0.061403458 container create bfd0b3c5df5896892c00d92a0355f0cacc705635a08130f8df82edef2ad0a70d (image=quay.io/ceph/ceph:v18, name=zen_swartz, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:38:02 compute-0 systemd[1]: Started libpod-conmon-bfd0b3c5df5896892c00d92a0355f0cacc705635a08130f8df82edef2ad0a70d.scope.
Oct 11 03:38:02 compute-0 sudo[98200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:02 compute-0 sudo[98200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:02 compute-0 sudo[98200]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:02 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:02 compute-0 podman[98189]: 2025-10-11 03:38:01.981018999 +0000 UTC m=+0.032257938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:38:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c77e5cf4a03884338ba39b9d259eaf4b542605751099a1cdb825e5c7b9fa55a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c77e5cf4a03884338ba39b9d259eaf4b542605751099a1cdb825e5c7b9fa55a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:02 compute-0 podman[98189]: 2025-10-11 03:38:02.103210576 +0000 UTC m=+0.154449575 container init bfd0b3c5df5896892c00d92a0355f0cacc705635a08130f8df82edef2ad0a70d (image=quay.io/ceph/ceph:v18, name=zen_swartz, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 03:38:02 compute-0 podman[98189]: 2025-10-11 03:38:02.110931463 +0000 UTC m=+0.162170382 container start bfd0b3c5df5896892c00d92a0355f0cacc705635a08130f8df82edef2ad0a70d (image=quay.io/ceph/ceph:v18, name=zen_swartz, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 03:38:02 compute-0 podman[98189]: 2025-10-11 03:38:02.114807082 +0000 UTC m=+0.166046031 container attach bfd0b3c5df5896892c00d92a0355f0cacc705635a08130f8df82edef2ad0a70d (image=quay.io/ceph/ceph:v18, name=zen_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:38:02 compute-0 sudo[98233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 03:38:02 compute-0 sudo[98233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:02 compute-0 sudo[98233]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:02 compute-0 sudo[98259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:02 compute-0 sudo[98259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:02 compute-0 sudo[98259]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:02 compute-0 sudo[98284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:38:02 compute-0 sudo[98284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:02 compute-0 sudo[98284]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:02 compute-0 sudo[98309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:02 compute-0 sudo[98309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:02 compute-0 sudo[98309]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:02 compute-0 sudo[98334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 03:38:02 compute-0 sudo[98334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:02 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Oct 11 03:38:02 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1494615825' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 11 03:38:02 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1494615825' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 11 03:38:02 compute-0 systemd[1]: libpod-bfd0b3c5df5896892c00d92a0355f0cacc705635a08130f8df82edef2ad0a70d.scope: Deactivated successfully.
Oct 11 03:38:02 compute-0 podman[98189]: 2025-10-11 03:38:02.752551379 +0000 UTC m=+0.803790328 container died bfd0b3c5df5896892c00d92a0355f0cacc705635a08130f8df82edef2ad0a70d (image=quay.io/ceph/ceph:v18, name=zen_swartz, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:38:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c77e5cf4a03884338ba39b9d259eaf4b542605751099a1cdb825e5c7b9fa55a-merged.mount: Deactivated successfully.
Oct 11 03:38:02 compute-0 podman[98189]: 2025-10-11 03:38:02.811444465 +0000 UTC m=+0.862683374 container remove bfd0b3c5df5896892c00d92a0355f0cacc705635a08130f8df82edef2ad0a70d (image=quay.io/ceph/ceph:v18, name=zen_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Oct 11 03:38:02 compute-0 systemd[1]: libpod-conmon-bfd0b3c5df5896892c00d92a0355f0cacc705635a08130f8df82edef2ad0a70d.scope: Deactivated successfully.
Oct 11 03:38:02 compute-0 sudo[98172]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:02 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1494615825' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 11 03:38:02 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1494615825' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 11 03:38:03 compute-0 podman[98464]: 2025-10-11 03:38:03.071403387 +0000 UTC m=+0.086761372 container exec 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:38:03 compute-0 podman[98464]: 2025-10-11 03:38:03.163691742 +0000 UTC m=+0.179049737 container exec_died 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 03:38:03 compute-0 sudo[98554]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czlyotpqhcocilyfhdxzohmouccdvgii ; /usr/bin/python3'
Oct 11 03:38:03 compute-0 sudo[98554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:38:03 compute-0 python3[98559]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:38:03 compute-0 podman[98595]: 2025-10-11 03:38:03.615545531 +0000 UTC m=+0.037928718 container create cff0ad5c3c77a0b3aa5ab94ffba0f36e5518f26c25d41979635989564e6c9819 (image=quay.io/ceph/ceph:v18, name=awesome_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:38:03 compute-0 systemd[1]: Started libpod-conmon-cff0ad5c3c77a0b3aa5ab94ffba0f36e5518f26c25d41979635989564e6c9819.scope.
Oct 11 03:38:03 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e6ab6efc5b3baebc00b26e2fbee1216f0e311a2529ccbf29516485330adf258/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e6ab6efc5b3baebc00b26e2fbee1216f0e311a2529ccbf29516485330adf258/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:03 compute-0 podman[98595]: 2025-10-11 03:38:03.598957394 +0000 UTC m=+0.021340591 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:38:03 compute-0 podman[98595]: 2025-10-11 03:38:03.708791674 +0000 UTC m=+0.131174851 container init cff0ad5c3c77a0b3aa5ab94ffba0f36e5518f26c25d41979635989564e6c9819 (image=quay.io/ceph/ceph:v18, name=awesome_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 03:38:03 compute-0 podman[98595]: 2025-10-11 03:38:03.717131498 +0000 UTC m=+0.139514685 container start cff0ad5c3c77a0b3aa5ab94ffba0f36e5518f26c25d41979635989564e6c9819 (image=quay.io/ceph/ceph:v18, name=awesome_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:38:03 compute-0 podman[98595]: 2025-10-11 03:38:03.720272617 +0000 UTC m=+0.142655804 container attach cff0ad5c3c77a0b3aa5ab94ffba0f36e5518f26c25d41979635989564e6c9819 (image=quay.io/ceph/ceph:v18, name=awesome_maxwell, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 03:38:03 compute-0 sudo[98334]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:38:03 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:38:03 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:38:03 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:38:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 03:38:03 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:38:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 03:38:03 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:03 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 9b71b817-c3c7-432b-91a3-d78e13050af9 does not exist
Oct 11 03:38:03 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev cb9ba3c9-e977-4883-aa7d-668ca3e5ddde does not exist
Oct 11 03:38:03 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev aa266e6e-cd76-45ad-89ad-7178a5c83653 does not exist
Oct 11 03:38:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 03:38:03 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:38:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 03:38:03 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:38:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:38:03 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:38:03 compute-0 sudo[98631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:03 compute-0 sudo[98631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:03 compute-0 sudo[98631]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:03 compute-0 sudo[98656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:38:03 compute-0 sudo[98656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:03 compute-0 sudo[98656]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:03 compute-0 ceph-mon[74273]: pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:03 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:03 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:03 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:38:03 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:38:03 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:03 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:38:03 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:38:03 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:38:04 compute-0 sudo[98681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:04 compute-0 sudo[98681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:04 compute-0 sudo[98681]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:04 compute-0 sudo[98723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 03:38:04 compute-0 sudo[98723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:38:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 11 03:38:04 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3289330622' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 11 03:38:04 compute-0 awesome_maxwell[98626]: 
Oct 11 03:38:04 compute-0 awesome_maxwell[98626]: {"fsid":"23b68101-59a9-532f-ab6b-9acf78fb2162","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":150,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":31,"num_osds":3,"num_up_osds":3,"osd_up_since":1760153852,"num_in_osds":3,"osd_in_since":1760153824,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83828736,"bytes_avail":64328097792,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-11T03:37:22.666401+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Oct 11 03:38:04 compute-0 systemd[1]: libpod-cff0ad5c3c77a0b3aa5ab94ffba0f36e5518f26c25d41979635989564e6c9819.scope: Deactivated successfully.
Oct 11 03:38:04 compute-0 podman[98595]: 2025-10-11 03:38:04.353928979 +0000 UTC m=+0.776312166 container died cff0ad5c3c77a0b3aa5ab94ffba0f36e5518f26c25d41979635989564e6c9819 (image=quay.io/ceph/ceph:v18, name=awesome_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:38:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e6ab6efc5b3baebc00b26e2fbee1216f0e311a2529ccbf29516485330adf258-merged.mount: Deactivated successfully.
Oct 11 03:38:04 compute-0 podman[98595]: 2025-10-11 03:38:04.417836116 +0000 UTC m=+0.840219293 container remove cff0ad5c3c77a0b3aa5ab94ffba0f36e5518f26c25d41979635989564e6c9819 (image=quay.io/ceph/ceph:v18, name=awesome_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 03:38:04 compute-0 systemd[1]: libpod-conmon-cff0ad5c3c77a0b3aa5ab94ffba0f36e5518f26c25d41979635989564e6c9819.scope: Deactivated successfully.
Oct 11 03:38:04 compute-0 sudo[98554]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:04 compute-0 podman[98805]: 2025-10-11 03:38:04.492215788 +0000 UTC m=+0.042217918 container create fa26f673559602627dfe3742654e9f280240bfd5cefd6fc18568f85ed2ee4bc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cori, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 03:38:04 compute-0 systemd[1]: Started libpod-conmon-fa26f673559602627dfe3742654e9f280240bfd5cefd6fc18568f85ed2ee4bc1.scope.
Oct 11 03:38:04 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:04 compute-0 podman[98805]: 2025-10-11 03:38:04.552748451 +0000 UTC m=+0.102750581 container init fa26f673559602627dfe3742654e9f280240bfd5cefd6fc18568f85ed2ee4bc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cori, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:38:04 compute-0 podman[98805]: 2025-10-11 03:38:04.559796649 +0000 UTC m=+0.109798789 container start fa26f673559602627dfe3742654e9f280240bfd5cefd6fc18568f85ed2ee4bc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cori, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 03:38:04 compute-0 musing_cori[98821]: 167 167
Oct 11 03:38:04 compute-0 podman[98805]: 2025-10-11 03:38:04.562988198 +0000 UTC m=+0.112990318 container attach fa26f673559602627dfe3742654e9f280240bfd5cefd6fc18568f85ed2ee4bc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cori, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 03:38:04 compute-0 systemd[1]: libpod-fa26f673559602627dfe3742654e9f280240bfd5cefd6fc18568f85ed2ee4bc1.scope: Deactivated successfully.
Oct 11 03:38:04 compute-0 podman[98805]: 2025-10-11 03:38:04.565609682 +0000 UTC m=+0.115611812 container died fa26f673559602627dfe3742654e9f280240bfd5cefd6fc18568f85ed2ee4bc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:38:04 compute-0 podman[98805]: 2025-10-11 03:38:04.474914471 +0000 UTC m=+0.024916601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:38:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-d958999c0d3580cc831952c1f108328dbf0c826d20b6f97d88f32d5de89e8c15-merged.mount: Deactivated successfully.
Oct 11 03:38:04 compute-0 sudo[98856]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzwdjukgjnahdlbfypuhmtzcvqoflnbf ; /usr/bin/python3'
Oct 11 03:38:04 compute-0 sudo[98856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:38:04 compute-0 podman[98805]: 2025-10-11 03:38:04.602178101 +0000 UTC m=+0.152180221 container remove fa26f673559602627dfe3742654e9f280240bfd5cefd6fc18568f85ed2ee4bc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cori, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 03:38:04 compute-0 systemd[1]: libpod-conmon-fa26f673559602627dfe3742654e9f280240bfd5cefd6fc18568f85ed2ee4bc1.scope: Deactivated successfully.
Oct 11 03:38:04 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:04 compute-0 python3[98862]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:38:04 compute-0 podman[98871]: 2025-10-11 03:38:04.802847775 +0000 UTC m=+0.053837916 container create 82bf09d01c0eb14000813af5b64013835db8d1a86f9e417828d0a779724d680f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_turing, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 03:38:04 compute-0 podman[98870]: 2025-10-11 03:38:04.807497295 +0000 UTC m=+0.055951344 container create 0a7182b79bdb4a4f1f8ebc72658f630c43266dac97ffdbbdd622116386c84d36 (image=quay.io/ceph/ceph:v18, name=relaxed_satoshi, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 03:38:04 compute-0 systemd[1]: Started libpod-conmon-0a7182b79bdb4a4f1f8ebc72658f630c43266dac97ffdbbdd622116386c84d36.scope.
Oct 11 03:38:04 compute-0 systemd[1]: Started libpod-conmon-82bf09d01c0eb14000813af5b64013835db8d1a86f9e417828d0a779724d680f.scope.
Oct 11 03:38:04 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:04 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30661c66c6c1ec6e1d6ea2ba0c209887488c6dc36d01a79b8b0e3b06e658095/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30661c66c6c1ec6e1d6ea2ba0c209887488c6dc36d01a79b8b0e3b06e658095/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e7cb0cea96fa4256b5c186553ba5d8099fb6af83f3755a234f5e99bbda68f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:04 compute-0 podman[98871]: 2025-10-11 03:38:04.777669507 +0000 UTC m=+0.028659738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:38:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e7cb0cea96fa4256b5c186553ba5d8099fb6af83f3755a234f5e99bbda68f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e7cb0cea96fa4256b5c186553ba5d8099fb6af83f3755a234f5e99bbda68f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e7cb0cea96fa4256b5c186553ba5d8099fb6af83f3755a234f5e99bbda68f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e7cb0cea96fa4256b5c186553ba5d8099fb6af83f3755a234f5e99bbda68f9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:04 compute-0 podman[98870]: 2025-10-11 03:38:04.789834229 +0000 UTC m=+0.038288298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:38:04 compute-0 podman[98871]: 2025-10-11 03:38:04.909604057 +0000 UTC m=+0.160594218 container init 82bf09d01c0eb14000813af5b64013835db8d1a86f9e417828d0a779724d680f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:38:04 compute-0 podman[98870]: 2025-10-11 03:38:04.912941381 +0000 UTC m=+0.161395460 container init 0a7182b79bdb4a4f1f8ebc72658f630c43266dac97ffdbbdd622116386c84d36 (image=quay.io/ceph/ceph:v18, name=relaxed_satoshi, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:38:04 compute-0 podman[98871]: 2025-10-11 03:38:04.915354209 +0000 UTC m=+0.166344350 container start 82bf09d01c0eb14000813af5b64013835db8d1a86f9e417828d0a779724d680f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_turing, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:38:04 compute-0 podman[98870]: 2025-10-11 03:38:04.91788302 +0000 UTC m=+0.166337079 container start 0a7182b79bdb4a4f1f8ebc72658f630c43266dac97ffdbbdd622116386c84d36 (image=quay.io/ceph/ceph:v18, name=relaxed_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:38:04 compute-0 podman[98871]: 2025-10-11 03:38:04.918288492 +0000 UTC m=+0.169278673 container attach 82bf09d01c0eb14000813af5b64013835db8d1a86f9e417828d0a779724d680f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:38:04 compute-0 podman[98870]: 2025-10-11 03:38:04.921388199 +0000 UTC m=+0.169867598 container attach 0a7182b79bdb4a4f1f8ebc72658f630c43266dac97ffdbbdd622116386c84d36 (image=quay.io/ceph/ceph:v18, name=relaxed_satoshi, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:38:04 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3289330622' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 11 03:38:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 03:38:05 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2128785698' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 03:38:05 compute-0 relaxed_satoshi[98902]: 
Oct 11 03:38:05 compute-0 relaxed_satoshi[98902]: {"epoch":1,"fsid":"23b68101-59a9-532f-ab6b-9acf78fb2162","modified":"2025-10-11T03:35:28.851489Z","created":"2025-10-11T03:35:28.851489Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Oct 11 03:38:05 compute-0 relaxed_satoshi[98902]: dumped monmap epoch 1
Oct 11 03:38:05 compute-0 systemd[1]: libpod-0a7182b79bdb4a4f1f8ebc72658f630c43266dac97ffdbbdd622116386c84d36.scope: Deactivated successfully.
Oct 11 03:38:05 compute-0 podman[98870]: 2025-10-11 03:38:05.605463018 +0000 UTC m=+0.853917107 container died 0a7182b79bdb4a4f1f8ebc72658f630c43266dac97ffdbbdd622116386c84d36 (image=quay.io/ceph/ceph:v18, name=relaxed_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 03:38:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-d30661c66c6c1ec6e1d6ea2ba0c209887488c6dc36d01a79b8b0e3b06e658095-merged.mount: Deactivated successfully.
Oct 11 03:38:05 compute-0 podman[98870]: 2025-10-11 03:38:05.655573737 +0000 UTC m=+0.904027786 container remove 0a7182b79bdb4a4f1f8ebc72658f630c43266dac97ffdbbdd622116386c84d36 (image=quay.io/ceph/ceph:v18, name=relaxed_satoshi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct 11 03:38:05 compute-0 systemd[1]: libpod-conmon-0a7182b79bdb4a4f1f8ebc72658f630c43266dac97ffdbbdd622116386c84d36.scope: Deactivated successfully.
Oct 11 03:38:05 compute-0 sudo[98856]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:05 compute-0 sweet_turing[98904]: --> passed data devices: 0 physical, 3 LVM
Oct 11 03:38:05 compute-0 sweet_turing[98904]: --> relative data size: 1.0
Oct 11 03:38:05 compute-0 sweet_turing[98904]: --> All data devices are unavailable
Oct 11 03:38:05 compute-0 systemd[1]: libpod-82bf09d01c0eb14000813af5b64013835db8d1a86f9e417828d0a779724d680f.scope: Deactivated successfully.
Oct 11 03:38:05 compute-0 conmon[98904]: conmon 82bf09d01c0eb1400081 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82bf09d01c0eb14000813af5b64013835db8d1a86f9e417828d0a779724d680f.scope/container/memory.events
Oct 11 03:38:05 compute-0 podman[98871]: 2025-10-11 03:38:05.955851853 +0000 UTC m=+1.206842024 container died 82bf09d01c0eb14000813af5b64013835db8d1a86f9e417828d0a779724d680f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:38:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-31e7cb0cea96fa4256b5c186553ba5d8099fb6af83f3755a234f5e99bbda68f9-merged.mount: Deactivated successfully.
Oct 11 03:38:05 compute-0 ceph-mon[74273]: pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:05 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2128785698' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 03:38:06 compute-0 podman[98871]: 2025-10-11 03:38:06.021744206 +0000 UTC m=+1.272734377 container remove 82bf09d01c0eb14000813af5b64013835db8d1a86f9e417828d0a779724d680f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_turing, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:38:06 compute-0 systemd[1]: libpod-conmon-82bf09d01c0eb14000813af5b64013835db8d1a86f9e417828d0a779724d680f.scope: Deactivated successfully.
Oct 11 03:38:06 compute-0 sudo[98723]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:06 compute-0 sudo[98981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:06 compute-0 sudo[98981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:06 compute-0 sudo[98981]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:06 compute-0 sudo[99030]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfnbpqjwgocbpfwzrwujpymabcjsuqqz ; /usr/bin/python3'
Oct 11 03:38:06 compute-0 sudo[99030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:38:06 compute-0 sudo[99029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:38:06 compute-0 sudo[99029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:06 compute-0 sudo[99029]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:06 compute-0 sudo[99057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:06 compute-0 sudo[99057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:06 compute-0 sudo[99057]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:06 compute-0 python3[99049]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:38:06 compute-0 sudo[99082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 03:38:06 compute-0 sudo[99082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:06 compute-0 podman[99089]: 2025-10-11 03:38:06.453813508 +0000 UTC m=+0.061967254 container create 21ac6dc1756fba13bcadad63e4b795a0a501352f1aa87fa3341e0cad295af502 (image=quay.io/ceph/ceph:v18, name=compassionate_cohen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 03:38:06 compute-0 systemd[1]: Started libpod-conmon-21ac6dc1756fba13bcadad63e4b795a0a501352f1aa87fa3341e0cad295af502.scope.
Oct 11 03:38:06 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d7a07a68963e752f07e8ff2a1999a1d575ee9e44e277879e8c699557a7d903c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d7a07a68963e752f07e8ff2a1999a1d575ee9e44e277879e8c699557a7d903c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:06 compute-0 podman[99089]: 2025-10-11 03:38:06.440324729 +0000 UTC m=+0.048478485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:38:06 compute-0 podman[99089]: 2025-10-11 03:38:06.544389456 +0000 UTC m=+0.152543222 container init 21ac6dc1756fba13bcadad63e4b795a0a501352f1aa87fa3341e0cad295af502 (image=quay.io/ceph/ceph:v18, name=compassionate_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:38:06 compute-0 podman[99089]: 2025-10-11 03:38:06.555061296 +0000 UTC m=+0.163215042 container start 21ac6dc1756fba13bcadad63e4b795a0a501352f1aa87fa3341e0cad295af502 (image=quay.io/ceph/ceph:v18, name=compassionate_cohen, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 03:38:06 compute-0 podman[99089]: 2025-10-11 03:38:06.55946099 +0000 UTC m=+0.167614766 container attach 21ac6dc1756fba13bcadad63e4b795a0a501352f1aa87fa3341e0cad295af502 (image=quay.io/ceph/ceph:v18, name=compassionate_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 03:38:06 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:06 compute-0 podman[99165]: 2025-10-11 03:38:06.803686509 +0000 UTC m=+0.073774406 container create c3379b7b0ca76103a012627af36acff6745e169d6be0b06e61390b4b5b890604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 03:38:06 compute-0 systemd[1]: Started libpod-conmon-c3379b7b0ca76103a012627af36acff6745e169d6be0b06e61390b4b5b890604.scope.
Oct 11 03:38:06 compute-0 podman[99165]: 2025-10-11 03:38:06.767430359 +0000 UTC m=+0.037518306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:38:06 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:06 compute-0 podman[99165]: 2025-10-11 03:38:06.895256614 +0000 UTC m=+0.165344541 container init c3379b7b0ca76103a012627af36acff6745e169d6be0b06e61390b4b5b890604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_proskuriakova, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 03:38:06 compute-0 podman[99165]: 2025-10-11 03:38:06.90152027 +0000 UTC m=+0.171608167 container start c3379b7b0ca76103a012627af36acff6745e169d6be0b06e61390b4b5b890604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_proskuriakova, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 03:38:06 compute-0 sad_proskuriakova[99182]: 167 167
Oct 11 03:38:06 compute-0 podman[99165]: 2025-10-11 03:38:06.906285664 +0000 UTC m=+0.176373571 container attach c3379b7b0ca76103a012627af36acff6745e169d6be0b06e61390b4b5b890604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 03:38:06 compute-0 systemd[1]: libpod-c3379b7b0ca76103a012627af36acff6745e169d6be0b06e61390b4b5b890604.scope: Deactivated successfully.
Oct 11 03:38:06 compute-0 podman[99206]: 2025-10-11 03:38:06.973900496 +0000 UTC m=+0.042877687 container died c3379b7b0ca76103a012627af36acff6745e169d6be0b06e61390b4b5b890604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_proskuriakova, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:38:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b2b1aff311b07ef3f47b506eb11fc2372b2ef469258e95127e135be9732a89b-merged.mount: Deactivated successfully.
Oct 11 03:38:07 compute-0 podman[99206]: 2025-10-11 03:38:07.018310925 +0000 UTC m=+0.087288146 container remove c3379b7b0ca76103a012627af36acff6745e169d6be0b06e61390b4b5b890604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Oct 11 03:38:07 compute-0 systemd[1]: libpod-conmon-c3379b7b0ca76103a012627af36acff6745e169d6be0b06e61390b4b5b890604.scope: Deactivated successfully.
Oct 11 03:38:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Oct 11 03:38:07 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2855202115' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 11 03:38:07 compute-0 podman[99228]: 2025-10-11 03:38:07.229437933 +0000 UTC m=+0.043403702 container create 4daf3477028552ceab453a7ccc615f17c98742ec7b4b7294d0d6be3866b39d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:38:07 compute-0 compassionate_cohen[99122]: [client.openstack]
Oct 11 03:38:07 compute-0 compassionate_cohen[99122]:         key = AQBn0OloAAAAABAA5vR2TXb/EBj5CZlyN7iICQ==
Oct 11 03:38:07 compute-0 compassionate_cohen[99122]:         caps mgr = "allow *"
Oct 11 03:38:07 compute-0 compassionate_cohen[99122]:         caps mon = "profile rbd"
Oct 11 03:38:07 compute-0 compassionate_cohen[99122]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Oct 11 03:38:07 compute-0 systemd[1]: libpod-21ac6dc1756fba13bcadad63e4b795a0a501352f1aa87fa3341e0cad295af502.scope: Deactivated successfully.
Oct 11 03:38:07 compute-0 podman[99089]: 2025-10-11 03:38:07.251875134 +0000 UTC m=+0.860028880 container died 21ac6dc1756fba13bcadad63e4b795a0a501352f1aa87fa3341e0cad295af502 (image=quay.io/ceph/ceph:v18, name=compassionate_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 03:38:07 compute-0 systemd[1]: Started libpod-conmon-4daf3477028552ceab453a7ccc615f17c98742ec7b4b7294d0d6be3866b39d79.scope.
Oct 11 03:38:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d7a07a68963e752f07e8ff2a1999a1d575ee9e44e277879e8c699557a7d903c-merged.mount: Deactivated successfully.
Oct 11 03:38:07 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:07 compute-0 podman[99228]: 2025-10-11 03:38:07.209595525 +0000 UTC m=+0.023561304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:38:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4762be9151122176ca6629c3ee13022918fbc2b101d1ba766826407bab17e3ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4762be9151122176ca6629c3ee13022918fbc2b101d1ba766826407bab17e3ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4762be9151122176ca6629c3ee13022918fbc2b101d1ba766826407bab17e3ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4762be9151122176ca6629c3ee13022918fbc2b101d1ba766826407bab17e3ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:07 compute-0 podman[99089]: 2025-10-11 03:38:07.310342789 +0000 UTC m=+0.918496545 container remove 21ac6dc1756fba13bcadad63e4b795a0a501352f1aa87fa3341e0cad295af502 (image=quay.io/ceph/ceph:v18, name=compassionate_cohen, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 03:38:07 compute-0 systemd[1]: libpod-conmon-21ac6dc1756fba13bcadad63e4b795a0a501352f1aa87fa3341e0cad295af502.scope: Deactivated successfully.
Oct 11 03:38:07 compute-0 podman[99228]: 2025-10-11 03:38:07.334721334 +0000 UTC m=+0.148687143 container init 4daf3477028552ceab453a7ccc615f17c98742ec7b4b7294d0d6be3866b39d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:38:07 compute-0 podman[99228]: 2025-10-11 03:38:07.343565953 +0000 UTC m=+0.157531752 container start 4daf3477028552ceab453a7ccc615f17c98742ec7b4b7294d0d6be3866b39d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 03:38:07 compute-0 sudo[99030]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:07 compute-0 podman[99228]: 2025-10-11 03:38:07.349178011 +0000 UTC m=+0.163143820 container attach 4daf3477028552ceab453a7ccc615f17c98742ec7b4b7294d0d6be3866b39d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:38:08 compute-0 ceph-mon[74273]: pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:08 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2855202115' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]: {
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:     "0": [
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:         {
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "devices": [
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "/dev/loop3"
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             ],
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "lv_name": "ceph_lv0",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "lv_size": "21470642176",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "name": "ceph_lv0",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "tags": {
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.cluster_name": "ceph",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.crush_device_class": "",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.encrypted": "0",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.osd_id": "0",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.type": "block",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.vdo": "0"
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             },
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "type": "block",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "vg_name": "ceph_vg0"
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:         }
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:     ],
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:     "1": [
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:         {
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "devices": [
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "/dev/loop4"
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             ],
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "lv_name": "ceph_lv1",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "lv_size": "21470642176",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "name": "ceph_lv1",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "tags": {
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.cluster_name": "ceph",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.crush_device_class": "",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.encrypted": "0",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.osd_id": "1",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.type": "block",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.vdo": "0"
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             },
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "type": "block",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "vg_name": "ceph_vg1"
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:         }
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:     ],
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:     "2": [
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:         {
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "devices": [
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "/dev/loop5"
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             ],
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "lv_name": "ceph_lv2",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "lv_size": "21470642176",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "name": "ceph_lv2",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "tags": {
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.cluster_name": "ceph",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.crush_device_class": "",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.encrypted": "0",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.osd_id": "2",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.type": "block",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:                 "ceph.vdo": "0"
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             },
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "type": "block",
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:             "vg_name": "ceph_vg2"
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:         }
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]:     ]
Oct 11 03:38:08 compute-0 pedantic_albattani[99255]: }
Oct 11 03:38:08 compute-0 systemd[1]: libpod-4daf3477028552ceab453a7ccc615f17c98742ec7b4b7294d0d6be3866b39d79.scope: Deactivated successfully.
Oct 11 03:38:08 compute-0 podman[99228]: 2025-10-11 03:38:08.139211181 +0000 UTC m=+0.953176980 container died 4daf3477028552ceab453a7ccc615f17c98742ec7b4b7294d0d6be3866b39d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:38:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-4762be9151122176ca6629c3ee13022918fbc2b101d1ba766826407bab17e3ce-merged.mount: Deactivated successfully.
Oct 11 03:38:08 compute-0 podman[99228]: 2025-10-11 03:38:08.219385896 +0000 UTC m=+1.033351695 container remove 4daf3477028552ceab453a7ccc615f17c98742ec7b4b7294d0d6be3866b39d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 03:38:08 compute-0 systemd[1]: libpod-conmon-4daf3477028552ceab453a7ccc615f17c98742ec7b4b7294d0d6be3866b39d79.scope: Deactivated successfully.
Oct 11 03:38:08 compute-0 sudo[99082]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:08 compute-0 sudo[99282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:08 compute-0 sudo[99282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:08 compute-0 sudo[99282]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:08 compute-0 sudo[99312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:38:08 compute-0 sudo[99312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:08 compute-0 sudo[99312]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:08 compute-0 sudo[99362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:08 compute-0 sudo[99362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:08 compute-0 sudo[99362]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:08 compute-0 sudo[99409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 03:38:08 compute-0 sudo[99409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:08 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:08 compute-0 sudo[99556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wevkonfhsolftywapyoiemcjqxibkzxk ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760153888.3887005-33265-267665439658645/async_wrapper.py j990836003519 30 /home/zuul/.ansible/tmp/ansible-tmp-1760153888.3887005-33265-267665439658645/AnsiballZ_command.py _'
Oct 11 03:38:08 compute-0 sudo[99556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:38:08 compute-0 podman[99571]: 2025-10-11 03:38:08.875683061 +0000 UTC m=+0.043328224 container create fbf8b7499afd7ec5e09ba58128b01399957a5d7c04db27e2709a290abde300df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 03:38:08 compute-0 systemd[1]: Started libpod-conmon-fbf8b7499afd7ec5e09ba58128b01399957a5d7c04db27e2709a290abde300df.scope.
Oct 11 03:38:08 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:08 compute-0 podman[99571]: 2025-10-11 03:38:08.950094353 +0000 UTC m=+0.117739586 container init fbf8b7499afd7ec5e09ba58128b01399957a5d7c04db27e2709a290abde300df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mendel, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:38:08 compute-0 podman[99571]: 2025-10-11 03:38:08.857172173 +0000 UTC m=+0.024817326 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:38:08 compute-0 podman[99571]: 2025-10-11 03:38:08.960117814 +0000 UTC m=+0.127762947 container start fbf8b7499afd7ec5e09ba58128b01399957a5d7c04db27e2709a290abde300df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:38:08 compute-0 ansible-async_wrapper.py[99569]: Invoked with j990836003519 30 /home/zuul/.ansible/tmp/ansible-tmp-1760153888.3887005-33265-267665439658645/AnsiballZ_command.py _
Oct 11 03:38:08 compute-0 hopeful_mendel[99587]: 167 167
Oct 11 03:38:08 compute-0 systemd[1]: libpod-fbf8b7499afd7ec5e09ba58128b01399957a5d7c04db27e2709a290abde300df.scope: Deactivated successfully.
Oct 11 03:38:08 compute-0 podman[99571]: 2025-10-11 03:38:08.968535369 +0000 UTC m=+0.136180602 container attach fbf8b7499afd7ec5e09ba58128b01399957a5d7c04db27e2709a290abde300df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mendel, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 03:38:08 compute-0 conmon[99587]: conmon fbf8b7499afd7ec5e09b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fbf8b7499afd7ec5e09ba58128b01399957a5d7c04db27e2709a290abde300df.scope/container/memory.events
Oct 11 03:38:08 compute-0 podman[99571]: 2025-10-11 03:38:08.971620565 +0000 UTC m=+0.139265738 container died fbf8b7499afd7ec5e09ba58128b01399957a5d7c04db27e2709a290abde300df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mendel, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 03:38:08 compute-0 ansible-async_wrapper.py[99595]: Starting module and watcher
Oct 11 03:38:08 compute-0 ansible-async_wrapper.py[99595]: Start watching 99596 (30)
Oct 11 03:38:08 compute-0 ansible-async_wrapper.py[99596]: Start module (99596)
Oct 11 03:38:08 compute-0 ansible-async_wrapper.py[99569]: Return async_wrapper task started.
Oct 11 03:38:08 compute-0 sudo[99556]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-05a878d82d4d93d16b2fd2db97b6899676d37292df807135f5a8edef6cf9b364-merged.mount: Deactivated successfully.
Oct 11 03:38:09 compute-0 podman[99571]: 2025-10-11 03:38:09.022105978 +0000 UTC m=+0.189751111 container remove fbf8b7499afd7ec5e09ba58128b01399957a5d7c04db27e2709a290abde300df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mendel, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Oct 11 03:38:09 compute-0 systemd[1]: libpod-conmon-fbf8b7499afd7ec5e09ba58128b01399957a5d7c04db27e2709a290abde300df.scope: Deactivated successfully.
Oct 11 03:38:09 compute-0 python3[99597]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:38:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:38:09 compute-0 podman[99611]: 2025-10-11 03:38:09.236062125 +0000 UTC m=+0.075761461 container create 43037db19f72fc3e1428f36db7c100cb582c0f14f3aef779889daf65dee0e074 (image=quay.io/ceph/ceph:v18, name=funny_kepler, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 03:38:09 compute-0 podman[99622]: 2025-10-11 03:38:09.242339651 +0000 UTC m=+0.059558988 container create 51eb1db2509dd1553d699c27573d27df320f64a407586b7af964588de857412b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 03:38:09 compute-0 systemd[1]: Started libpod-conmon-43037db19f72fc3e1428f36db7c100cb582c0f14f3aef779889daf65dee0e074.scope.
Oct 11 03:38:09 compute-0 podman[99611]: 2025-10-11 03:38:09.203683389 +0000 UTC m=+0.043382775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:38:09 compute-0 systemd[1]: Started libpod-conmon-51eb1db2509dd1553d699c27573d27df320f64a407586b7af964588de857412b.scope.
Oct 11 03:38:09 compute-0 podman[99622]: 2025-10-11 03:38:09.208750401 +0000 UTC m=+0.025969768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:38:09 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd9201fc727024e18079db29157f36afc9f7a68d7168c6b1e4037a933d3f486f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd9201fc727024e18079db29157f36afc9f7a68d7168c6b1e4037a933d3f486f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:09 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22863c87000e4bc6dcc204596009f8af9997c17d410564fe0c1bcccc9d9428b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22863c87000e4bc6dcc204596009f8af9997c17d410564fe0c1bcccc9d9428b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22863c87000e4bc6dcc204596009f8af9997c17d410564fe0c1bcccc9d9428b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22863c87000e4bc6dcc204596009f8af9997c17d410564fe0c1bcccc9d9428b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:09 compute-0 podman[99611]: 2025-10-11 03:38:09.334814339 +0000 UTC m=+0.174513665 container init 43037db19f72fc3e1428f36db7c100cb582c0f14f3aef779889daf65dee0e074 (image=quay.io/ceph/ceph:v18, name=funny_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 03:38:09 compute-0 podman[99611]: 2025-10-11 03:38:09.345015444 +0000 UTC m=+0.184714780 container start 43037db19f72fc3e1428f36db7c100cb582c0f14f3aef779889daf65dee0e074 (image=quay.io/ceph/ceph:v18, name=funny_kepler, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:38:09 compute-0 podman[99622]: 2025-10-11 03:38:09.351264669 +0000 UTC m=+0.168484046 container init 51eb1db2509dd1553d699c27573d27df320f64a407586b7af964588de857412b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chaplygin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:38:09 compute-0 podman[99611]: 2025-10-11 03:38:09.354558391 +0000 UTC m=+0.194257717 container attach 43037db19f72fc3e1428f36db7c100cb582c0f14f3aef779889daf65dee0e074 (image=quay.io/ceph/ceph:v18, name=funny_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 03:38:09 compute-0 podman[99622]: 2025-10-11 03:38:09.356974029 +0000 UTC m=+0.174193396 container start 51eb1db2509dd1553d699c27573d27df320f64a407586b7af964588de857412b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chaplygin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:38:09 compute-0 podman[99622]: 2025-10-11 03:38:09.361083464 +0000 UTC m=+0.178302871 container attach 51eb1db2509dd1553d699c27573d27df320f64a407586b7af964588de857412b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chaplygin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:38:09 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 03:38:09 compute-0 funny_kepler[99649]: 
Oct 11 03:38:09 compute-0 funny_kepler[99649]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 11 03:38:09 compute-0 systemd[1]: libpod-43037db19f72fc3e1428f36db7c100cb582c0f14f3aef779889daf65dee0e074.scope: Deactivated successfully.
Oct 11 03:38:09 compute-0 conmon[99649]: conmon 43037db19f72fc3e1428 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-43037db19f72fc3e1428f36db7c100cb582c0f14f3aef779889daf65dee0e074.scope/container/memory.events
Oct 11 03:38:09 compute-0 podman[99679]: 2025-10-11 03:38:09.979940511 +0000 UTC m=+0.034571579 container died 43037db19f72fc3e1428f36db7c100cb582c0f14f3aef779889daf65dee0e074 (image=quay.io/ceph/ceph:v18, name=funny_kepler, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 03:38:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd9201fc727024e18079db29157f36afc9f7a68d7168c6b1e4037a933d3f486f-merged.mount: Deactivated successfully.
Oct 11 03:38:10 compute-0 ceph-mon[74273]: pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:10 compute-0 podman[99679]: 2025-10-11 03:38:10.032463641 +0000 UTC m=+0.087094619 container remove 43037db19f72fc3e1428f36db7c100cb582c0f14f3aef779889daf65dee0e074 (image=quay.io/ceph/ceph:v18, name=funny_kepler, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:38:10 compute-0 systemd[1]: libpod-conmon-43037db19f72fc3e1428f36db7c100cb582c0f14f3aef779889daf65dee0e074.scope: Deactivated successfully.
Oct 11 03:38:10 compute-0 ansible-async_wrapper.py[99596]: Module complete (99596)
Oct 11 03:38:10 compute-0 sudo[99751]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxxgjmkemlqzzygzrxnrzfqlfidmgfpi ; /usr/bin/python3'
Oct 11 03:38:10 compute-0 sudo[99751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:38:10 compute-0 python3[99756]: ansible-ansible.legacy.async_status Invoked with jid=j990836003519.99569 mode=status _async_dir=/root/.ansible_async
Oct 11 03:38:10 compute-0 sudo[99751]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:10 compute-0 nostalgic_chaplygin[99651]: {
Oct 11 03:38:10 compute-0 nostalgic_chaplygin[99651]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 03:38:10 compute-0 nostalgic_chaplygin[99651]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:38:10 compute-0 nostalgic_chaplygin[99651]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 03:38:10 compute-0 nostalgic_chaplygin[99651]:         "osd_id": 1,
Oct 11 03:38:10 compute-0 nostalgic_chaplygin[99651]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:38:10 compute-0 nostalgic_chaplygin[99651]:         "type": "bluestore"
Oct 11 03:38:10 compute-0 nostalgic_chaplygin[99651]:     },
Oct 11 03:38:10 compute-0 nostalgic_chaplygin[99651]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 03:38:10 compute-0 nostalgic_chaplygin[99651]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:38:10 compute-0 nostalgic_chaplygin[99651]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 03:38:10 compute-0 nostalgic_chaplygin[99651]:         "osd_id": 2,
Oct 11 03:38:10 compute-0 nostalgic_chaplygin[99651]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:38:10 compute-0 nostalgic_chaplygin[99651]:         "type": "bluestore"
Oct 11 03:38:10 compute-0 nostalgic_chaplygin[99651]:     },
Oct 11 03:38:10 compute-0 nostalgic_chaplygin[99651]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 03:38:10 compute-0 nostalgic_chaplygin[99651]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:38:10 compute-0 nostalgic_chaplygin[99651]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 03:38:10 compute-0 nostalgic_chaplygin[99651]:         "osd_id": 0,
Oct 11 03:38:10 compute-0 nostalgic_chaplygin[99651]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:38:10 compute-0 nostalgic_chaplygin[99651]:         "type": "bluestore"
Oct 11 03:38:10 compute-0 nostalgic_chaplygin[99651]:     }
Oct 11 03:38:10 compute-0 nostalgic_chaplygin[99651]: }
Oct 11 03:38:10 compute-0 systemd[1]: libpod-51eb1db2509dd1553d699c27573d27df320f64a407586b7af964588de857412b.scope: Deactivated successfully.
Oct 11 03:38:10 compute-0 podman[99622]: 2025-10-11 03:38:10.403442322 +0000 UTC m=+1.220661689 container died 51eb1db2509dd1553d699c27573d27df320f64a407586b7af964588de857412b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chaplygin, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 03:38:10 compute-0 systemd[1]: libpod-51eb1db2509dd1553d699c27573d27df320f64a407586b7af964588de857412b.scope: Consumed 1.043s CPU time.
Oct 11 03:38:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-22863c87000e4bc6dcc204596009f8af9997c17d410564fe0c1bcccc9d9428b6-merged.mount: Deactivated successfully.
Oct 11 03:38:10 compute-0 podman[99622]: 2025-10-11 03:38:10.459379797 +0000 UTC m=+1.276599134 container remove 51eb1db2509dd1553d699c27573d27df320f64a407586b7af964588de857412b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 03:38:10 compute-0 systemd[1]: libpod-conmon-51eb1db2509dd1553d699c27573d27df320f64a407586b7af964588de857412b.scope: Deactivated successfully.
Oct 11 03:38:10 compute-0 sudo[99830]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lirgdtxstkyqpuxicioorvnpmehlmspa ; /usr/bin/python3'
Oct 11 03:38:10 compute-0 sudo[99830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:38:10 compute-0 sudo[99409]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:38:10 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:38:10 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:10 compute-0 ceph-mgr[74563]: [progress INFO root] update: starting ev ff3c409c-fe38-4cb2-bee2-e7feb7da8c95 (Updating rgw.rgw deployment (+1 -> 1))
Oct 11 03:38:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.xmbhit", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Oct 11 03:38:10 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.xmbhit", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 11 03:38:10 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.xmbhit", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 11 03:38:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Oct 11 03:38:10 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:38:10 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:38:10 compute-0 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.xmbhit on compute-0
Oct 11 03:38:10 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.xmbhit on compute-0
Oct 11 03:38:10 compute-0 sudo[99833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:10 compute-0 sudo[99833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:10 compute-0 sudo[99833]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:10 compute-0 python3[99832]: ansible-ansible.legacy.async_status Invoked with jid=j990836003519.99569 mode=cleanup _async_dir=/root/.ansible_async
Oct 11 03:38:10 compute-0 sudo[99830]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:10 compute-0 sudo[99858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:38:10 compute-0 sudo[99858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:10 compute-0 sudo[99858]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:10 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:10 compute-0 sudo[99883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:10 compute-0 sudo[99883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:10 compute-0 sudo[99883]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:10 compute-0 sudo[99908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162
Oct 11 03:38:10 compute-0 sudo[99908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:11 compute-0 sudo[99970]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuddpwcuehnrgbqozfbiqrahdpfaxbhl ; /usr/bin/python3'
Oct 11 03:38:11 compute-0 sudo[99970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:38:11 compute-0 python3[99973]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:38:11 compute-0 podman[100000]: 2025-10-11 03:38:11.246997277 +0000 UTC m=+0.047606434 container create 883ffb86df27d1a4fc0812d4b5cb2a4f3b0be9f655f17ab22df87d25fcdd9379 (image=quay.io/ceph/ceph:v18, name=happy_turing, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:38:11 compute-0 podman[100001]: 2025-10-11 03:38:11.285819183 +0000 UTC m=+0.074918178 container create 95bacffb60fff116b5b2157fba9cfec6e66d1697b75546f11de679fe51f46e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:38:11 compute-0 systemd[1]: Started libpod-conmon-883ffb86df27d1a4fc0812d4b5cb2a4f3b0be9f655f17ab22df87d25fcdd9379.scope.
Oct 11 03:38:11 compute-0 podman[100000]: 2025-10-11 03:38:11.225298259 +0000 UTC m=+0.025907476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:38:11 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:11 compute-0 systemd[1]: Started libpod-conmon-95bacffb60fff116b5b2157fba9cfec6e66d1697b75546f11de679fe51f46e61.scope.
Oct 11 03:38:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff93fc5fc54966489438437f92dd672225bb3c7c7820e7b00c0a015ef9311d10/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff93fc5fc54966489438437f92dd672225bb3c7c7820e7b00c0a015ef9311d10/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:11 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:11 compute-0 podman[100001]: 2025-10-11 03:38:11.264460195 +0000 UTC m=+0.053559270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:38:11 compute-0 podman[100000]: 2025-10-11 03:38:11.363237019 +0000 UTC m=+0.163846206 container init 883ffb86df27d1a4fc0812d4b5cb2a4f3b0be9f655f17ab22df87d25fcdd9379 (image=quay.io/ceph/ceph:v18, name=happy_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 03:38:11 compute-0 podman[100001]: 2025-10-11 03:38:11.370468402 +0000 UTC m=+0.159567467 container init 95bacffb60fff116b5b2157fba9cfec6e66d1697b75546f11de679fe51f46e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:38:11 compute-0 podman[100001]: 2025-10-11 03:38:11.376725887 +0000 UTC m=+0.165824872 container start 95bacffb60fff116b5b2157fba9cfec6e66d1697b75546f11de679fe51f46e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 03:38:11 compute-0 podman[100001]: 2025-10-11 03:38:11.379261758 +0000 UTC m=+0.168360833 container attach 95bacffb60fff116b5b2157fba9cfec6e66d1697b75546f11de679fe51f46e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_liskov, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:38:11 compute-0 podman[100000]: 2025-10-11 03:38:11.379603147 +0000 UTC m=+0.180212294 container start 883ffb86df27d1a4fc0812d4b5cb2a4f3b0be9f655f17ab22df87d25fcdd9379 (image=quay.io/ceph/ceph:v18, name=happy_turing, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:38:11 compute-0 eloquent_liskov[100034]: 167 167
Oct 11 03:38:11 compute-0 podman[100000]: 2025-10-11 03:38:11.383258069 +0000 UTC m=+0.183867306 container attach 883ffb86df27d1a4fc0812d4b5cb2a4f3b0be9f655f17ab22df87d25fcdd9379 (image=quay.io/ceph/ceph:v18, name=happy_turing, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:38:11 compute-0 systemd[1]: libpod-95bacffb60fff116b5b2157fba9cfec6e66d1697b75546f11de679fe51f46e61.scope: Deactivated successfully.
Oct 11 03:38:11 compute-0 podman[100001]: 2025-10-11 03:38:11.383836796 +0000 UTC m=+0.172935841 container died 95bacffb60fff116b5b2157fba9cfec6e66d1697b75546f11de679fe51f46e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_liskov, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 03:38:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-7438d92f978c6450773895eb61e93c732fab3dbee128c61824ad05e866cf5c65-merged.mount: Deactivated successfully.
Oct 11 03:38:11 compute-0 podman[100001]: 2025-10-11 03:38:11.429206555 +0000 UTC m=+0.218305550 container remove 95bacffb60fff116b5b2157fba9cfec6e66d1697b75546f11de679fe51f46e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_liskov, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:38:11 compute-0 systemd[1]: libpod-conmon-95bacffb60fff116b5b2157fba9cfec6e66d1697b75546f11de679fe51f46e61.scope: Deactivated successfully.
Oct 11 03:38:11 compute-0 systemd[1]: Reloading.
Oct 11 03:38:11 compute-0 ceph-mon[74273]: from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 03:38:11 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:11 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:11 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.xmbhit", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 11 03:38:11 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.xmbhit", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 11 03:38:11 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:11 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:38:11 compute-0 ceph-mon[74273]: Deploying daemon rgw.rgw.compute-0.xmbhit on compute-0
Oct 11 03:38:11 compute-0 systemd-sysv-generator[100084]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:38:11 compute-0 systemd-rc-local-generator[100081]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:38:11 compute-0 systemd[1]: Reloading.
Oct 11 03:38:11 compute-0 systemd-rc-local-generator[100134]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:38:11 compute-0 systemd-sysv-generator[100137]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:38:12 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 03:38:12 compute-0 happy_turing[100029]: 
Oct 11 03:38:12 compute-0 happy_turing[100029]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 11 03:38:12 compute-0 podman[100000]: 2025-10-11 03:38:12.020996075 +0000 UTC m=+0.821605222 container died 883ffb86df27d1a4fc0812d4b5cb2a4f3b0be9f655f17ab22df87d25fcdd9379 (image=quay.io/ceph/ceph:v18, name=happy_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:38:12 compute-0 systemd[1]: libpod-883ffb86df27d1a4fc0812d4b5cb2a4f3b0be9f655f17ab22df87d25fcdd9379.scope: Deactivated successfully.
Oct 11 03:38:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff93fc5fc54966489438437f92dd672225bb3c7c7820e7b00c0a015ef9311d10-merged.mount: Deactivated successfully.
Oct 11 03:38:12 compute-0 podman[100000]: 2025-10-11 03:38:12.073065812 +0000 UTC m=+0.873674939 container remove 883ffb86df27d1a4fc0812d4b5cb2a4f3b0be9f655f17ab22df87d25fcdd9379 (image=quay.io/ceph/ceph:v18, name=happy_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 03:38:12 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.xmbhit for 23b68101-59a9-532f-ab6b-9acf78fb2162...
Oct 11 03:38:12 compute-0 systemd[1]: libpod-conmon-883ffb86df27d1a4fc0812d4b5cb2a4f3b0be9f655f17ab22df87d25fcdd9379.scope: Deactivated successfully.
Oct 11 03:38:12 compute-0 sudo[99970]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:12 compute-0 podman[100210]: 2025-10-11 03:38:12.383649672 +0000 UTC m=+0.060993228 container create 8a9ca5f46b9433e4056dd8057f4357da6be7e215e2ac67375a98f6cee9f72543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-rgw-rgw-compute-0-xmbhit, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:38:12 compute-0 podman[100210]: 2025-10-11 03:38:12.352050478 +0000 UTC m=+0.029394104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:38:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac2c44d6105808b33036ae152a6b0b5273d9d3f0b8f624d3071a2f13343a733/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac2c44d6105808b33036ae152a6b0b5273d9d3f0b8f624d3071a2f13343a733/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac2c44d6105808b33036ae152a6b0b5273d9d3f0b8f624d3071a2f13343a733/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac2c44d6105808b33036ae152a6b0b5273d9d3f0b8f624d3071a2f13343a733/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.xmbhit supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:12 compute-0 podman[100210]: 2025-10-11 03:38:12.469881475 +0000 UTC m=+0.147225021 container init 8a9ca5f46b9433e4056dd8057f4357da6be7e215e2ac67375a98f6cee9f72543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-rgw-rgw-compute-0-xmbhit, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 03:38:12 compute-0 podman[100210]: 2025-10-11 03:38:12.486717846 +0000 UTC m=+0.164061412 container start 8a9ca5f46b9433e4056dd8057f4357da6be7e215e2ac67375a98f6cee9f72543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-rgw-rgw-compute-0-xmbhit, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 03:38:12 compute-0 bash[100210]: 8a9ca5f46b9433e4056dd8057f4357da6be7e215e2ac67375a98f6cee9f72543
Oct 11 03:38:12 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.xmbhit for 23b68101-59a9-532f-ab6b-9acf78fb2162.
Oct 11 03:38:12 compute-0 ceph-mon[74273]: pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:12 compute-0 sudo[99908]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:38:12 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:38:12 compute-0 radosgw[100230]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct 11 03:38:12 compute-0 radosgw[100230]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Oct 11 03:38:12 compute-0 radosgw[100230]: framework: beast
Oct 11 03:38:12 compute-0 radosgw[100230]: framework conf key: endpoint, val: 192.168.122.100:8082
Oct 11 03:38:12 compute-0 radosgw[100230]: init_numa not setting numa affinity
Oct 11 03:38:12 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 11 03:38:12 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:12 compute-0 ceph-mgr[74563]: [progress INFO root] complete: finished ev ff3c409c-fe38-4cb2-bee2-e7feb7da8c95 (Updating rgw.rgw deployment (+1 -> 1))
Oct 11 03:38:12 compute-0 ceph-mgr[74563]: [progress INFO root] Completed event ff3c409c-fe38-4cb2-bee2-e7feb7da8c95 (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Oct 11 03:38:12 compute-0 ceph-mgr[74563]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Oct 11 03:38:12 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Oct 11 03:38:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 11 03:38:12 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 11 03:38:12 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:12 compute-0 ceph-mgr[74563]: [progress INFO root] update: starting ev 642cc273-8bdd-4292-9969-1a84b62dfd1b (Updating mds.cephfs deployment (+1 -> 1))
Oct 11 03:38:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.lkhlqa", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Oct 11 03:38:12 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.lkhlqa", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 11 03:38:12 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.lkhlqa", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 11 03:38:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:38:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:38:12 compute-0 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.lkhlqa on compute-0
Oct 11 03:38:12 compute-0 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.lkhlqa on compute-0
Oct 11 03:38:12 compute-0 sudo[100292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:12 compute-0 sudo[100292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:12 compute-0 sudo[100292]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:12 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:12 compute-0 sudo[100353]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltzhtwcsmpzvhhqguhnfqswpfzuwccts ; /usr/bin/python3'
Oct 11 03:38:12 compute-0 sudo[100353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:38:12 compute-0 sudo[100323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:38:12 compute-0 sudo[100323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:12 compute-0 sudo[100323]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:12 compute-0 sudo[100368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:12 compute-0 sudo[100368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:12 compute-0 sudo[100368]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:12 compute-0 python3[100365]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:38:12 compute-0 sudo[100393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162
Oct 11 03:38:12 compute-0 sudo[100393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:13 compute-0 podman[100416]: 2025-10-11 03:38:13.012677824 +0000 UTC m=+0.077163060 container create 01183d281d11990eb795453721dda86ac53f0e0eeb4fa6c9e0c80701b0cfca68 (image=quay.io/ceph/ceph:v18, name=thirsty_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 03:38:13 compute-0 systemd[1]: Started libpod-conmon-01183d281d11990eb795453721dda86ac53f0e0eeb4fa6c9e0c80701b0cfca68.scope.
Oct 11 03:38:13 compute-0 podman[100416]: 2025-10-11 03:38:12.983801936 +0000 UTC m=+0.048287182 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:38:13 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62aa973a9f09dd3b045c14b54b33b75eccc91f97464dd9c85e75ef19f61c770a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62aa973a9f09dd3b045c14b54b33b75eccc91f97464dd9c85e75ef19f61c770a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:13 compute-0 podman[100416]: 2025-10-11 03:38:13.104051851 +0000 UTC m=+0.168537087 container init 01183d281d11990eb795453721dda86ac53f0e0eeb4fa6c9e0c80701b0cfca68 (image=quay.io/ceph/ceph:v18, name=thirsty_williamson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:38:13 compute-0 podman[100416]: 2025-10-11 03:38:13.112544118 +0000 UTC m=+0.177029344 container start 01183d281d11990eb795453721dda86ac53f0e0eeb4fa6c9e0c80701b0cfca68 (image=quay.io/ceph/ceph:v18, name=thirsty_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 03:38:13 compute-0 podman[100416]: 2025-10-11 03:38:13.115699317 +0000 UTC m=+0.180184543 container attach 01183d281d11990eb795453721dda86ac53f0e0eeb4fa6c9e0c80701b0cfca68 (image=quay.io/ceph/ceph:v18, name=thirsty_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:38:13 compute-0 podman[100476]: 2025-10-11 03:38:13.239732848 +0000 UTC m=+0.033220721 container create a07b4478427d44e5d7568d2c46b732d2d4688b77d339305f1f92413fde2e6fac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_goldberg, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 03:38:13 compute-0 systemd[1]: Started libpod-conmon-a07b4478427d44e5d7568d2c46b732d2d4688b77d339305f1f92413fde2e6fac.scope.
Oct 11 03:38:13 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:13 compute-0 podman[100476]: 2025-10-11 03:38:13.313231974 +0000 UTC m=+0.106719877 container init a07b4478427d44e5d7568d2c46b732d2d4688b77d339305f1f92413fde2e6fac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Oct 11 03:38:13 compute-0 podman[100476]: 2025-10-11 03:38:13.224841061 +0000 UTC m=+0.018328934 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:38:13 compute-0 podman[100476]: 2025-10-11 03:38:13.325122707 +0000 UTC m=+0.118610570 container start a07b4478427d44e5d7568d2c46b732d2d4688b77d339305f1f92413fde2e6fac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 03:38:13 compute-0 stupefied_goldberg[100492]: 167 167
Oct 11 03:38:13 compute-0 systemd[1]: libpod-a07b4478427d44e5d7568d2c46b732d2d4688b77d339305f1f92413fde2e6fac.scope: Deactivated successfully.
Oct 11 03:38:13 compute-0 podman[100476]: 2025-10-11 03:38:13.32879177 +0000 UTC m=+0.122279643 container attach a07b4478427d44e5d7568d2c46b732d2d4688b77d339305f1f92413fde2e6fac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_goldberg, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:38:13 compute-0 podman[100476]: 2025-10-11 03:38:13.331448384 +0000 UTC m=+0.124936257 container died a07b4478427d44e5d7568d2c46b732d2d4688b77d339305f1f92413fde2e6fac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 03:38:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce6f37cf5698219752d46a823a24f3cfec84d56ff10b764a1b520141a67bd48a-merged.mount: Deactivated successfully.
Oct 11 03:38:13 compute-0 podman[100476]: 2025-10-11 03:38:13.379305123 +0000 UTC m=+0.172792986 container remove a07b4478427d44e5d7568d2c46b732d2d4688b77d339305f1f92413fde2e6fac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_goldberg, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 03:38:13 compute-0 systemd[1]: libpod-conmon-a07b4478427d44e5d7568d2c46b732d2d4688b77d339305f1f92413fde2e6fac.scope: Deactivated successfully.
Oct 11 03:38:13 compute-0 systemd[1]: Reloading.
Oct 11 03:38:13 compute-0 systemd-rc-local-generator[100554]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:38:13 compute-0 systemd-sysv-generator[100562]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:38:13 compute-0 ceph-mon[74273]: from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 03:38:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:13 compute-0 ceph-mon[74273]: Saving service rgw.rgw spec with placement compute-0
Oct 11 03:38:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.lkhlqa", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 11 03:38:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.lkhlqa", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 11 03:38:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:38:13 compute-0 ceph-mon[74273]: Deploying daemon mds.cephfs.compute-0.lkhlqa on compute-0
Oct 11 03:38:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Oct 11 03:38:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Oct 11 03:38:13 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Oct 11 03:38:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Oct 11 03:38:13 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4269325060' entity='client.rgw.rgw.compute-0.xmbhit' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 11 03:38:13 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14265 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 03:38:13 compute-0 thirsty_williamson[100432]: 
Oct 11 03:38:13 compute-0 thirsty_williamson[100432]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Oct 11 03:38:13 compute-0 podman[100416]: 2025-10-11 03:38:13.688613539 +0000 UTC m=+0.753098765 container died 01183d281d11990eb795453721dda86ac53f0e0eeb4fa6c9e0c80701b0cfca68 (image=quay.io/ceph/ceph:v18, name=thirsty_williamson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:38:13 compute-0 systemd[1]: libpod-01183d281d11990eb795453721dda86ac53f0e0eeb4fa6c9e0c80701b0cfca68.scope: Deactivated successfully.
Oct 11 03:38:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-62aa973a9f09dd3b045c14b54b33b75eccc91f97464dd9c85e75ef19f61c770a-merged.mount: Deactivated successfully.
Oct 11 03:38:13 compute-0 podman[100416]: 2025-10-11 03:38:13.736603001 +0000 UTC m=+0.801088207 container remove 01183d281d11990eb795453721dda86ac53f0e0eeb4fa6c9e0c80701b0cfca68 (image=quay.io/ceph/ceph:v18, name=thirsty_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 03:38:13 compute-0 systemd[1]: libpod-conmon-01183d281d11990eb795453721dda86ac53f0e0eeb4fa6c9e0c80701b0cfca68.scope: Deactivated successfully.
Oct 11 03:38:13 compute-0 sudo[100353]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:13 compute-0 systemd[1]: Reloading.
Oct 11 03:38:13 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 32 pg[8.0( empty local-lis/les=0/0 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:13 compute-0 systemd-sysv-generator[100615]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:38:13 compute-0 systemd-rc-local-generator[100611]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:38:13 compute-0 ansible-async_wrapper.py[99595]: Done in kid B.
Oct 11 03:38:14 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.lkhlqa for 23b68101-59a9-532f-ab6b-9acf78fb2162...
Oct 11 03:38:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:38:14 compute-0 podman[100671]: 2025-10-11 03:38:14.412505925 +0000 UTC m=+0.064517306 container create 0f33a7e4f667c43d4f8c024330c9eca12d2a661fea7e1f4d8a282aa4110790b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mds-cephfs-compute-0-lkhlqa, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 03:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14a62d135d864b17dfe04135826cca98c0bbd02a61e46091b3f2d5541890667a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14a62d135d864b17dfe04135826cca98c0bbd02a61e46091b3f2d5541890667a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14a62d135d864b17dfe04135826cca98c0bbd02a61e46091b3f2d5541890667a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14a62d135d864b17dfe04135826cca98c0bbd02a61e46091b3f2d5541890667a/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.lkhlqa supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:14 compute-0 podman[100671]: 2025-10-11 03:38:14.387559077 +0000 UTC m=+0.039570528 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:38:14 compute-0 podman[100671]: 2025-10-11 03:38:14.493069969 +0000 UTC m=+0.145081420 container init 0f33a7e4f667c43d4f8c024330c9eca12d2a661fea7e1f4d8a282aa4110790b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mds-cephfs-compute-0-lkhlqa, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:38:14 compute-0 podman[100671]: 2025-10-11 03:38:14.50165626 +0000 UTC m=+0.153667671 container start 0f33a7e4f667c43d4f8c024330c9eca12d2a661fea7e1f4d8a282aa4110790b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mds-cephfs-compute-0-lkhlqa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 03:38:14 compute-0 bash[100671]: 0f33a7e4f667c43d4f8c024330c9eca12d2a661fea7e1f4d8a282aa4110790b0
Oct 11 03:38:14 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.lkhlqa for 23b68101-59a9-532f-ab6b-9acf78fb2162.
Oct 11 03:38:14 compute-0 ceph-mds[100691]: set uid:gid to 167:167 (ceph:ceph)
Oct 11 03:38:14 compute-0 ceph-mds[100691]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Oct 11 03:38:14 compute-0 ceph-mds[100691]: main not setting numa affinity
Oct 11 03:38:14 compute-0 ceph-mds[100691]: pidfile_write: ignore empty --pid-file
Oct 11 03:38:14 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mds-cephfs-compute-0-lkhlqa[100686]: starting mds.cephfs.compute-0.lkhlqa at 
Oct 11 03:38:14 compute-0 ceph-mds[100691]: mds.cephfs.compute-0.lkhlqa Updating MDS map to version 2 from mon.0
Oct 11 03:38:14 compute-0 sudo[100393]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:38:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:38:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 11 03:38:14 compute-0 sudo[100730]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqrhfdmlabgenouhdzyxozyjugviqzht ; /usr/bin/python3'
Oct 11 03:38:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:14 compute-0 ceph-mgr[74563]: [progress INFO root] complete: finished ev 642cc273-8bdd-4292-9969-1a84b62dfd1b (Updating mds.cephfs deployment (+1 -> 1))
Oct 11 03:38:14 compute-0 ceph-mgr[74563]: [progress INFO root] Completed event 642cc273-8bdd-4292-9969-1a84b62dfd1b (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Oct 11 03:38:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Oct 11 03:38:14 compute-0 sudo[100730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:38:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Oct 11 03:38:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 11 03:38:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4269325060' entity='client.rgw.rgw.compute-0.xmbhit' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 11 03:38:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Oct 11 03:38:14 compute-0 ceph-mon[74273]: pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:14 compute-0 ceph-mon[74273]: osdmap e32: 3 total, 3 up, 3 in
Oct 11 03:38:14 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4269325060' entity='client.rgw.rgw.compute-0.xmbhit' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 11 03:38:14 compute-0 ceph-mon[74273]: from='client.14265 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 03:38:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:14 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Oct 11 03:38:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).mds e3 new map
Oct 11 03:38:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-11T03:37:58.962470+0000
                                           modified        2025-10-11T03:37:58.962524+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.lkhlqa{-1:14267} state up:standby seq 1 addr [v2:192.168.122.100:6814/1659573245,v1:192.168.122.100:6815/1659573245] compat {c=[1],r=[1],i=[7ff]}]
Oct 11 03:38:14 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 33 pg[8.0( empty local-lis/les=32/33 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:14 compute-0 ceph-mds[100691]: mds.cephfs.compute-0.lkhlqa Updating MDS map to version 3 from mon.0
Oct 11 03:38:14 compute-0 ceph-mds[100691]: mds.cephfs.compute-0.lkhlqa Monitors have assigned me to become a standby.
Oct 11 03:38:14 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1659573245,v1:192.168.122.100:6815/1659573245] up:boot
Oct 11 03:38:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/1659573245,v1:192.168.122.100:6815/1659573245] as mds.0
Oct 11 03:38:14 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.lkhlqa assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 11 03:38:14 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 11 03:38:14 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 11 03:38:14 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 11 03:38:14 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Oct 11 03:38:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.lkhlqa"} v 0) v1
Oct 11 03:38:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.lkhlqa"}]: dispatch
Oct 11 03:38:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).mds e3 all = 0
Oct 11 03:38:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).mds e4 new map
Oct 11 03:38:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-11T03:37:58.962470+0000
                                           modified        2025-10-11T03:38:14.624016+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14267}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-0.lkhlqa{0:14267} state up:creating seq 1 addr [v2:192.168.122.100:6814/1659573245,v1:192.168.122.100:6815/1659573245] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Oct 11 03:38:14 compute-0 ceph-mds[100691]: mds.cephfs.compute-0.lkhlqa Updating MDS map to version 4 from mon.0
Oct 11 03:38:14 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.lkhlqa=up:creating}
Oct 11 03:38:14 compute-0 ceph-mds[100691]: mds.0.4 handle_mds_map i am now mds.0.4
Oct 11 03:38:14 compute-0 ceph-mds[100691]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Oct 11 03:38:14 compute-0 ceph-mds[100691]: mds.0.cache creating system inode with ino:0x1
Oct 11 03:38:14 compute-0 ceph-mds[100691]: mds.0.cache creating system inode with ino:0x100
Oct 11 03:38:14 compute-0 ceph-mds[100691]: mds.0.cache creating system inode with ino:0x600
Oct 11 03:38:14 compute-0 ceph-mds[100691]: mds.0.cache creating system inode with ino:0x601
Oct 11 03:38:14 compute-0 ceph-mds[100691]: mds.0.cache creating system inode with ino:0x602
Oct 11 03:38:14 compute-0 ceph-mds[100691]: mds.0.cache creating system inode with ino:0x603
Oct 11 03:38:14 compute-0 ceph-mds[100691]: mds.0.cache creating system inode with ino:0x604
Oct 11 03:38:14 compute-0 ceph-mds[100691]: mds.0.cache creating system inode with ino:0x605
Oct 11 03:38:14 compute-0 ceph-mds[100691]: mds.0.cache creating system inode with ino:0x606
Oct 11 03:38:14 compute-0 ceph-mds[100691]: mds.0.cache creating system inode with ino:0x607
Oct 11 03:38:14 compute-0 ceph-mds[100691]: mds.0.cache creating system inode with ino:0x608
Oct 11 03:38:14 compute-0 ceph-mds[100691]: mds.0.cache creating system inode with ino:0x609
Oct 11 03:38:14 compute-0 ceph-mds[100691]: mds.0.4 creating_done
Oct 11 03:38:14 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.lkhlqa is now active in filesystem cephfs as rank 0
Oct 11 03:38:14 compute-0 sudo[100735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:14 compute-0 sudo[100735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:14 compute-0 sudo[100735]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:14 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v81: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:14 compute-0 python3[100734]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:38:14 compute-0 sudo[100771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 03:38:14 compute-0 sudo[100771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:14 compute-0 sudo[100771]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:14 compute-0 podman[100794]: 2025-10-11 03:38:14.782687424 +0000 UTC m=+0.044714873 container create 574c821446cedac1ec5c049c18bd5ec9e493b52111f32086d941e2a47e9617c0 (image=quay.io/ceph/ceph:v18, name=zealous_sinoussi, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:38:14 compute-0 sudo[100802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:14 compute-0 sudo[100802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:14 compute-0 sudo[100802]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:14 compute-0 systemd[1]: Started libpod-conmon-574c821446cedac1ec5c049c18bd5ec9e493b52111f32086d941e2a47e9617c0.scope.
Oct 11 03:38:14 compute-0 podman[100794]: 2025-10-11 03:38:14.763576559 +0000 UTC m=+0.025604038 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:38:14 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee556c87955d62fa60474dcf447be070903d6ca10d7c110f6d64d1cd77bcdf5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee556c87955d62fa60474dcf447be070903d6ca10d7c110f6d64d1cd77bcdf5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:14 compute-0 sudo[100837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:38:14 compute-0 podman[100794]: 2025-10-11 03:38:14.887384923 +0000 UTC m=+0.149412412 container init 574c821446cedac1ec5c049c18bd5ec9e493b52111f32086d941e2a47e9617c0 (image=quay.io/ceph/ceph:v18, name=zealous_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:38:14 compute-0 sudo[100837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:14 compute-0 sudo[100837]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:14 compute-0 podman[100794]: 2025-10-11 03:38:14.89513618 +0000 UTC m=+0.157163629 container start 574c821446cedac1ec5c049c18bd5ec9e493b52111f32086d941e2a47e9617c0 (image=quay.io/ceph/ceph:v18, name=zealous_sinoussi, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 03:38:14 compute-0 podman[100794]: 2025-10-11 03:38:14.898319419 +0000 UTC m=+0.160346878 container attach 574c821446cedac1ec5c049c18bd5ec9e493b52111f32086d941e2a47e9617c0 (image=quay.io/ceph/ceph:v18, name=zealous_sinoussi, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:38:14 compute-0 sudo[100866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:14 compute-0 sudo[100866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:14 compute-0 sudo[100866]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:15 compute-0 sudo[100891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 03:38:15 compute-0 sudo[100891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:15 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14269 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 03:38:15 compute-0 zealous_sinoussi[100841]: 
Oct 11 03:38:15 compute-0 zealous_sinoussi[100841]: [{"container_id": "1d7481d1ebe7", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.54%", "created": "2025-10-11T03:36:48.837526Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-10-11T03:36:48.887504Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-11T03:38:03.729840Z", "memory_usage": 11618222, "ports": [], "service_name": "crash", "started": "2025-10-11T03:36:48.726565Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-23b68101-59a9-532f-ab6b-9acf78fb2162@crash.compute-0", "version": "18.2.7"}, {"daemon_id": "cephfs.compute-0.lkhlqa", "daemon_name": "mds.cephfs.compute-0.lkhlqa", "daemon_type": "mds", "events": ["2025-10-11T03:38:14.583116Z daemon:mds.cephfs.compute-0.lkhlqa [INFO] \"Deployed mds.cephfs.compute-0.lkhlqa on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"container_id": "e47365f8d893", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "31.07%", "created": "2025-10-11T03:35:36.526084Z", "daemon_id": "compute-0.jhqlii", 
"daemon_name": "mgr.compute-0.jhqlii", "daemon_type": "mgr", "events": ["2025-10-11T03:37:42.338839Z daemon:mgr.compute-0.jhqlii [INFO] \"Reconfigured mgr.compute-0.jhqlii on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-11T03:38:03.729711Z", "memory_usage": 547985817, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-10-11T03:35:36.391109Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-23b68101-59a9-532f-ab6b-9acf78fb2162@mgr.compute-0.jhqlii", "version": "18.2.7"}, {"container_id": "24261ba7295a", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "2.27%", "created": "2025-10-11T03:35:31.112934Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-10-11T03:37:41.418918Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-11T03:38:03.729562Z", "memory_request": 2147483648, "memory_usage": 37832622, "ports": [], "service_name": "mon", "started": "2025-10-11T03:35:33.957976Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-23b68101-59a9-532f-ab6b-9acf78fb2162@mon.compute-0", "version": "18.2.7"}, {"container_id": "25bc18b533a9", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": 
"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.13%", "created": "2025-10-11T03:37:15.451272Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-10-11T03:37:15.505890Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-11T03:38:03.729962Z", "memory_request": 4294967296, "memory_usage": 56895733, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-11T03:37:15.305717Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-23b68101-59a9-532f-ab6b-9acf78fb2162@osd.0", "version": "18.2.7"}, {"container_id": "57e290968876", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.36%", "created": "2025-10-11T03:37:20.504415Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-10-11T03:37:20.570951Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-11T03:38:03.730130Z", "memory_request": 4294967296, "memory_usage": 57608765, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-11T03:37:20.400990Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-23b68101-59a9-532f-ab6b-9acf78fb2162@osd.1", "version": "18.2.7"}, {"container_id": "dffaee05b3c0", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", 
"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.55%", "created": "2025-10-11T03:37:25.707561Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-10-11T03:37:25.776271Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-11T03:38:03.730280Z", "memory_request": 4294967296, "memory_usage": 56727961, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-11T03:37:25.558819Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-23b68101-59a9-532f-ab6b-9acf78fb2162@osd.2", "version": "18.2.7"}, {"daemon_id": "rgw.compute-0.xmbhit", "daemon_name": "rgw.rgw.compute-0.xmbhit", "daemon_type": "rgw", "events": ["2025-10-11T03:38:12.576367Z daemon:rgw.rgw.compute-0.xmbhit [INFO] \"Deployed rgw.rgw.compute-0.xmbhit on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}]
Oct 11 03:38:15 compute-0 podman[100794]: 2025-10-11 03:38:15.460726697 +0000 UTC m=+0.722754176 container died 574c821446cedac1ec5c049c18bd5ec9e493b52111f32086d941e2a47e9617c0 (image=quay.io/ceph/ceph:v18, name=zealous_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 03:38:15 compute-0 systemd[1]: libpod-574c821446cedac1ec5c049c18bd5ec9e493b52111f32086d941e2a47e9617c0.scope: Deactivated successfully.
Oct 11 03:38:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ee556c87955d62fa60474dcf447be070903d6ca10d7c110f6d64d1cd77bcdf5-merged.mount: Deactivated successfully.
Oct 11 03:38:15 compute-0 podman[100794]: 2025-10-11 03:38:15.542056943 +0000 UTC m=+0.804084392 container remove 574c821446cedac1ec5c049c18bd5ec9e493b52111f32086d941e2a47e9617c0 (image=quay.io/ceph/ceph:v18, name=zealous_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 03:38:15 compute-0 systemd[1]: libpod-conmon-574c821446cedac1ec5c049c18bd5ec9e493b52111f32086d941e2a47e9617c0.scope: Deactivated successfully.
Oct 11 03:38:15 compute-0 sudo[100730]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:15 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4269325060' entity='client.rgw.rgw.compute-0.xmbhit' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 11 03:38:15 compute-0 ceph-mon[74273]: osdmap e33: 3 total, 3 up, 3 in
Oct 11 03:38:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:15 compute-0 ceph-mon[74273]: mds.? [v2:192.168.122.100:6814/1659573245,v1:192.168.122.100:6815/1659573245] up:boot
Oct 11 03:38:15 compute-0 ceph-mon[74273]: daemon mds.cephfs.compute-0.lkhlqa assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 11 03:38:15 compute-0 ceph-mon[74273]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 11 03:38:15 compute-0 ceph-mon[74273]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 11 03:38:15 compute-0 ceph-mon[74273]: Cluster is now healthy
Oct 11 03:38:15 compute-0 ceph-mon[74273]: fsmap cephfs:0 1 up:standby
Oct 11 03:38:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.lkhlqa"}]: dispatch
Oct 11 03:38:15 compute-0 ceph-mon[74273]: fsmap cephfs:1 {0=cephfs.compute-0.lkhlqa=up:creating}
Oct 11 03:38:15 compute-0 ceph-mon[74273]: daemon mds.cephfs.compute-0.lkhlqa is now active in filesystem cephfs as rank 0
Oct 11 03:38:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Oct 11 03:38:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Oct 11 03:38:15 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Oct 11 03:38:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Oct 11 03:38:15 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4269325060' entity='client.rgw.rgw.compute-0.xmbhit' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 11 03:38:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).mds e5 new map
Oct 11 03:38:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-11T03:37:58.962470+0000
                                           modified        2025-10-11T03:38:15.631328+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14267}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-0.lkhlqa{0:14267} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/1659573245,v1:192.168.122.100:6815/1659573245] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Oct 11 03:38:15 compute-0 ceph-mds[100691]: mds.cephfs.compute-0.lkhlqa Updating MDS map to version 5 from mon.0
Oct 11 03:38:15 compute-0 ceph-mds[100691]: mds.0.4 handle_mds_map i am now mds.0.4
Oct 11 03:38:15 compute-0 ceph-mds[100691]: mds.0.4 handle_mds_map state change up:creating --> up:active
Oct 11 03:38:15 compute-0 ceph-mds[100691]: mds.0.4 recovery_done -- successful recovery!
Oct 11 03:38:15 compute-0 ceph-mds[100691]: mds.0.4 active_start
Oct 11 03:38:15 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1659573245,v1:192.168.122.100:6815/1659573245] up:active
Oct 11 03:38:15 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.lkhlqa=up:active}
Oct 11 03:38:15 compute-0 podman[101021]: 2025-10-11 03:38:15.691444293 +0000 UTC m=+0.064271789 container exec 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:38:15 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 34 pg[9.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:15 compute-0 ceph-mgr[74563]: [progress INFO root] Writing back 5 completed events
Oct 11 03:38:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 11 03:38:15 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:15 compute-0 podman[101021]: 2025-10-11 03:38:15.808874639 +0000 UTC m=+0.181702145 container exec_died 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 03:38:16 compute-0 sudo[101171]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slazeqbbeqjcifwgazdxsbuxitxdajjk ; /usr/bin/python3'
Oct 11 03:38:16 compute-0 sudo[101171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:38:16 compute-0 python3[101175]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:38:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Oct 11 03:38:16 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4269325060' entity='client.rgw.rgw.compute-0.xmbhit' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 11 03:38:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Oct 11 03:38:16 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Oct 11 03:38:16 compute-0 ceph-mon[74273]: pgmap v81: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:16 compute-0 ceph-mon[74273]: from='client.14269 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 03:38:16 compute-0 ceph-mon[74273]: osdmap e34: 3 total, 3 up, 3 in
Oct 11 03:38:16 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4269325060' entity='client.rgw.rgw.compute-0.xmbhit' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 11 03:38:16 compute-0 ceph-mon[74273]: mds.? [v2:192.168.122.100:6814/1659573245,v1:192.168.122.100:6815/1659573245] up:active
Oct 11 03:38:16 compute-0 ceph-mon[74273]: fsmap cephfs:1 {0=cephfs.compute-0.lkhlqa=up:active}
Oct 11 03:38:16 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:16 compute-0 sudo[100891]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:16 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 35 pg[9.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:38:16 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:38:16 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:38:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:38:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 03:38:16 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:38:16 compute-0 podman[101208]: 2025-10-11 03:38:16.680446477 +0000 UTC m=+0.048982782 container create adae8955774b04c71bbe946da48b1a9c51921b8b1e977157a1ee0c80813c6b54 (image=quay.io/ceph/ceph:v18, name=reverent_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 03:38:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 03:38:16 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:16 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 76248ece-1c56-4163-bf88-3c244eab173e does not exist
Oct 11 03:38:16 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev e367a117-8c2d-4bba-93bd-326ad3075bfa does not exist
Oct 11 03:38:16 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 0d242239-5081-4a4d-9316-d516f7272ac1 does not exist
Oct 11 03:38:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 03:38:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:38:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 03:38:16 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:38:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:38:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:38:16 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v84: 9 pgs: 2 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:16 compute-0 systemd[1]: Started libpod-conmon-adae8955774b04c71bbe946da48b1a9c51921b8b1e977157a1ee0c80813c6b54.scope.
Oct 11 03:38:16 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea4ad18b470db34767fb45c0cd26e17a4edcc106f5d30af8d66d6264724d7bd7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea4ad18b470db34767fb45c0cd26e17a4edcc106f5d30af8d66d6264724d7bd7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:16 compute-0 sudo[101225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:16 compute-0 podman[101208]: 2025-10-11 03:38:16.750138567 +0000 UTC m=+0.118674872 container init adae8955774b04c71bbe946da48b1a9c51921b8b1e977157a1ee0c80813c6b54 (image=quay.io/ceph/ceph:v18, name=reverent_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 03:38:16 compute-0 sudo[101225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:16 compute-0 sudo[101225]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:16 compute-0 podman[101208]: 2025-10-11 03:38:16.658944135 +0000 UTC m=+0.027480460 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:38:16 compute-0 podman[101208]: 2025-10-11 03:38:16.760490497 +0000 UTC m=+0.129026782 container start adae8955774b04c71bbe946da48b1a9c51921b8b1e977157a1ee0c80813c6b54 (image=quay.io/ceph/ceph:v18, name=reverent_blackwell, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:38:16 compute-0 podman[101208]: 2025-10-11 03:38:16.763866321 +0000 UTC m=+0.132402606 container attach adae8955774b04c71bbe946da48b1a9c51921b8b1e977157a1ee0c80813c6b54 (image=quay.io/ceph/ceph:v18, name=reverent_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 03:38:16 compute-0 sudo[101254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:38:16 compute-0 sudo[101254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:16 compute-0 sudo[101254]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:16 compute-0 sudo[101279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:16 compute-0 sudo[101279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:16 compute-0 sudo[101279]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:16 compute-0 sudo[101304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 03:38:16 compute-0 sudo[101304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:17 compute-0 podman[101386]: 2025-10-11 03:38:17.252386911 +0000 UTC m=+0.041170943 container create ec423426ec97af7f2458b66839535abf44ed0438a4d1425cd73be6bc8c6a9cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:38:17 compute-0 systemd[1]: Started libpod-conmon-ec423426ec97af7f2458b66839535abf44ed0438a4d1425cd73be6bc8c6a9cdb.scope.
Oct 11 03:38:17 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:17 compute-0 podman[101386]: 2025-10-11 03:38:17.316578328 +0000 UTC m=+0.105362370 container init ec423426ec97af7f2458b66839535abf44ed0438a4d1425cd73be6bc8c6a9cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_easley, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Oct 11 03:38:17 compute-0 podman[101386]: 2025-10-11 03:38:17.328213353 +0000 UTC m=+0.116997375 container start ec423426ec97af7f2458b66839535abf44ed0438a4d1425cd73be6bc8c6a9cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_easley, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 03:38:17 compute-0 podman[101386]: 2025-10-11 03:38:17.23340349 +0000 UTC m=+0.022187552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:38:17 compute-0 podman[101386]: 2025-10-11 03:38:17.332200835 +0000 UTC m=+0.120984887 container attach ec423426ec97af7f2458b66839535abf44ed0438a4d1425cd73be6bc8c6a9cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_easley, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:38:17 compute-0 ecstatic_easley[101402]: 167 167
Oct 11 03:38:17 compute-0 systemd[1]: libpod-ec423426ec97af7f2458b66839535abf44ed0438a4d1425cd73be6bc8c6a9cdb.scope: Deactivated successfully.
Oct 11 03:38:17 compute-0 podman[101386]: 2025-10-11 03:38:17.335320622 +0000 UTC m=+0.124104644 container died ec423426ec97af7f2458b66839535abf44ed0438a4d1425cd73be6bc8c6a9cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:38:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-afe14d5e627338d634752f09eeeed1eb4a7cd5107e87fd0c2ad464971dd285e4-merged.mount: Deactivated successfully.
Oct 11 03:38:17 compute-0 podman[101386]: 2025-10-11 03:38:17.382297057 +0000 UTC m=+0.171081119 container remove ec423426ec97af7f2458b66839535abf44ed0438a4d1425cd73be6bc8c6a9cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_easley, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 03:38:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 11 03:38:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3707092654' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 11 03:38:17 compute-0 reverent_blackwell[101238]: 
Oct 11 03:38:17 compute-0 reverent_blackwell[101238]: {"fsid":"23b68101-59a9-532f-ab6b-9acf78fb2162","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":163,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":35,"num_osds":3,"num_up_osds":3,"osd_up_since":1760153852,"num_in_osds":3,"osd_in_since":1760153824,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7},{"state_name":"unknown","count":1}],"num_pgs":8,"num_pools":8,"num_objects":2,"data_bytes":459280,"bytes_used":83845120,"bytes_avail":64328081408,"bytes_total":64411926528,"unknown_pgs_ratio":0.125},"fsmap":{"epoch":5,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.lkhlqa","status":"up:active","gid":14267}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-11T03:37:22.666401+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Oct 11 03:38:17 compute-0 systemd[1]: libpod-conmon-ec423426ec97af7f2458b66839535abf44ed0438a4d1425cd73be6bc8c6a9cdb.scope: Deactivated successfully.
Oct 11 03:38:17 compute-0 systemd[1]: libpod-adae8955774b04c71bbe946da48b1a9c51921b8b1e977157a1ee0c80813c6b54.scope: Deactivated successfully.
Oct 11 03:38:17 compute-0 conmon[101238]: conmon adae8955774b04c71bbe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-adae8955774b04c71bbe946da48b1a9c51921b8b1e977157a1ee0c80813c6b54.scope/container/memory.events
Oct 11 03:38:17 compute-0 podman[101208]: 2025-10-11 03:38:17.407513402 +0000 UTC m=+0.776049717 container died adae8955774b04c71bbe946da48b1a9c51921b8b1e977157a1ee0c80813c6b54 (image=quay.io/ceph/ceph:v18, name=reverent_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:38:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea4ad18b470db34767fb45c0cd26e17a4edcc106f5d30af8d66d6264724d7bd7-merged.mount: Deactivated successfully.
Oct 11 03:38:17 compute-0 podman[101208]: 2025-10-11 03:38:17.464687992 +0000 UTC m=+0.833224277 container remove adae8955774b04c71bbe946da48b1a9c51921b8b1e977157a1ee0c80813c6b54 (image=quay.io/ceph/ceph:v18, name=reverent_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 03:38:17 compute-0 systemd[1]: libpod-conmon-adae8955774b04c71bbe946da48b1a9c51921b8b1e977157a1ee0c80813c6b54.scope: Deactivated successfully.
Oct 11 03:38:17 compute-0 sudo[101171]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:17 compute-0 podman[101441]: 2025-10-11 03:38:17.609247417 +0000 UTC m=+0.072196471 container create 4ab35b077aa2136e958d8731d2c96e1a8e322c48ec9304b80c99469a50de2ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 03:38:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Oct 11 03:38:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Oct 11 03:38:17 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Oct 11 03:38:17 compute-0 systemd[1]: Started libpod-conmon-4ab35b077aa2136e958d8731d2c96e1a8e322c48ec9304b80c99469a50de2ecf.scope.
Oct 11 03:38:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Oct 11 03:38:17 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4269325060' entity='client.rgw.rgw.compute-0.xmbhit' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 11 03:38:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4269325060' entity='client.rgw.rgw.compute-0.xmbhit' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 11 03:38:17 compute-0 ceph-mon[74273]: osdmap e35: 3 total, 3 up, 3 in
Oct 11 03:38:17 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:17 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:17 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:38:17 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:38:17 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:17 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:38:17 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:38:17 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:38:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3707092654' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 11 03:38:17 compute-0 podman[101441]: 2025-10-11 03:38:17.577001345 +0000 UTC m=+0.039950489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:38:17 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f2d5ce15ffd655ffa3072968b48fc6c0b9633b2204c64d9510ed369cab70153/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f2d5ce15ffd655ffa3072968b48fc6c0b9633b2204c64d9510ed369cab70153/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f2d5ce15ffd655ffa3072968b48fc6c0b9633b2204c64d9510ed369cab70153/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f2d5ce15ffd655ffa3072968b48fc6c0b9633b2204c64d9510ed369cab70153/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f2d5ce15ffd655ffa3072968b48fc6c0b9633b2204c64d9510ed369cab70153/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:17 compute-0 podman[101441]: 2025-10-11 03:38:17.705705816 +0000 UTC m=+0.168654910 container init 4ab35b077aa2136e958d8731d2c96e1a8e322c48ec9304b80c99469a50de2ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_villani, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:38:17 compute-0 podman[101441]: 2025-10-11 03:38:17.720210632 +0000 UTC m=+0.183159706 container start 4ab35b077aa2136e958d8731d2c96e1a8e322c48ec9304b80c99469a50de2ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_villani, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:38:17 compute-0 podman[101441]: 2025-10-11 03:38:17.723553246 +0000 UTC m=+0.186502300 container attach 4ab35b077aa2136e958d8731d2c96e1a8e322c48ec9304b80c99469a50de2ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_villani, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 03:38:18 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 36 pg[10.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [2] r=0 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:18 compute-0 sudo[101485]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifgsxkmwvlwkkegxmwufwxvlmoyqndmy ; /usr/bin/python3'
Oct 11 03:38:18 compute-0 sudo[101485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:38:18 compute-0 python3[101487]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:38:18 compute-0 podman[101496]: 2025-10-11 03:38:18.559973471 +0000 UTC m=+0.070725630 container create 04efb680e1fd141ae2e4eddffdd7c42a88a973a1dd6efabdf12981ee5da99b57 (image=quay.io/ceph/ceph:v18, name=priceless_goldwasser, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:38:18 compute-0 systemd[1]: Started libpod-conmon-04efb680e1fd141ae2e4eddffdd7c42a88a973a1dd6efabdf12981ee5da99b57.scope.
Oct 11 03:38:18 compute-0 podman[101496]: 2025-10-11 03:38:18.528700286 +0000 UTC m=+0.039452455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:38:18 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35aff14d2446203a97795d34ec7859871dc0ed11a91b505d2a9da41450c2a51a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35aff14d2446203a97795d34ec7859871dc0ed11a91b505d2a9da41450c2a51a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Oct 11 03:38:18 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4269325060' entity='client.rgw.rgw.compute-0.xmbhit' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 11 03:38:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Oct 11 03:38:18 compute-0 podman[101496]: 2025-10-11 03:38:18.659372373 +0000 UTC m=+0.170124612 container init 04efb680e1fd141ae2e4eddffdd7c42a88a973a1dd6efabdf12981ee5da99b57 (image=quay.io/ceph/ceph:v18, name=priceless_goldwasser, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:38:18 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Oct 11 03:38:18 compute-0 ceph-mon[74273]: pgmap v84: 9 pgs: 2 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:18 compute-0 ceph-mon[74273]: osdmap e36: 3 total, 3 up, 3 in
Oct 11 03:38:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4269325060' entity='client.rgw.rgw.compute-0.xmbhit' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 11 03:38:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4269325060' entity='client.rgw.rgw.compute-0.xmbhit' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 11 03:38:18 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 37 pg[10.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [2] r=0 lpr=36 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:18 compute-0 podman[101496]: 2025-10-11 03:38:18.676212624 +0000 UTC m=+0.186964813 container start 04efb680e1fd141ae2e4eddffdd7c42a88a973a1dd6efabdf12981ee5da99b57 (image=quay.io/ceph/ceph:v18, name=priceless_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 03:38:18 compute-0 podman[101496]: 2025-10-11 03:38:18.681026059 +0000 UTC m=+0.191778248 container attach 04efb680e1fd141ae2e4eddffdd7c42a88a973a1dd6efabdf12981ee5da99b57 (image=quay.io/ceph/ceph:v18, name=priceless_goldwasser, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Oct 11 03:38:18 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v87: 10 pgs: 1 unknown, 9 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Oct 11 03:38:18 compute-0 interesting_villani[101457]: --> passed data devices: 0 physical, 3 LVM
Oct 11 03:38:18 compute-0 interesting_villani[101457]: --> relative data size: 1.0
Oct 11 03:38:18 compute-0 interesting_villani[101457]: --> All data devices are unavailable
Oct 11 03:38:18 compute-0 podman[101441]: 2025-10-11 03:38:18.850611174 +0000 UTC m=+1.313560228 container died 4ab35b077aa2136e958d8731d2c96e1a8e322c48ec9304b80c99469a50de2ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_villani, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 03:38:18 compute-0 systemd[1]: libpod-4ab35b077aa2136e958d8731d2c96e1a8e322c48ec9304b80c99469a50de2ecf.scope: Deactivated successfully.
Oct 11 03:38:18 compute-0 systemd[1]: libpod-4ab35b077aa2136e958d8731d2c96e1a8e322c48ec9304b80c99469a50de2ecf.scope: Consumed 1.080s CPU time.
Oct 11 03:38:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f2d5ce15ffd655ffa3072968b48fc6c0b9633b2204c64d9510ed369cab70153-merged.mount: Deactivated successfully.
Oct 11 03:38:18 compute-0 podman[101441]: 2025-10-11 03:38:18.930227892 +0000 UTC m=+1.393176956 container remove 4ab35b077aa2136e958d8731d2c96e1a8e322c48ec9304b80c99469a50de2ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_villani, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 03:38:18 compute-0 systemd[1]: libpod-conmon-4ab35b077aa2136e958d8731d2c96e1a8e322c48ec9304b80c99469a50de2ecf.scope: Deactivated successfully.
Oct 11 03:38:18 compute-0 sudo[101304]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:19 compute-0 sudo[101556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:19 compute-0 sudo[101556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:19 compute-0 sudo[101556]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:19 compute-0 sudo[101600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:38:19 compute-0 sudo[101600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:19 compute-0 sudo[101600]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:19 compute-0 sudo[101625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:19 compute-0 sudo[101625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:19 compute-0 sudo[101625]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:38:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 11 03:38:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1888709036' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 03:38:19 compute-0 priceless_goldwasser[101517]: 
Oct 11 03:38:19 compute-0 systemd[1]: libpod-04efb680e1fd141ae2e4eddffdd7c42a88a973a1dd6efabdf12981ee5da99b57.scope: Deactivated successfully.
Oct 11 03:38:19 compute-0 priceless_goldwasser[101517]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.xmbhit","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Oct 11 03:38:19 compute-0 podman[101496]: 2025-10-11 03:38:19.259135756 +0000 UTC m=+0.769887905 container died 04efb680e1fd141ae2e4eddffdd7c42a88a973a1dd6efabdf12981ee5da99b57 (image=quay.io/ceph/ceph:v18, name=priceless_goldwasser, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:38:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-35aff14d2446203a97795d34ec7859871dc0ed11a91b505d2a9da41450c2a51a-merged.mount: Deactivated successfully.
Oct 11 03:38:19 compute-0 sudo[101650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 03:38:19 compute-0 sudo[101650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:19 compute-0 podman[101496]: 2025-10-11 03:38:19.301086689 +0000 UTC m=+0.811838838 container remove 04efb680e1fd141ae2e4eddffdd7c42a88a973a1dd6efabdf12981ee5da99b57 (image=quay.io/ceph/ceph:v18, name=priceless_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:38:19 compute-0 systemd[1]: libpod-conmon-04efb680e1fd141ae2e4eddffdd7c42a88a973a1dd6efabdf12981ee5da99b57.scope: Deactivated successfully.
Oct 11 03:38:19 compute-0 sudo[101485]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:19 compute-0 ceph-mds[100691]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Oct 11 03:38:19 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mds-cephfs-compute-0-lkhlqa[100686]: 2025-10-11T03:38:19.637+0000 7f3317254640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Oct 11 03:38:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Oct 11 03:38:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Oct 11 03:38:19 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Oct 11 03:38:19 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 38 pg[11.0( empty local-lis/les=0/0 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Oct 11 03:38:19 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/459311002' entity='client.rgw.rgw.compute-0.xmbhit' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 11 03:38:19 compute-0 ceph-mon[74273]: osdmap e37: 3 total, 3 up, 3 in
Oct 11 03:38:19 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1888709036' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 03:38:19 compute-0 podman[101729]: 2025-10-11 03:38:19.7399938 +0000 UTC m=+0.063883738 container create 7bb8fee86f77345c28a1ddd640ccd2be92d55ab126b7e3965bf029e7ebe6149b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:38:19 compute-0 systemd[1]: Started libpod-conmon-7bb8fee86f77345c28a1ddd640ccd2be92d55ab126b7e3965bf029e7ebe6149b.scope.
Oct 11 03:38:19 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:19 compute-0 podman[101729]: 2025-10-11 03:38:19.809590408 +0000 UTC m=+0.133480336 container init 7bb8fee86f77345c28a1ddd640ccd2be92d55ab126b7e3965bf029e7ebe6149b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:38:19 compute-0 podman[101729]: 2025-10-11 03:38:19.716926925 +0000 UTC m=+0.040816883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:38:19 compute-0 podman[101729]: 2025-10-11 03:38:19.822298173 +0000 UTC m=+0.146188081 container start 7bb8fee86f77345c28a1ddd640ccd2be92d55ab126b7e3965bf029e7ebe6149b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:38:19 compute-0 podman[101729]: 2025-10-11 03:38:19.825936625 +0000 UTC m=+0.149826543 container attach 7bb8fee86f77345c28a1ddd640ccd2be92d55ab126b7e3965bf029e7ebe6149b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ritchie, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 03:38:19 compute-0 sweet_ritchie[101745]: 167 167
Oct 11 03:38:19 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 03:38:19 compute-0 systemd[1]: libpod-7bb8fee86f77345c28a1ddd640ccd2be92d55ab126b7e3965bf029e7ebe6149b.scope: Deactivated successfully.
Oct 11 03:38:19 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 03:38:19 compute-0 podman[101729]: 2025-10-11 03:38:19.832727235 +0000 UTC m=+0.156617183 container died 7bb8fee86f77345c28a1ddd640ccd2be92d55ab126b7e3965bf029e7ebe6149b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ritchie, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:38:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-94a5ee64f7f690090169e912727e59afd994360b849d7ff1fb4851b331f6fa3e-merged.mount: Deactivated successfully.
Oct 11 03:38:19 compute-0 podman[101729]: 2025-10-11 03:38:19.883074234 +0000 UTC m=+0.206964152 container remove 7bb8fee86f77345c28a1ddd640ccd2be92d55ab126b7e3965bf029e7ebe6149b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:38:19 compute-0 systemd[1]: libpod-conmon-7bb8fee86f77345c28a1ddd640ccd2be92d55ab126b7e3965bf029e7ebe6149b.scope: Deactivated successfully.
Oct 11 03:38:20 compute-0 podman[101770]: 2025-10-11 03:38:20.091614579 +0000 UTC m=+0.071738328 container create ce79a245688aa1238f48d5aa3c16be3870ba665b72eac50563d4bfe9b9190432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 03:38:20 compute-0 systemd[1]: Started libpod-conmon-ce79a245688aa1238f48d5aa3c16be3870ba665b72eac50563d4bfe9b9190432.scope.
Oct 11 03:38:20 compute-0 sudo[101807]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aasdniifsxlyhaihxhgeiskrwmvxbryo ; /usr/bin/python3'
Oct 11 03:38:20 compute-0 podman[101770]: 2025-10-11 03:38:20.061411594 +0000 UTC m=+0.041535403 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:38:20 compute-0 sudo[101807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:38:20 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e7588b35eeddc90184cff8efa5c20b12ce352536d1dd3dfac8b9562d1afb91/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e7588b35eeddc90184cff8efa5c20b12ce352536d1dd3dfac8b9562d1afb91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e7588b35eeddc90184cff8efa5c20b12ce352536d1dd3dfac8b9562d1afb91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e7588b35eeddc90184cff8efa5c20b12ce352536d1dd3dfac8b9562d1afb91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:20 compute-0 podman[101770]: 2025-10-11 03:38:20.204102107 +0000 UTC m=+0.184225846 container init ce79a245688aa1238f48d5aa3c16be3870ba665b72eac50563d4bfe9b9190432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:38:20 compute-0 podman[101770]: 2025-10-11 03:38:20.218990544 +0000 UTC m=+0.199114253 container start ce79a245688aa1238f48d5aa3c16be3870ba665b72eac50563d4bfe9b9190432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_zhukovsky, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 03:38:20 compute-0 podman[101770]: 2025-10-11 03:38:20.222228364 +0000 UTC m=+0.202352073 container attach ce79a245688aa1238f48d5aa3c16be3870ba665b72eac50563d4bfe9b9190432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_zhukovsky, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:38:20 compute-0 python3[101813]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:38:20 compute-0 podman[101817]: 2025-10-11 03:38:20.371323606 +0000 UTC m=+0.053114147 container create 5de14012da99b274f2f8dce002db64e64cb6b7eddc8ee0e330aa14f0eff59e09 (image=quay.io/ceph/ceph:v18, name=flamboyant_saha, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 03:38:20 compute-0 systemd[1]: Started libpod-conmon-5de14012da99b274f2f8dce002db64e64cb6b7eddc8ee0e330aa14f0eff59e09.scope.
Oct 11 03:38:20 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb3de16df1a07881b1c156523ef65b38dd8f820f72130129959c3d994efb152/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb3de16df1a07881b1c156523ef65b38dd8f820f72130129959c3d994efb152/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:20 compute-0 podman[101817]: 2025-10-11 03:38:20.349968479 +0000 UTC m=+0.031759030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:38:20 compute-0 podman[101817]: 2025-10-11 03:38:20.457611891 +0000 UTC m=+0.139402462 container init 5de14012da99b274f2f8dce002db64e64cb6b7eddc8ee0e330aa14f0eff59e09 (image=quay.io/ceph/ceph:v18, name=flamboyant_saha, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:38:20 compute-0 podman[101817]: 2025-10-11 03:38:20.466616593 +0000 UTC m=+0.148407144 container start 5de14012da99b274f2f8dce002db64e64cb6b7eddc8ee0e330aa14f0eff59e09 (image=quay.io/ceph/ceph:v18, name=flamboyant_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 03:38:20 compute-0 podman[101817]: 2025-10-11 03:38:20.469981287 +0000 UTC m=+0.151771868 container attach 5de14012da99b274f2f8dce002db64e64cb6b7eddc8ee0e330aa14f0eff59e09 (image=quay.io/ceph/ceph:v18, name=flamboyant_saha, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_03:38:20
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Some PGs (0.181818) are unknown; try again later
Oct 11 03:38:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Oct 11 03:38:20 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/459311002' entity='client.rgw.rgw.compute-0.xmbhit' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 11 03:38:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Oct 11 03:38:20 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Oct 11 03:38:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Oct 11 03:38:20 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/459311002' entity='client.rgw.rgw.compute-0.xmbhit' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 11 03:38:20 compute-0 ceph-mon[74273]: pgmap v87: 10 pgs: 1 unknown, 9 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Oct 11 03:38:20 compute-0 ceph-mon[74273]: osdmap e38: 3 total, 3 up, 3 in
Oct 11 03:38:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/459311002' entity='client.rgw.rgw.compute-0.xmbhit' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 11 03:38:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/459311002' entity='client.rgw.rgw.compute-0.xmbhit' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 11 03:38:20 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 39 pg[11.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v90: 11 pgs: 2 unknown, 9 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 1)
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 11 03:38:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Oct 11 03:38:20 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:38:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]: {
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:     "0": [
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:         {
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "devices": [
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "/dev/loop3"
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             ],
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "lv_name": "ceph_lv0",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "lv_size": "21470642176",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "name": "ceph_lv0",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "tags": {
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.cluster_name": "ceph",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.crush_device_class": "",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.encrypted": "0",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.osd_id": "0",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.type": "block",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.vdo": "0"
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             },
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "type": "block",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "vg_name": "ceph_vg0"
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:         }
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:     ],
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:     "1": [
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:         {
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "devices": [
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "/dev/loop4"
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             ],
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "lv_name": "ceph_lv1",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "lv_size": "21470642176",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "name": "ceph_lv1",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "tags": {
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.cluster_name": "ceph",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.crush_device_class": "",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.encrypted": "0",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.osd_id": "1",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.type": "block",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.vdo": "0"
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             },
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "type": "block",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "vg_name": "ceph_vg1"
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:         }
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:     ],
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:     "2": [
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:         {
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "devices": [
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "/dev/loop5"
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             ],
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "lv_name": "ceph_lv2",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "lv_size": "21470642176",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "name": "ceph_lv2",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "tags": {
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.cluster_name": "ceph",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.crush_device_class": "",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.encrypted": "0",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.osd_id": "2",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.type": "block",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:                 "ceph.vdo": "0"
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             },
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "type": "block",
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:             "vg_name": "ceph_vg2"
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:         }
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]:     ]
Oct 11 03:38:20 compute-0 elegant_zhukovsky[101811]: }
Oct 11 03:38:21 compute-0 systemd[1]: libpod-ce79a245688aa1238f48d5aa3c16be3870ba665b72eac50563d4bfe9b9190432.scope: Deactivated successfully.
Oct 11 03:38:21 compute-0 podman[101770]: 2025-10-11 03:38:21.028418164 +0000 UTC m=+1.008541913 container died ce79a245688aa1238f48d5aa3c16be3870ba665b72eac50563d4bfe9b9190432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:38:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Oct 11 03:38:21 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3221156618' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct 11 03:38:21 compute-0 flamboyant_saha[101831]: mimic
Oct 11 03:38:21 compute-0 systemd[1]: libpod-5de14012da99b274f2f8dce002db64e64cb6b7eddc8ee0e330aa14f0eff59e09.scope: Deactivated successfully.
Oct 11 03:38:21 compute-0 podman[101817]: 2025-10-11 03:38:21.062416375 +0000 UTC m=+0.744206946 container died 5de14012da99b274f2f8dce002db64e64cb6b7eddc8ee0e330aa14f0eff59e09 (image=quay.io/ceph/ceph:v18, name=flamboyant_saha, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:38:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-66e7588b35eeddc90184cff8efa5c20b12ce352536d1dd3dfac8b9562d1afb91-merged.mount: Deactivated successfully.
Oct 11 03:38:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-0eb3de16df1a07881b1c156523ef65b38dd8f820f72130129959c3d994efb152-merged.mount: Deactivated successfully.
Oct 11 03:38:21 compute-0 podman[101817]: 2025-10-11 03:38:21.1204814 +0000 UTC m=+0.802271941 container remove 5de14012da99b274f2f8dce002db64e64cb6b7eddc8ee0e330aa14f0eff59e09 (image=quay.io/ceph/ceph:v18, name=flamboyant_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 03:38:21 compute-0 systemd[1]: libpod-conmon-5de14012da99b274f2f8dce002db64e64cb6b7eddc8ee0e330aa14f0eff59e09.scope: Deactivated successfully.
Oct 11 03:38:21 compute-0 podman[101770]: 2025-10-11 03:38:21.136698904 +0000 UTC m=+1.116822643 container remove ce79a245688aa1238f48d5aa3c16be3870ba665b72eac50563d4bfe9b9190432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 03:38:21 compute-0 sudo[101807]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:21 compute-0 systemd[1]: libpod-conmon-ce79a245688aa1238f48d5aa3c16be3870ba665b72eac50563d4bfe9b9190432.scope: Deactivated successfully.
Oct 11 03:38:21 compute-0 sudo[101650]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:21 compute-0 sudo[101888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:21 compute-0 sudo[101888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:21 compute-0 sudo[101888]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:21 compute-0 sudo[101913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:38:21 compute-0 sudo[101913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:21 compute-0 sudo[101913]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:21 compute-0 sudo[101938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:21 compute-0 sudo[101938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:21 compute-0 sudo[101938]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:21 compute-0 sudo[101963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 03:38:21 compute-0 sudo[101963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Oct 11 03:38:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/459311002' entity='client.rgw.rgw.compute-0.xmbhit' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 11 03:38:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 11 03:38:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Oct 11 03:38:21 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Oct 11 03:38:21 compute-0 ceph-mgr[74563]: [progress INFO root] update: starting ev 303f1d66-ef18-43be-abb6-dbef2104eae9 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct 11 03:38:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Oct 11 03:38:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 03:38:21 compute-0 ceph-mon[74273]: osdmap e39: 3 total, 3 up, 3 in
Oct 11 03:38:21 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/459311002' entity='client.rgw.rgw.compute-0.xmbhit' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 11 03:38:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 03:38:21 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3221156618' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct 11 03:38:21 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/459311002' entity='client.rgw.rgw.compute-0.xmbhit' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 11 03:38:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 11 03:38:21 compute-0 ceph-mon[74273]: osdmap e40: 3 total, 3 up, 3 in
Oct 11 03:38:21 compute-0 radosgw[100230]: LDAP not started since no server URIs were provided in the configuration.
Oct 11 03:38:21 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-rgw-rgw-compute-0-xmbhit[100226]: 2025-10-11T03:38:21.850+0000 7fdbbdd6d940 -1 LDAP not started since no server URIs were provided in the configuration.
Oct 11 03:38:21 compute-0 radosgw[100230]: framework: beast
Oct 11 03:38:21 compute-0 radosgw[100230]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct 11 03:38:21 compute-0 radosgw[100230]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct 11 03:38:21 compute-0 radosgw[100230]: starting handler: beast
Oct 11 03:38:21 compute-0 radosgw[100230]: set uid:gid to 167:167 (ceph:ceph)
Oct 11 03:38:21 compute-0 podman[102034]: 2025-10-11 03:38:21.908189452 +0000 UTC m=+0.044951499 container create 09c6944488207ec08099dfccd187540d671919a174cdaa9f2063a4550c995093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_gauss, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:38:21 compute-0 systemd[1]: Started libpod-conmon-09c6944488207ec08099dfccd187540d671919a174cdaa9f2063a4550c995093.scope.
Oct 11 03:38:21 compute-0 radosgw[100230]: mgrc service_daemon_register rgw.14273 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.xmbhit,kernel_description=#1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025,kernel_version=5.14.0-621.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864348,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=8858ba12-287a-441d-a130-a0494e92033b,zone_name=default,zonegroup_id=a6eb7032-11aa-4b6e-829e-ea18c039af52,zonegroup_name=default}
Oct 11 03:38:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:21 compute-0 podman[102034]: 2025-10-11 03:38:21.983324515 +0000 UTC m=+0.120086592 container init 09c6944488207ec08099dfccd187540d671919a174cdaa9f2063a4550c995093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_gauss, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 03:38:21 compute-0 podman[102034]: 2025-10-11 03:38:21.887946206 +0000 UTC m=+0.024708263 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:38:21 compute-0 sudo[102615]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cegqefjttxootdggmrklkulpxadnsfhj ; /usr/bin/python3'
Oct 11 03:38:21 compute-0 sudo[102615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:38:21 compute-0 podman[102034]: 2025-10-11 03:38:21.990681811 +0000 UTC m=+0.127443868 container start 09c6944488207ec08099dfccd187540d671919a174cdaa9f2063a4550c995093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_gauss, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 03:38:21 compute-0 hardcore_gauss[102520]: 167 167
Oct 11 03:38:21 compute-0 systemd[1]: libpod-09c6944488207ec08099dfccd187540d671919a174cdaa9f2063a4550c995093.scope: Deactivated successfully.
Oct 11 03:38:21 compute-0 podman[102034]: 2025-10-11 03:38:21.99529657 +0000 UTC m=+0.132058617 container attach 09c6944488207ec08099dfccd187540d671919a174cdaa9f2063a4550c995093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_gauss, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 03:38:21 compute-0 podman[102034]: 2025-10-11 03:38:21.99565027 +0000 UTC m=+0.132412317 container died 09c6944488207ec08099dfccd187540d671919a174cdaa9f2063a4550c995093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 03:38:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fcf56eac5ba1e9144f7b92ed45c4059f68c07d0586a223b2009a2f625608f31-merged.mount: Deactivated successfully.
Oct 11 03:38:22 compute-0 podman[102034]: 2025-10-11 03:38:22.042628424 +0000 UTC m=+0.179390471 container remove 09c6944488207ec08099dfccd187540d671919a174cdaa9f2063a4550c995093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:38:22 compute-0 systemd[1]: libpod-conmon-09c6944488207ec08099dfccd187540d671919a174cdaa9f2063a4550c995093.scope: Deactivated successfully.
Oct 11 03:38:22 compute-0 python3[102619]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:38:22 compute-0 podman[102633]: 2025-10-11 03:38:22.195925404 +0000 UTC m=+0.043846928 container create bab6ee29420e959c3c5910e6d849717b25a9b7e09b22ea671d33ad59a59c4bd7 (image=quay.io/ceph/ceph:v18, name=zen_gates, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 03:38:22 compute-0 systemd[1]: Started libpod-conmon-bab6ee29420e959c3c5910e6d849717b25a9b7e09b22ea671d33ad59a59c4bd7.scope.
Oct 11 03:38:22 compute-0 podman[102648]: 2025-10-11 03:38:22.245751758 +0000 UTC m=+0.057038897 container create de35e6c1f96b01862a8f6b1dd6ce49c2d62dc5aac8ba9e375c91ef6a9727f6b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 03:38:22 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac2fd6d8e0203a148139d50317ddcde668dc9e09339c56b8e7147754b37884a3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac2fd6d8e0203a148139d50317ddcde668dc9e09339c56b8e7147754b37884a3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:22 compute-0 podman[102633]: 2025-10-11 03:38:22.180403749 +0000 UTC m=+0.028325293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:38:22 compute-0 podman[102633]: 2025-10-11 03:38:22.279306937 +0000 UTC m=+0.127228541 container init bab6ee29420e959c3c5910e6d849717b25a9b7e09b22ea671d33ad59a59c4bd7 (image=quay.io/ceph/ceph:v18, name=zen_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:38:22 compute-0 podman[102633]: 2025-10-11 03:38:22.288973878 +0000 UTC m=+0.136895402 container start bab6ee29420e959c3c5910e6d849717b25a9b7e09b22ea671d33ad59a59c4bd7 (image=quay.io/ceph/ceph:v18, name=zen_gates, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:38:22 compute-0 podman[102633]: 2025-10-11 03:38:22.29193265 +0000 UTC m=+0.139854224 container attach bab6ee29420e959c3c5910e6d849717b25a9b7e09b22ea671d33ad59a59c4bd7 (image=quay.io/ceph/ceph:v18, name=zen_gates, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:38:22 compute-0 systemd[1]: Started libpod-conmon-de35e6c1f96b01862a8f6b1dd6ce49c2d62dc5aac8ba9e375c91ef6a9727f6b6.scope.
Oct 11 03:38:22 compute-0 podman[102648]: 2025-10-11 03:38:22.219314508 +0000 UTC m=+0.030601667 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:38:22 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ad70291197ecbdb4b8eb40db82c121099d2ea520a78018f8ee452a4428afce7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ad70291197ecbdb4b8eb40db82c121099d2ea520a78018f8ee452a4428afce7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ad70291197ecbdb4b8eb40db82c121099d2ea520a78018f8ee452a4428afce7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ad70291197ecbdb4b8eb40db82c121099d2ea520a78018f8ee452a4428afce7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:22 compute-0 podman[102648]: 2025-10-11 03:38:22.349494211 +0000 UTC m=+0.160781390 container init de35e6c1f96b01862a8f6b1dd6ce49c2d62dc5aac8ba9e375c91ef6a9727f6b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 03:38:22 compute-0 podman[102648]: 2025-10-11 03:38:22.358911654 +0000 UTC m=+0.170198803 container start de35e6c1f96b01862a8f6b1dd6ce49c2d62dc5aac8ba9e375c91ef6a9727f6b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gates, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct 11 03:38:22 compute-0 podman[102648]: 2025-10-11 03:38:22.362427253 +0000 UTC m=+0.173714402 container attach de35e6c1f96b01862a8f6b1dd6ce49c2d62dc5aac8ba9e375c91ef6a9727f6b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gates, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:38:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Oct 11 03:38:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 11 03:38:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Oct 11 03:38:22 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Oct 11 03:38:22 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v93: 11 pgs: 1 unknown, 10 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 148 KiB/s rd, 8.5 KiB/s wr, 326 op/s
Oct 11 03:38:22 compute-0 ceph-mgr[74563]: [progress INFO root] update: starting ev 567cc96c-86d8-418d-a8b7-75907884310a (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct 11 03:38:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 11 03:38:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 11 03:38:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Oct 11 03:38:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 03:38:22 compute-0 ceph-mon[74273]: pgmap v90: 11 pgs: 2 unknown, 9 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Oct 11 03:38:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 03:38:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 11 03:38:22 compute-0 ceph-mon[74273]: osdmap e41: 3 total, 3 up, 3 in
Oct 11 03:38:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Oct 11 03:38:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4068532383' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct 11 03:38:22 compute-0 zen_gates[102667]: 
Oct 11 03:38:22 compute-0 zen_gates[102667]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":6}}
Oct 11 03:38:22 compute-0 systemd[1]: libpod-bab6ee29420e959c3c5910e6d849717b25a9b7e09b22ea671d33ad59a59c4bd7.scope: Deactivated successfully.
Oct 11 03:38:22 compute-0 podman[102633]: 2025-10-11 03:38:22.91555349 +0000 UTC m=+0.763475094 container died bab6ee29420e959c3c5910e6d849717b25a9b7e09b22ea671d33ad59a59c4bd7 (image=quay.io/ceph/ceph:v18, name=zen_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 03:38:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac2fd6d8e0203a148139d50317ddcde668dc9e09339c56b8e7147754b37884a3-merged.mount: Deactivated successfully.
Oct 11 03:38:22 compute-0 podman[102633]: 2025-10-11 03:38:22.989432728 +0000 UTC m=+0.837354252 container remove bab6ee29420e959c3c5910e6d849717b25a9b7e09b22ea671d33ad59a59c4bd7 (image=quay.io/ceph/ceph:v18, name=zen_gates, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 03:38:23 compute-0 systemd[1]: libpod-conmon-bab6ee29420e959c3c5910e6d849717b25a9b7e09b22ea671d33ad59a59c4bd7.scope: Deactivated successfully.
Oct 11 03:38:23 compute-0 sudo[102615]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:23 compute-0 inspiring_gates[102675]: {
Oct 11 03:38:23 compute-0 inspiring_gates[102675]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 03:38:23 compute-0 inspiring_gates[102675]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:38:23 compute-0 inspiring_gates[102675]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 03:38:23 compute-0 inspiring_gates[102675]:         "osd_id": 1,
Oct 11 03:38:23 compute-0 inspiring_gates[102675]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:38:23 compute-0 inspiring_gates[102675]:         "type": "bluestore"
Oct 11 03:38:23 compute-0 inspiring_gates[102675]:     },
Oct 11 03:38:23 compute-0 inspiring_gates[102675]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 03:38:23 compute-0 inspiring_gates[102675]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:38:23 compute-0 inspiring_gates[102675]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 03:38:23 compute-0 inspiring_gates[102675]:         "osd_id": 2,
Oct 11 03:38:23 compute-0 inspiring_gates[102675]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:38:23 compute-0 inspiring_gates[102675]:         "type": "bluestore"
Oct 11 03:38:23 compute-0 inspiring_gates[102675]:     },
Oct 11 03:38:23 compute-0 inspiring_gates[102675]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 03:38:23 compute-0 inspiring_gates[102675]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:38:23 compute-0 inspiring_gates[102675]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 03:38:23 compute-0 inspiring_gates[102675]:         "osd_id": 0,
Oct 11 03:38:23 compute-0 inspiring_gates[102675]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:38:23 compute-0 inspiring_gates[102675]:         "type": "bluestore"
Oct 11 03:38:23 compute-0 inspiring_gates[102675]:     }
Oct 11 03:38:23 compute-0 inspiring_gates[102675]: }
Oct 11 03:38:23 compute-0 systemd[1]: libpod-de35e6c1f96b01862a8f6b1dd6ce49c2d62dc5aac8ba9e375c91ef6a9727f6b6.scope: Deactivated successfully.
Oct 11 03:38:23 compute-0 systemd[1]: libpod-de35e6c1f96b01862a8f6b1dd6ce49c2d62dc5aac8ba9e375c91ef6a9727f6b6.scope: Consumed 1.073s CPU time.
Oct 11 03:38:23 compute-0 podman[102740]: 2025-10-11 03:38:23.495077635 +0000 UTC m=+0.037235962 container died de35e6c1f96b01862a8f6b1dd6ce49c2d62dc5aac8ba9e375c91ef6a9727f6b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 03:38:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ad70291197ecbdb4b8eb40db82c121099d2ea520a78018f8ee452a4428afce7-merged.mount: Deactivated successfully.
Oct 11 03:38:23 compute-0 podman[102740]: 2025-10-11 03:38:23.572826951 +0000 UTC m=+0.114985238 container remove de35e6c1f96b01862a8f6b1dd6ce49c2d62dc5aac8ba9e375c91ef6a9727f6b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 03:38:23 compute-0 systemd[1]: libpod-conmon-de35e6c1f96b01862a8f6b1dd6ce49c2d62dc5aac8ba9e375c91ef6a9727f6b6.scope: Deactivated successfully.
Oct 11 03:38:23 compute-0 sudo[101963]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:38:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:38:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:23 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev b517a238-aa0e-484f-aebc-8bbae6a1af68 does not exist
Oct 11 03:38:23 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 05497f3e-4e74-409e-97d2-d41e0f196c6e does not exist
Oct 11 03:38:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Oct 11 03:38:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 03:38:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 03:38:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 11 03:38:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Oct 11 03:38:23 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Oct 11 03:38:23 compute-0 ceph-mgr[74563]: [progress INFO root] update: starting ev ee352db8-62d0-447e-8839-7035662c2779 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct 11 03:38:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Oct 11 03:38:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 03:38:23 compute-0 ceph-mon[74273]: pgmap v93: 11 pgs: 1 unknown, 10 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 148 KiB/s rd, 8.5 KiB/s wr, 326 op/s
Oct 11 03:38:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 03:38:23 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4068532383' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct 11 03:38:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 03:38:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 03:38:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 11 03:38:23 compute-0 sudo[102756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:23 compute-0 sudo[102756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:23 compute-0 sudo[102756]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:23 compute-0 sudo[102781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 03:38:23 compute-0 sudo[102781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:23 compute-0 sudo[102781]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:23 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 42 pg[2.0( empty local-lis/les=18/19 n=0 ec=13/13 lis/c=18/18 les/c/f=19/19/0 sis=42 pruub=13.230977058s) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active pruub 69.877388000s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:23 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 42 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=42 pruub=10.182303429s) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active pruub 72.078186035s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:23 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 42 pg[2.0( empty local-lis/les=18/19 n=0 ec=13/13 lis/c=18/18 les/c/f=19/19/0 sis=42 pruub=13.230977058s) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown pruub 69.877388000s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:23 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 42 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=42 pruub=10.182303429s) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown pruub 72.078186035s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:23 compute-0 sudo[102806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:23 compute-0 sudo[102806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:23 compute-0 sudo[102806]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:24 compute-0 sudo[102831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:38:24 compute-0 sudo[102831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:24 compute-0 sudo[102831]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:24 compute-0 sudo[102856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:24 compute-0 sudo[102856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:24 compute-0 sudo[102856]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:24 compute-0 sudo[102881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 03:38:24 compute-0 sudo[102881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:38:24 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v95: 73 pgs: 62 unknown, 11 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 147 KiB/s rd, 8.4 KiB/s wr, 324 op/s
Oct 11 03:38:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 11 03:38:24 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Oct 11 03:38:24 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 11 03:38:24 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 03:38:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Oct 11 03:38:24 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Oct 11 03:38:24 compute-0 ceph-mgr[74563]: [progress INFO root] update: starting ev 17dd071f-6fee-408d-8dfa-8b3b90fe01f8 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.1e( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.1( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.c( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Oct 11 03:38:24 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.e( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.10( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.12( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.14( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.1a( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=18/19 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.1c( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.1a( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.19( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.4( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.b( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.2( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.d( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.10( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.13( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.14( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=15/16 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.1c( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.1a( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.19( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.1e( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.1( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.0( empty local-lis/les=42/43 n=0 ec=13/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.c( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.e( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.10( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.12( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.14( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 43 pg[2.1a( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.b( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.0( empty local-lis/les=42/43 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.2( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.d( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.4( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.10( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.13( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.14( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=15/15 les/c/f=16/16/0 sis=42) [1] r=0 lpr=42 pi=[15,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:24 compute-0 ceph-mon[74273]: osdmap e42: 3 total, 3 up, 3 in
Oct 11 03:38:24 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 03:38:24 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:24 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 11 03:38:24 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 03:38:24 compute-0 ceph-mon[74273]: osdmap e43: 3 total, 3 up, 3 in
Oct 11 03:38:24 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct 11 03:38:24 compute-0 podman[102978]: 2025-10-11 03:38:24.798028025 +0000 UTC m=+0.097509989 container exec 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:38:24 compute-0 podman[102978]: 2025-10-11 03:38:24.915106412 +0000 UTC m=+0.214588316 container exec_died 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 43 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=43 pruub=10.468119621s) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active pruub 79.126472473s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 43 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=43 pruub=10.468119621s) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown pruub 79.126472473s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Oct 11 03:38:25 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct 11 03:38:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.1d( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.1e( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.1f( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.b( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.19( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.3( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-mgr[74563]: [progress INFO root] update: starting ev d976cf27-0fa2-47d3-bad3-4060bbed156a (PG autoscaler increasing pool 6 PGs from 1 to 16)
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.c( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.15( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.16( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.17( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.6( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Oct 11 03:38:25 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.1d( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.1e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.1f( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.b( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.19( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.3( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.0( empty local-lis/les=43/44 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.c( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.15( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.16( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.6( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 44 pg[4.17( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [0] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:25 compute-0 ceph-mon[74273]: pgmap v95: 73 pgs: 62 unknown, 11 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 147 KiB/s rd, 8.4 KiB/s wr, 324 op/s
Oct 11 03:38:25 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct 11 03:38:25 compute-0 ceph-mon[74273]: osdmap e44: 3 total, 3 up, 3 in
Oct 11 03:38:25 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 03:38:25 compute-0 ceph-mgr[74563]: [progress WARNING root] Starting Global Recovery Event,93 pgs not in active + clean state
Oct 11 03:38:25 compute-0 sudo[102881]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:38:25 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:38:25 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:38:25 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:38:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 03:38:25 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:38:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 03:38:25 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:25 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 2c84ef77-58db-4ea2-a15b-11cf6efdad8e does not exist
Oct 11 03:38:25 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 0a23bd34-c775-4a90-8455-a41d2f34c798 does not exist
Oct 11 03:38:25 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev ff82899f-6ada-4c56-9dc7-8bb2d6063920 does not exist
Oct 11 03:38:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 03:38:25 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:38:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 03:38:25 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:38:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:38:25 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:38:25 compute-0 sudo[103139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:25 compute-0 sudo[103139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:25 compute-0 sudo[103139]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:26 compute-0 sudo[103164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:38:26 compute-0 sudo[103164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:26 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Oct 11 03:38:26 compute-0 sudo[103164]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:26 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Oct 11 03:38:26 compute-0 sudo[103189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:26 compute-0 sudo[103189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:26 compute-0 sudo[103189]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:26 compute-0 sudo[103214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 03:38:26 compute-0 sudo[103214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:26 compute-0 podman[103280]: 2025-10-11 03:38:26.668239347 +0000 UTC m=+0.074198827 container create c844f8d070977c150730593579fab520334c49e066e917d80a312045f1e2add1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 03:38:26 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v98: 104 pgs: 93 unknown, 11 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Oct 11 03:38:26 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct 11 03:38:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 11 03:38:26 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:26 compute-0 systemd[1]: Started libpod-conmon-c844f8d070977c150730593579fab520334c49e066e917d80a312045f1e2add1.scope.
Oct 11 03:38:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Oct 11 03:38:26 compute-0 podman[103280]: 2025-10-11 03:38:26.635781329 +0000 UTC m=+0.041740839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:38:26 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 11 03:38:26 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct 11 03:38:26 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 03:38:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Oct 11 03:38:26 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Oct 11 03:38:26 compute-0 ceph-mgr[74563]: [progress INFO root] update: starting ev d4693d22-3301-43b5-a0b2-acaa7a29cd35 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct 11 03:38:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Oct 11 03:38:26 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 03:38:26 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:26 compute-0 podman[103280]: 2025-10-11 03:38:26.774056438 +0000 UTC m=+0.180015898 container init c844f8d070977c150730593579fab520334c49e066e917d80a312045f1e2add1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_dijkstra, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 03:38:26 compute-0 podman[103280]: 2025-10-11 03:38:26.782560326 +0000 UTC m=+0.188519786 container start c844f8d070977c150730593579fab520334c49e066e917d80a312045f1e2add1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_dijkstra, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 03:38:26 compute-0 podman[103280]: 2025-10-11 03:38:26.786415934 +0000 UTC m=+0.192375404 container attach c844f8d070977c150730593579fab520334c49e066e917d80a312045f1e2add1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Oct 11 03:38:26 compute-0 musing_dijkstra[103296]: 167 167
Oct 11 03:38:26 compute-0 systemd[1]: libpod-c844f8d070977c150730593579fab520334c49e066e917d80a312045f1e2add1.scope: Deactivated successfully.
Oct 11 03:38:26 compute-0 podman[103280]: 2025-10-11 03:38:26.791868587 +0000 UTC m=+0.197828047 container died c844f8d070977c150730593579fab520334c49e066e917d80a312045f1e2add1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_dijkstra, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 03:38:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-36db004925cfbc4e8e12d1cfeb6e44c4097ba1778e4acdab1d5567892abcf561-merged.mount: Deactivated successfully.
Oct 11 03:38:26 compute-0 podman[103280]: 2025-10-11 03:38:26.838017568 +0000 UTC m=+0.243976998 container remove c844f8d070977c150730593579fab520334c49e066e917d80a312045f1e2add1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:38:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:38:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:38:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:38:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:38:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:38:26 compute-0 ceph-mon[74273]: 2.1 scrub starts
Oct 11 03:38:26 compute-0 ceph-mon[74273]: 2.1 scrub ok
Oct 11 03:38:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct 11 03:38:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 11 03:38:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct 11 03:38:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 03:38:26 compute-0 ceph-mon[74273]: osdmap e45: 3 total, 3 up, 3 in
Oct 11 03:38:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 03:38:26 compute-0 systemd[1]: libpod-conmon-c844f8d070977c150730593579fab520334c49e066e917d80a312045f1e2add1.scope: Deactivated successfully.
Oct 11 03:38:27 compute-0 podman[103319]: 2025-10-11 03:38:27.032990394 +0000 UTC m=+0.056002668 container create 54af6177c9d67824f94392003b817ee0bd4ea15318ed603cfef163a13124feae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:38:27 compute-0 systemd[75886]: Starting Mark boot as successful...
Oct 11 03:38:27 compute-0 systemd[75886]: Finished Mark boot as successful.
Oct 11 03:38:27 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Oct 11 03:38:27 compute-0 systemd[1]: Started libpod-conmon-54af6177c9d67824f94392003b817ee0bd4ea15318ed603cfef163a13124feae.scope.
Oct 11 03:38:27 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Oct 11 03:38:27 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Oct 11 03:38:27 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6ec10f918678b8db4555dea9d987cca94cc8212998194e283783dc1b5274554/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6ec10f918678b8db4555dea9d987cca94cc8212998194e283783dc1b5274554/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6ec10f918678b8db4555dea9d987cca94cc8212998194e283783dc1b5274554/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6ec10f918678b8db4555dea9d987cca94cc8212998194e283783dc1b5274554/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6ec10f918678b8db4555dea9d987cca94cc8212998194e283783dc1b5274554/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:27 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Oct 11 03:38:27 compute-0 podman[103319]: 2025-10-11 03:38:27.01247719 +0000 UTC m=+0.035489504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:38:27 compute-0 podman[103319]: 2025-10-11 03:38:27.12292249 +0000 UTC m=+0.145934824 container init 54af6177c9d67824f94392003b817ee0bd4ea15318ed603cfef163a13124feae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_goldberg, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 03:38:27 compute-0 podman[103319]: 2025-10-11 03:38:27.134830553 +0000 UTC m=+0.157842847 container start 54af6177c9d67824f94392003b817ee0bd4ea15318ed603cfef163a13124feae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_goldberg, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 03:38:27 compute-0 podman[103319]: 2025-10-11 03:38:27.13863199 +0000 UTC m=+0.161644284 container attach 54af6177c9d67824f94392003b817ee0bd4ea15318ed603cfef163a13124feae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 45 pg[6.0( v 37'39 (0'0,37'39] local-lis/les=21/22 n=22 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=45 pruub=12.759903908s) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 lcod 33'38 mlcod 33'38 active pruub 83.188156128s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 45 pg[6.0( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=45 pruub=12.759903908s) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 lcod 33'38 mlcod 0'0 unknown pruub 83.188156128s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Oct 11 03:38:27 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 11 03:38:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Oct 11 03:38:27 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Oct 11 03:38:27 compute-0 ceph-mgr[74563]: [progress INFO root] update: starting ev 4d114e80-e2a2-4850-8343-d9601033b32b (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct 11 03:38:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Oct 11 03:38:27 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=2 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.4( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=2 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=2 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.8( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=21/22 n=2 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=2 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.2( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=2 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.e( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.c( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=45/46 n=2 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=45/46 n=2 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=45/46 n=2 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.0( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 lcod 33'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=45/46 n=2 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=45/46 n=2 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=45/46 n=2 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:27 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 46 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:27 compute-0 ceph-mon[74273]: pgmap v98: 104 pgs: 93 unknown, 11 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:27 compute-0 ceph-mon[74273]: 2.2 scrub starts
Oct 11 03:38:27 compute-0 ceph-mon[74273]: 2.2 scrub ok
Oct 11 03:38:27 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 11 03:38:27 compute-0 ceph-mon[74273]: osdmap e46: 3 total, 3 up, 3 in
Oct 11 03:38:27 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 03:38:28 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.2 deep-scrub starts
Oct 11 03:38:28 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.2 deep-scrub ok
Oct 11 03:38:28 compute-0 hardcore_goldberg[103336]: --> passed data devices: 0 physical, 3 LVM
Oct 11 03:38:28 compute-0 hardcore_goldberg[103336]: --> relative data size: 1.0
Oct 11 03:38:28 compute-0 hardcore_goldberg[103336]: --> All data devices are unavailable
Oct 11 03:38:28 compute-0 systemd[1]: libpod-54af6177c9d67824f94392003b817ee0bd4ea15318ed603cfef163a13124feae.scope: Deactivated successfully.
Oct 11 03:38:28 compute-0 podman[103319]: 2025-10-11 03:38:28.19249706 +0000 UTC m=+1.215509344 container died 54af6177c9d67824f94392003b817ee0bd4ea15318ed603cfef163a13124feae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 03:38:28 compute-0 systemd[1]: libpod-54af6177c9d67824f94392003b817ee0bd4ea15318ed603cfef163a13124feae.scope: Consumed 1.014s CPU time.
Oct 11 03:38:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6ec10f918678b8db4555dea9d987cca94cc8212998194e283783dc1b5274554-merged.mount: Deactivated successfully.
Oct 11 03:38:28 compute-0 podman[103319]: 2025-10-11 03:38:28.269682089 +0000 UTC m=+1.292694393 container remove 54af6177c9d67824f94392003b817ee0bd4ea15318ed603cfef163a13124feae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_goldberg, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 03:38:28 compute-0 systemd[1]: libpod-conmon-54af6177c9d67824f94392003b817ee0bd4ea15318ed603cfef163a13124feae.scope: Deactivated successfully.
Oct 11 03:38:28 compute-0 sudo[103214]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 45 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=45 pruub=9.751561165s) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active pruub 70.877510071s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=45 pruub=9.751561165s) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown pruub 70.877510071s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.10( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.11( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.14( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.15( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.e( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.f( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.12( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.13( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.1( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.2( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.3( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.4( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.5( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.8( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.9( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.6( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.7( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.a( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.b( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.c( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.d( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.18( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.19( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.1a( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.1b( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.1c( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.1d( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.1e( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.1f( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.16( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 46 pg[5.17( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 sudo[103378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:28 compute-0 sudo[103378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:28 compute-0 sudo[103378]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:28 compute-0 sudo[103403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:38:28 compute-0 sudo[103403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:28 compute-0 sudo[103403]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:28 compute-0 sudo[103428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:28 compute-0 sudo[103428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:28 compute-0 sudo[103428]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:28 compute-0 sudo[103453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 03:38:28 compute-0 sudo[103453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:28 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v101: 150 pgs: 1 peering, 46 unknown, 103 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 69 op/s
Oct 11 03:38:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 11 03:38:28 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 11 03:38:28 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Oct 11 03:38:28 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 11 03:38:28 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 03:38:28 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 03:38:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Oct 11 03:38:28 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Oct 11 03:38:28 compute-0 ceph-mgr[74563]: [progress INFO root] update: starting ev 66fcc8cc-450b-48e5-ab4e-281bb27fb8c3 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct 11 03:38:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Oct 11 03:38:28 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 03:38:28 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 47 pg[8.0( v 33'4 (0'0,33'4] local-lis/les=32/33 n=4 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=47 pruub=9.865345955s) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 33'3 mlcod 33'3 active pruub 76.644935608s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:28 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 47 pg[7.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=47 pruub=13.520667076s) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active pruub 80.300361633s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.1f( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 47 pg[7.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=47 pruub=13.520667076s) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown pruub 80.300361633s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.10( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.11( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.1e( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.14( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.12( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.15( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.13( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.16( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.1d( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.9( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.17( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.a( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.c( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.8( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 47 pg[8.0( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=47 pruub=9.865345955s) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 33'3 mlcod 0'0 unknown pruub 76.644935608s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.f( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.7( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.5( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.0( empty local-lis/les=45/47 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.6( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.3( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.2( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.e( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.4( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.b( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.1c( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.d( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.1a( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.1b( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.19( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.18( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 47 pg[5.1( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:28 compute-0 ceph-mon[74273]: 4.1 scrub starts
Oct 11 03:38:28 compute-0 ceph-mon[74273]: 4.1 scrub ok
Oct 11 03:38:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 11 03:38:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 03:38:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 03:38:28 compute-0 ceph-mon[74273]: osdmap e47: 3 total, 3 up, 3 in
Oct 11 03:38:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 03:38:29 compute-0 podman[103519]: 2025-10-11 03:38:29.125283332 +0000 UTC m=+0.065101653 container create ef488e8c6fde6557f07f7382cd5761e222260e3f3abb917662d9bb9b36a3d265 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:38:29 compute-0 systemd[1]: Started libpod-conmon-ef488e8c6fde6557f07f7382cd5761e222260e3f3abb917662d9bb9b36a3d265.scope.
Oct 11 03:38:29 compute-0 podman[103519]: 2025-10-11 03:38:29.096937658 +0000 UTC m=+0.036755949 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:38:29 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:29 compute-0 podman[103519]: 2025-10-11 03:38:29.228583312 +0000 UTC m=+0.168401633 container init ef488e8c6fde6557f07f7382cd5761e222260e3f3abb917662d9bb9b36a3d265 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:38:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:38:29 compute-0 podman[103519]: 2025-10-11 03:38:29.241294258 +0000 UTC m=+0.181112589 container start ef488e8c6fde6557f07f7382cd5761e222260e3f3abb917662d9bb9b36a3d265 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 03:38:29 compute-0 podman[103519]: 2025-10-11 03:38:29.245910557 +0000 UTC m=+0.185728928 container attach ef488e8c6fde6557f07f7382cd5761e222260e3f3abb917662d9bb9b36a3d265 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:38:29 compute-0 elegant_wozniak[103535]: 167 167
Oct 11 03:38:29 compute-0 systemd[1]: libpod-ef488e8c6fde6557f07f7382cd5761e222260e3f3abb917662d9bb9b36a3d265.scope: Deactivated successfully.
Oct 11 03:38:29 compute-0 podman[103519]: 2025-10-11 03:38:29.250223688 +0000 UTC m=+0.190042009 container died ef488e8c6fde6557f07f7382cd5761e222260e3f3abb917662d9bb9b36a3d265 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 03:38:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-3262413760bc6b8eab3a8a0bd421ad0d77f139c7c343cd1334bc42a170f0bb34-merged.mount: Deactivated successfully.
Oct 11 03:38:29 compute-0 podman[103519]: 2025-10-11 03:38:29.292769758 +0000 UTC m=+0.232588059 container remove ef488e8c6fde6557f07f7382cd5761e222260e3f3abb917662d9bb9b36a3d265 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wozniak, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 03:38:29 compute-0 systemd[1]: libpod-conmon-ef488e8c6fde6557f07f7382cd5761e222260e3f3abb917662d9bb9b36a3d265.scope: Deactivated successfully.
Oct 11 03:38:29 compute-0 podman[103559]: 2025-10-11 03:38:29.510089629 +0000 UTC m=+0.064104874 container create 3248d778d5b81787e8b1727ab045120f94c1c22a31e30df37a97e57552293ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 03:38:29 compute-0 systemd[1]: Started libpod-conmon-3248d778d5b81787e8b1727ab045120f94c1c22a31e30df37a97e57552293ecf.scope.
Oct 11 03:38:29 compute-0 podman[103559]: 2025-10-11 03:38:29.483777453 +0000 UTC m=+0.037792718 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:38:29 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf6527a0fcda470eb359836804a20cb8d15949ef5d666ec9c5f809f6b767e2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf6527a0fcda470eb359836804a20cb8d15949ef5d666ec9c5f809f6b767e2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf6527a0fcda470eb359836804a20cb8d15949ef5d666ec9c5f809f6b767e2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf6527a0fcda470eb359836804a20cb8d15949ef5d666ec9c5f809f6b767e2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:29 compute-0 podman[103559]: 2025-10-11 03:38:29.610985813 +0000 UTC m=+0.165001118 container init 3248d778d5b81787e8b1727ab045120f94c1c22a31e30df37a97e57552293ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dubinsky, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:38:29 compute-0 podman[103559]: 2025-10-11 03:38:29.61731534 +0000 UTC m=+0.171330595 container start 3248d778d5b81787e8b1727ab045120f94c1c22a31e30df37a97e57552293ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dubinsky, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 03:38:29 compute-0 podman[103559]: 2025-10-11 03:38:29.621579259 +0000 UTC m=+0.175594544 container attach 3248d778d5b81787e8b1727ab045120f94c1c22a31e30df37a97e57552293ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dubinsky, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 03:38:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Oct 11 03:38:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 11 03:38:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Oct 11 03:38:29 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Oct 11 03:38:29 compute-0 ceph-mgr[74563]: [progress INFO root] update: starting ev 415c2e87-cbcd-47d4-9202-2c5edf08bba6 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.1c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.13( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.1d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.12( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.1e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.11( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.1f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.10( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.17( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.19( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.16( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.15( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.1a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.1b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.14( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.4( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=1 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.5( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.b( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.a( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.6( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.9( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.7( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.8( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.d( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.9( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.6( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.4( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.f( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.1( v 33'4 (0'0,33'4] local-lis/les=32/33 n=1 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.e( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.3( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=1 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.c( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.5( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.8( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.7( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.1( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.2( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.3( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.13( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.1c( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.12( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.1d( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.1e( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.1f( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.17( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.18( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.16( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.19( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.1a( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.14( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.1b( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=1 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.13( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.12( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.1e( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.10( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.17( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.19( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.16( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.14( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.5( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=47/48 n=1 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.9( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.7( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.d( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.6( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.4( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.0( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 33'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.1( v 33'4 (0'0,33'4] local-lis/les=47/48 n=1 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.3( v 33'4 (0'0,33'4] local-lis/les=47/48 n=1 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.a( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.8( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.7( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.3( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.13( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.1d( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.1e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.1f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.17( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.16( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.18( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.19( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=47/48 n=1 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 48 pg[7.0( empty local-lis/les=47/48 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:29 compute-0 ceph-mon[74273]: 4.2 deep-scrub starts
Oct 11 03:38:29 compute-0 ceph-mon[74273]: 4.2 deep-scrub ok
Oct 11 03:38:29 compute-0 ceph-mon[74273]: pgmap v101: 150 pgs: 1 peering, 46 unknown, 103 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 69 op/s
Oct 11 03:38:29 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 11 03:38:29 compute-0 ceph-mon[74273]: osdmap e48: 3 total, 3 up, 3 in
Oct 11 03:38:29 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 03:38:30 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Oct 11 03:38:30 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]: {
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:     "0": [
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:         {
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "devices": [
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "/dev/loop3"
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             ],
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "lv_name": "ceph_lv0",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "lv_size": "21470642176",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "name": "ceph_lv0",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "tags": {
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.cluster_name": "ceph",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.crush_device_class": "",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.encrypted": "0",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.osd_id": "0",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.type": "block",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.vdo": "0"
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             },
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "type": "block",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "vg_name": "ceph_vg0"
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:         }
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:     ],
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:     "1": [
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:         {
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "devices": [
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "/dev/loop4"
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             ],
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "lv_name": "ceph_lv1",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "lv_size": "21470642176",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "name": "ceph_lv1",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "tags": {
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.cluster_name": "ceph",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.crush_device_class": "",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.encrypted": "0",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.osd_id": "1",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.type": "block",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.vdo": "0"
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             },
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "type": "block",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "vg_name": "ceph_vg1"
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:         }
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:     ],
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:     "2": [
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:         {
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "devices": [
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "/dev/loop5"
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             ],
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "lv_name": "ceph_lv2",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "lv_size": "21470642176",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "name": "ceph_lv2",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "tags": {
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.cluster_name": "ceph",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.crush_device_class": "",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.encrypted": "0",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.osd_id": "2",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.type": "block",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:                 "ceph.vdo": "0"
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             },
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "type": "block",
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:             "vg_name": "ceph_vg2"
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:         }
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]:     ]
Oct 11 03:38:30 compute-0 fervent_dubinsky[103575]: }
Oct 11 03:38:30 compute-0 systemd[1]: libpod-3248d778d5b81787e8b1727ab045120f94c1c22a31e30df37a97e57552293ecf.scope: Deactivated successfully.
Oct 11 03:38:30 compute-0 podman[103559]: 2025-10-11 03:38:30.41284356 +0000 UTC m=+0.966858795 container died 3248d778d5b81787e8b1727ab045120f94c1c22a31e30df37a97e57552293ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dubinsky, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 03:38:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bf6527a0fcda470eb359836804a20cb8d15949ef5d666ec9c5f809f6b767e2f-merged.mount: Deactivated successfully.
Oct 11 03:38:30 compute-0 podman[103559]: 2025-10-11 03:38:30.481344577 +0000 UTC m=+1.035359802 container remove 3248d778d5b81787e8b1727ab045120f94c1c22a31e30df37a97e57552293ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 03:38:30 compute-0 systemd[1]: libpod-conmon-3248d778d5b81787e8b1727ab045120f94c1c22a31e30df37a97e57552293ecf.scope: Deactivated successfully.
Oct 11 03:38:30 compute-0 sudo[103453]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:30 compute-0 sudo[103598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:30 compute-0 sudo[103598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:30 compute-0 sudo[103598]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:30 compute-0 sudo[103623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:38:30 compute-0 sudo[103623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:30 compute-0 sudo[103623]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:30 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v104: 212 pgs: 1 peering, 108 unknown, 103 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 69 op/s
Oct 11 03:38:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 11 03:38:30 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 11 03:38:30 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Oct 11 03:38:30 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 11 03:38:30 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 03:38:30 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 03:38:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Oct 11 03:38:30 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Oct 11 03:38:30 compute-0 ceph-mgr[74563]: [progress INFO root] update: starting ev ee790f44-23e8-46b7-b084-a4f7bc5874d4 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct 11 03:38:30 compute-0 ceph-mgr[74563]: [progress INFO root] complete: finished ev 303f1d66-ef18-43be-abb6-dbef2104eae9 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct 11 03:38:30 compute-0 ceph-mgr[74563]: [progress INFO root] Completed event 303f1d66-ef18-43be-abb6-dbef2104eae9 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 9 seconds
Oct 11 03:38:30 compute-0 ceph-mgr[74563]: [progress INFO root] complete: finished ev 567cc96c-86d8-418d-a8b7-75907884310a (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct 11 03:38:30 compute-0 ceph-mgr[74563]: [progress INFO root] Completed event 567cc96c-86d8-418d-a8b7-75907884310a (PG autoscaler increasing pool 3 PGs from 1 to 32) in 8 seconds
Oct 11 03:38:30 compute-0 sudo[103648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:30 compute-0 ceph-mgr[74563]: [progress INFO root] complete: finished ev ee352db8-62d0-447e-8839-7035662c2779 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct 11 03:38:30 compute-0 ceph-mgr[74563]: [progress INFO root] Completed event ee352db8-62d0-447e-8839-7035662c2779 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 7 seconds
Oct 11 03:38:30 compute-0 ceph-mgr[74563]: [progress INFO root] complete: finished ev 17dd071f-6fee-408d-8dfa-8b3b90fe01f8 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct 11 03:38:30 compute-0 ceph-mgr[74563]: [progress INFO root] Completed event 17dd071f-6fee-408d-8dfa-8b3b90fe01f8 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 6 seconds
Oct 11 03:38:30 compute-0 ceph-mgr[74563]: [progress INFO root] complete: finished ev d976cf27-0fa2-47d3-bad3-4060bbed156a (PG autoscaler increasing pool 6 PGs from 1 to 16)
Oct 11 03:38:30 compute-0 ceph-mgr[74563]: [progress INFO root] Completed event d976cf27-0fa2-47d3-bad3-4060bbed156a (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Oct 11 03:38:30 compute-0 ceph-mgr[74563]: [progress INFO root] complete: finished ev d4693d22-3301-43b5-a0b2-acaa7a29cd35 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct 11 03:38:30 compute-0 ceph-mgr[74563]: [progress INFO root] Completed event d4693d22-3301-43b5-a0b2-acaa7a29cd35 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Oct 11 03:38:30 compute-0 ceph-mgr[74563]: [progress INFO root] complete: finished ev 4d114e80-e2a2-4850-8343-d9601033b32b (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct 11 03:38:30 compute-0 ceph-mgr[74563]: [progress INFO root] Completed event 4d114e80-e2a2-4850-8343-d9601033b32b (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Oct 11 03:38:30 compute-0 ceph-mgr[74563]: [progress INFO root] complete: finished ev 66fcc8cc-450b-48e5-ab4e-281bb27fb8c3 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct 11 03:38:30 compute-0 ceph-mgr[74563]: [progress INFO root] Completed event 66fcc8cc-450b-48e5-ab4e-281bb27fb8c3 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Oct 11 03:38:30 compute-0 sudo[103648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:30 compute-0 ceph-mgr[74563]: [progress INFO root] complete: finished ev 415c2e87-cbcd-47d4-9202-2c5edf08bba6 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct 11 03:38:30 compute-0 ceph-mgr[74563]: [progress INFO root] Completed event 415c2e87-cbcd-47d4-9202-2c5edf08bba6 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Oct 11 03:38:30 compute-0 ceph-mgr[74563]: [progress INFO root] complete: finished ev ee790f44-23e8-46b7-b084-a4f7bc5874d4 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct 11 03:38:30 compute-0 ceph-mgr[74563]: [progress INFO root] Completed event ee790f44-23e8-46b7-b084-a4f7bc5874d4 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Oct 11 03:38:30 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 3.1 deep-scrub starts
Oct 11 03:38:30 compute-0 sudo[103648]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:30 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 3.1 deep-scrub ok
Oct 11 03:38:30 compute-0 sudo[103673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 03:38:30 compute-0 sudo[103673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:30 compute-0 ceph-mon[74273]: 2.3 scrub starts
Oct 11 03:38:30 compute-0 ceph-mon[74273]: 2.3 scrub ok
Oct 11 03:38:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 11 03:38:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 03:38:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 03:38:30 compute-0 ceph-mon[74273]: osdmap e49: 3 total, 3 up, 3 in
Oct 11 03:38:31 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Oct 11 03:38:31 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Oct 11 03:38:31 compute-0 podman[103738]: 2025-10-11 03:38:31.26481355 +0000 UTC m=+0.059138796 container create c6bf432eebe5e0923030ebf200a3274d3f8d031af50de27ccc8fc858ca484870 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 03:38:31 compute-0 systemd[1]: Started libpod-conmon-c6bf432eebe5e0923030ebf200a3274d3f8d031af50de27ccc8fc858ca484870.scope.
Oct 11 03:38:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:31 compute-0 podman[103738]: 2025-10-11 03:38:31.327023241 +0000 UTC m=+0.121348487 container init c6bf432eebe5e0923030ebf200a3274d3f8d031af50de27ccc8fc858ca484870 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kalam, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:38:31 compute-0 podman[103738]: 2025-10-11 03:38:31.237467345 +0000 UTC m=+0.031792671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:38:31 compute-0 podman[103738]: 2025-10-11 03:38:31.340394105 +0000 UTC m=+0.134719381 container start c6bf432eebe5e0923030ebf200a3274d3f8d031af50de27ccc8fc858ca484870 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 03:38:31 compute-0 podman[103738]: 2025-10-11 03:38:31.344011706 +0000 UTC m=+0.138336952 container attach c6bf432eebe5e0923030ebf200a3274d3f8d031af50de27ccc8fc858ca484870 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:38:31 compute-0 serene_kalam[103754]: 167 167
Oct 11 03:38:31 compute-0 systemd[1]: libpod-c6bf432eebe5e0923030ebf200a3274d3f8d031af50de27ccc8fc858ca484870.scope: Deactivated successfully.
Oct 11 03:38:31 compute-0 podman[103738]: 2025-10-11 03:38:31.347644118 +0000 UTC m=+0.141969364 container died c6bf432eebe5e0923030ebf200a3274d3f8d031af50de27ccc8fc858ca484870 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 03:38:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-c792dd8cbd08ab495add1cad8d431bd1dc15854088605293fdb9d1211f2ee249-merged.mount: Deactivated successfully.
Oct 11 03:38:31 compute-0 podman[103738]: 2025-10-11 03:38:31.391693701 +0000 UTC m=+0.186018977 container remove c6bf432eebe5e0923030ebf200a3274d3f8d031af50de27ccc8fc858ca484870 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 49 pg[9.0( v 41'577 (0'0,41'577] local-lis/les=34/35 n=209 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=49 pruub=9.243556976s) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 41'576 mlcod 41'576 active pruub 78.674095154s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:31 compute-0 systemd[1]: libpod-conmon-c6bf432eebe5e0923030ebf200a3274d3f8d031af50de27ccc8fc858ca484870.scope: Deactivated successfully.
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 49 pg[9.0( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=6 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=49 pruub=9.243556976s) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 41'576 mlcod 0'0 unknown pruub 78.674095154s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 podman[103777]: 2025-10-11 03:38:31.631978624 +0000 UTC m=+0.064968129 container create b080b9924391cc6e1d90efacd13f0cf57e9e1841abacfca03376a3826600d85f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 03:38:31 compute-0 systemd[1]: Started libpod-conmon-b080b9924391cc6e1d90efacd13f0cf57e9e1841abacfca03376a3826600d85f.scope.
Oct 11 03:38:31 compute-0 podman[103777]: 2025-10-11 03:38:31.604987929 +0000 UTC m=+0.037977473 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:38:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:38:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca20bb34b134bd19072cf57195ab44c0b5126a1522ca0face6f5d91ae7a39f01/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca20bb34b134bd19072cf57195ab44c0b5126a1522ca0face6f5d91ae7a39f01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca20bb34b134bd19072cf57195ab44c0b5126a1522ca0face6f5d91ae7a39f01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca20bb34b134bd19072cf57195ab44c0b5126a1522ca0face6f5d91ae7a39f01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:38:31 compute-0 podman[103777]: 2025-10-11 03:38:31.730928563 +0000 UTC m=+0.163918107 container init b080b9924391cc6e1d90efacd13f0cf57e9e1841abacfca03376a3826600d85f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ellis, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 03:38:31 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Oct 11 03:38:31 compute-0 podman[103777]: 2025-10-11 03:38:31.744214945 +0000 UTC m=+0.177204449 container start b080b9924391cc6e1d90efacd13f0cf57e9e1841abacfca03376a3826600d85f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ellis, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 03:38:31 compute-0 podman[103777]: 2025-10-11 03:38:31.748633379 +0000 UTC m=+0.181622933 container attach b080b9924391cc6e1d90efacd13f0cf57e9e1841abacfca03376a3826600d85f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 03:38:31 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Oct 11 03:38:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Oct 11 03:38:31 compute-0 ceph-mon[74273]: pgmap v104: 212 pgs: 1 peering, 108 unknown, 103 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 69 op/s
Oct 11 03:38:31 compute-0 ceph-mon[74273]: 3.1 deep-scrub starts
Oct 11 03:38:31 compute-0 ceph-mon[74273]: 3.1 deep-scrub ok
Oct 11 03:38:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Oct 11 03:38:31 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.15( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.14( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.17( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.16( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.11( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.10( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.13( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.d( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.12( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.c( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.f( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.9( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.b( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.2( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.1( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.e( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.a( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.8( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.3( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.6( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.7( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.4( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.5( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.1a( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.1b( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.18( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.19( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.1e( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.1f( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.1c( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.1d( v 41'577 lc 0'0 (0'0,41'577] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.15( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.14( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.16( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.11( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.13( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.10( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.d( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.12( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.c( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.f( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.9( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.b( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.2( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.0( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 41'576 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.e( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.a( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.8( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.6( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.3( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.1( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.7( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.4( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.1a( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.5( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.18( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.1b( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.1e( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.19( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.1d( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.1c( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:31 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 50 pg[9.17( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=41'577 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:32 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Oct 11 03:38:32 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 49 pg[10.0( v 37'16 (0'0,37'16] local-lis/les=36/37 n=8 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=49 pruub=10.077598572s) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 37'15 mlcod 37'15 active pruub 75.449829102s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.0( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=49 pruub=10.077598572s) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 37'15 mlcod 0'0 unknown pruub 75.449829102s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.3( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=1 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.4( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=1 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.1( v 37'16 (0'0,37'16] local-lis/les=36/37 n=1 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.2( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=1 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.9( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.b( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.c( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.d( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.a( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.e( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.f( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.10( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.11( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.12( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.5( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=1 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.6( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=1 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.7( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=1 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.8( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=1 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.13( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.14( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.15( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.16( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.17( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.18( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.19( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.1a( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.1b( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.1c( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.1d( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.1e( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 50 pg[10.1f( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:32 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v107: 274 pgs: 62 unknown, 212 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 11 03:38:32 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:32 compute-0 brave_ellis[103794]: {
Oct 11 03:38:32 compute-0 brave_ellis[103794]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 03:38:32 compute-0 brave_ellis[103794]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:38:32 compute-0 brave_ellis[103794]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 03:38:32 compute-0 brave_ellis[103794]:         "osd_id": 1,
Oct 11 03:38:32 compute-0 brave_ellis[103794]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:38:32 compute-0 brave_ellis[103794]:         "type": "bluestore"
Oct 11 03:38:32 compute-0 brave_ellis[103794]:     },
Oct 11 03:38:32 compute-0 brave_ellis[103794]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 03:38:32 compute-0 brave_ellis[103794]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:38:32 compute-0 brave_ellis[103794]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 03:38:32 compute-0 brave_ellis[103794]:         "osd_id": 2,
Oct 11 03:38:32 compute-0 brave_ellis[103794]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:38:32 compute-0 brave_ellis[103794]:         "type": "bluestore"
Oct 11 03:38:32 compute-0 brave_ellis[103794]:     },
Oct 11 03:38:32 compute-0 brave_ellis[103794]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 03:38:32 compute-0 brave_ellis[103794]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:38:32 compute-0 brave_ellis[103794]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 03:38:32 compute-0 brave_ellis[103794]:         "osd_id": 0,
Oct 11 03:38:32 compute-0 brave_ellis[103794]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:38:32 compute-0 brave_ellis[103794]:         "type": "bluestore"
Oct 11 03:38:32 compute-0 brave_ellis[103794]:     }
Oct 11 03:38:32 compute-0 brave_ellis[103794]: }
Oct 11 03:38:32 compute-0 systemd[1]: libpod-b080b9924391cc6e1d90efacd13f0cf57e9e1841abacfca03376a3826600d85f.scope: Deactivated successfully.
Oct 11 03:38:32 compute-0 podman[103777]: 2025-10-11 03:38:32.917332392 +0000 UTC m=+1.350321926 container died b080b9924391cc6e1d90efacd13f0cf57e9e1841abacfca03376a3826600d85f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 03:38:32 compute-0 systemd[1]: libpod-b080b9924391cc6e1d90efacd13f0cf57e9e1841abacfca03376a3826600d85f.scope: Consumed 1.183s CPU time.
Oct 11 03:38:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Oct 11 03:38:32 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 03:38:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Oct 11 03:38:32 compute-0 ceph-mon[74273]: 4.3 scrub starts
Oct 11 03:38:32 compute-0 ceph-mon[74273]: 4.3 scrub ok
Oct 11 03:38:32 compute-0 ceph-mon[74273]: 3.2 scrub starts
Oct 11 03:38:32 compute-0 ceph-mon[74273]: 3.2 scrub ok
Oct 11 03:38:32 compute-0 ceph-mon[74273]: osdmap e50: 3 total, 3 up, 3 in
Oct 11 03:38:32 compute-0 ceph-mon[74273]: 2.4 scrub starts
Oct 11 03:38:32 compute-0 ceph-mon[74273]: 2.4 scrub ok
Oct 11 03:38:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca20bb34b134bd19072cf57195ab44c0b5126a1522ca0face6f5d91ae7a39f01-merged.mount: Deactivated successfully.
Oct 11 03:38:32 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Oct 11 03:38:32 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.12( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 podman[103777]: 2025-10-11 03:38:33.003857163 +0000 UTC m=+1.436846627 container remove b080b9924391cc6e1d90efacd13f0cf57e9e1841abacfca03376a3826600d85f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.11( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.10( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.1d( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.1f( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.1e( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.1c( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.1a( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.1b( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.6( v 37'16 (0'0,37'16] local-lis/les=49/51 n=1 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.7( v 37'16 (0'0,37'16] local-lis/les=49/51 n=1 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.19( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.5( v 37'16 (0'0,37'16] local-lis/les=49/51 n=1 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.4( v 37'16 (0'0,37'16] local-lis/les=49/51 n=1 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.f( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.3( v 37'16 (0'0,37'16] local-lis/les=49/51 n=1 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.0( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 37'15 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.9( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.8( v 37'16 (0'0,37'16] local-lis/les=49/51 n=1 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.a( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.18( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.b( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.c( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.2( v 37'16 (0'0,37'16] local-lis/les=49/51 n=1 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.e( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.1( v 37'16 (0'0,37'16] local-lis/les=49/51 n=1 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.13( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.d( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.15( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.16( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.17( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 51 pg[10.14( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:33 compute-0 systemd[1]: libpod-conmon-b080b9924391cc6e1d90efacd13f0cf57e9e1841abacfca03376a3826600d85f.scope: Deactivated successfully.
Oct 11 03:38:33 compute-0 sudo[103673]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:38:33 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:38:33 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:33 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 85e9ddfa-d728-4821-8115-ac92a0f5e254 does not exist
Oct 11 03:38:33 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev b2976ce2-102d-4450-a6b5-40be5c1ab31e does not exist
Oct 11 03:38:33 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 51 pg[11.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=51 pruub=11.584620476s) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active pruub 82.718025208s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:33 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 51 pg[11.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=51 pruub=11.584620476s) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown pruub 82.718025208s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:33 compute-0 sudo[103837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:38:33 compute-0 sudo[103837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:33 compute-0 sudo[103837]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:33 compute-0 sudo[103862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 03:38:33 compute-0 sudo[103862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:38:33 compute-0 sudo[103862]: pam_unix(sudo:session): session closed for user root
Oct 11 03:38:33 compute-0 ceph-mon[74273]: pgmap v107: 274 pgs: 62 unknown, 212 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:33 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 03:38:33 compute-0 ceph-mon[74273]: osdmap e51: 3 total, 3 up, 3 in
Oct 11 03:38:33 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:33 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Oct 11 03:38:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Oct 11 03:38:34 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.16( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.17( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.15( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.13( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.12( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.14( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.10( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.11( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.f( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.e( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.d( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.b( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.9( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.3( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.2( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.8( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.a( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.c( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.1( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.4( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.5( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.6( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.7( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.18( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.19( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.1a( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.1b( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.1c( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.1e( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.1d( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.1f( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.16( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.13( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.10( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.0( empty local-lis/les=51/52 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.c( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.5( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.a( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.6( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.7( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.1d( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 52 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:38:34 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v110: 305 pgs: 31 unknown, 32 peering, 242 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:35 compute-0 ceph-mon[74273]: osdmap e52: 3 total, 3 up, 3 in
Oct 11 03:38:35 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Oct 11 03:38:35 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Oct 11 03:38:35 compute-0 ceph-mgr[74563]: [progress INFO root] Writing back 15 completed events
Oct 11 03:38:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 11 03:38:35 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:36 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Oct 11 03:38:36 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Oct 11 03:38:36 compute-0 ceph-mon[74273]: pgmap v110: 305 pgs: 31 unknown, 32 peering, 242 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:36 compute-0 ceph-mon[74273]: 2.5 scrub starts
Oct 11 03:38:36 compute-0 ceph-mon[74273]: 2.5 scrub ok
Oct 11 03:38:36 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:36 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v111: 305 pgs: 31 unknown, 32 peering, 242 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:36 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Oct 11 03:38:36 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Oct 11 03:38:37 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Oct 11 03:38:37 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Oct 11 03:38:37 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Oct 11 03:38:37 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Oct 11 03:38:37 compute-0 ceph-mon[74273]: 4.4 scrub starts
Oct 11 03:38:37 compute-0 ceph-mon[74273]: 4.4 scrub ok
Oct 11 03:38:37 compute-0 ceph-mon[74273]: 3.3 scrub starts
Oct 11 03:38:37 compute-0 ceph-mon[74273]: 3.3 scrub ok
Oct 11 03:38:37 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 3.4 deep-scrub starts
Oct 11 03:38:37 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 3.4 deep-scrub ok
Oct 11 03:38:38 compute-0 ceph-mon[74273]: pgmap v111: 305 pgs: 31 unknown, 32 peering, 242 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:38 compute-0 ceph-mon[74273]: 4.5 scrub starts
Oct 11 03:38:38 compute-0 ceph-mon[74273]: 4.5 scrub ok
Oct 11 03:38:38 compute-0 ceph-mon[74273]: 2.6 scrub starts
Oct 11 03:38:38 compute-0 ceph-mon[74273]: 2.6 scrub ok
Oct 11 03:38:38 compute-0 ceph-mon[74273]: 3.4 deep-scrub starts
Oct 11 03:38:38 compute-0 ceph-mon[74273]: 3.4 deep-scrub ok
Oct 11 03:38:38 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Oct 11 03:38:38 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Oct 11 03:38:38 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v112: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 11 03:38:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 11 03:38:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 11 03:38:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0) v1
Oct 11 03:38:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 11 03:38:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 11 03:38:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Oct 11 03:38:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 11 03:38:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 11 03:38:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 11 03:38:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 11 03:38:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 11 03:38:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:38 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Oct 11 03:38:38 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Oct 11 03:38:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Oct 11 03:38:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 03:38:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 03:38:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 03:38:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 11 03:38:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 03:38:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 11 03:38:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 03:38:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 03:38:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 03:38:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 03:38:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Oct 11 03:38:39 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.12( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.846140862s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 81.772102356s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.12( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.846064568s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 81.772102356s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.1e( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.609042168s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 85.535156250s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.1d( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.609335899s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 85.535522461s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.1e( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.608965874s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.535156250s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.1d( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.609258652s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.535522461s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-mon[74273]: 2.7 scrub starts
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.19( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.593832016s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 81.520362854s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.11( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.854397774s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 81.780914307s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-mon[74273]: 2.7 scrub ok
Oct 11 03:38:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.10( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.854317665s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 81.780921936s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.18( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.593435287s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 81.520080566s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.10( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.854273796s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 81.780921936s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.11( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.854285240s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 81.780914307s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.19( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.593705177s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.520362854s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.18( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.593403816s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.520080566s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 11 03:38:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.17( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.593162537s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 81.520187378s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.17( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.593132973s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.520187378s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.16( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.592852592s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 81.519966125s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-mon[74273]: 4.6 scrub starts
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.11( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.607984543s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 85.535148621s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.1e( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.853845596s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 81.781005859s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.11( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.607944489s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.535148621s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.1e( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.853794098s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 81.781005859s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.15( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.592617989s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 81.519897461s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.15( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.592537880s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.519897461s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.12( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.607798576s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 85.535224915s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.12( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.607766151s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.535224915s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.16( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.592399597s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.519966125s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.13( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.592114449s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 81.519714355s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.13( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.607820511s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 85.535377502s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.14( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.607528687s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 85.535163879s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.13( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.592078209s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.519714355s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.13( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.607689857s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.535377502s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.14( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.607442856s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.535163879s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.19( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.853404999s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 81.781257629s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.15( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.607395172s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 85.535270691s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.11( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.591287613s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 81.519203186s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.19( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.853335381s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 81.781257629s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.15( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.607349396s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.535270691s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.1a( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.853140831s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 81.781181335s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.11( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.591191292s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.519203186s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.16( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.607322693s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 85.535476685s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.16( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.607293129s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.535476685s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.1a( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.853115082s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 81.781181335s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.f( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.590889931s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 81.519210815s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.7( v 37'16 (0'0,37'16] local-lis/les=49/51 n=1 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.852903366s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 81.781242371s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.f( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.590843201s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.519210815s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.7( v 37'16 (0'0,37'16] local-lis/les=49/51 n=1 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.852863312s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 81.781242371s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.6( v 37'16 (0'0,37'16] local-lis/les=49/51 n=1 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.852688789s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 81.781234741s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.6( v 37'16 (0'0,37'16] local-lis/les=49/51 n=1 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.852605820s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 81.781234741s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.9( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.606822014s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 85.535598755s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.4( v 37'16 (0'0,37'16] local-lis/les=49/51 n=1 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.852495193s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 81.781333923s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.b( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.590257645s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 81.519157410s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.4( v 37'16 (0'0,37'16] local-lis/les=49/51 n=1 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.852452278s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 81.781333923s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.9( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.606725693s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.535598755s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.d( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.590210915s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 81.519210815s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.b( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.590208054s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.519157410s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.d( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.590180397s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.519210815s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.f( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.852046013s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 81.781417847s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.7( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.606435776s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 85.535697937s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.c( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.606407166s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 85.535652161s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.f( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.851998329s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 81.781417847s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.7( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.606257439s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.535697937s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.c( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.606183052s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.535652161s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.7( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.589399338s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 81.519111633s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.7( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.589360237s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.519111633s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.f( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.605881691s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 85.535659790s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.9( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.851724625s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 37'16 mlcod 37'16 active pruub 81.781562805s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.f( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.605834961s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.535659790s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.9( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.851662636s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 37'16 mlcod 0'0 unknown NOTIFY pruub 81.781562805s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.2( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.588811874s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 81.518890381s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.8( v 37'16 (0'0,37'16] local-lis/les=49/51 n=1 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.851483345s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 81.781585693s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.5( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.605544090s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 85.535705566s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.2( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.588753700s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.518890381s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.b( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.851439476s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 81.781623840s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.5( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.605503082s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.535705566s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.b( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.851406097s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 81.781623840s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.8( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.588579178s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 81.518913269s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.3( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.588451385s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 81.518836975s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.3( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.588411331s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.518836975s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.8( v 37'16 (0'0,37'16] local-lis/les=49/51 n=1 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.851450920s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 81.781585693s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.4( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.605421066s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 85.535873413s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.4( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.589930534s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 81.520423889s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.4( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.605342865s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.535873413s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.4( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.589891434s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.520423889s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.d( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.851052284s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 37'16 mlcod 37'16 active pruub 81.781677246s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.3( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.605221748s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 85.535850525s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.8( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.588550568s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.518913269s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.3( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.605189323s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.535850525s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.5( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.588064194s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 81.518821716s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.2( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.604990959s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 85.535781860s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.5( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.588018417s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.518821716s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.2( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.604953766s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.535781860s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.e( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.850793839s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 37'16 mlcod 37'16 active pruub 81.781654358s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.e( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.850744247s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 37'16 mlcod 0'0 unknown NOTIFY pruub 81.781654358s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.6( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.588072777s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 81.519065857s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.1( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.604988098s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 85.536026001s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.6( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.588039398s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.519065857s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.1( v 37'16 (0'0,37'16] local-lis/les=49/51 n=1 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.850544930s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 81.781661987s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.1( v 37'16 (0'0,37'16] local-lis/les=49/51 n=1 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.850515366s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 81.781661987s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.d( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.851015091s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 37'16 mlcod 0'0 unknown NOTIFY pruub 81.781677246s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.9( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.587563515s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 81.518798828s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.9( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.587543488s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.518798828s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.2( v 37'16 (0'0,37'16] local-lis/les=49/51 n=1 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.850301743s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 81.781654358s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.1( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.604930878s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.536026001s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.2( v 37'16 (0'0,37'16] local-lis/les=49/51 n=1 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.850265503s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 81.781654358s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.13( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.850193024s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 81.781661987s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.13( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.850165367s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 81.781661987s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.a( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.587298393s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 81.518829346s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.a( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.587253571s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.518829346s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.1b( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.587075233s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 81.518669128s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.14( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.850066185s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 37'16 mlcod 37'16 active pruub 81.781715393s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.1c( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.586805344s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 81.518478394s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.1b( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.587006569s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.518669128s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.14( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.849995613s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 37'16 mlcod 0'0 unknown NOTIFY pruub 81.781715393s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.15( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.849875450s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 37'16 mlcod 37'16 active pruub 81.781692505s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.1a( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.604096413s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 85.535949707s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.15( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.849830627s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 37'16 mlcod 0'0 unknown NOTIFY pruub 81.781692505s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.1a( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.604063988s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.535949707s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.1d( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.586447716s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 81.518394470s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.1d( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.586415291s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.518394470s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.16( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.849639893s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 81.781700134s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.16( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.849609375s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 81.781700134s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.17( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.849560738s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 81.781707764s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.19( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.603845596s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 85.536003113s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[10.17( v 37'16 (0'0,37'16] local-lis/les=49/51 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=9.849532127s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 81.781707764s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.19( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.603806496s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.536003113s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.1f( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.578022003s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 81.510375977s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.18( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.603634834s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 85.536026001s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.1f( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.577982903s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.510375977s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[5.18( empty local-lis/les=45/47 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=13.603605270s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.536026001s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[2.1c( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.586781502s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.518478394s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[5.11( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[2.17( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[5.13( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[2.15( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[5.12( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[10.1a( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[10.19( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[2.19( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[5.16( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[5.1e( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[10.6( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[2.18( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[5.9( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[2.d( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[5.f( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[10.b( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[2.3( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[2.5( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[10.2( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[10.9( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[2.a( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[5.c( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[2.9( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[5.7( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[2.4( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[10.f( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[10.8( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[2.7( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[2.6( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[10.15( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[5.1( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[10.11( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[2.1d( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[10.10( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[10.13( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[10.4( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[2.1b( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[5.4( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[5.1d( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[10.12( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[2.1c( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[5.1a( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[10.14( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[10.7( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[5.18( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[5.19( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[2.f( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.934719086s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.131675720s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.934689522s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.131675720s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.1f( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.575214386s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 86.772323608s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[2.2( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.1f( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.575197220s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.772323608s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.604460716s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.801666260s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[5.5( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.604429245s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.801666260s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.604270935s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 91.801605225s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.604252815s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 91.801605225s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[10.17( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[2.1f( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.15( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.768841743s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 85.968315125s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.15( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.768816948s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.968315125s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[5.2( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.601932526s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 91.801567078s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[10.d( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.601913452s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 91.801567078s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.601908684s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.801574707s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.1e( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.565215111s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 86.764915466s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[5.3( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.925107956s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.124916077s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.601833344s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.801574707s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.1e( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.565164566s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.764915466s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[10.e( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.925079346s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.124916077s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[2.b( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.931690216s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.131759644s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.931669235s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.131759644s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.17( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.775624275s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 85.975784302s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.17( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.775576591s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.975784302s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.18( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.601289749s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.801528931s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.1b( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.564856529s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 86.765113831s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.1b( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.564831734s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.765113831s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.1f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.601113319s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.801513672s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.18( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.601246834s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.801528931s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.1f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.601093292s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.801513672s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.1d( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.564312935s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 86.764793396s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.601008415s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 91.801513672s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.1d( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.564286232s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.764793396s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.600959778s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 91.801513672s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.11( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.773216248s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 85.973831177s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[2.8( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.930934906s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.131561279s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.11( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.773165703s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.973831177s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.931027412s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.131813049s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.931008339s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.131813049s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.600413322s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 91.801307678s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.13( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.772943497s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 85.973876953s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.600375175s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 91.801307678s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.13( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.772923470s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.973876953s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.18( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.563879013s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 86.764961243s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.18( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.563860893s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.764961243s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.10( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.930575371s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.131713867s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.600071907s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.801246643s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.10( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.930535316s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.131713867s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.930532455s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.131813049s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.599993706s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.801246643s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.930503845s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.131813049s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.3( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.599753380s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.801200867s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.3( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.599724770s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.801200867s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.599608421s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 91.801155090s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.599900246s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 91.801483154s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.599571228s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 91.801155090s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.599882126s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 91.801483154s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.7( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.563433647s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 86.764961243s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.d( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.772174835s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 85.973907471s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.7( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.563293457s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.764961243s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.599332809s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.801193237s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.d( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.772157669s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.973907471s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.599149704s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 91.801132202s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.599116325s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 91.801132202s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.930177689s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.132270813s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.6( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.570214272s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 86.772308350s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.930229187s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.132362366s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.930210114s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.132362366s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.599316597s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.801193237s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.6( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.570154190s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.772308350s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.930103302s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.132270813s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.598816872s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.801116943s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.598802567s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.801116943s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.5( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.562579155s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 86.764976501s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.5( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.562564850s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.764976501s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.598498344s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 91.801002502s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.598484039s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 91.801002502s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.928991318s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.131561279s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.f( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.771291733s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 85.973968506s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.f( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.771274567s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.973968506s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.929560661s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.132301331s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[10.1( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[2.16( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[10.1e( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[5.15( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[5.14( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[2.13( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[2.11( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[10.16( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.563500404s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 92.799087524s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.563472748s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.799087524s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.563126564s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 92.798896790s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.563106537s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.798896790s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.562996864s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 92.798889160s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.562980652s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.798889160s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.562916756s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 92.798896790s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.562900543s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.798896790s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.562783241s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 92.798889160s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.562766075s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.798889160s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.562693596s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 92.798896790s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.562648773s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.798896790s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.562466621s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 92.798828125s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.562443733s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.798828125s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.573090553s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 94.809562683s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.573074341s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 94.809562683s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.562240601s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 92.798820496s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.562225342s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.798820496s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.562060356s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 92.798744202s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.562039375s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.798744202s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.572684288s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 94.809478760s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.572665215s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 94.809478760s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.561749458s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 92.798721313s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.561727524s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.798721313s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.561609268s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 92.798721313s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.561591148s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.798721313s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=45/46 n=2 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.569659233s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 94.806877136s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=45/46 n=2 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.569631577s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 94.806877136s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=45/46 n=2 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.569513321s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 94.806869507s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=45/46 n=2 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.569498062s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 94.806869507s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.561136246s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 92.798591614s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.561120033s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.798591614s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.561007500s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 92.798591614s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.560991287s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.798591614s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.569162369s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 94.806846619s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.569147110s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 94.806846619s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.560787201s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 92.798568726s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.560770988s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.798568726s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.560663223s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 92.798530579s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.560635567s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.798530579s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.568759918s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 94.806838989s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.568737030s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 94.806838989s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.560349464s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 92.798553467s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.560328484s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.798553467s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.560216904s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 92.798522949s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.560199738s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.798522949s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.568336487s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 94.806808472s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.568315506s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 94.806808472s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.559739113s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 92.798332214s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.559722900s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.798332214s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=45/46 n=2 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.568077087s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 94.806762695s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=45/46 n=2 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=12.568061829s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 94.806762695s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.559531212s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 92.798316956s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.559515953s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.798316956s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.559438705s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 92.798332214s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.559424400s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.798332214s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[11.17( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[3.1f( empty local-lis/les=0/0 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[8.15( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[7.1b( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[3.1e( empty local-lis/les=0/0 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[11.15( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.3( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.562150955s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 86.765075684s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.929389000s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.132301331s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[8.14( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[7.1a( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.3( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.562113762s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.765075684s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.9( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.770686150s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 85.973976135s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.597564697s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.800895691s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[3.1d( empty local-lis/les=0/0 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.1( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.561760902s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 86.765098572s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[11.11( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.1( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.561727524s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.765098572s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.9( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.770606995s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.973976135s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.b( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.770523071s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 85.974029541s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.b( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.770502090s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.974029541s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[11.14( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[8.12( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.597046852s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.800743103s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.8( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.561435699s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 86.765136719s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.597238541s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.800895691s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.8( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.561414719s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.765136719s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.596830368s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.800727844s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[3.18( empty local-lis/les=0/0 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[7.1c( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[8.11( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.596800804s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.800727844s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.a( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.566009521s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 86.770027161s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[3.7( empty local-lis/les=0/0 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.a( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.565992355s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.770027161s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[8.d( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.928240776s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.132377625s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[7.18( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.928225517s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.132377625s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.596422195s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.800674438s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[7.2( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.928311348s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.132583618s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[7.1( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.596375465s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.800674438s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[7.1f( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.927991867s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.132377625s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.927974701s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.132377625s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.1( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.769888878s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 85.974411011s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.1( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.769868851s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.974411011s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.596036911s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 91.800682068s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.596018791s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 91.800682068s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.927651405s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.132408142s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.927628517s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.132408142s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.4( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.595805168s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.800651550s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.4( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.595767975s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.800651550s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[3.5( empty local-lis/les=0/0 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[11.12( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[11.d( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[11.b( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[3.8( empty local-lis/les=0/0 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[7.5( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[7.e( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[11.3( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[11.2( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.595708847s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 91.800659180s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.595685959s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 91.800659180s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.927554131s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.132583618s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[11.8( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[11.9( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[7.c( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[8.2( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[7.8( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[8.10( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[7.a( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.6( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.594914436s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.800071716s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.927220345s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.132560730s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.927200317s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.132560730s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.594679832s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 91.800056458s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.595386505s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.800743103s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[3.1b( empty local-lis/les=0/0 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.9( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.565036774s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 86.770507812s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[3.e( empty local-lis/les=0/0 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[11.18( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.9( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.565019608s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.770507812s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.594589233s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 91.800056458s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[8.4( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=47/48 n=1 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.596170425s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 91.801757812s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=47/48 n=1 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.596150398s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 91.801757812s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[8.1b( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[9.11( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.3( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.768647194s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 85.974372864s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.594301224s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.800048828s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[11.10( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[7.15( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.594281197s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.800048828s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.926810265s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.132591248s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[11.1a( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.3( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.768590927s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.974372864s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.c( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.564703941s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 86.770584106s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[3.11( empty local-lis/les=0/0 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.c( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.564676285s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.770584106s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.926760674s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.132591248s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[11.1b( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.9( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.593882561s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.799987793s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.9( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.593860626s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.799987793s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[11.1c( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[11.f( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[7.11( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.593743324s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 91.799987793s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.593724251s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 91.799987793s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[8.c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.7( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.768120766s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 85.974418640s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.7( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.768080711s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.974418640s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.6( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.926279068s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.132682800s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.593564034s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.799980164s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[3.16( empty local-lis/les=0/0 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[11.1e( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[7.3( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.593545914s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.799980164s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[8.1c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.6( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.926198006s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.132682800s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.e( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.564168930s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 86.770698547s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[9.d( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[11.1f( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.e( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.564138412s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.770698547s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.f( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.564120293s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 86.770713806s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.f( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.564100266s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.770713806s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.926079750s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.132797241s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.926060677s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.132797241s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=47/48 n=1 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.593188286s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 91.799949646s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.5( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.767663002s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 85.974464417s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=47/48 n=1 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.593149185s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 91.799949646s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.5( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.767625809s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.974464417s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[3.6( empty local-lis/les=0/0 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[11.e( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.592884064s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 91.799758911s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.592864037s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 91.799758911s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[8.e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.592689514s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 91.799781799s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.592651367s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.799743652s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.592670441s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 91.799781799s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.592615128s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.799743652s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.1b( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.767283440s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 85.974502563s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[3.3( empty local-lis/les=0/0 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.1b( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.767265320s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.974502563s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.925411224s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.132720947s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.12( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.563437462s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 86.770797729s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.12( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.563420296s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.770797729s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.925380707s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.132720947s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.6( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.593766212s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.800071716s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[3.1( empty local-lis/les=0/0 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.925229073s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.132781982s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.11( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.563156128s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 86.770774841s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.592135429s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 91.799774170s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.19( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.766836166s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 85.974540710s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.11( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.563101768s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.770774841s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.925198555s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.132781982s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.19( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.766815186s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.974540710s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[4.18( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.925016403s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.132858276s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.924995422s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.132858276s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.591506004s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.799476624s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[4.13( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.591484070s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.799476624s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.766568184s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 85.974639893s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.591440201s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 91.799514771s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.766530991s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.974639893s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[9.9( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[4.11( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[9.b( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[3.a( empty local-lis/les=0/0 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.591403008s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 91.799514771s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.15( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.562818527s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 86.770965576s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[4.e( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.15( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.562798500s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.770965576s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.16( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.562583923s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 86.770904541s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.16( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.562565804s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.770904541s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.924517632s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.132858276s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[7.f( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.591302872s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 91.799720764s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.924482346s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.132858276s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.591292381s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 91.799774170s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.591266632s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 91.799720764s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.13( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.590831757s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.799438477s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[9.1( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.1d( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.766058922s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 85.974678040s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[7.13( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.590805054s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.799438477s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[9.1d( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.766026497s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 85.974678040s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.925753593s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.132774353s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.924001694s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.132774353s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[4.1( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[4.1a( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[8.f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[4.a( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[4.1b( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 53 pg[4.1c( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.17( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.562172890s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 86.771064758s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.590502739s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 91.799423218s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=47/48 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=14.590454102s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 91.799423218s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.923810959s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.132881165s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[3.17( empty local-lis/les=42/43 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.562118530s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.771064758s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[7.4( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.923792839s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.132881165s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[4.14( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[4.12( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[4.10( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[4.f( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[4.d( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[8.b( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[4.2( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[11.1( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[6.3( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[6.1( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[4.4( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[4.9( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[4.5( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[3.9( empty local-lis/les=0/0 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[6.9( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[4.7( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[8.9( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[6.5( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 53 pg[4.8( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[9.3( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[3.c( empty local-lis/les=0/0 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[11.4( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[7.9( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[8.6( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[11.6( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[3.f( empty local-lis/les=0/0 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[9.5( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[8.1a( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[9.1b( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[3.12( empty local-lis/les=0/0 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[7.6( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[3.15( empty local-lis/les=0/0 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[8.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[8.18( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[8.1d( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[7.13( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[11.19( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 53 pg[3.17( empty local-lis/les=0/0 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:38:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Oct 11 03:38:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Oct 11 03:38:39 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.15( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.15( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.17( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.17( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.11( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.11( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.13( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.13( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.d( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.d( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.f( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.f( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.9( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.9( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.b( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.b( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.1( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.1( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.3( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.3( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.7( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.7( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.5( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.5( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.1b( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.1b( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.19( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.19( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.11( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.11( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.5( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.5( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.b( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.b( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.9( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.9( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.d( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.d( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.1( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.1( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.3( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.3( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.1b( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[9.1b( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[7.1b( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=53/54 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=53/54 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.1d( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[9.1d( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[5.19( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[5.1a( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[5.1d( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[10.12( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[10.13( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[2.1b( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[10.10( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=53/54 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[10.11( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[5.1( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[2.6( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=53/54 n=2 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=37'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[2.7( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[4.2( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[10.14( v 51'17 lc 37'13 (0'0,51'17] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=51'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[2.4( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[10.f( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=53/54 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=53/54 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=53/54 n=1 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=53/54 n=1 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=53/54 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=53/54 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=53/54 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[5.18( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[4.4( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[4.f( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=53/54 n=1 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[2.9( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[4.d( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[5.c( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=53/54 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[2.a( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[10.2( v 37'16 (0'0,37'16] local-lis/les=53/54 n=1 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=53/54 n=2 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=53/54 n=1 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[2.5( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[4.7( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[6.5( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=53/54 n=2 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[6.7( v 37'39 lc 33'21 (0'0,37'39] local-lis/les=53/54 n=1 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[2.3( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[10.b( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[4.5( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[4.9( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=53/54 n=1 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=53/54 n=1 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[2.d( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[5.f( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[11.17( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[3.1f( empty local-lis/les=53/54 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[10.16( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[2.11( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[10.1( v 37'16 (0'0,37'16] local-lis/les=53/54 n=1 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[5.15( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[2.b( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[8.14( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=53/54 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=33'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[5.14( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[10.e( v 51'17 lc 37'7 (0'0,51'17] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=51'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[5.3( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[5.2( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[10.d( v 51'17 lc 37'9 (0'0,51'17] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=51'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[10.17( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[2.1f( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[2.f( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[5.5( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[2.8( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[10.7( v 37'16 (0'0,37'16] local-lis/les=53/54 n=1 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[5.4( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[2.1c( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[7.18( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[11.14( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[2.1d( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[5.7( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[10.15( v 51'17 lc 37'5 (0'0,51'17] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=51'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[10.4( v 37'16 (0'0,37'16] local-lis/les=53/54 n=1 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[10.8( v 37'16 (0'0,37'16] local-lis/les=53/54 n=1 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[10.9( v 51'17 lc 37'15 (0'0,51'17] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=51'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[3.1b( empty local-lis/les=53/54 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[7.1f( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[11.10( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[2.19( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[5.1e( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[2.18( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[3.f( empty local-lis/les=53/54 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[7.4( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[3.c( empty local-lis/les=53/54 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[11.4( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[3.1( empty local-lis/les=53/54 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[7.9( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[2.2( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[7.6( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[3.3( empty local-lis/les=53/54 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[11.6( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[11.e( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[3.6( empty local-lis/les=53/54 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=53/54 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=33'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[11.f( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[7.3( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[7.f( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[3.17( empty local-lis/les=53/54 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[3.9( empty local-lis/les=53/54 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[3.a( empty local-lis/les=53/54 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[7.13( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[11.1( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[10.1e( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[2.16( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[3.15( empty local-lis/les=53/54 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[2.13( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[3.12( empty local-lis/les=53/54 n=0 ec=42/15 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[11.19( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 54 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[4.8( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[5.9( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[10.6( v 37'16 (0'0,37'16] local-lis/les=53/54 n=1 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[10.19( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[5.16( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[4.14( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[5.12( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[10.1a( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[2.15( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[4.12( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[5.13( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[2.17( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[4.10( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:39 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 54 pg[5.11( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:40 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 2.c scrub starts
Oct 11 03:38:40 compute-0 ceph-mon[74273]: pgmap v112: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:40 compute-0 ceph-mon[74273]: 4.6 scrub ok
Oct 11 03:38:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 03:38:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 03:38:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 03:38:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 11 03:38:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 03:38:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 11 03:38:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 03:38:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 03:38:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 03:38:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 03:38:40 compute-0 ceph-mon[74273]: osdmap e53: 3 total, 3 up, 3 in
Oct 11 03:38:40 compute-0 ceph-mon[74273]: osdmap e54: 3 total, 3 up, 3 in
Oct 11 03:38:40 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 2.c scrub ok
Oct 11 03:38:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Oct 11 03:38:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Oct 11 03:38:40 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Oct 11 03:38:40 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v116: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0) v1
Oct 11 03:38:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 11 03:38:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Oct 11 03:38:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 11 03:38:40 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 3.b scrub starts
Oct 11 03:38:40 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 55 pg[9.1d( v 41'577 (0'0,41'577] local-lis/les=54/55 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:40 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 55 pg[9.15( v 41'577 (0'0,41'577] local-lis/les=54/55 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:40 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 55 pg[9.19( v 41'577 (0'0,41'577] local-lis/les=54/55 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=11}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:40 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 55 pg[9.1b( v 41'577 (0'0,41'577] local-lis/les=54/55 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:40 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 55 pg[9.d( v 41'577 (0'0,41'577] local-lis/les=54/55 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:40 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 55 pg[9.17( v 41'577 (0'0,41'577] local-lis/les=54/55 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:40 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 55 pg[9.7( v 41'577 (0'0,41'577] local-lis/les=54/55 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:40 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 55 pg[9.f( v 41'577 (0'0,41'577] local-lis/les=54/55 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:40 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 55 pg[9.9( v 41'577 (0'0,41'577] local-lis/les=54/55 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:40 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 55 pg[9.3( v 41'577 (0'0,41'577] local-lis/les=54/55 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:40 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 55 pg[9.11( v 41'577 (0'0,41'577] local-lis/les=54/55 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:40 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 55 pg[9.5( v 41'577 (0'0,41'577] local-lis/les=54/55 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:40 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 55 pg[9.13( v 41'577 (0'0,41'577] local-lis/les=54/55 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:40 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 55 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=54/55 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:40 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 55 pg[9.b( v 41'577 (0'0,41'577] local-lis/les=54/55 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:40 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 55 pg[9.1( v 41'577 (0'0,41'577] local-lis/les=54/55 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=41'577 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:40 compute-0 ceph-mgr[74563]: [progress INFO root] Completed event 316a17f3-97f6-4e3c-b1bc-1b8daa621d2b (Global Recovery Event) in 15 seconds
Oct 11 03:38:40 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 3.b scrub ok
Oct 11 03:38:41 compute-0 ceph-mon[74273]: 2.c scrub starts
Oct 11 03:38:41 compute-0 ceph-mon[74273]: 2.c scrub ok
Oct 11 03:38:41 compute-0 ceph-mon[74273]: osdmap e55: 3 total, 3 up, 3 in
Oct 11 03:38:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 11 03:38:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 11 03:38:41 compute-0 ceph-mon[74273]: 3.b scrub starts
Oct 11 03:38:41 compute-0 ceph-mon[74273]: 3.b scrub ok
Oct 11 03:38:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Oct 11 03:38:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 11 03:38:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 11 03:38:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Oct 11 03:38:41 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Oct 11 03:38:41 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 56 pg[9.15( v 41'577 (0'0,41'577] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.477936745s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 94.766540527s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:41 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 56 pg[9.15( v 41'577 (0'0,41'577] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.477789879s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 94.766540527s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:41 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 56 pg[9.1b( v 41'577 (0'0,41'577] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.484329224s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 94.774902344s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:41 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 56 pg[9.1b( v 41'577 (0'0,41'577] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.484086990s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 94.774902344s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:41 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 56 pg[9.1d( v 41'577 (0'0,41'577] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.475263596s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 94.766235352s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:41 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 56 pg[9.1d( v 41'577 (0'0,41'577] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.475094795s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 94.766235352s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:41 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 56 pg[9.1b( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:41 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 56 pg[9.1b( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:41 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 56 pg[9.15( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:41 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 56 pg[9.15( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:41 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 56 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=10.477656364s) [1] r=-1 lpr=56 pi=[45,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 94.806800842s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:41 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 56 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=10.477617264s) [1] r=-1 lpr=56 pi=[45,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 94.806800842s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:41 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 56 pg[9.1d( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:41 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 56 pg[9.1d( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:41 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 56 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=45/46 n=2 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=10.477365494s) [1] r=-1 lpr=56 pi=[45,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 94.806838989s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:41 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 56 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=45/46 n=2 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=10.477341652s) [1] r=-1 lpr=56 pi=[45,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 94.806838989s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:41 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 56 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=10.479798317s) [1] r=-1 lpr=56 pi=[45,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 94.809486389s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:41 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 56 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=10.479775429s) [1] r=-1 lpr=56 pi=[45,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 94.809486389s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:41 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 56 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=45/46 n=2 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=10.479540825s) [1] r=-1 lpr=56 pi=[45,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 94.809471130s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:41 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 56 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=45/46 n=2 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=10.479351997s) [1] r=-1 lpr=56 pi=[45,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 94.809471130s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:41 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:41 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 56 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:41 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 56 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:41 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 56 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:42 compute-0 ceph-mon[74273]: pgmap v116: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 11 03:38:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 11 03:38:42 compute-0 ceph-mon[74273]: osdmap e56: 3 total, 3 up, 3 in
Oct 11 03:38:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Oct 11 03:38:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Oct 11 03:38:42 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Oct 11 03:38:42 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 57 pg[9.13( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:42 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 57 pg[9.13( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[9.17( v 41'577 (0'0,41'577] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.471135139s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 94.775169373s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[9.17( v 41'577 (0'0,41'577] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.471054077s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 94.775169373s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[9.11( v 41'577 (0'0,41'577] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.471872330s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 94.776092529s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[9.11( v 41'577 (0'0,41'577] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.471768379s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 94.776092529s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[9.13( v 41'577 (0'0,41'577] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.471561432s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 94.776237488s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[9.13( v 41'577 (0'0,41'577] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.471469879s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 94.776237488s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[9.d( v 41'577 (0'0,41'577] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.470180511s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 94.775077820s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[9.d( v 41'577 (0'0,41'577] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.470107079s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 94.775077820s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[9.f( v 41'577 (0'0,41'577] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.470749855s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 94.775833130s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[9.f( v 41'577 (0'0,41'577] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.470684052s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 94.775833130s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[9.9( v 41'577 (0'0,41'577] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.470589638s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 94.775856018s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[9.b( v 41'577 (0'0,41'577] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.470983505s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 94.776329041s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[9.9( v 41'577 (0'0,41'577] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.470511436s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 94.775856018s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[9.b( v 41'577 (0'0,41'577] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.470910072s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 94.776329041s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[9.1( v 41'577 (0'0,41'577] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.470166206s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 94.775772095s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[9.1( v 41'577 (0'0,41'577] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.470088005s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 94.775772095s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:42 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 57 pg[9.5( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[9.7( v 41'577 (0'0,41'577] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.469360352s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 94.775344849s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[9.5( v 41'577 (0'0,41'577] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.469954491s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 94.776168823s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:42 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 57 pg[9.11( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[9.5( v 41'577 (0'0,41'577] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.469892502s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 94.776168823s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:42 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 57 pg[9.5( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:42 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 57 pg[9.7( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:42 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 57 pg[9.7( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[9.7( v 41'577 (0'0,41'577] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.469277382s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 94.775344849s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:42 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 57 pg[9.17( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:42 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 57 pg[9.17( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:42 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 57 pg[9.b( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.469480515s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 94.776252747s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[9.3( v 41'577 (0'0,41'577] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.469158173s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 94.775962830s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.469414711s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 94.776252747s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[9.3( v 41'577 (0'0,41'577] local-lis/les=54/55 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.469059944s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 94.775962830s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:42 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 57 pg[9.b( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[9.19( v 41'577 (0'0,41'577] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.468131065s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 94.774826050s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[9.19( v 41'577 (0'0,41'577] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.467647552s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 94.774826050s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:42 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 57 pg[9.11( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:42 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 57 pg[9.9( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:42 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 57 pg[9.9( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:42 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 57 pg[9.f( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:42 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 57 pg[9.1( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:42 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 57 pg[9.f( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:42 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 57 pg[9.3( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:42 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 57 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:42 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 57 pg[9.d( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:42 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 57 pg[9.1( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:42 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 57 pg[9.19( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:42 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 57 pg[9.19( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:42 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 57 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:42 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 57 pg[9.d( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:42 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 57 pg[9.3( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=56/57 n=2 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=56/57 n=2 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[6.e( v 37'39 lc 33'19 (0'0,37'39] local-lis/les=56/57 n=1 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:42 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 57 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=56/57 n=1 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:42 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 57 pg[9.1d( v 41'577 (0'0,41'577] local-lis/les=56/57 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:42 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 57 pg[9.1b( v 41'577 (0'0,41'577] local-lis/les=56/57 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:42 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 57 pg[9.15( v 41'577 (0'0,41'577] local-lis/les=56/57 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:42 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 3.d scrub starts
Oct 11 03:38:42 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 3.d scrub ok
Oct 11 03:38:42 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v119: 305 pgs: 13 peering, 1 active+clean+scrubbing, 291 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s, 2 keys/s, 34 objects/s recovering
Oct 11 03:38:43 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 2.e scrub starts
Oct 11 03:38:43 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 2.e scrub ok
Oct 11 03:38:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Oct 11 03:38:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Oct 11 03:38:43 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Oct 11 03:38:43 compute-0 ceph-mon[74273]: osdmap e57: 3 total, 3 up, 3 in
Oct 11 03:38:43 compute-0 ceph-mon[74273]: 3.d scrub starts
Oct 11 03:38:43 compute-0 ceph-mon[74273]: 3.d scrub ok
Oct 11 03:38:43 compute-0 ceph-mon[74273]: 2.e scrub starts
Oct 11 03:38:43 compute-0 ceph-mon[74273]: 2.e scrub ok
Oct 11 03:38:43 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 58 pg[9.13( v 41'577 (0'0,41'577] local-lis/les=57/58 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:43 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 58 pg[9.11( v 41'577 (0'0,41'577] local-lis/les=57/58 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:43 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 58 pg[9.5( v 41'577 (0'0,41'577] local-lis/les=57/58 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:43 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 58 pg[9.7( v 41'577 (0'0,41'577] local-lis/les=57/58 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:43 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 58 pg[9.9( v 41'577 (0'0,41'577] local-lis/les=57/58 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:43 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 58 pg[9.f( v 41'577 (0'0,41'577] local-lis/les=57/58 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:43 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 58 pg[9.d( v 41'577 (0'0,41'577] local-lis/les=57/58 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:43 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 58 pg[9.3( v 41'577 (0'0,41'577] local-lis/les=57/58 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:43 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 58 pg[9.17( v 41'577 (0'0,41'577] local-lis/les=57/58 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:43 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 58 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=57/58 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:43 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 58 pg[9.1( v 41'577 (0'0,41'577] local-lis/les=57/58 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:43 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 58 pg[9.19( v 41'577 (0'0,41'577] local-lis/les=57/58 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:43 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 58 pg[9.b( v 41'577 (0'0,41'577] local-lis/les=57/58 n=7 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:38:44 compute-0 ceph-mon[74273]: pgmap v119: 305 pgs: 13 peering, 1 active+clean+scrubbing, 291 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s, 2 keys/s, 34 objects/s recovering
Oct 11 03:38:44 compute-0 ceph-mon[74273]: osdmap e58: 3 total, 3 up, 3 in
Oct 11 03:38:44 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v121: 305 pgs: 13 peering, 1 active+clean+scrubbing, 291 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 897 B/s, 2 keys/s, 27 objects/s recovering
Oct 11 03:38:45 compute-0 ceph-mgr[74563]: [progress INFO root] Writing back 16 completed events
Oct 11 03:38:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 11 03:38:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:46 compute-0 ceph-mon[74273]: pgmap v121: 305 pgs: 13 peering, 1 active+clean+scrubbing, 291 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 897 B/s, 2 keys/s, 27 objects/s recovering
Oct 11 03:38:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:38:46 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Oct 11 03:38:46 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Oct 11 03:38:46 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v122: 305 pgs: 13 peering, 1 active+clean+scrubbing, 291 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 667 B/s, 1 keys/s, 20 objects/s recovering
Oct 11 03:38:47 compute-0 ceph-mon[74273]: 3.10 scrub starts
Oct 11 03:38:47 compute-0 ceph-mon[74273]: 3.10 scrub ok
Oct 11 03:38:48 compute-0 ceph-mon[74273]: pgmap v122: 305 pgs: 13 peering, 1 active+clean+scrubbing, 291 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 667 B/s, 1 keys/s, 20 objects/s recovering
Oct 11 03:38:48 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v123: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 536 B/s, 1 keys/s, 16 objects/s recovering
Oct 11 03:38:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0) v1
Oct 11 03:38:48 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 11 03:38:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Oct 11 03:38:48 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 11 03:38:48 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Oct 11 03:38:48 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Oct 11 03:38:48 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.b scrub starts
Oct 11 03:38:48 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.b scrub ok
Oct 11 03:38:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:38:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Oct 11 03:38:49 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 11 03:38:49 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 11 03:38:49 compute-0 ceph-mon[74273]: 3.13 scrub starts
Oct 11 03:38:49 compute-0 ceph-mon[74273]: 3.13 scrub ok
Oct 11 03:38:49 compute-0 ceph-mon[74273]: 4.b scrub starts
Oct 11 03:38:49 compute-0 ceph-mon[74273]: 4.b scrub ok
Oct 11 03:38:49 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 11 03:38:49 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 11 03:38:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Oct 11 03:38:49 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Oct 11 03:38:49 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Oct 11 03:38:49 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Oct 11 03:38:50 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Oct 11 03:38:50 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Oct 11 03:38:50 compute-0 ceph-mon[74273]: pgmap v123: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 536 B/s, 1 keys/s, 16 objects/s recovering
Oct 11 03:38:50 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 11 03:38:50 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 11 03:38:50 compute-0 ceph-mon[74273]: osdmap e59: 3 total, 3 up, 3 in
Oct 11 03:38:50 compute-0 ceph-mon[74273]: 3.14 scrub starts
Oct 11 03:38:50 compute-0 ceph-mon[74273]: 3.14 scrub ok
Oct 11 03:38:50 compute-0 ceph-mon[74273]: 2.10 scrub starts
Oct 11 03:38:50 compute-0 ceph-mon[74273]: 2.10 scrub ok
Oct 11 03:38:50 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v125: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0) v1
Oct 11 03:38:50 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 11 03:38:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Oct 11 03:38:50 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 11 03:38:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:38:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:38:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:38:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:38:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:38:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:38:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Oct 11 03:38:51 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 11 03:38:51 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 11 03:38:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Oct 11 03:38:51 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Oct 11 03:38:51 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 59 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=53/54 n=2 ec=45/21 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=11.907789230s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=37'39 mlcod 37'39 active pruub 101.283721924s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:51 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 59 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=53/54 n=1 ec=45/21 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=11.914105415s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=37'39 mlcod 37'39 active pruub 101.290199280s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:51 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 60 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=53/54 n=2 ec=45/21 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=11.907638550s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 101.283721924s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:51 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 60 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=53/54 n=1 ec=45/21 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=11.913903236s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 101.290199280s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:51 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 59 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=53/54 n=1 ec=45/21 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=11.916414261s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=37'39 mlcod 37'39 active pruub 101.292854309s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:51 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 60 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=53/54 n=1 ec=45/21 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=11.916305542s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 101.292854309s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:51 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 59 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=53/54 n=1 ec=45/21 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=11.916513443s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=37'39 mlcod 37'39 active pruub 101.293205261s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:51 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 60 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=53/54 n=1 ec=45/21 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=11.916465759s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 101.293205261s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 11 03:38:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 11 03:38:51 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 60 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=60 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:51 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 60 pg[6.3( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=60 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:51 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 60 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=60 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:51 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 60 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=60 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:51 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 60 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=45/46 n=2 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=60 pruub=8.388011932s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 102.807106018s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:51 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 60 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=45/46 n=2 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=60 pruub=8.387760162s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.807106018s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:51 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 60 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=60 pruub=8.389528275s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 102.809684753s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:51 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 60 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=60 pruub=8.389405251s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.809684753s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:51 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:51 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Oct 11 03:38:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Oct 11 03:38:52 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Oct 11 03:38:52 compute-0 ceph-mon[74273]: pgmap v125: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:38:52 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 11 03:38:52 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 11 03:38:52 compute-0 ceph-mon[74273]: osdmap e60: 3 total, 3 up, 3 in
Oct 11 03:38:52 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 61 pg[6.4( v 37'39 lc 33'15 (0'0,37'39] local-lis/les=60/61 n=2 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:52 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 61 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=59/61 n=1 ec=45/21 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=60 pi=[53,59)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:52 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 61 pg[6.c( v 37'39 lc 33'17 (0'0,37'39] local-lis/les=60/61 n=1 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:52 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 61 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=59/61 n=1 ec=45/21 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=60 pi=[53,59)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:52 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 61 pg[6.7( v 37'39 lc 33'21 (0'0,37'39] local-lis/les=59/61 n=1 ec=45/21 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=60 pi=[53,59)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:52 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 61 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=59/61 n=2 ec=45/21 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=60 pi=[53,59)/1 crt=37'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:38:52 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v128: 305 pgs: 4 active+recovery_wait+degraded, 2 active+recovering, 299 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 5/245 objects degraded (2.041%); 2/245 objects misplaced (0.816%); 115 B/s, 0 objects/s recovering
Oct 11 03:38:52 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Oct 11 03:38:52 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Oct 11 03:38:53 compute-0 ceph-mon[74273]: log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 5/245 objects degraded (2.041%), 4 pgs degraded (PG_DEGRADED)
Oct 11 03:38:53 compute-0 ceph-mon[74273]: osdmap e61: 3 total, 3 up, 3 in
Oct 11 03:38:53 compute-0 ceph-mon[74273]: 3.19 scrub starts
Oct 11 03:38:53 compute-0 ceph-mon[74273]: 3.19 scrub ok
Oct 11 03:38:54 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 2.12 deep-scrub starts
Oct 11 03:38:54 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 2.12 deep-scrub ok
Oct 11 03:38:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:38:54 compute-0 ceph-mon[74273]: pgmap v128: 305 pgs: 4 active+recovery_wait+degraded, 2 active+recovering, 299 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 5/245 objects degraded (2.041%); 2/245 objects misplaced (0.816%); 115 B/s, 0 objects/s recovering
Oct 11 03:38:54 compute-0 ceph-mon[74273]: Health check failed: Degraded data redundancy: 5/245 objects degraded (2.041%), 4 pgs degraded (PG_DEGRADED)
Oct 11 03:38:54 compute-0 ceph-mon[74273]: 2.12 deep-scrub starts
Oct 11 03:38:54 compute-0 ceph-mon[74273]: 2.12 deep-scrub ok
Oct 11 03:38:54 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v129: 305 pgs: 4 active+recovery_wait+degraded, 2 active+recovering, 299 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 5/245 objects degraded (2.041%); 2/245 objects misplaced (0.816%); 115 B/s, 0 objects/s recovering
Oct 11 03:38:54 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Oct 11 03:38:54 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Oct 11 03:38:55 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Oct 11 03:38:55 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Oct 11 03:38:55 compute-0 ceph-mon[74273]: 3.1a scrub starts
Oct 11 03:38:55 compute-0 ceph-mon[74273]: 3.1a scrub ok
Oct 11 03:38:55 compute-0 ceph-mon[74273]: 2.14 scrub starts
Oct 11 03:38:55 compute-0 ceph-mon[74273]: 2.14 scrub ok
Oct 11 03:38:55 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Oct 11 03:38:55 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Oct 11 03:38:56 compute-0 ceph-mon[74273]: pgmap v129: 305 pgs: 4 active+recovery_wait+degraded, 2 active+recovering, 299 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 5/245 objects degraded (2.041%); 2/245 objects misplaced (0.816%); 115 B/s, 0 objects/s recovering
Oct 11 03:38:56 compute-0 ceph-mon[74273]: 3.1c scrub starts
Oct 11 03:38:56 compute-0 ceph-mon[74273]: 3.1c scrub ok
Oct 11 03:38:56 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v130: 305 pgs: 4 active+recovery_wait+degraded, 2 active+recovering, 299 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 5/245 objects degraded (2.041%); 2/245 objects misplaced (0.816%); 93 B/s, 0 objects/s recovering
Oct 11 03:38:56 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.c scrub starts
Oct 11 03:38:56 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.c scrub ok
Oct 11 03:38:57 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Oct 11 03:38:57 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Oct 11 03:38:57 compute-0 ceph-mon[74273]: 4.c scrub starts
Oct 11 03:38:57 compute-0 ceph-mon[74273]: 4.c scrub ok
Oct 11 03:38:57 compute-0 ceph-mon[74273]: 2.1a scrub starts
Oct 11 03:38:57 compute-0 ceph-mon[74273]: 2.1a scrub ok
Oct 11 03:38:58 compute-0 ceph-mon[74273]: pgmap v130: 305 pgs: 4 active+recovery_wait+degraded, 2 active+recovering, 299 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 5/245 objects degraded (2.041%); 2/245 objects misplaced (0.816%); 93 B/s, 0 objects/s recovering
Oct 11 03:38:58 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v131: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 275 B/s, 1 keys/s, 1 objects/s recovering
Oct 11 03:38:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0) v1
Oct 11 03:38:58 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 11 03:38:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Oct 11 03:38:58 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 11 03:38:58 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Oct 11 03:38:58 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Oct 11 03:38:58 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Oct 11 03:38:58 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Oct 11 03:38:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:38:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Oct 11 03:38:59 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 5/245 objects degraded (2.041%), 4 pgs degraded)
Oct 11 03:38:59 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 11 03:38:59 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 11 03:38:59 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 11 03:38:59 compute-0 ceph-mon[74273]: 8.1 scrub starts
Oct 11 03:38:59 compute-0 ceph-mon[74273]: 8.1 scrub ok
Oct 11 03:38:59 compute-0 ceph-mon[74273]: 4.15 scrub starts
Oct 11 03:38:59 compute-0 ceph-mon[74273]: 4.15 scrub ok
Oct 11 03:38:59 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 11 03:38:59 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 11 03:38:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Oct 11 03:38:59 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Oct 11 03:38:59 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 62 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=53/54 n=1 ec=45/21 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=11.758262634s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=37'39 mlcod 37'39 active pruub 109.289962769s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:59 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 62 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=53/54 n=1 ec=45/21 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=11.758160591s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 109.289962769s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:59 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 62 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=53/54 n=2 ec=45/21 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=11.761156082s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=37'39 mlcod 37'39 active pruub 109.292991638s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:38:59 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 62 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=53/54 n=2 ec=45/21 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=11.761106491s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 109.292991638s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:38:59 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 62 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:59 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 62 pg[6.5( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:38:59 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Oct 11 03:38:59 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Oct 11 03:38:59 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Oct 11 03:38:59 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Oct 11 03:39:00 compute-0 ceph-mon[74273]: pgmap v131: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 275 B/s, 1 keys/s, 1 objects/s recovering
Oct 11 03:39:00 compute-0 ceph-mon[74273]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 5/245 objects degraded (2.041%), 4 pgs degraded)
Oct 11 03:39:00 compute-0 ceph-mon[74273]: Cluster is now healthy
Oct 11 03:39:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 11 03:39:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 11 03:39:00 compute-0 ceph-mon[74273]: osdmap e62: 3 total, 3 up, 3 in
Oct 11 03:39:00 compute-0 ceph-mon[74273]: 4.16 scrub starts
Oct 11 03:39:00 compute-0 ceph-mon[74273]: 4.16 scrub ok
Oct 11 03:39:00 compute-0 ceph-mon[74273]: 2.1e scrub starts
Oct 11 03:39:00 compute-0 ceph-mon[74273]: 2.1e scrub ok
Oct 11 03:39:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Oct 11 03:39:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Oct 11 03:39:00 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Oct 11 03:39:00 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 63 pg[6.5( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=62/63 n=2 ec=45/21 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:00 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 63 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=62/63 n=1 ec=45/21 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:00 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v134: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 188 B/s, 1 keys/s, 0 objects/s recovering
Oct 11 03:39:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0) v1
Oct 11 03:39:00 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 11 03:39:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Oct 11 03:39:00 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 11 03:39:00 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Oct 11 03:39:00 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Oct 11 03:39:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Oct 11 03:39:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 11 03:39:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 11 03:39:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Oct 11 03:39:01 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Oct 11 03:39:01 compute-0 ceph-mon[74273]: osdmap e63: 3 total, 3 up, 3 in
Oct 11 03:39:01 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 11 03:39:01 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 11 03:39:01 compute-0 ceph-mon[74273]: 5.6 scrub starts
Oct 11 03:39:01 compute-0 ceph-mon[74273]: 5.6 scrub ok
Oct 11 03:39:01 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Oct 11 03:39:01 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Oct 11 03:39:02 compute-0 ceph-mon[74273]: pgmap v134: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 188 B/s, 1 keys/s, 0 objects/s recovering
Oct 11 03:39:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 11 03:39:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 11 03:39:02 compute-0 ceph-mon[74273]: osdmap e64: 3 total, 3 up, 3 in
Oct 11 03:39:02 compute-0 ceph-mon[74273]: 4.17 scrub starts
Oct 11 03:39:02 compute-0 ceph-mon[74273]: 4.17 scrub ok
Oct 11 03:39:02 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 64 pg[9.16( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=9.474671364s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 109.974571228s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:02 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 64 pg[9.e( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=9.474671364s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 109.975128174s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:02 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 64 pg[9.e( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=9.474316597s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 109.975128174s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:02 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 64 pg[9.16( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=9.473600388s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 109.974571228s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:02 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 64 pg[9.6( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=9.473767281s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 109.974998474s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:02 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 64 pg[9.6( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=9.473725319s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 109.974998474s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:02 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 64 pg[9.1e( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=9.474032402s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 109.975471497s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:02 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 64 pg[9.1e( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=9.473999023s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 109.975471497s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:02 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:02 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:02 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:02 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:02 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v136: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 277 B/s, 1 keys/s, 1 objects/s recovering
Oct 11 03:39:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0) v1
Oct 11 03:39:02 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 11 03:39:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Oct 11 03:39:02 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 11 03:39:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Oct 11 03:39:03 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 11 03:39:03 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 11 03:39:03 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 11 03:39:03 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 11 03:39:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Oct 11 03:39:03 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Oct 11 03:39:03 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:03 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:03 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:03 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:03 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:03 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:03 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:03 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:03 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 65 pg[9.16( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:03 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 65 pg[9.1e( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:03 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 65 pg[9.6( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:03 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 65 pg[9.1e( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:03 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 65 pg[9.6( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:03 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 65 pg[9.e( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:03 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 65 pg[9.16( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:03 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 65 pg[9.e( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:03 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 65 pg[9.17( v 41'577 (0'0,41'577] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65 pruub=11.686775208s) [2] r=-1 lpr=65 pi=[57,65)/1 crt=41'577 mlcod 0'0 active pruub 118.352439880s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:03 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 65 pg[9.17( v 41'577 (0'0,41'577] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65 pruub=11.686727524s) [2] r=-1 lpr=65 pi=[57,65)/1 crt=41'577 mlcod 0'0 unknown NOTIFY pruub 118.352439880s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:03 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 65 pg[9.7( v 41'577 (0'0,41'577] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65 pruub=11.686449051s) [2] r=-1 lpr=65 pi=[57,65)/1 crt=41'577 mlcod 0'0 active pruub 118.352386475s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:03 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 65 pg[9.7( v 41'577 (0'0,41'577] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65 pruub=11.686428070s) [2] r=-1 lpr=65 pi=[57,65)/1 crt=41'577 mlcod 0'0 unknown NOTIFY pruub 118.352386475s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:03 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 65 pg[9.f( v 41'577 (0'0,41'577] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65 pruub=11.686451912s) [2] r=-1 lpr=65 pi=[57,65)/1 crt=41'577 mlcod 0'0 active pruub 118.352462769s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:03 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 65 pg[9.f( v 41'577 (0'0,41'577] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65 pruub=11.686347961s) [2] r=-1 lpr=65 pi=[57,65)/1 crt=41'577 mlcod 0'0 unknown NOTIFY pruub 118.352462769s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:03 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 65 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65 pruub=11.685994148s) [2] r=-1 lpr=65 pi=[57,65)/1 crt=41'577 mlcod 0'0 active pruub 118.352516174s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:03 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2] r=0 lpr=65 pi=[57,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:03 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 65 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65 pruub=11.685889244s) [2] r=-1 lpr=65 pi=[57,65)/1 crt=41'577 mlcod 0'0 unknown NOTIFY pruub 118.352516174s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:03 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2] r=0 lpr=65 pi=[57,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:03 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2] r=0 lpr=65 pi=[57,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:03 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2] r=0 lpr=65 pi=[57,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:04 compute-0 sudo[103910]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itjnozkcyufkucjgaydhvreragiuzona ; /usr/bin/python3'
Oct 11 03:39:04 compute-0 sudo[103910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:39:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:39:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Oct 11 03:39:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Oct 11 03:39:04 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Oct 11 03:39:04 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:04 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:04 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:04 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:04 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:04 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:04 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:04 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:04 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 66 pg[9.f( v 41'577 (0'0,41'577] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] r=0 lpr=66 pi=[57,66)/1 crt=41'577 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:04 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 66 pg[9.f( v 41'577 (0'0,41'577] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] r=0 lpr=66 pi=[57,66)/1 crt=41'577 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:04 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 66 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] r=0 lpr=66 pi=[57,66)/1 crt=41'577 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:04 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 66 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] r=0 lpr=66 pi=[57,66)/1 crt=41'577 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:04 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 66 pg[9.17( v 41'577 (0'0,41'577] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] r=0 lpr=66 pi=[57,66)/1 crt=41'577 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:04 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 66 pg[9.17( v 41'577 (0'0,41'577] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] r=0 lpr=66 pi=[57,66)/1 crt=41'577 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:04 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 66 pg[9.7( v 41'577 (0'0,41'577] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] r=0 lpr=66 pi=[57,66)/1 crt=41'577 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:04 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 66 pg[9.7( v 41'577 (0'0,41'577] local-lis/les=57/58 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] r=0 lpr=66 pi=[57,66)/1 crt=41'577 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:04 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 66 pg[9.e( v 41'577 (0'0,41'577] local-lis/les=65/66 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'577 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:04 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 66 pg[9.6( v 41'577 (0'0,41'577] local-lis/les=65/66 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'577 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:04 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 66 pg[9.16( v 41'577 (0'0,41'577] local-lis/les=65/66 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'577 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:04 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 66 pg[9.1e( v 41'577 (0'0,41'577] local-lis/les=65/66 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=41'577 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:04 compute-0 python3[103912]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:39:04 compute-0 podman[103913]: 2025-10-11 03:39:04.455489341 +0000 UTC m=+0.044195857 container create 2255e1b9ca78a993751e6610536253b9bf1629315a33bddac8048c4cb3f89328 (image=quay.io/ceph/ceph:v18, name=sleepy_payne, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 03:39:04 compute-0 ceph-mon[74273]: pgmap v136: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 277 B/s, 1 keys/s, 1 objects/s recovering
Oct 11 03:39:04 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 11 03:39:04 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 11 03:39:04 compute-0 ceph-mon[74273]: osdmap e65: 3 total, 3 up, 3 in
Oct 11 03:39:04 compute-0 ceph-mon[74273]: osdmap e66: 3 total, 3 up, 3 in
Oct 11 03:39:04 compute-0 systemd[1]: Started libpod-conmon-2255e1b9ca78a993751e6610536253b9bf1629315a33bddac8048c4cb3f89328.scope.
Oct 11 03:39:04 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:39:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9dc9e61841b43975fea954076133c9e6c03a06cabf82b8ed76b2d1a062d77fd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:39:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9dc9e61841b43975fea954076133c9e6c03a06cabf82b8ed76b2d1a062d77fd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:39:04 compute-0 podman[103913]: 2025-10-11 03:39:04.436457319 +0000 UTC m=+0.025163925 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:39:04 compute-0 podman[103913]: 2025-10-11 03:39:04.546817557 +0000 UTC m=+0.135524183 container init 2255e1b9ca78a993751e6610536253b9bf1629315a33bddac8048c4cb3f89328 (image=quay.io/ceph/ceph:v18, name=sleepy_payne, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:39:04 compute-0 podman[103913]: 2025-10-11 03:39:04.55656063 +0000 UTC m=+0.145267156 container start 2255e1b9ca78a993751e6610536253b9bf1629315a33bddac8048c4cb3f89328 (image=quay.io/ceph/ceph:v18, name=sleepy_payne, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:39:04 compute-0 podman[103913]: 2025-10-11 03:39:04.560322315 +0000 UTC m=+0.149028871 container attach 2255e1b9ca78a993751e6610536253b9bf1629315a33bddac8048c4cb3f89328 (image=quay.io/ceph/ceph:v18, name=sleepy_payne, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 03:39:04 compute-0 sleepy_payne[103929]: could not fetch user info: no user info saved
Oct 11 03:39:04 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v139: 305 pgs: 4 unknown, 301 active+clean; 455 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 0 objects/s recovering
Oct 11 03:39:04 compute-0 systemd[1]: libpod-2255e1b9ca78a993751e6610536253b9bf1629315a33bddac8048c4cb3f89328.scope: Deactivated successfully.
Oct 11 03:39:04 compute-0 conmon[103929]: conmon 2255e1b9ca78a993751e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2255e1b9ca78a993751e6610536253b9bf1629315a33bddac8048c4cb3f89328.scope/container/memory.events
Oct 11 03:39:04 compute-0 podman[103913]: 2025-10-11 03:39:04.787767269 +0000 UTC m=+0.376473815 container died 2255e1b9ca78a993751e6610536253b9bf1629315a33bddac8048c4cb3f89328 (image=quay.io/ceph/ceph:v18, name=sleepy_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 03:39:04 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Oct 11 03:39:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9dc9e61841b43975fea954076133c9e6c03a06cabf82b8ed76b2d1a062d77fd-merged.mount: Deactivated successfully.
Oct 11 03:39:04 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Oct 11 03:39:04 compute-0 podman[103913]: 2025-10-11 03:39:04.843862819 +0000 UTC m=+0.432569375 container remove 2255e1b9ca78a993751e6610536253b9bf1629315a33bddac8048c4cb3f89328 (image=quay.io/ceph/ceph:v18, name=sleepy_payne, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:39:04 compute-0 systemd[1]: libpod-conmon-2255e1b9ca78a993751e6610536253b9bf1629315a33bddac8048c4cb3f89328.scope: Deactivated successfully.
Oct 11 03:39:04 compute-0 sudo[103910]: pam_unix(sudo:session): session closed for user root
Oct 11 03:39:04 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 5.8 deep-scrub starts
Oct 11 03:39:04 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 5.8 deep-scrub ok
Oct 11 03:39:05 compute-0 sudo[104048]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjcljrrajghorrjfnyzzmdvqbhntjnnf ; /usr/bin/python3'
Oct 11 03:39:05 compute-0 sudo[104048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:39:05 compute-0 python3[104050]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:39:05 compute-0 podman[104051]: 2025-10-11 03:39:05.247177565 +0000 UTC m=+0.039072704 container create c1835eff8c3ca2fca6366ab27e122770d42d98498110339dff2d293359c1ad78 (image=quay.io/ceph/ceph:v18, name=lucid_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 03:39:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Oct 11 03:39:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Oct 11 03:39:05 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Oct 11 03:39:05 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 67 pg[9.16( v 41'577 (0'0,41'577] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.009329796s) [2] async=[2] r=-1 lpr=67 pi=[49,67)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 118.295974731s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:05 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 67 pg[9.16( v 41'577 (0'0,41'577] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.009208679s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.295974731s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:05 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 67 pg[9.e( v 41'577 (0'0,41'577] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.008551598s) [2] async=[2] r=-1 lpr=67 pi=[49,67)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 118.295867920s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:05 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 67 pg[9.e( v 41'577 (0'0,41'577] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.008472443s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.295867920s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:05 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 67 pg[9.6( v 41'577 (0'0,41'577] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.008341789s) [2] async=[2] r=-1 lpr=67 pi=[49,67)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 118.295906067s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:05 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 67 pg[9.6( v 41'577 (0'0,41'577] local-lis/les=65/66 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.008270264s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.295906067s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:05 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 67 pg[9.1e( v 41'577 (0'0,41'577] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.007719040s) [2] async=[2] r=-1 lpr=67 pi=[49,67)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 118.296112061s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:05 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 67 pg[9.1e( v 41'577 (0'0,41'577] local-lis/les=65/66 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.007644653s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.296112061s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:05 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 67 pg[9.1e( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:05 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 67 pg[9.1e( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:05 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 67 pg[9.6( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:05 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 67 pg[9.6( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:05 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 67 pg[9.16( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:05 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 67 pg[9.16( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:05 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 67 pg[9.e( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:05 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 67 pg[9.e( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:05 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 67 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=66/67 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[57,66)/1 crt=41'577 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:05 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 67 pg[9.17( v 41'577 (0'0,41'577] local-lis/les=66/67 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[57,66)/1 crt=41'577 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:05 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 67 pg[9.f( v 41'577 (0'0,41'577] local-lis/les=66/67 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[57,66)/1 crt=41'577 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:05 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 67 pg[9.7( v 41'577 (0'0,41'577] local-lis/les=66/67 n=7 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[57,66)/1 crt=41'577 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:05 compute-0 systemd[1]: Started libpod-conmon-c1835eff8c3ca2fca6366ab27e122770d42d98498110339dff2d293359c1ad78.scope.
Oct 11 03:39:05 compute-0 podman[104051]: 2025-10-11 03:39:05.229931322 +0000 UTC m=+0.021826441 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 03:39:05 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:39:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b53193657fd73533cc70d77f0a5ace81e26044283e38e260eab965fe317b83f8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:39:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b53193657fd73533cc70d77f0a5ace81e26044283e38e260eab965fe317b83f8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:39:05 compute-0 podman[104051]: 2025-10-11 03:39:05.366637368 +0000 UTC m=+0.158532557 container init c1835eff8c3ca2fca6366ab27e122770d42d98498110339dff2d293359c1ad78 (image=quay.io/ceph/ceph:v18, name=lucid_neumann, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:39:05 compute-0 podman[104051]: 2025-10-11 03:39:05.376309838 +0000 UTC m=+0.168204977 container start c1835eff8c3ca2fca6366ab27e122770d42d98498110339dff2d293359c1ad78 (image=quay.io/ceph/ceph:v18, name=lucid_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 03:39:05 compute-0 podman[104051]: 2025-10-11 03:39:05.381485073 +0000 UTC m=+0.173380332 container attach c1835eff8c3ca2fca6366ab27e122770d42d98498110339dff2d293359c1ad78 (image=quay.io/ceph/ceph:v18, name=lucid_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 03:39:05 compute-0 ceph-mon[74273]: 7.7 scrub starts
Oct 11 03:39:05 compute-0 ceph-mon[74273]: 7.7 scrub ok
Oct 11 03:39:05 compute-0 ceph-mon[74273]: 5.8 deep-scrub starts
Oct 11 03:39:05 compute-0 ceph-mon[74273]: 5.8 deep-scrub ok
Oct 11 03:39:05 compute-0 ceph-mon[74273]: osdmap e67: 3 total, 3 up, 3 in
Oct 11 03:39:05 compute-0 lucid_neumann[104066]: {
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:     "user_id": "openstack",
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:     "display_name": "openstack",
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:     "email": "",
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:     "suspended": 0,
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:     "max_buckets": 1000,
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:     "subusers": [],
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:     "keys": [
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:         {
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:             "user": "openstack",
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:             "access_key": "WGZCZLT87QXKN56E39I4",
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:             "secret_key": "9XjC3fwYXQMrSa77F5Hr186GLRizbsRjWuTRYI8G"
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:         }
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:     ],
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:     "swift_keys": [],
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:     "caps": [],
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:     "op_mask": "read, write, delete",
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:     "default_placement": "",
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:     "default_storage_class": "",
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:     "placement_tags": [],
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:     "bucket_quota": {
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:         "enabled": false,
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:         "check_on_raw": false,
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:         "max_size": -1,
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:         "max_size_kb": 0,
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:         "max_objects": -1
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:     },
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:     "user_quota": {
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:         "enabled": false,
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:         "check_on_raw": false,
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:         "max_size": -1,
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:         "max_size_kb": 0,
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:         "max_objects": -1
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:     },
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:     "temp_url_keys": [],
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:     "type": "rgw",
Oct 11 03:39:05 compute-0 lucid_neumann[104066]:     "mfa_ids": []
Oct 11 03:39:05 compute-0 lucid_neumann[104066]: }
Oct 11 03:39:05 compute-0 lucid_neumann[104066]: 
Oct 11 03:39:05 compute-0 systemd[1]: libpod-c1835eff8c3ca2fca6366ab27e122770d42d98498110339dff2d293359c1ad78.scope: Deactivated successfully.
Oct 11 03:39:05 compute-0 conmon[104066]: conmon c1835eff8c3ca2fca636 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c1835eff8c3ca2fca6366ab27e122770d42d98498110339dff2d293359c1ad78.scope/container/memory.events
Oct 11 03:39:05 compute-0 podman[104051]: 2025-10-11 03:39:05.632422935 +0000 UTC m=+0.424318044 container died c1835eff8c3ca2fca6366ab27e122770d42d98498110339dff2d293359c1ad78 (image=quay.io/ceph/ceph:v18, name=lucid_neumann, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct 11 03:39:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-b53193657fd73533cc70d77f0a5ace81e26044283e38e260eab965fe317b83f8-merged.mount: Deactivated successfully.
Oct 11 03:39:05 compute-0 podman[104051]: 2025-10-11 03:39:05.674362579 +0000 UTC m=+0.466257698 container remove c1835eff8c3ca2fca6366ab27e122770d42d98498110339dff2d293359c1ad78 (image=quay.io/ceph/ceph:v18, name=lucid_neumann, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:39:05 compute-0 systemd[1]: libpod-conmon-c1835eff8c3ca2fca6366ab27e122770d42d98498110339dff2d293359c1ad78.scope: Deactivated successfully.
Oct 11 03:39:05 compute-0 sudo[104048]: pam_unix(sudo:session): session closed for user root
Oct 11 03:39:05 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 7.b scrub starts
Oct 11 03:39:05 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 7.b scrub ok
Oct 11 03:39:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Oct 11 03:39:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Oct 11 03:39:06 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Oct 11 03:39:06 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 68 pg[9.7( v 41'577 (0'0,41'577] local-lis/les=66/67 n=7 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68 pruub=15.013319969s) [2] async=[2] r=-1 lpr=68 pi=[57,68)/1 crt=41'577 mlcod 41'577 active pruub 124.340972900s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:06 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 68 pg[9.7( v 41'577 (0'0,41'577] local-lis/les=66/67 n=7 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68 pruub=15.013225555s) [2] r=-1 lpr=68 pi=[57,68)/1 crt=41'577 mlcod 0'0 unknown NOTIFY pruub 124.340972900s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:06 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 68 pg[9.17( v 41'577 (0'0,41'577] local-lis/les=66/67 n=6 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68 pruub=15.006283760s) [2] async=[2] r=-1 lpr=68 pi=[57,68)/1 crt=41'577 mlcod 41'577 active pruub 124.334281921s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:06 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 68 pg[9.17( v 41'577 (0'0,41'577] local-lis/les=66/67 n=6 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68 pruub=15.006155014s) [2] r=-1 lpr=68 pi=[57,68)/1 crt=41'577 mlcod 0'0 unknown NOTIFY pruub 124.334281921s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:06 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 68 pg[9.f( v 41'577 (0'0,41'577] local-lis/les=66/67 n=7 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68 pruub=15.012424469s) [2] async=[2] r=-1 lpr=68 pi=[57,68)/1 crt=41'577 mlcod 41'577 active pruub 124.340766907s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:06 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 68 pg[9.f( v 41'577 (0'0,41'577] local-lis/les=66/67 n=7 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68 pruub=15.012330055s) [2] r=-1 lpr=68 pi=[57,68)/1 crt=41'577 mlcod 0'0 unknown NOTIFY pruub 124.340766907s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:06 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 68 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=66/67 n=6 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68 pruub=15.005106926s) [2] async=[2] r=-1 lpr=68 pi=[57,68)/1 crt=41'577 mlcod 41'577 active pruub 124.334205627s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:06 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 68 pg[9.17( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68) [2] r=0 lpr=68 pi=[57,68)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:06 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 68 pg[9.17( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68) [2] r=0 lpr=68 pi=[57,68)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:06 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 68 pg[9.f( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68) [2] r=0 lpr=68 pi=[57,68)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:06 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 68 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=66/67 n=6 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68 pruub=15.004752159s) [2] r=-1 lpr=68 pi=[57,68)/1 crt=41'577 mlcod 0'0 unknown NOTIFY pruub 124.334205627s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:06 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 68 pg[9.f( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68) [2] r=0 lpr=68 pi=[57,68)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:06 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 68 pg[9.7( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68) [2] r=0 lpr=68 pi=[57,68)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:06 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 68 pg[9.7( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68) [2] r=0 lpr=68 pi=[57,68)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:06 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 68 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68) [2] r=0 lpr=68 pi=[57,68)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:06 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 68 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68) [2] r=0 lpr=68 pi=[57,68)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:06 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 68 pg[9.6( v 41'577 (0'0,41'577] local-lis/les=67/68 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:06 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 68 pg[9.1e( v 41'577 (0'0,41'577] local-lis/les=67/68 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:06 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 68 pg[9.e( v 41'577 (0'0,41'577] local-lis/les=67/68 n=7 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:06 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 68 pg[9.16( v 41'577 (0'0,41'577] local-lis/les=67/68 n=6 ec=49/34 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:06 compute-0 ceph-mon[74273]: pgmap v139: 305 pgs: 4 unknown, 301 active+clean; 455 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 0 objects/s recovering
Oct 11 03:39:06 compute-0 ceph-mon[74273]: 7.b scrub starts
Oct 11 03:39:06 compute-0 ceph-mon[74273]: 7.b scrub ok
Oct 11 03:39:06 compute-0 ceph-mon[74273]: osdmap e68: 3 total, 3 up, 3 in
Oct 11 03:39:06 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v142: 305 pgs: 4 unknown, 301 active+clean; 455 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:39:06 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 7.d scrub starts
Oct 11 03:39:06 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 7.d scrub ok
Oct 11 03:39:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Oct 11 03:39:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Oct 11 03:39:07 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Oct 11 03:39:07 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 69 pg[9.17( v 41'577 (0'0,41'577] local-lis/les=68/69 n=6 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68) [2] r=0 lpr=68 pi=[57,68)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:07 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 69 pg[9.7( v 41'577 (0'0,41'577] local-lis/les=68/69 n=7 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68) [2] r=0 lpr=68 pi=[57,68)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:07 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 69 pg[9.f( v 41'577 (0'0,41'577] local-lis/les=68/69 n=7 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68) [2] r=0 lpr=68 pi=[57,68)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:07 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 69 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=68/69 n=6 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68) [2] r=0 lpr=68 pi=[57,68)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:07 compute-0 ceph-mon[74273]: 7.d scrub starts
Oct 11 03:39:07 compute-0 ceph-mon[74273]: 7.d scrub ok
Oct 11 03:39:07 compute-0 ceph-mon[74273]: osdmap e69: 3 total, 3 up, 3 in
Oct 11 03:39:08 compute-0 ceph-mon[74273]: pgmap v142: 305 pgs: 4 unknown, 301 active+clean; 455 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:39:08 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Oct 11 03:39:08 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v144: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.3 KiB/s wr, 47 op/s; 171 B/s, 10 objects/s recovering
Oct 11 03:39:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0) v1
Oct 11 03:39:08 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 11 03:39:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Oct 11 03:39:08 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 11 03:39:08 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Oct 11 03:39:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:39:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Oct 11 03:39:09 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 11 03:39:09 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 11 03:39:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Oct 11 03:39:09 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Oct 11 03:39:09 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 70 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=70 pruub=14.201015472s) [2] r=-1 lpr=70 pi=[45,70)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 126.807975769s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:09 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 70 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=70 pruub=14.200906754s) [2] r=-1 lpr=70 pi=[45,70)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.807975769s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:09 compute-0 ceph-mon[74273]: 7.10 scrub starts
Oct 11 03:39:09 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 11 03:39:09 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 11 03:39:09 compute-0 ceph-mon[74273]: 7.10 scrub ok
Oct 11 03:39:09 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 70 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:09 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 70 pg[9.8( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=70 pruub=9.974048615s) [2] r=-1 lpr=70 pi=[49,70)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 117.975128174s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:09 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 70 pg[9.8( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=70 pruub=9.973771095s) [2] r=-1 lpr=70 pi=[49,70)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 117.975128174s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:09 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 70 pg[9.18( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=70 pruub=9.973217964s) [2] r=-1 lpr=70 pi=[49,70)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 117.975196838s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:09 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 70 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=70) [2] r=0 lpr=70 pi=[49,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:09 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 70 pg[9.18( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=70 pruub=9.972894669s) [2] r=-1 lpr=70 pi=[49,70)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 117.975196838s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:09 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 70 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=70) [2] r=0 lpr=70 pi=[49,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Oct 11 03:39:10 compute-0 ceph-mon[74273]: pgmap v144: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.3 KiB/s wr, 47 op/s; 171 B/s, 10 objects/s recovering
Oct 11 03:39:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 11 03:39:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 11 03:39:10 compute-0 ceph-mon[74273]: osdmap e70: 3 total, 3 up, 3 in
Oct 11 03:39:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Oct 11 03:39:10 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Oct 11 03:39:10 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:10 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:10 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:10 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:10 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 71 pg[9.18( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=71) [2]/[1] r=0 lpr=71 pi=[49,71)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:10 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 71 pg[9.18( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=71) [2]/[1] r=0 lpr=71 pi=[49,71)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:10 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 71 pg[9.8( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=71) [2]/[1] r=0 lpr=71 pi=[49,71)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:10 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 71 pg[9.8( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=71) [2]/[1] r=0 lpr=71 pi=[49,71)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:10 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 71 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=70/71 n=1 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:10 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v147: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.3 KiB/s wr, 48 op/s; 172 B/s, 10 objects/s recovering
Oct 11 03:39:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0) v1
Oct 11 03:39:10 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 11 03:39:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Oct 11 03:39:10 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 11 03:39:11 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 5.a scrub starts
Oct 11 03:39:11 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 5.a scrub ok
Oct 11 03:39:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Oct 11 03:39:11 compute-0 ceph-mon[74273]: osdmap e71: 3 total, 3 up, 3 in
Oct 11 03:39:11 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 11 03:39:11 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 11 03:39:11 compute-0 ceph-mon[74273]: 5.a scrub starts
Oct 11 03:39:11 compute-0 ceph-mon[74273]: 5.a scrub ok
Oct 11 03:39:11 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 11 03:39:11 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 11 03:39:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Oct 11 03:39:11 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Oct 11 03:39:11 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 72 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=53/54 n=1 ec=45/21 lis/c=53/53 les/c/f=54/54/0 sis=72 pruub=15.675548553s) [0] r=-1 lpr=72 pi=[53,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 125.293601990s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:11 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 72 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=53/54 n=1 ec=45/21 lis/c=53/53 les/c/f=54/54/0 sis=72 pruub=15.675465584s) [0] r=-1 lpr=72 pi=[53,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 125.293601990s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:11 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 72 pg[6.9( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=53/53 les/c/f=54/54/0 sis=72) [0] r=0 lpr=72 pi=[53,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:11 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 72 pg[9.8( v 41'577 (0'0,41'577] local-lis/les=71/72 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=71) [2]/[1] async=[2] r=0 lpr=71 pi=[49,71)/1 crt=41'577 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:11 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 72 pg[9.18( v 41'577 (0'0,41'577] local-lis/les=71/72 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=71) [2]/[1] async=[2] r=0 lpr=71 pi=[49,71)/1 crt=41'577 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:11 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 8.3 deep-scrub starts
Oct 11 03:39:11 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Oct 11 03:39:11 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 8.3 deep-scrub ok
Oct 11 03:39:11 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Oct 11 03:39:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Oct 11 03:39:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Oct 11 03:39:12 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Oct 11 03:39:12 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 73 pg[9.8( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=71/49 les/c/f=72/50/0 sis=73) [2] r=0 lpr=73 pi=[49,73)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:12 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 73 pg[9.8( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=71/49 les/c/f=72/50/0 sis=73) [2] r=0 lpr=73 pi=[49,73)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:12 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 73 pg[9.18( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=71/49 les/c/f=72/50/0 sis=73) [2] r=0 lpr=73 pi=[49,73)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:12 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 73 pg[9.18( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=71/49 les/c/f=72/50/0 sis=73) [2] r=0 lpr=73 pi=[49,73)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:12 compute-0 ceph-mon[74273]: pgmap v147: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.3 KiB/s wr, 48 op/s; 172 B/s, 10 objects/s recovering
Oct 11 03:39:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 11 03:39:12 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 73 pg[9.18( v 41'577 (0'0,41'577] local-lis/les=71/72 n=6 ec=49/34 lis/c=71/49 les/c/f=72/50/0 sis=73 pruub=15.065786362s) [2] async=[2] r=-1 lpr=73 pi=[49,73)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 125.696136475s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 11 03:39:12 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 73 pg[9.8( v 41'577 (0'0,41'577] local-lis/les=71/72 n=7 ec=49/34 lis/c=71/49 les/c/f=72/50/0 sis=73 pruub=15.056883812s) [2] async=[2] r=-1 lpr=73 pi=[49,73)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 125.687286377s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:12 compute-0 ceph-mon[74273]: osdmap e72: 3 total, 3 up, 3 in
Oct 11 03:39:12 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 73 pg[9.8( v 41'577 (0'0,41'577] local-lis/les=71/72 n=7 ec=49/34 lis/c=71/49 les/c/f=72/50/0 sis=73 pruub=15.056796074s) [2] r=-1 lpr=73 pi=[49,73)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 125.687286377s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:12 compute-0 ceph-mon[74273]: 8.3 deep-scrub starts
Oct 11 03:39:12 compute-0 ceph-mon[74273]: 4.19 scrub starts
Oct 11 03:39:12 compute-0 ceph-mon[74273]: 4.19 scrub ok
Oct 11 03:39:12 compute-0 ceph-mon[74273]: 8.3 deep-scrub ok
Oct 11 03:39:12 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 73 pg[9.18( v 41'577 (0'0,41'577] local-lis/les=71/72 n=6 ec=49/34 lis/c=71/49 les/c/f=72/50/0 sis=73 pruub=15.064996719s) [2] r=-1 lpr=73 pi=[49,73)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 125.696136475s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:12 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 73 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=72/73 n=1 ec=45/21 lis/c=53/53 les/c/f=54/54/0 sis=72) [0] r=0 lpr=72 pi=[53,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:12 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v150: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:39:12 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 5.b deep-scrub starts
Oct 11 03:39:12 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 5.b deep-scrub ok
Oct 11 03:39:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Oct 11 03:39:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Oct 11 03:39:13 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Oct 11 03:39:13 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 74 pg[9.8( v 41'577 (0'0,41'577] local-lis/les=73/74 n=7 ec=49/34 lis/c=71/49 les/c/f=72/50/0 sis=73) [2] r=0 lpr=73 pi=[49,73)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:13 compute-0 ceph-mon[74273]: osdmap e73: 3 total, 3 up, 3 in
Oct 11 03:39:13 compute-0 ceph-mon[74273]: 5.b deep-scrub starts
Oct 11 03:39:13 compute-0 ceph-mon[74273]: 5.b deep-scrub ok
Oct 11 03:39:13 compute-0 ceph-mon[74273]: osdmap e74: 3 total, 3 up, 3 in
Oct 11 03:39:13 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 74 pg[9.18( v 41'577 (0'0,41'577] local-lis/les=73/74 n=6 ec=49/34 lis/c=71/49 les/c/f=72/50/0 sis=73) [2] r=0 lpr=73 pi=[49,73)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:13 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Oct 11 03:39:13 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Oct 11 03:39:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:39:14 compute-0 ceph-mon[74273]: pgmap v150: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:39:14 compute-0 ceph-mon[74273]: 7.12 scrub starts
Oct 11 03:39:14 compute-0 ceph-mon[74273]: 7.12 scrub ok
Oct 11 03:39:14 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v152: 305 pgs: 3 peering, 302 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 52 B/s, 3 objects/s recovering
Oct 11 03:39:14 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 7.14 deep-scrub starts
Oct 11 03:39:14 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 7.14 deep-scrub ok
Oct 11 03:39:15 compute-0 ceph-mon[74273]: 7.14 deep-scrub starts
Oct 11 03:39:15 compute-0 ceph-mon[74273]: 7.14 deep-scrub ok
Oct 11 03:39:15 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Oct 11 03:39:15 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Oct 11 03:39:15 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Oct 11 03:39:15 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Oct 11 03:39:16 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 5.d scrub starts
Oct 11 03:39:16 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 5.d scrub ok
Oct 11 03:39:16 compute-0 ceph-mon[74273]: pgmap v152: 305 pgs: 3 peering, 302 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 52 B/s, 3 objects/s recovering
Oct 11 03:39:16 compute-0 ceph-mon[74273]: 7.16 scrub starts
Oct 11 03:39:16 compute-0 ceph-mon[74273]: 7.16 scrub ok
Oct 11 03:39:16 compute-0 ceph-mon[74273]: 4.1d scrub starts
Oct 11 03:39:16 compute-0 ceph-mon[74273]: 4.1d scrub ok
Oct 11 03:39:16 compute-0 ceph-mon[74273]: 5.d scrub starts
Oct 11 03:39:16 compute-0 ceph-mon[74273]: 5.d scrub ok
Oct 11 03:39:16 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v153: 305 pgs: 3 peering, 302 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 2 objects/s recovering
Oct 11 03:39:17 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 5.e scrub starts
Oct 11 03:39:17 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 5.e scrub ok
Oct 11 03:39:17 compute-0 ceph-mon[74273]: 5.e scrub starts
Oct 11 03:39:17 compute-0 ceph-mon[74273]: 5.e scrub ok
Oct 11 03:39:18 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Oct 11 03:39:18 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Oct 11 03:39:18 compute-0 ceph-mon[74273]: pgmap v153: 305 pgs: 3 peering, 302 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 2 objects/s recovering
Oct 11 03:39:18 compute-0 ceph-mon[74273]: 5.10 scrub starts
Oct 11 03:39:18 compute-0 ceph-mon[74273]: 5.10 scrub ok
Oct 11 03:39:18 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v154: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 30 B/s, 1 objects/s recovering
Oct 11 03:39:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0) v1
Oct 11 03:39:18 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 11 03:39:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Oct 11 03:39:18 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 11 03:39:18 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Oct 11 03:39:18 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Oct 11 03:39:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:39:19 compute-0 sshd-session[104163]: Accepted publickey for zuul from 192.168.122.30 port 51962 ssh2: ECDSA SHA256:qo9+RMabHfLAOt2q/80W97JXaZUdeUCREBuTRaqgxBY
Oct 11 03:39:19 compute-0 systemd-logind[820]: New session 34 of user zuul.
Oct 11 03:39:19 compute-0 systemd[1]: Started Session 34 of User zuul.
Oct 11 03:39:19 compute-0 sshd-session[104163]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:39:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Oct 11 03:39:19 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 11 03:39:19 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 11 03:39:19 compute-0 ceph-mon[74273]: 7.17 scrub starts
Oct 11 03:39:19 compute-0 ceph-mon[74273]: 7.17 scrub ok
Oct 11 03:39:19 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 11 03:39:19 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 11 03:39:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Oct 11 03:39:19 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Oct 11 03:39:19 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 75 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=56/57 n=1 ec=45/21 lis/c=56/56 les/c/f=57/57/0 sis=75 pruub=10.606144905s) [0] r=-1 lpr=75 pi=[56,75)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 128.316955566s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:19 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 75 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=56/57 n=1 ec=45/21 lis/c=56/56 les/c/f=57/57/0 sis=75 pruub=10.606088638s) [0] r=-1 lpr=75 pi=[56,75)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.316955566s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:19 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 75 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=56/56 les/c/f=57/57/0 sis=75) [0] r=0 lpr=75 pi=[56,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:19 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Oct 11 03:39:19 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Oct 11 03:39:19 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Oct 11 03:39:19 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Oct 11 03:39:20 compute-0 python3.9[104316]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:39:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_03:39:20
Oct 11 03:39:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 03:39:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 03:39:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'images', 'backups', '.mgr', 'volumes', 'cephfs.cephfs.meta', '.rgw.root']
Oct 11 03:39:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Oct 11 03:39:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 03:39:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Oct 11 03:39:20 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Oct 11 03:39:20 compute-0 ceph-mon[74273]: pgmap v154: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 30 B/s, 1 objects/s recovering
Oct 11 03:39:20 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 11 03:39:20 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 11 03:39:20 compute-0 ceph-mon[74273]: osdmap e75: 3 total, 3 up, 3 in
Oct 11 03:39:20 compute-0 ceph-mon[74273]: 4.1e scrub starts
Oct 11 03:39:20 compute-0 ceph-mon[74273]: 7.19 scrub starts
Oct 11 03:39:20 compute-0 ceph-mon[74273]: 4.1e scrub ok
Oct 11 03:39:20 compute-0 ceph-mon[74273]: 7.19 scrub ok
Oct 11 03:39:20 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 76 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=75/76 n=1 ec=45/21 lis/c=56/56 les/c/f=57/57/0 sis=75) [0] r=0 lpr=75 pi=[56,75)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:20 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v157: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:39:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0) v1
Oct 11 03:39:20 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 11 03:39:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Oct 11 03:39:20 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 11 03:39:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:39:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:39:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:39:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:39:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:39:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:39:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 03:39:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 03:39:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:39:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:39:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:39:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:39:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:39:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:39:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:39:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:39:20 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Oct 11 03:39:20 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Oct 11 03:39:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Oct 11 03:39:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 11 03:39:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 11 03:39:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Oct 11 03:39:21 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Oct 11 03:39:21 compute-0 ceph-mon[74273]: osdmap e76: 3 total, 3 up, 3 in
Oct 11 03:39:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 11 03:39:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 11 03:39:21 compute-0 ceph-mon[74273]: 4.1f scrub starts
Oct 11 03:39:21 compute-0 ceph-mon[74273]: 4.1f scrub ok
Oct 11 03:39:21 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Oct 11 03:39:21 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Oct 11 03:39:21 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Oct 11 03:39:21 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Oct 11 03:39:22 compute-0 sudo[104532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhprszgendwxibjtanuxacmovhxawrso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153961.587502-32-171131192433739/AnsiballZ_command.py'
Oct 11 03:39:22 compute-0 sudo[104532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:39:22 compute-0 python3.9[104534]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                             pushd /var/tmp
                                             curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                             pushd repo-setup-main
                                             python3 -m venv ./venv
                                             PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                             ./venv/bin/repo-setup current-podified -b antelope
                                             popd
                                             rm -rf repo-setup-main
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:39:22 compute-0 ceph-mon[74273]: pgmap v157: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:39:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 11 03:39:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 11 03:39:22 compute-0 ceph-mon[74273]: osdmap e77: 3 total, 3 up, 3 in
Oct 11 03:39:22 compute-0 ceph-mon[74273]: 2.19 scrub starts
Oct 11 03:39:22 compute-0 ceph-mon[74273]: 2.19 scrub ok
Oct 11 03:39:22 compute-0 ceph-mon[74273]: 8.5 scrub starts
Oct 11 03:39:22 compute-0 ceph-mon[74273]: 8.5 scrub ok
Oct 11 03:39:22 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v159: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:39:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0) v1
Oct 11 03:39:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 11 03:39:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Oct 11 03:39:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 11 03:39:22 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 77 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=59/61 n=1 ec=45/21 lis/c=59/59 les/c/f=61/61/0 sis=77 pruub=9.620896339s) [1] r=-1 lpr=77 pi=[59,77)/1 crt=37'39 mlcod 37'39 active pruub 135.439300537s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:22 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 77 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=59/61 n=1 ec=45/21 lis/c=59/59 les/c/f=61/61/0 sis=77 pruub=9.620635033s) [1] r=-1 lpr=77 pi=[59,77)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 135.439300537s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:22 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 77 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=59/59 les/c/f=61/61/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:22 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Oct 11 03:39:22 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Oct 11 03:39:22 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Oct 11 03:39:23 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Oct 11 03:39:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Oct 11 03:39:23 compute-0 ceph-mon[74273]: pgmap v159: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:39:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 11 03:39:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 11 03:39:23 compute-0 ceph-mon[74273]: 2.18 scrub starts
Oct 11 03:39:23 compute-0 ceph-mon[74273]: 2.18 scrub ok
Oct 11 03:39:23 compute-0 ceph-mon[74273]: 7.1d scrub starts
Oct 11 03:39:23 compute-0 ceph-mon[74273]: 7.1d scrub ok
Oct 11 03:39:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 11 03:39:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 11 03:39:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Oct 11 03:39:23 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Oct 11 03:39:23 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 78 pg[9.c( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=78 pruub=12.210286140s) [2] r=-1 lpr=78 pi=[49,78)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 133.975524902s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:23 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 78 pg[9.1c( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=78 pruub=12.210105896s) [2] r=-1 lpr=78 pi=[49,78)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 133.975875854s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:23 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 78 pg[9.1c( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=78 pruub=12.210042000s) [2] r=-1 lpr=78 pi=[49,78)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.975875854s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:23 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 78 pg[9.c( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=78 pruub=12.209760666s) [2] r=-1 lpr=78 pi=[49,78)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.975524902s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:23 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 78 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=78) [2] r=0 lpr=78 pi=[49,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:23 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 78 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=78) [2] r=0 lpr=78 pi=[49,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:23 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 78 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=77/78 n=1 ec=45/21 lis/c=59/59 les/c/f=61/61/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:39:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Oct 11 03:39:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Oct 11 03:39:24 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Oct 11 03:39:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 79 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=79) [2]/[1] r=-1 lpr=79 pi=[49,79)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 79 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=79) [2]/[1] r=-1 lpr=79 pi=[49,79)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 79 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=79) [2]/[1] r=-1 lpr=79 pi=[49,79)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:24 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 79 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=79) [2]/[1] r=-1 lpr=79 pi=[49,79)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 79 pg[9.c( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=79) [2]/[1] r=0 lpr=79 pi=[49,79)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 79 pg[9.c( v 41'577 (0'0,41'577] local-lis/les=49/50 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=79) [2]/[1] r=0 lpr=79 pi=[49,79)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 79 pg[9.1c( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=79) [2]/[1] r=0 lpr=79 pi=[49,79)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:24 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 79 pg[9.1c( v 41'577 (0'0,41'577] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=79) [2]/[1] r=0 lpr=79 pi=[49,79)/1 crt=41'577 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:24 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v162: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:39:24 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 11 03:39:24 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 11 03:39:24 compute-0 ceph-mon[74273]: osdmap e78: 3 total, 3 up, 3 in
Oct 11 03:39:24 compute-0 ceph-mon[74273]: osdmap e79: 3 total, 3 up, 3 in
Oct 11 03:39:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0) v1
Oct 11 03:39:24 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 11 03:39:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Oct 11 03:39:24 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 11 03:39:25 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 5.17 deep-scrub starts
Oct 11 03:39:25 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 5.17 deep-scrub ok
Oct 11 03:39:25 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Oct 11 03:39:25 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Oct 11 03:39:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Oct 11 03:39:25 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 11 03:39:25 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 11 03:39:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Oct 11 03:39:25 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Oct 11 03:39:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 80 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=62/63 n=1 ec=45/21 lis/c=62/62 les/c/f=63/63/0 sis=80 pruub=15.196624756s) [1] r=-1 lpr=80 pi=[62,80)/1 crt=37'39 mlcod 37'39 active pruub 143.522369385s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:25 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 80 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=62/63 n=1 ec=45/21 lis/c=62/62 les/c/f=63/63/0 sis=80 pruub=15.196572304s) [1] r=-1 lpr=80 pi=[62,80)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 143.522369385s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:25 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 80 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=62/62 les/c/f=63/63/0 sis=80) [1] r=0 lpr=80 pi=[62,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:25 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 80 pg[9.1c( v 41'577 (0'0,41'577] local-lis/les=79/80 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=79) [2]/[1] async=[2] r=0 lpr=79 pi=[49,79)/1 crt=41'577 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:25 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 80 pg[9.c( v 41'577 (0'0,41'577] local-lis/les=79/80 n=7 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=79) [2]/[1] async=[2] r=0 lpr=79 pi=[49,79)/1 crt=41'577 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:25 compute-0 ceph-mon[74273]: pgmap v162: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:39:25 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 11 03:39:25 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 11 03:39:25 compute-0 ceph-mon[74273]: 5.17 deep-scrub starts
Oct 11 03:39:25 compute-0 ceph-mon[74273]: 5.17 deep-scrub ok
Oct 11 03:39:25 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 11 03:39:25 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 11 03:39:25 compute-0 ceph-mon[74273]: osdmap e80: 3 total, 3 up, 3 in
Oct 11 03:39:26 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Oct 11 03:39:26 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Oct 11 03:39:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Oct 11 03:39:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Oct 11 03:39:26 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Oct 11 03:39:26 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 81 pg[9.c( v 41'577 (0'0,41'577] local-lis/les=79/80 n=7 ec=49/34 lis/c=79/49 les/c/f=80/50/0 sis=81 pruub=15.236288071s) [2] async=[2] r=-1 lpr=81 pi=[49,81)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 139.544677734s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:26 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 81 pg[9.c( v 41'577 (0'0,41'577] local-lis/les=79/80 n=7 ec=49/34 lis/c=79/49 les/c/f=80/50/0 sis=81 pruub=15.236227036s) [2] r=-1 lpr=81 pi=[49,81)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.544677734s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:26 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 81 pg[9.1c( v 41'577 (0'0,41'577] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/49 les/c/f=80/50/0 sis=81 pruub=15.235684395s) [2] async=[2] r=-1 lpr=81 pi=[49,81)/1 crt=41'577 lcod 0'0 mlcod 0'0 active pruub 139.544631958s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:26 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 81 pg[9.1c( v 41'577 (0'0,41'577] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/49 les/c/f=80/50/0 sis=81 pruub=15.235630989s) [2] r=-1 lpr=81 pi=[49,81)/1 crt=41'577 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.544631958s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:26 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 81 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=80/81 n=1 ec=45/21 lis/c=62/62 les/c/f=63/63/0 sis=80) [1] r=0 lpr=80 pi=[62,80)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:26 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 81 pg[9.c( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=79/49 les/c/f=80/50/0 sis=81) [2] r=0 lpr=81 pi=[49,81)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:26 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 81 pg[9.c( v 41'577 (0'0,41'577] local-lis/les=0/0 n=7 ec=49/34 lis/c=79/49 les/c/f=80/50/0 sis=81) [2] r=0 lpr=81 pi=[49,81)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:26 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 81 pg[9.1c( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=79/49 les/c/f=80/50/0 sis=81) [2] r=0 lpr=81 pi=[49,81)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:26 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 81 pg[9.1c( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=79/49 les/c/f=80/50/0 sis=81) [2] r=0 lpr=81 pi=[49,81)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:26 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v165: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:39:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0) v1
Oct 11 03:39:26 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 11 03:39:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Oct 11 03:39:26 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 11 03:39:26 compute-0 ceph-mon[74273]: 7.1e scrub starts
Oct 11 03:39:26 compute-0 ceph-mon[74273]: 7.1e scrub ok
Oct 11 03:39:26 compute-0 ceph-mon[74273]: osdmap e81: 3 total, 3 up, 3 in
Oct 11 03:39:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 11 03:39:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 11 03:39:26 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Oct 11 03:39:26 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Oct 11 03:39:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Oct 11 03:39:27 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 11 03:39:27 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 11 03:39:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Oct 11 03:39:27 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Oct 11 03:39:27 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 82 pg[9.c( v 41'577 (0'0,41'577] local-lis/les=81/82 n=7 ec=49/34 lis/c=79/49 les/c/f=80/50/0 sis=81) [2] r=0 lpr=81 pi=[49,81)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:27 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 82 pg[9.1c( v 41'577 (0'0,41'577] local-lis/les=81/82 n=6 ec=49/34 lis/c=79/49 les/c/f=80/50/0 sis=81) [2] r=0 lpr=81 pi=[49,81)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:27 compute-0 ceph-mon[74273]: 8.7 scrub starts
Oct 11 03:39:27 compute-0 ceph-mon[74273]: 8.7 scrub ok
Oct 11 03:39:27 compute-0 ceph-mon[74273]: pgmap v165: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:39:27 compute-0 ceph-mon[74273]: 10.9 scrub starts
Oct 11 03:39:27 compute-0 ceph-mon[74273]: 10.9 scrub ok
Oct 11 03:39:27 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 11 03:39:27 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 11 03:39:27 compute-0 ceph-mon[74273]: osdmap e82: 3 total, 3 up, 3 in
Oct 11 03:39:28 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v167: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 40 B/s, 3 objects/s recovering
Oct 11 03:39:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Oct 11 03:39:28 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 11 03:39:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Oct 11 03:39:28 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 11 03:39:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Oct 11 03:39:28 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 11 03:39:28 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 11 03:39:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Oct 11 03:39:28 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Oct 11 03:39:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 11 03:39:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 11 03:39:29 compute-0 sudo[104532]: pam_unix(sudo:session): session closed for user root
Oct 11 03:39:29 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Oct 11 03:39:29 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Oct 11 03:39:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:39:29 compute-0 sshd-session[104166]: Connection closed by 192.168.122.30 port 51962
Oct 11 03:39:29 compute-0 sshd-session[104163]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:39:29 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Oct 11 03:39:29 compute-0 systemd[1]: session-34.scope: Consumed 8.591s CPU time.
Oct 11 03:39:29 compute-0 systemd-logind[820]: Session 34 logged out. Waiting for processes to exit.
Oct 11 03:39:29 compute-0 systemd-logind[820]: Removed session 34.
Oct 11 03:39:29 compute-0 ceph-mon[74273]: pgmap v167: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 40 B/s, 3 objects/s recovering
Oct 11 03:39:29 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 11 03:39:29 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 11 03:39:29 compute-0 ceph-mon[74273]: osdmap e83: 3 total, 3 up, 3 in
Oct 11 03:39:30 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Oct 11 03:39:30 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Oct 11 03:39:30 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 8.a scrub starts
Oct 11 03:39:30 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 8.a scrub ok
Oct 11 03:39:30 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 83 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=59/61 n=1 ec=45/21 lis/c=59/59 les/c/f=61/61/0 sis=83 pruub=10.095108032s) [2] r=-1 lpr=83 pi=[59,83)/1 crt=37'39 mlcod 37'39 active pruub 143.433242798s@ mbc={255={}}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:30 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 83 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=59/61 n=1 ec=45/21 lis/c=59/59 les/c/f=61/61/0 sis=83 pruub=10.094737053s) [2] r=-1 lpr=83 pi=[59,83)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 143.433242798s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:30 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 83 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=59/59 les/c/f=61/61/0 sis=83) [2] r=0 lpr=83 pi=[59,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:30 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v169: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 32 B/s, 2 objects/s recovering
Oct 11 03:39:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Oct 11 03:39:30 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 11 03:39:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 03:39:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:39:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 03:39:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:39:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:39:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:39:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:39:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:39:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:39:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:39:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:39:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:39:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 03:39:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:39:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:39:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:39:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 03:39:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:39:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.225674773718825e-06 of space, bias 1.0, pg target 0.0006677024321156476 quantized to 32 (current 32)
Oct 11 03:39:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:39:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:39:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:39:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 03:39:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Oct 11 03:39:30 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 11 03:39:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Oct 11 03:39:30 compute-0 ceph-mon[74273]: 8.8 scrub starts
Oct 11 03:39:30 compute-0 ceph-mon[74273]: 8.8 scrub ok
Oct 11 03:39:30 compute-0 ceph-mon[74273]: 5.1b scrub starts
Oct 11 03:39:30 compute-0 ceph-mon[74273]: 5.1b scrub ok
Oct 11 03:39:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 11 03:39:30 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Oct 11 03:39:30 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 84 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=83/84 n=1 ec=45/21 lis/c=59/59 les/c/f=61/61/0 sis=83) [2] r=0 lpr=83 pi=[59,83)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:31 compute-0 ceph-mon[74273]: 8.a scrub starts
Oct 11 03:39:31 compute-0 ceph-mon[74273]: 8.a scrub ok
Oct 11 03:39:31 compute-0 ceph-mon[74273]: pgmap v169: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 32 B/s, 2 objects/s recovering
Oct 11 03:39:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 11 03:39:31 compute-0 ceph-mon[74273]: osdmap e84: 3 total, 3 up, 3 in
Oct 11 03:39:32 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v171: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 2 objects/s recovering
Oct 11 03:39:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Oct 11 03:39:32 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 11 03:39:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Oct 11 03:39:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 11 03:39:32 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 11 03:39:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Oct 11 03:39:32 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Oct 11 03:39:32 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Oct 11 03:39:32 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Oct 11 03:39:33 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Oct 11 03:39:33 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Oct 11 03:39:33 compute-0 sudo[104591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:39:33 compute-0 sudo[104591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:39:33 compute-0 sudo[104591]: pam_unix(sudo:session): session closed for user root
Oct 11 03:39:33 compute-0 sudo[104616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:39:33 compute-0 sudo[104616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:39:33 compute-0 sudo[104616]: pam_unix(sudo:session): session closed for user root
Oct 11 03:39:33 compute-0 sudo[104641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:39:33 compute-0 sudo[104641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:39:33 compute-0 sudo[104641]: pam_unix(sudo:session): session closed for user root
Oct 11 03:39:33 compute-0 sudo[104666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 03:39:33 compute-0 sudo[104666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:39:33 compute-0 ceph-mon[74273]: pgmap v171: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 2 objects/s recovering
Oct 11 03:39:33 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 11 03:39:33 compute-0 ceph-mon[74273]: osdmap e85: 3 total, 3 up, 3 in
Oct 11 03:39:33 compute-0 ceph-mon[74273]: 5.7 scrub starts
Oct 11 03:39:33 compute-0 ceph-mon[74273]: 5.7 scrub ok
Oct 11 03:39:34 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Oct 11 03:39:34 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Oct 11 03:39:34 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Oct 11 03:39:34 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Oct 11 03:39:34 compute-0 podman[104761]: 2025-10-11 03:39:34.235118631 +0000 UTC m=+0.075259654 container exec 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:39:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:39:34 compute-0 podman[104761]: 2025-10-11 03:39:34.353742132 +0000 UTC m=+0.193883135 container exec_died 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:39:34 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v173: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 100 B/s, 0 objects/s recovering
Oct 11 03:39:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Oct 11 03:39:34 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 11 03:39:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Oct 11 03:39:34 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 11 03:39:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Oct 11 03:39:34 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Oct 11 03:39:34 compute-0 ceph-mon[74273]: 8.13 scrub starts
Oct 11 03:39:34 compute-0 ceph-mon[74273]: 8.13 scrub ok
Oct 11 03:39:34 compute-0 ceph-mon[74273]: 5.1c scrub starts
Oct 11 03:39:34 compute-0 ceph-mon[74273]: 5.1c scrub ok
Oct 11 03:39:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 11 03:39:35 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Oct 11 03:39:35 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Oct 11 03:39:35 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Oct 11 03:39:35 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Oct 11 03:39:35 compute-0 sudo[104666]: pam_unix(sudo:session): session closed for user root
Oct 11 03:39:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:39:35 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:39:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:39:35 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:39:35 compute-0 sudo[104916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:39:35 compute-0 sudo[104916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:39:35 compute-0 sudo[104916]: pam_unix(sudo:session): session closed for user root
Oct 11 03:39:35 compute-0 sudo[104941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:39:35 compute-0 sudo[104941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:39:35 compute-0 sudo[104941]: pam_unix(sudo:session): session closed for user root
Oct 11 03:39:35 compute-0 sudo[104966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:39:35 compute-0 sudo[104966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:39:35 compute-0 sudo[104966]: pam_unix(sudo:session): session closed for user root
Oct 11 03:39:35 compute-0 sudo[104991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 03:39:35 compute-0 sudo[104991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:39:35 compute-0 ceph-mon[74273]: 8.16 scrub starts
Oct 11 03:39:35 compute-0 ceph-mon[74273]: 8.16 scrub ok
Oct 11 03:39:35 compute-0 ceph-mon[74273]: pgmap v173: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 100 B/s, 0 objects/s recovering
Oct 11 03:39:35 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 11 03:39:35 compute-0 ceph-mon[74273]: osdmap e86: 3 total, 3 up, 3 in
Oct 11 03:39:35 compute-0 ceph-mon[74273]: 5.1f scrub starts
Oct 11 03:39:35 compute-0 ceph-mon[74273]: 5.1f scrub ok
Oct 11 03:39:35 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:39:35 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:39:36 compute-0 sudo[104991]: pam_unix(sudo:session): session closed for user root
Oct 11 03:39:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:39:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:39:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 03:39:36 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:39:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 03:39:36 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:39:36 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 05725364-b63f-4b8e-804b-fd4df7917d51 does not exist
Oct 11 03:39:36 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev fae431bd-5ab5-46f4-a984-3d0b13c12e77 does not exist
Oct 11 03:39:36 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 005a1739-30d1-43ce-be9e-82d6f63dd8e9 does not exist
Oct 11 03:39:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 03:39:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:39:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 03:39:36 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:39:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:39:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:39:36 compute-0 sudo[105048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:39:36 compute-0 sudo[105048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:39:36 compute-0 sudo[105048]: pam_unix(sudo:session): session closed for user root
Oct 11 03:39:36 compute-0 sudo[105073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:39:36 compute-0 sudo[105073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:39:36 compute-0 sudo[105073]: pam_unix(sudo:session): session closed for user root
Oct 11 03:39:36 compute-0 sudo[105098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:39:36 compute-0 sudo[105098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:39:36 compute-0 sudo[105098]: pam_unix(sudo:session): session closed for user root
Oct 11 03:39:36 compute-0 sudo[105123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 03:39:36 compute-0 sudo[105123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:39:36 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v175: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 100 B/s, 0 objects/s recovering
Oct 11 03:39:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Oct 11 03:39:36 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 11 03:39:36 compute-0 ceph-mon[74273]: 8.17 scrub starts
Oct 11 03:39:36 compute-0 ceph-mon[74273]: 8.17 scrub ok
Oct 11 03:39:36 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:39:36 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:39:36 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:39:36 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:39:36 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:39:36 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:39:36 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 11 03:39:36 compute-0 podman[105188]: 2025-10-11 03:39:36.880628179 +0000 UTC m=+0.059529563 container create bed512a86370ae65ee100ffd10b8155e13fc75fa2e0ebaf9219b09e3828c22ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 03:39:36 compute-0 systemd[1]: Started libpod-conmon-bed512a86370ae65ee100ffd10b8155e13fc75fa2e0ebaf9219b09e3828c22ef.scope.
Oct 11 03:39:36 compute-0 podman[105188]: 2025-10-11 03:39:36.852837576 +0000 UTC m=+0.031739000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:39:36 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:39:36 compute-0 podman[105188]: 2025-10-11 03:39:36.986084655 +0000 UTC m=+0.164986069 container init bed512a86370ae65ee100ffd10b8155e13fc75fa2e0ebaf9219b09e3828c22ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nightingale, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 03:39:36 compute-0 podman[105188]: 2025-10-11 03:39:36.998808578 +0000 UTC m=+0.177709962 container start bed512a86370ae65ee100ffd10b8155e13fc75fa2e0ebaf9219b09e3828c22ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nightingale, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:39:37 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 10.3 deep-scrub starts
Oct 11 03:39:37 compute-0 podman[105188]: 2025-10-11 03:39:37.002929158 +0000 UTC m=+0.181830582 container attach bed512a86370ae65ee100ffd10b8155e13fc75fa2e0ebaf9219b09e3828c22ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:39:37 compute-0 gracious_nightingale[105205]: 167 167
Oct 11 03:39:37 compute-0 systemd[1]: libpod-bed512a86370ae65ee100ffd10b8155e13fc75fa2e0ebaf9219b09e3828c22ef.scope: Deactivated successfully.
Oct 11 03:39:37 compute-0 podman[105188]: 2025-10-11 03:39:37.005767771 +0000 UTC m=+0.184669145 container died bed512a86370ae65ee100ffd10b8155e13fc75fa2e0ebaf9219b09e3828c22ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 03:39:37 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 10.3 deep-scrub ok
Oct 11 03:39:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3e72726d10a8746ca8d0b55836e615e6c8b99b9181c674bed75d42590e6c0bf-merged.mount: Deactivated successfully.
Oct 11 03:39:37 compute-0 podman[105188]: 2025-10-11 03:39:37.055356303 +0000 UTC m=+0.234257687 container remove bed512a86370ae65ee100ffd10b8155e13fc75fa2e0ebaf9219b09e3828c22ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nightingale, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:39:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Oct 11 03:39:37 compute-0 systemd[1]: libpod-conmon-bed512a86370ae65ee100ffd10b8155e13fc75fa2e0ebaf9219b09e3828c22ef.scope: Deactivated successfully.
Oct 11 03:39:37 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 11 03:39:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Oct 11 03:39:37 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Oct 11 03:39:37 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 87 pg[9.13( v 41'577 (0'0,41'577] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87 pruub=10.206871986s) [2] r=-1 lpr=87 pi=[57,87)/1 crt=41'577 mlcod 0'0 active pruub 150.349395752s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:37 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 87 pg[9.13( v 41'577 (0'0,41'577] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87 pruub=10.205896378s) [2] r=-1 lpr=87 pi=[57,87)/1 crt=41'577 mlcod 0'0 unknown NOTIFY pruub 150.349395752s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:37 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=87) [2] r=0 lpr=87 pi=[57,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:37 compute-0 podman[105229]: 2025-10-11 03:39:37.315954499 +0000 UTC m=+0.081972950 container create 1f70e3383035084408368b0180361cc0f6d9a14853057578045a06280d30363f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:39:37 compute-0 systemd[1]: Started libpod-conmon-1f70e3383035084408368b0180361cc0f6d9a14853057578045a06280d30363f.scope.
Oct 11 03:39:37 compute-0 podman[105229]: 2025-10-11 03:39:37.280599554 +0000 UTC m=+0.046618015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:39:37 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50a8f20c7d9378b18ee4046bbcca23e8be63d282b42d08665d3959a33e8e73a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50a8f20c7d9378b18ee4046bbcca23e8be63d282b42d08665d3959a33e8e73a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50a8f20c7d9378b18ee4046bbcca23e8be63d282b42d08665d3959a33e8e73a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50a8f20c7d9378b18ee4046bbcca23e8be63d282b42d08665d3959a33e8e73a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50a8f20c7d9378b18ee4046bbcca23e8be63d282b42d08665d3959a33e8e73a7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:39:37 compute-0 podman[105229]: 2025-10-11 03:39:37.405400477 +0000 UTC m=+0.171418918 container init 1f70e3383035084408368b0180361cc0f6d9a14853057578045a06280d30363f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_curie, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:39:37 compute-0 podman[105229]: 2025-10-11 03:39:37.417614464 +0000 UTC m=+0.183632885 container start 1f70e3383035084408368b0180361cc0f6d9a14853057578045a06280d30363f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_curie, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:39:37 compute-0 podman[105229]: 2025-10-11 03:39:37.421189809 +0000 UTC m=+0.187208230 container attach 1f70e3383035084408368b0180361cc0f6d9a14853057578045a06280d30363f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 03:39:38 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Oct 11 03:39:38 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Oct 11 03:39:38 compute-0 ceph-mon[74273]: pgmap v175: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 100 B/s, 0 objects/s recovering
Oct 11 03:39:38 compute-0 ceph-mon[74273]: 10.3 deep-scrub starts
Oct 11 03:39:38 compute-0 ceph-mon[74273]: 10.3 deep-scrub ok
Oct 11 03:39:38 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 11 03:39:38 compute-0 ceph-mon[74273]: osdmap e87: 3 total, 3 up, 3 in
Oct 11 03:39:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Oct 11 03:39:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Oct 11 03:39:38 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Oct 11 03:39:38 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 88 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=88) [2]/[0] r=-1 lpr=88 pi=[57,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:38 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 88 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=88) [2]/[0] r=-1 lpr=88 pi=[57,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:38 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 88 pg[9.13( v 41'577 (0'0,41'577] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=88) [2]/[0] r=0 lpr=88 pi=[57,88)/1 crt=41'577 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:38 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 88 pg[9.13( v 41'577 (0'0,41'577] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=88) [2]/[0] r=0 lpr=88 pi=[57,88)/1 crt=41'577 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:38 compute-0 practical_curie[105245]: --> passed data devices: 0 physical, 3 LVM
Oct 11 03:39:38 compute-0 practical_curie[105245]: --> relative data size: 1.0
Oct 11 03:39:38 compute-0 practical_curie[105245]: --> All data devices are unavailable
Oct 11 03:39:38 compute-0 systemd[1]: libpod-1f70e3383035084408368b0180361cc0f6d9a14853057578045a06280d30363f.scope: Deactivated successfully.
Oct 11 03:39:38 compute-0 podman[105229]: 2025-10-11 03:39:38.602099187 +0000 UTC m=+1.368117628 container died 1f70e3383035084408368b0180361cc0f6d9a14853057578045a06280d30363f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_curie, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:39:38 compute-0 systemd[1]: libpod-1f70e3383035084408368b0180361cc0f6d9a14853057578045a06280d30363f.scope: Consumed 1.131s CPU time.
Oct 11 03:39:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-50a8f20c7d9378b18ee4046bbcca23e8be63d282b42d08665d3959a33e8e73a7-merged.mount: Deactivated successfully.
Oct 11 03:39:38 compute-0 podman[105229]: 2025-10-11 03:39:38.658079505 +0000 UTC m=+1.424097956 container remove 1f70e3383035084408368b0180361cc0f6d9a14853057578045a06280d30363f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_curie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:39:38 compute-0 systemd[1]: libpod-conmon-1f70e3383035084408368b0180361cc0f6d9a14853057578045a06280d30363f.scope: Deactivated successfully.
Oct 11 03:39:38 compute-0 sudo[105123]: pam_unix(sudo:session): session closed for user root
Oct 11 03:39:38 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v178: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 83 B/s, 0 objects/s recovering
Oct 11 03:39:38 compute-0 sudo[105288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:39:38 compute-0 sudo[105288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:39:38 compute-0 sudo[105288]: pam_unix(sudo:session): session closed for user root
Oct 11 03:39:38 compute-0 sudo[105313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:39:38 compute-0 sudo[105313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:39:38 compute-0 sudo[105313]: pam_unix(sudo:session): session closed for user root
Oct 11 03:39:38 compute-0 sudo[105338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:39:38 compute-0 sudo[105338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:39:38 compute-0 sudo[105338]: pam_unix(sudo:session): session closed for user root
Oct 11 03:39:39 compute-0 sudo[105363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 03:39:39 compute-0 sudo[105363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:39:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Oct 11 03:39:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Oct 11 03:39:39 compute-0 ceph-mon[74273]: 10.5 scrub starts
Oct 11 03:39:39 compute-0 ceph-mon[74273]: 10.5 scrub ok
Oct 11 03:39:39 compute-0 ceph-mon[74273]: osdmap e88: 3 total, 3 up, 3 in
Oct 11 03:39:39 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Oct 11 03:39:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:39:39 compute-0 podman[105428]: 2025-10-11 03:39:39.529810196 +0000 UTC m=+0.063137989 container create 9b5d3d8cc909cb3e3b8f9503571718ef87d0d45062f0e3f9a0be87f0dc35c8f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:39:39 compute-0 systemd[1]: Started libpod-conmon-9b5d3d8cc909cb3e3b8f9503571718ef87d0d45062f0e3f9a0be87f0dc35c8f5.scope.
Oct 11 03:39:39 compute-0 podman[105428]: 2025-10-11 03:39:39.504082303 +0000 UTC m=+0.037410146 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:39:39 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:39:39 compute-0 podman[105428]: 2025-10-11 03:39:39.625008272 +0000 UTC m=+0.158336115 container init 9b5d3d8cc909cb3e3b8f9503571718ef87d0d45062f0e3f9a0be87f0dc35c8f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_curran, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:39:39 compute-0 podman[105428]: 2025-10-11 03:39:39.639383152 +0000 UTC m=+0.172710945 container start 9b5d3d8cc909cb3e3b8f9503571718ef87d0d45062f0e3f9a0be87f0dc35c8f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_curran, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 03:39:39 compute-0 podman[105428]: 2025-10-11 03:39:39.64442536 +0000 UTC m=+0.177753143 container attach 9b5d3d8cc909cb3e3b8f9503571718ef87d0d45062f0e3f9a0be87f0dc35c8f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_curran, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 03:39:39 compute-0 reverent_curran[105444]: 167 167
Oct 11 03:39:39 compute-0 systemd[1]: libpod-9b5d3d8cc909cb3e3b8f9503571718ef87d0d45062f0e3f9a0be87f0dc35c8f5.scope: Deactivated successfully.
Oct 11 03:39:39 compute-0 podman[105428]: 2025-10-11 03:39:39.646713687 +0000 UTC m=+0.180041480 container died 9b5d3d8cc909cb3e3b8f9503571718ef87d0d45062f0e3f9a0be87f0dc35c8f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Oct 11 03:39:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-218715bb4ead4eb32eb4199123c26cbe5d7880edef766b94c824fade07b0fa6f-merged.mount: Deactivated successfully.
Oct 11 03:39:39 compute-0 podman[105428]: 2025-10-11 03:39:39.695274568 +0000 UTC m=+0.228602351 container remove 9b5d3d8cc909cb3e3b8f9503571718ef87d0d45062f0e3f9a0be87f0dc35c8f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_curran, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 03:39:39 compute-0 systemd[1]: libpod-conmon-9b5d3d8cc909cb3e3b8f9503571718ef87d0d45062f0e3f9a0be87f0dc35c8f5.scope: Deactivated successfully.
Oct 11 03:39:39 compute-0 podman[105467]: 2025-10-11 03:39:39.933408307 +0000 UTC m=+0.052809217 container create 518ac87039d852eb87f77ca75b889e1d6a54fd3df0c7fe6821bc98e12bf687b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mendeleev, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 03:39:39 compute-0 systemd[1]: Started libpod-conmon-518ac87039d852eb87f77ca75b889e1d6a54fd3df0c7fe6821bc98e12bf687b4.scope.
Oct 11 03:39:40 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 5.1e deep-scrub starts
Oct 11 03:39:40 compute-0 podman[105467]: 2025-10-11 03:39:39.914906925 +0000 UTC m=+0.034307825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:39:40 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:39:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b859c3ed70eaa6b41652b6b9203a9adaee503407271e999ef30e6a8aad12182b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:39:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b859c3ed70eaa6b41652b6b9203a9adaee503407271e999ef30e6a8aad12182b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:39:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b859c3ed70eaa6b41652b6b9203a9adaee503407271e999ef30e6a8aad12182b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:39:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b859c3ed70eaa6b41652b6b9203a9adaee503407271e999ef30e6a8aad12182b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:39:40 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 5.1e deep-scrub ok
Oct 11 03:39:40 compute-0 podman[105467]: 2025-10-11 03:39:40.027580613 +0000 UTC m=+0.146981553 container init 518ac87039d852eb87f77ca75b889e1d6a54fd3df0c7fe6821bc98e12bf687b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 03:39:40 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 10.a deep-scrub starts
Oct 11 03:39:40 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 10.a deep-scrub ok
Oct 11 03:39:40 compute-0 podman[105467]: 2025-10-11 03:39:40.043043715 +0000 UTC m=+0.162444625 container start 518ac87039d852eb87f77ca75b889e1d6a54fd3df0c7fe6821bc98e12bf687b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 03:39:40 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 89 pg[9.13( v 41'577 (0'0,41'577] local-lis/les=88/89 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=88) [2]/[0] async=[2] r=0 lpr=88 pi=[57,88)/1 crt=41'577 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:40 compute-0 podman[105467]: 2025-10-11 03:39:40.049042651 +0000 UTC m=+0.168443551 container attach 518ac87039d852eb87f77ca75b889e1d6a54fd3df0c7fe6821bc98e12bf687b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mendeleev, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:39:40 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Oct 11 03:39:40 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Oct 11 03:39:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Oct 11 03:39:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Oct 11 03:39:40 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Oct 11 03:39:40 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 90 pg[9.13( v 41'577 (0'0,41'577] local-lis/les=88/89 n=6 ec=49/34 lis/c=88/57 les/c/f=89/58/0 sis=90 pruub=15.918626785s) [2] async=[2] r=-1 lpr=90 pi=[57,90)/1 crt=41'577 mlcod 41'577 active pruub 159.101318359s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:40 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 90 pg[9.13( v 41'577 (0'0,41'577] local-lis/les=88/89 n=6 ec=49/34 lis/c=88/57 les/c/f=89/58/0 sis=90 pruub=15.918560028s) [2] r=-1 lpr=90 pi=[57,90)/1 crt=41'577 mlcod 0'0 unknown NOTIFY pruub 159.101318359s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:40 compute-0 ceph-mon[74273]: pgmap v178: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 83 B/s, 0 objects/s recovering
Oct 11 03:39:40 compute-0 ceph-mon[74273]: osdmap e89: 3 total, 3 up, 3 in
Oct 11 03:39:40 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 90 pg[9.13( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=88/57 les/c/f=89/58/0 sis=90) [2] r=0 lpr=90 pi=[57,90)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:40 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 90 pg[9.13( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=88/57 les/c/f=89/58/0 sis=90) [2] r=0 lpr=90 pi=[57,90)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:40 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v181: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]: {
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:     "0": [
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:         {
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "devices": [
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "/dev/loop3"
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             ],
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "lv_name": "ceph_lv0",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "lv_size": "21470642176",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "name": "ceph_lv0",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "tags": {
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.cluster_name": "ceph",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.crush_device_class": "",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.encrypted": "0",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.osd_id": "0",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.type": "block",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.vdo": "0"
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             },
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "type": "block",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "vg_name": "ceph_vg0"
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:         }
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:     ],
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:     "1": [
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:         {
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "devices": [
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "/dev/loop4"
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             ],
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "lv_name": "ceph_lv1",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "lv_size": "21470642176",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "name": "ceph_lv1",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "tags": {
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.cluster_name": "ceph",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.crush_device_class": "",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.encrypted": "0",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.osd_id": "1",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.type": "block",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.vdo": "0"
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             },
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "type": "block",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "vg_name": "ceph_vg1"
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:         }
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:     ],
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:     "2": [
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:         {
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "devices": [
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "/dev/loop5"
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             ],
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "lv_name": "ceph_lv2",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "lv_size": "21470642176",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "name": "ceph_lv2",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "tags": {
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.cluster_name": "ceph",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.crush_device_class": "",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.encrypted": "0",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.osd_id": "2",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.type": "block",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:                 "ceph.vdo": "0"
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             },
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "type": "block",
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:             "vg_name": "ceph_vg2"
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:         }
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]:     ]
Oct 11 03:39:40 compute-0 amazing_mendeleev[105484]: }
Oct 11 03:39:40 compute-0 systemd[1]: libpod-518ac87039d852eb87f77ca75b889e1d6a54fd3df0c7fe6821bc98e12bf687b4.scope: Deactivated successfully.
Oct 11 03:39:40 compute-0 podman[105467]: 2025-10-11 03:39:40.857936913 +0000 UTC m=+0.977337823 container died 518ac87039d852eb87f77ca75b889e1d6a54fd3df0c7fe6821bc98e12bf687b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mendeleev, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:39:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-b859c3ed70eaa6b41652b6b9203a9adaee503407271e999ef30e6a8aad12182b-merged.mount: Deactivated successfully.
Oct 11 03:39:40 compute-0 podman[105467]: 2025-10-11 03:39:40.927274452 +0000 UTC m=+1.046675332 container remove 518ac87039d852eb87f77ca75b889e1d6a54fd3df0c7fe6821bc98e12bf687b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 03:39:40 compute-0 systemd[1]: libpod-conmon-518ac87039d852eb87f77ca75b889e1d6a54fd3df0c7fe6821bc98e12bf687b4.scope: Deactivated successfully.
Oct 11 03:39:40 compute-0 sudo[105363]: pam_unix(sudo:session): session closed for user root
Oct 11 03:39:41 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Oct 11 03:39:41 compute-0 sudo[105504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:39:41 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Oct 11 03:39:41 compute-0 sudo[105504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:39:41 compute-0 sudo[105504]: pam_unix(sudo:session): session closed for user root
Oct 11 03:39:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Oct 11 03:39:41 compute-0 sudo[105529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:39:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Oct 11 03:39:41 compute-0 sudo[105529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:39:41 compute-0 sudo[105529]: pam_unix(sudo:session): session closed for user root
Oct 11 03:39:41 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Oct 11 03:39:41 compute-0 ceph-mon[74273]: 5.1e deep-scrub starts
Oct 11 03:39:41 compute-0 ceph-mon[74273]: 5.1e deep-scrub ok
Oct 11 03:39:41 compute-0 ceph-mon[74273]: 10.a deep-scrub starts
Oct 11 03:39:41 compute-0 ceph-mon[74273]: 10.a deep-scrub ok
Oct 11 03:39:41 compute-0 ceph-mon[74273]: 8.19 scrub starts
Oct 11 03:39:41 compute-0 ceph-mon[74273]: 8.19 scrub ok
Oct 11 03:39:41 compute-0 ceph-mon[74273]: osdmap e90: 3 total, 3 up, 3 in
Oct 11 03:39:41 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 91 pg[9.13( v 41'577 (0'0,41'577] local-lis/les=90/91 n=6 ec=49/34 lis/c=88/57 les/c/f=89/58/0 sis=90) [2] r=0 lpr=90 pi=[57,90)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:41 compute-0 sudo[105554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:39:41 compute-0 sudo[105554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:39:41 compute-0 sudo[105554]: pam_unix(sudo:session): session closed for user root
Oct 11 03:39:41 compute-0 sudo[105579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 03:39:41 compute-0 sudo[105579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:39:41 compute-0 podman[105645]: 2025-10-11 03:39:41.740511131 +0000 UTC m=+0.059468071 container create 8b15749690aa78d4e910e44dbaccd2e0bba037a302668ec10faefe675e8389ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 03:39:41 compute-0 systemd[1]: Started libpod-conmon-8b15749690aa78d4e910e44dbaccd2e0bba037a302668ec10faefe675e8389ec.scope.
Oct 11 03:39:41 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:39:41 compute-0 podman[105645]: 2025-10-11 03:39:41.721077782 +0000 UTC m=+0.040034722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:39:41 compute-0 podman[105645]: 2025-10-11 03:39:41.827738593 +0000 UTC m=+0.146695543 container init 8b15749690aa78d4e910e44dbaccd2e0bba037a302668ec10faefe675e8389ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_proskuriakova, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 03:39:41 compute-0 podman[105645]: 2025-10-11 03:39:41.838051835 +0000 UTC m=+0.157008765 container start 8b15749690aa78d4e910e44dbaccd2e0bba037a302668ec10faefe675e8389ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:39:41 compute-0 podman[105645]: 2025-10-11 03:39:41.84096087 +0000 UTC m=+0.159917810 container attach 8b15749690aa78d4e910e44dbaccd2e0bba037a302668ec10faefe675e8389ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Oct 11 03:39:41 compute-0 flamboyant_proskuriakova[105661]: 167 167
Oct 11 03:39:41 compute-0 systemd[1]: libpod-8b15749690aa78d4e910e44dbaccd2e0bba037a302668ec10faefe675e8389ec.scope: Deactivated successfully.
Oct 11 03:39:41 compute-0 conmon[105661]: conmon 8b15749690aa78d4e910 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8b15749690aa78d4e910e44dbaccd2e0bba037a302668ec10faefe675e8389ec.scope/container/memory.events
Oct 11 03:39:41 compute-0 podman[105645]: 2025-10-11 03:39:41.845779781 +0000 UTC m=+0.164736711 container died 8b15749690aa78d4e910e44dbaccd2e0bba037a302668ec10faefe675e8389ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_proskuriakova, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:39:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-79721cfc5de19e98cfa140a3d09290146593cc4b643904c82114de5d0cefe253-merged.mount: Deactivated successfully.
Oct 11 03:39:41 compute-0 podman[105645]: 2025-10-11 03:39:41.889982194 +0000 UTC m=+0.208939124 container remove 8b15749690aa78d4e910e44dbaccd2e0bba037a302668ec10faefe675e8389ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:39:41 compute-0 systemd[1]: libpod-conmon-8b15749690aa78d4e910e44dbaccd2e0bba037a302668ec10faefe675e8389ec.scope: Deactivated successfully.
Oct 11 03:39:42 compute-0 podman[105684]: 2025-10-11 03:39:42.081409296 +0000 UTC m=+0.065327392 container create 0aa101b4795d4296d7bb487af46f0e663297075993261609787795fbd06cb4a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_napier, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 03:39:42 compute-0 systemd[1]: Started libpod-conmon-0aa101b4795d4296d7bb487af46f0e663297075993261609787795fbd06cb4a8.scope.
Oct 11 03:39:42 compute-0 podman[105684]: 2025-10-11 03:39:42.055924511 +0000 UTC m=+0.039842697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:39:42 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Oct 11 03:39:42 compute-0 ceph-mon[74273]: pgmap v181: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:39:42 compute-0 ceph-mon[74273]: 10.8 scrub starts
Oct 11 03:39:42 compute-0 ceph-mon[74273]: 10.8 scrub ok
Oct 11 03:39:42 compute-0 ceph-mon[74273]: osdmap e91: 3 total, 3 up, 3 in
Oct 11 03:39:42 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:39:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/572d4b9af68921b9e18c5ab0d79a1053f7cfb843cdb8205fb4ed1ebff0c9db6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:39:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/572d4b9af68921b9e18c5ab0d79a1053f7cfb843cdb8205fb4ed1ebff0c9db6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:39:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/572d4b9af68921b9e18c5ab0d79a1053f7cfb843cdb8205fb4ed1ebff0c9db6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:39:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/572d4b9af68921b9e18c5ab0d79a1053f7cfb843cdb8205fb4ed1ebff0c9db6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:39:42 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Oct 11 03:39:42 compute-0 podman[105684]: 2025-10-11 03:39:42.181720842 +0000 UTC m=+0.165639028 container init 0aa101b4795d4296d7bb487af46f0e663297075993261609787795fbd06cb4a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:39:42 compute-0 podman[105684]: 2025-10-11 03:39:42.194411333 +0000 UTC m=+0.178329459 container start 0aa101b4795d4296d7bb487af46f0e663297075993261609787795fbd06cb4a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_napier, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:39:42 compute-0 podman[105684]: 2025-10-11 03:39:42.20009607 +0000 UTC m=+0.184014246 container attach 0aa101b4795d4296d7bb487af46f0e663297075993261609787795fbd06cb4a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:39:42 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v183: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:39:43 compute-0 ceph-mon[74273]: 8.1e scrub starts
Oct 11 03:39:43 compute-0 ceph-mon[74273]: 8.1e scrub ok
Oct 11 03:39:43 compute-0 keen_napier[105701]: {
Oct 11 03:39:43 compute-0 keen_napier[105701]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 03:39:43 compute-0 keen_napier[105701]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:39:43 compute-0 keen_napier[105701]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 03:39:43 compute-0 keen_napier[105701]:         "osd_id": 1,
Oct 11 03:39:43 compute-0 keen_napier[105701]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:39:43 compute-0 keen_napier[105701]:         "type": "bluestore"
Oct 11 03:39:43 compute-0 keen_napier[105701]:     },
Oct 11 03:39:43 compute-0 keen_napier[105701]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 03:39:43 compute-0 keen_napier[105701]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:39:43 compute-0 keen_napier[105701]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 03:39:43 compute-0 keen_napier[105701]:         "osd_id": 2,
Oct 11 03:39:43 compute-0 keen_napier[105701]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:39:43 compute-0 keen_napier[105701]:         "type": "bluestore"
Oct 11 03:39:43 compute-0 keen_napier[105701]:     },
Oct 11 03:39:43 compute-0 keen_napier[105701]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 03:39:43 compute-0 keen_napier[105701]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:39:43 compute-0 keen_napier[105701]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 03:39:43 compute-0 keen_napier[105701]:         "osd_id": 0,
Oct 11 03:39:43 compute-0 keen_napier[105701]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:39:43 compute-0 keen_napier[105701]:         "type": "bluestore"
Oct 11 03:39:43 compute-0 keen_napier[105701]:     }
Oct 11 03:39:43 compute-0 keen_napier[105701]: }
Oct 11 03:39:43 compute-0 systemd[1]: libpod-0aa101b4795d4296d7bb487af46f0e663297075993261609787795fbd06cb4a8.scope: Deactivated successfully.
Oct 11 03:39:43 compute-0 podman[105684]: 2025-10-11 03:39:43.223417597 +0000 UTC m=+1.207335683 container died 0aa101b4795d4296d7bb487af46f0e663297075993261609787795fbd06cb4a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_napier, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:39:43 compute-0 systemd[1]: libpod-0aa101b4795d4296d7bb487af46f0e663297075993261609787795fbd06cb4a8.scope: Consumed 1.038s CPU time.
Oct 11 03:39:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-572d4b9af68921b9e18c5ab0d79a1053f7cfb843cdb8205fb4ed1ebff0c9db6f-merged.mount: Deactivated successfully.
Oct 11 03:39:43 compute-0 podman[105684]: 2025-10-11 03:39:43.273381419 +0000 UTC m=+1.257299505 container remove 0aa101b4795d4296d7bb487af46f0e663297075993261609787795fbd06cb4a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_napier, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:39:43 compute-0 systemd[1]: libpod-conmon-0aa101b4795d4296d7bb487af46f0e663297075993261609787795fbd06cb4a8.scope: Deactivated successfully.
Oct 11 03:39:43 compute-0 sudo[105579]: pam_unix(sudo:session): session closed for user root
Oct 11 03:39:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:39:43 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:39:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:39:43 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:39:43 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 014bc863-92cc-4d4d-8e71-46ef15ac8290 does not exist
Oct 11 03:39:43 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 5d727a5a-1012-40a4-87d1-844d17faf10e does not exist
Oct 11 03:39:43 compute-0 sudo[105746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:39:43 compute-0 sudo[105746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:39:43 compute-0 sudo[105746]: pam_unix(sudo:session): session closed for user root
Oct 11 03:39:43 compute-0 sudo[105771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 03:39:43 compute-0 sudo[105771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:39:43 compute-0 sudo[105771]: pam_unix(sudo:session): session closed for user root
Oct 11 03:39:43 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 10.c scrub starts
Oct 11 03:39:43 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 10.c scrub ok
Oct 11 03:39:44 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Oct 11 03:39:44 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Oct 11 03:39:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:39:44 compute-0 ceph-mon[74273]: pgmap v183: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:39:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:39:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:39:44 compute-0 ceph-mon[74273]: 10.c scrub starts
Oct 11 03:39:44 compute-0 ceph-mon[74273]: 10.c scrub ok
Oct 11 03:39:44 compute-0 sshd-session[105796]: Accepted publickey for zuul from 192.168.122.30 port 43700 ssh2: ECDSA SHA256:qo9+RMabHfLAOt2q/80W97JXaZUdeUCREBuTRaqgxBY
Oct 11 03:39:44 compute-0 systemd-logind[820]: New session 35 of user zuul.
Oct 11 03:39:44 compute-0 systemd[1]: Started Session 35 of User zuul.
Oct 11 03:39:44 compute-0 sshd-session[105796]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:39:44 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v184: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 170 B/s wr, 8 op/s; 36 B/s, 1 objects/s recovering
Oct 11 03:39:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Oct 11 03:39:44 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 11 03:39:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Oct 11 03:39:45 compute-0 ceph-mon[74273]: 9.2 scrub starts
Oct 11 03:39:45 compute-0 ceph-mon[74273]: 9.2 scrub ok
Oct 11 03:39:45 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 11 03:39:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 11 03:39:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Oct 11 03:39:45 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Oct 11 03:39:45 compute-0 python3.9[105949]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 11 03:39:46 compute-0 ceph-mon[74273]: pgmap v184: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 170 B/s wr, 8 op/s; 36 B/s, 1 objects/s recovering
Oct 11 03:39:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 11 03:39:46 compute-0 ceph-mon[74273]: osdmap e92: 3 total, 3 up, 3 in
Oct 11 03:39:46 compute-0 python3.9[106123]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:39:46 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v186: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 154 B/s wr, 7 op/s; 33 B/s, 1 objects/s recovering
Oct 11 03:39:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Oct 11 03:39:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 11 03:39:47 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Oct 11 03:39:47 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Oct 11 03:39:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Oct 11 03:39:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 11 03:39:47 compute-0 ceph-mon[74273]: 10.18 scrub starts
Oct 11 03:39:47 compute-0 ceph-mon[74273]: 10.18 scrub ok
Oct 11 03:39:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 11 03:39:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Oct 11 03:39:47 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Oct 11 03:39:47 compute-0 sudo[106277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydnwijwkcsyhnzrjorulgcflipukhrcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153986.998235-45-165730730373705/AnsiballZ_command.py'
Oct 11 03:39:47 compute-0 sudo[106277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:39:47 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 93 pg[9.15( v 41'577 (0'0,41'577] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=93 pruub=14.614356995s) [1] r=-1 lpr=93 pi=[56,93)/1 crt=41'577 mlcod 0'0 active pruub 165.357238770s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:47 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 93 pg[9.15( v 41'577 (0'0,41'577] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=93 pruub=14.614294052s) [1] r=-1 lpr=93 pi=[56,93)/1 crt=41'577 mlcod 0'0 unknown NOTIFY pruub 165.357238770s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:47 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 93 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=93) [1] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:47 compute-0 python3.9[106279]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:39:47 compute-0 sudo[106277]: pam_unix(sudo:session): session closed for user root
Oct 11 03:39:48 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Oct 11 03:39:48 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Oct 11 03:39:48 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Oct 11 03:39:48 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Oct 11 03:39:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Oct 11 03:39:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Oct 11 03:39:48 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Oct 11 03:39:48 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 94 pg[9.15( v 41'577 (0'0,41'577] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=94) [1]/[0] r=0 lpr=94 pi=[56,94)/1 crt=41'577 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:48 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 94 pg[9.15( v 41'577 (0'0,41'577] local-lis/les=56/57 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=94) [1]/[0] r=0 lpr=94 pi=[56,94)/1 crt=41'577 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:48 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[56,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:48 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[56,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:48 compute-0 ceph-mon[74273]: pgmap v186: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 154 B/s wr, 7 op/s; 33 B/s, 1 objects/s recovering
Oct 11 03:39:48 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 11 03:39:48 compute-0 ceph-mon[74273]: osdmap e93: 3 total, 3 up, 3 in
Oct 11 03:39:48 compute-0 sudo[106430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkksbbbexxppurjoiysxmmdlvdwxgkkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153988.0811548-57-57401036636822/AnsiballZ_stat.py'
Oct 11 03:39:48 compute-0 sudo[106430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:39:48 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v189: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Oct 11 03:39:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Oct 11 03:39:48 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 11 03:39:48 compute-0 python3.9[106432]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:39:48 compute-0 sudo[106430]: pam_unix(sudo:session): session closed for user root
Oct 11 03:39:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:39:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Oct 11 03:39:49 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 11 03:39:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Oct 11 03:39:49 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Oct 11 03:39:49 compute-0 ceph-mon[74273]: 2.1d scrub starts
Oct 11 03:39:49 compute-0 ceph-mon[74273]: 2.1d scrub ok
Oct 11 03:39:49 compute-0 ceph-mon[74273]: 9.4 scrub starts
Oct 11 03:39:49 compute-0 ceph-mon[74273]: 9.4 scrub ok
Oct 11 03:39:49 compute-0 ceph-mon[74273]: osdmap e94: 3 total, 3 up, 3 in
Oct 11 03:39:49 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 11 03:39:49 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 11 03:39:49 compute-0 ceph-mon[74273]: osdmap e95: 3 total, 3 up, 3 in
Oct 11 03:39:49 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 95 pg[9.16( v 41'577 (0'0,41'577] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=95 pruub=12.720117569s) [0] r=-1 lpr=95 pi=[67,95)/1 crt=41'577 mlcod 0'0 active pruub 155.061386108s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:49 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 95 pg[9.16( v 41'577 (0'0,41'577] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=95 pruub=12.720035553s) [0] r=-1 lpr=95 pi=[67,95)/1 crt=41'577 mlcod 0'0 unknown NOTIFY pruub 155.061386108s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:49 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 95 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=95) [0] r=0 lpr=95 pi=[67,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:49 compute-0 sudo[106584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqzyzdvzcmgmzagoyymexbwylymtkgcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153989.1093643-68-199252991101055/AnsiballZ_file.py'
Oct 11 03:39:49 compute-0 sudo[106584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:39:49 compute-0 python3.9[106586]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:39:49 compute-0 sudo[106584]: pam_unix(sudo:session): session closed for user root
Oct 11 03:39:50 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 95 pg[9.15( v 41'577 (0'0,41'577] local-lis/les=94/95 n=6 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[56,94)/1 crt=41'577 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Oct 11 03:39:50 compute-0 ceph-mon[74273]: pgmap v189: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Oct 11 03:39:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Oct 11 03:39:50 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Oct 11 03:39:50 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 96 pg[9.16( v 41'577 (0'0,41'577] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=96) [0]/[2] r=0 lpr=96 pi=[67,96)/1 crt=41'577 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:50 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 96 pg[9.16( v 41'577 (0'0,41'577] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=96) [0]/[2] r=0 lpr=96 pi=[67,96)/1 crt=41'577 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:50 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 96 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=96) [0]/[2] r=-1 lpr=96 pi=[67,96)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:50 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 96 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=96) [0]/[2] r=-1 lpr=96 pi=[67,96)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:50 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 96 pg[9.15( v 41'577 (0'0,41'577] local-lis/les=94/95 n=6 ec=49/34 lis/c=94/56 les/c/f=95/57/0 sis=96 pruub=15.739741325s) [1] async=[1] r=-1 lpr=96 pi=[56,96)/1 crt=41'577 mlcod 41'577 active pruub 169.211380005s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:50 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 96 pg[9.15( v 41'577 (0'0,41'577] local-lis/les=94/95 n=6 ec=49/34 lis/c=94/56 les/c/f=95/57/0 sis=96 pruub=15.739683151s) [1] r=-1 lpr=96 pi=[56,96)/1 crt=41'577 mlcod 0'0 unknown NOTIFY pruub 169.211380005s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:50 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 96 pg[9.15( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=94/56 les/c/f=95/57/0 sis=96) [1] r=0 lpr=96 pi=[56,96)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:50 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 96 pg[9.15( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=94/56 les/c/f=95/57/0 sis=96) [1] r=0 lpr=96 pi=[56,96)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:39:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:39:50 compute-0 python3.9[106736]: ansible-ansible.builtin.service_facts Invoked
Oct 11 03:39:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:39:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:39:50 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v192: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:39:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Oct 11 03:39:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:39:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:39:50 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 11 03:39:50 compute-0 network[106753]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 03:39:50 compute-0 network[106754]: 'network-scripts' will be removed from distribution in near future.
Oct 11 03:39:50 compute-0 network[106755]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 03:39:51 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 9.a scrub starts
Oct 11 03:39:51 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 9.a scrub ok
Oct 11 03:39:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Oct 11 03:39:51 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 11 03:39:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Oct 11 03:39:51 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Oct 11 03:39:51 compute-0 ceph-mon[74273]: osdmap e96: 3 total, 3 up, 3 in
Oct 11 03:39:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 11 03:39:51 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 97 pg[9.15( v 41'577 (0'0,41'577] local-lis/les=96/97 n=6 ec=49/34 lis/c=94/56 les/c/f=95/57/0 sis=96) [1] r=0 lpr=96 pi=[56,96)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:51 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 97 pg[9.16( v 41'577 (0'0,41'577] local-lis/les=96/97 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=96) [0]/[2] async=[0] r=0 lpr=96 pi=[67,96)/1 crt=41'577 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:51 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Oct 11 03:39:51 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Oct 11 03:39:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Oct 11 03:39:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Oct 11 03:39:52 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Oct 11 03:39:52 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 98 pg[9.16( v 41'577 (0'0,41'577] local-lis/les=96/97 n=6 ec=49/34 lis/c=96/67 les/c/f=97/68/0 sis=98 pruub=15.078603745s) [0] async=[0] r=-1 lpr=98 pi=[67,98)/1 crt=41'577 mlcod 41'577 active pruub 160.429214478s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:52 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 98 pg[9.16( v 41'577 (0'0,41'577] local-lis/les=96/97 n=6 ec=49/34 lis/c=96/67 les/c/f=97/68/0 sis=98 pruub=15.078494072s) [0] r=-1 lpr=98 pi=[67,98)/1 crt=41'577 mlcod 0'0 unknown NOTIFY pruub 160.429214478s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:52 compute-0 ceph-mon[74273]: pgmap v192: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:39:52 compute-0 ceph-mon[74273]: 9.a scrub starts
Oct 11 03:39:52 compute-0 ceph-mon[74273]: 9.a scrub ok
Oct 11 03:39:52 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 11 03:39:52 compute-0 ceph-mon[74273]: osdmap e97: 3 total, 3 up, 3 in
Oct 11 03:39:52 compute-0 ceph-mon[74273]: 10.1b scrub starts
Oct 11 03:39:52 compute-0 ceph-mon[74273]: 10.1b scrub ok
Oct 11 03:39:52 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 98 pg[9.16( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=96/67 les/c/f=97/68/0 sis=98) [0] r=0 lpr=98 pi=[67,98)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:52 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 98 pg[9.16( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=96/67 les/c/f=97/68/0 sis=98) [0] r=0 lpr=98 pi=[67,98)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:52 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v195: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct 11 03:39:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Oct 11 03:39:52 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 11 03:39:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Oct 11 03:39:53 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 11 03:39:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Oct 11 03:39:53 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Oct 11 03:39:53 compute-0 ceph-mon[74273]: osdmap e98: 3 total, 3 up, 3 in
Oct 11 03:39:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 11 03:39:53 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 99 pg[9.16( v 41'577 (0'0,41'577] local-lis/les=98/99 n=6 ec=49/34 lis/c=96/67 les/c/f=97/68/0 sis=98) [0] r=0 lpr=98 pi=[67,98)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:54 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Oct 11 03:39:54 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Oct 11 03:39:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:39:54 compute-0 ceph-mon[74273]: pgmap v195: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct 11 03:39:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 11 03:39:54 compute-0 ceph-mon[74273]: osdmap e99: 3 total, 3 up, 3 in
Oct 11 03:39:54 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v197: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 25 B/s, 1 objects/s recovering
Oct 11 03:39:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Oct 11 03:39:54 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 11 03:39:55 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Oct 11 03:39:55 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Oct 11 03:39:55 compute-0 python3.9[107017]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:39:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Oct 11 03:39:55 compute-0 ceph-mon[74273]: 10.15 scrub starts
Oct 11 03:39:55 compute-0 ceph-mon[74273]: 10.15 scrub ok
Oct 11 03:39:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 11 03:39:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 11 03:39:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Oct 11 03:39:55 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Oct 11 03:39:55 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 100 pg[9.19( v 41'577 (0'0,41'577] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=100 pruub=15.677139282s) [2] r=-1 lpr=100 pi=[57,100)/1 crt=41'577 mlcod 0'0 active pruub 174.353881836s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:55 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 100 pg[9.19( v 41'577 (0'0,41'577] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=100 pruub=15.676974297s) [2] r=-1 lpr=100 pi=[57,100)/1 crt=41'577 mlcod 0'0 unknown NOTIFY pruub 174.353881836s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:55 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=100) [2] r=0 lpr=100 pi=[57,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:56 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Oct 11 03:39:56 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Oct 11 03:39:56 compute-0 python3.9[107167]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:39:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Oct 11 03:39:56 compute-0 ceph-mon[74273]: pgmap v197: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 25 B/s, 1 objects/s recovering
Oct 11 03:39:56 compute-0 ceph-mon[74273]: 5.4 scrub starts
Oct 11 03:39:56 compute-0 ceph-mon[74273]: 5.4 scrub ok
Oct 11 03:39:56 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 11 03:39:56 compute-0 ceph-mon[74273]: osdmap e100: 3 total, 3 up, 3 in
Oct 11 03:39:56 compute-0 ceph-mon[74273]: 10.1c scrub starts
Oct 11 03:39:56 compute-0 ceph-mon[74273]: 10.1c scrub ok
Oct 11 03:39:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Oct 11 03:39:56 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Oct 11 03:39:56 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 101 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=101) [2]/[0] r=-1 lpr=101 pi=[57,101)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:56 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 101 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=101) [2]/[0] r=-1 lpr=101 pi=[57,101)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:56 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 101 pg[9.19( v 41'577 (0'0,41'577] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=101) [2]/[0] r=0 lpr=101 pi=[57,101)/1 crt=41'577 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:56 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 101 pg[9.19( v 41'577 (0'0,41'577] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=101) [2]/[0] r=0 lpr=101 pi=[57,101)/1 crt=41'577 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:56 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v200: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 1 objects/s recovering
Oct 11 03:39:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Oct 11 03:39:56 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 11 03:39:57 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Oct 11 03:39:57 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Oct 11 03:39:57 compute-0 python3.9[107321]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:39:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Oct 11 03:39:57 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 11 03:39:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Oct 11 03:39:57 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Oct 11 03:39:57 compute-0 ceph-mon[74273]: osdmap e101: 3 total, 3 up, 3 in
Oct 11 03:39:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 11 03:39:58 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Oct 11 03:39:58 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 102 pg[9.19( v 41'577 (0'0,41'577] local-lis/les=101/102 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=101) [2]/[0] async=[2] r=0 lpr=101 pi=[57,101)/1 crt=41'577 mlcod 0'0 active+remapped mbc={255={(0+1)=11}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:39:58 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Oct 11 03:39:58 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Oct 11 03:39:58 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Oct 11 03:39:58 compute-0 sudo[107477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsgusofmiohjwbwtafpfrxqhheacglxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153997.9854863-116-122588545575009/AnsiballZ_setup.py'
Oct 11 03:39:58 compute-0 sudo[107477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:39:58 compute-0 python3.9[107479]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 03:39:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Oct 11 03:39:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Oct 11 03:39:58 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Oct 11 03:39:58 compute-0 ceph-mon[74273]: pgmap v200: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 1 objects/s recovering
Oct 11 03:39:58 compute-0 ceph-mon[74273]: 9.10 scrub starts
Oct 11 03:39:58 compute-0 ceph-mon[74273]: 9.10 scrub ok
Oct 11 03:39:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 11 03:39:58 compute-0 ceph-mon[74273]: osdmap e102: 3 total, 3 up, 3 in
Oct 11 03:39:58 compute-0 ceph-mon[74273]: 10.1d scrub starts
Oct 11 03:39:58 compute-0 ceph-mon[74273]: 10.1d scrub ok
Oct 11 03:39:58 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 103 pg[9.19( v 41'577 (0'0,41'577] local-lis/les=101/102 n=6 ec=49/34 lis/c=101/57 les/c/f=102/58/0 sis=103 pruub=15.344883919s) [2] async=[2] r=-1 lpr=103 pi=[57,103)/1 crt=41'577 mlcod 41'577 active pruub 177.068252563s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:58 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 103 pg[9.19( v 41'577 (0'0,41'577] local-lis/les=101/102 n=6 ec=49/34 lis/c=101/57 les/c/f=102/58/0 sis=103 pruub=15.344706535s) [2] r=-1 lpr=103 pi=[57,103)/1 crt=41'577 mlcod 0'0 unknown NOTIFY pruub 177.068252563s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:39:58 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 103 pg[9.19( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=101/57 les/c/f=102/58/0 sis=103) [2] r=0 lpr=103 pi=[57,103)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:39:58 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 103 pg[9.19( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=101/57 les/c/f=102/58/0 sis=103) [2] r=0 lpr=103 pi=[57,103)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:39:58 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v203: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct 11 03:39:58 compute-0 sudo[107477]: pam_unix(sudo:session): session closed for user root
Oct 11 03:39:59 compute-0 sudo[107561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qstrvknoixfgtaznebazkpukzoolfccl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760153997.9854863-116-122588545575009/AnsiballZ_dnf.py'
Oct 11 03:39:59 compute-0 sudo[107561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:39:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:39:59 compute-0 python3.9[107563]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 03:39:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Oct 11 03:39:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Oct 11 03:39:59 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Oct 11 03:39:59 compute-0 ceph-mon[74273]: 9.12 scrub starts
Oct 11 03:39:59 compute-0 ceph-mon[74273]: 9.12 scrub ok
Oct 11 03:39:59 compute-0 ceph-mon[74273]: osdmap e103: 3 total, 3 up, 3 in
Oct 11 03:39:59 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 104 pg[9.19( v 41'577 (0'0,41'577] local-lis/les=103/104 n=6 ec=49/34 lis/c=101/57 les/c/f=102/58/0 sis=103) [2] r=0 lpr=103 pi=[57,103)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:40:00 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Oct 11 03:40:00 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Oct 11 03:40:00 compute-0 ceph-mon[74273]: pgmap v203: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct 11 03:40:00 compute-0 ceph-mon[74273]: osdmap e104: 3 total, 3 up, 3 in
Oct 11 03:40:00 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v205: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 1 objects/s recovering
Oct 11 03:40:01 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Oct 11 03:40:01 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Oct 11 03:40:01 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Oct 11 03:40:01 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Oct 11 03:40:01 compute-0 ceph-mon[74273]: 10.4 scrub starts
Oct 11 03:40:01 compute-0 ceph-mon[74273]: 10.4 scrub ok
Oct 11 03:40:01 compute-0 ceph-mon[74273]: 10.1f scrub starts
Oct 11 03:40:01 compute-0 ceph-mon[74273]: 10.1f scrub ok
Oct 11 03:40:02 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Oct 11 03:40:02 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Oct 11 03:40:02 compute-0 ceph-mon[74273]: pgmap v205: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 1 objects/s recovering
Oct 11 03:40:02 compute-0 ceph-mon[74273]: 2.1c scrub starts
Oct 11 03:40:02 compute-0 ceph-mon[74273]: 2.1c scrub ok
Oct 11 03:40:02 compute-0 ceph-mon[74273]: 8.15 scrub starts
Oct 11 03:40:02 compute-0 ceph-mon[74273]: 8.15 scrub ok
Oct 11 03:40:02 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v206: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Oct 11 03:40:03 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Oct 11 03:40:03 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Oct 11 03:40:03 compute-0 ceph-mon[74273]: pgmap v206: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Oct 11 03:40:04 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 2.f scrub starts
Oct 11 03:40:04 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 2.f scrub ok
Oct 11 03:40:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:40:04 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v207: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 1 objects/s recovering
Oct 11 03:40:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Oct 11 03:40:04 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 11 03:40:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Oct 11 03:40:04 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 11 03:40:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Oct 11 03:40:04 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Oct 11 03:40:04 compute-0 ceph-mon[74273]: 10.7 scrub starts
Oct 11 03:40:04 compute-0 ceph-mon[74273]: 10.7 scrub ok
Oct 11 03:40:04 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 11 03:40:05 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Oct 11 03:40:05 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Oct 11 03:40:05 compute-0 ceph-mon[74273]: 2.f scrub starts
Oct 11 03:40:05 compute-0 ceph-mon[74273]: 2.f scrub ok
Oct 11 03:40:05 compute-0 ceph-mon[74273]: pgmap v207: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 1 objects/s recovering
Oct 11 03:40:05 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 11 03:40:05 compute-0 ceph-mon[74273]: osdmap e105: 3 total, 3 up, 3 in
Oct 11 03:40:06 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.15 deep-scrub starts
Oct 11 03:40:06 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.15 deep-scrub ok
Oct 11 03:40:06 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 10.d scrub starts
Oct 11 03:40:06 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 10.d scrub ok
Oct 11 03:40:06 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v209: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Oct 11 03:40:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Oct 11 03:40:06 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 11 03:40:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Oct 11 03:40:06 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 11 03:40:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Oct 11 03:40:06 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Oct 11 03:40:06 compute-0 ceph-mon[74273]: 2.1f scrub starts
Oct 11 03:40:06 compute-0 ceph-mon[74273]: 2.1f scrub ok
Oct 11 03:40:06 compute-0 ceph-mon[74273]: 11.15 deep-scrub starts
Oct 11 03:40:06 compute-0 ceph-mon[74273]: 11.15 deep-scrub ok
Oct 11 03:40:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 11 03:40:06 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 106 pg[9.1c( v 41'577 (0'0,41'577] local-lis/les=81/82 n=6 ec=49/34 lis/c=81/81 les/c/f=82/82/0 sis=106 pruub=8.309728622s) [0] r=-1 lpr=106 pi=[81,106)/1 crt=41'577 mlcod 0'0 active pruub 168.074340820s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:40:06 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 106 pg[9.1c( v 41'577 (0'0,41'577] local-lis/les=81/82 n=6 ec=49/34 lis/c=81/81 les/c/f=82/82/0 sis=106 pruub=8.309666634s) [0] r=-1 lpr=106 pi=[81,106)/1 crt=41'577 mlcod 0'0 unknown NOTIFY pruub 168.074340820s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:40:06 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 106 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=81/81 les/c/f=82/82/0 sis=106) [0] r=0 lpr=106 pi=[81,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:40:07 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Oct 11 03:40:07 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Oct 11 03:40:07 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 9.14 deep-scrub starts
Oct 11 03:40:07 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 9.14 deep-scrub ok
Oct 11 03:40:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Oct 11 03:40:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Oct 11 03:40:07 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Oct 11 03:40:07 compute-0 ceph-mon[74273]: 10.d scrub starts
Oct 11 03:40:07 compute-0 ceph-mon[74273]: 10.d scrub ok
Oct 11 03:40:07 compute-0 ceph-mon[74273]: pgmap v209: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Oct 11 03:40:07 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 11 03:40:07 compute-0 ceph-mon[74273]: osdmap e106: 3 total, 3 up, 3 in
Oct 11 03:40:07 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 107 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=81/81 les/c/f=82/82/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[81,107)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:40:07 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 107 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=81/81 les/c/f=82/82/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[81,107)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:40:07 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 107 pg[9.1c( v 41'577 (0'0,41'577] local-lis/les=81/82 n=6 ec=49/34 lis/c=81/81 les/c/f=82/82/0 sis=107) [0]/[2] r=0 lpr=107 pi=[81,107)/1 crt=41'577 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:40:07 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 107 pg[9.1c( v 41'577 (0'0,41'577] local-lis/les=81/82 n=6 ec=49/34 lis/c=81/81 les/c/f=82/82/0 sis=107) [0]/[2] r=0 lpr=107 pi=[81,107)/1 crt=41'577 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:40:08 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Oct 11 03:40:08 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Oct 11 03:40:08 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Oct 11 03:40:08 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Oct 11 03:40:08 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v212: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 11 03:40:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Oct 11 03:40:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Oct 11 03:40:08 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Oct 11 03:40:08 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Oct 11 03:40:09 compute-0 ceph-mon[74273]: 5.2 scrub starts
Oct 11 03:40:09 compute-0 ceph-mon[74273]: 5.2 scrub ok
Oct 11 03:40:09 compute-0 ceph-mon[74273]: 9.14 deep-scrub starts
Oct 11 03:40:09 compute-0 ceph-mon[74273]: 9.14 deep-scrub ok
Oct 11 03:40:09 compute-0 ceph-mon[74273]: osdmap e107: 3 total, 3 up, 3 in
Oct 11 03:40:09 compute-0 ceph-mon[74273]: 7.1a scrub starts
Oct 11 03:40:09 compute-0 ceph-mon[74273]: 7.1a scrub ok
Oct 11 03:40:09 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Oct 11 03:40:09 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 108 pg[9.1c( v 41'577 (0'0,41'577] local-lis/les=107/108 n=6 ec=49/34 lis/c=81/81 les/c/f=82/82/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[81,107)/1 crt=41'577 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:40:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:40:09 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Oct 11 03:40:09 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Oct 11 03:40:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Oct 11 03:40:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Oct 11 03:40:10 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Oct 11 03:40:10 compute-0 ceph-mon[74273]: 5.3 scrub starts
Oct 11 03:40:10 compute-0 ceph-mon[74273]: 5.3 scrub ok
Oct 11 03:40:10 compute-0 ceph-mon[74273]: pgmap v212: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 11 03:40:10 compute-0 ceph-mon[74273]: osdmap e108: 3 total, 3 up, 3 in
Oct 11 03:40:10 compute-0 ceph-mon[74273]: 3.1e scrub starts
Oct 11 03:40:10 compute-0 ceph-mon[74273]: 3.1e scrub ok
Oct 11 03:40:10 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 109 pg[9.1c( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=107/81 les/c/f=108/82/0 sis=109) [0] r=0 lpr=109 pi=[81,109)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:40:10 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 109 pg[9.1c( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=107/81 les/c/f=108/82/0 sis=109) [0] r=0 lpr=109 pi=[81,109)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:40:10 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 109 pg[9.1c( v 41'577 (0'0,41'577] local-lis/les=107/108 n=6 ec=49/34 lis/c=107/81 les/c/f=108/82/0 sis=109 pruub=15.203624725s) [0] async=[0] r=-1 lpr=109 pi=[81,109)/1 crt=41'577 mlcod 41'577 active pruub 177.999832153s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:40:10 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 109 pg[9.1c( v 41'577 (0'0,41'577] local-lis/les=107/108 n=6 ec=49/34 lis/c=107/81 les/c/f=108/82/0 sis=109 pruub=15.203346252s) [0] r=-1 lpr=109 pi=[81,109)/1 crt=41'577 mlcod 0'0 unknown NOTIFY pruub 177.999832153s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:40:10 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Oct 11 03:40:10 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Oct 11 03:40:10 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:10 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Oct 11 03:40:10 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Oct 11 03:40:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Oct 11 03:40:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Oct 11 03:40:11 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Oct 11 03:40:11 compute-0 ceph-mon[74273]: 8.12 scrub starts
Oct 11 03:40:11 compute-0 ceph-mon[74273]: 8.12 scrub ok
Oct 11 03:40:11 compute-0 ceph-mon[74273]: osdmap e109: 3 total, 3 up, 3 in
Oct 11 03:40:11 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 110 pg[9.1c( v 41'577 (0'0,41'577] local-lis/les=109/110 n=6 ec=49/34 lis/c=107/81 les/c/f=108/82/0 sis=109) [0] r=0 lpr=109 pi=[81,109)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:40:11 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Oct 11 03:40:11 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Oct 11 03:40:12 compute-0 ceph-mon[74273]: 9.1a scrub starts
Oct 11 03:40:12 compute-0 ceph-mon[74273]: 9.1a scrub ok
Oct 11 03:40:12 compute-0 ceph-mon[74273]: pgmap v215: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:12 compute-0 ceph-mon[74273]: 3.18 scrub starts
Oct 11 03:40:12 compute-0 ceph-mon[74273]: 3.18 scrub ok
Oct 11 03:40:12 compute-0 ceph-mon[74273]: osdmap e110: 3 total, 3 up, 3 in
Oct 11 03:40:12 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v217: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 44 B/s, 2 objects/s recovering
Oct 11 03:40:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Oct 11 03:40:12 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 11 03:40:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Oct 11 03:40:13 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 11 03:40:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Oct 11 03:40:13 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Oct 11 03:40:13 compute-0 ceph-mon[74273]: 8.11 scrub starts
Oct 11 03:40:13 compute-0 ceph-mon[74273]: 8.11 scrub ok
Oct 11 03:40:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 11 03:40:13 compute-0 PackageKit[31002]: daemon quit
Oct 11 03:40:13 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 11 03:40:14 compute-0 ceph-mon[74273]: pgmap v217: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 44 B/s, 2 objects/s recovering
Oct 11 03:40:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 11 03:40:14 compute-0 ceph-mon[74273]: osdmap e111: 3 total, 3 up, 3 in
Oct 11 03:40:14 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Oct 11 03:40:14 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Oct 11 03:40:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:40:14 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 2 objects/s recovering
Oct 11 03:40:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Oct 11 03:40:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 11 03:40:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Oct 11 03:40:15 compute-0 ceph-mon[74273]: 10.17 scrub starts
Oct 11 03:40:15 compute-0 ceph-mon[74273]: 10.17 scrub ok
Oct 11 03:40:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 11 03:40:15 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 11 03:40:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Oct 11 03:40:15 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Oct 11 03:40:15 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 112 pg[9.1e( v 41'577 (0'0,41'577] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=112 pruub=10.413641930s) [0] r=-1 lpr=112 pi=[67,112)/1 crt=41'577 mlcod 0'0 active pruub 179.057250977s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:40:15 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 112 pg[9.1e( v 41'577 (0'0,41'577] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=112 pruub=10.413537979s) [0] r=-1 lpr=112 pi=[67,112)/1 crt=41'577 mlcod 0'0 unknown NOTIFY pruub 179.057250977s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:40:15 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 112 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=112) [0] r=0 lpr=112 pi=[67,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:40:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Oct 11 03:40:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Oct 11 03:40:16 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Oct 11 03:40:16 compute-0 ceph-mon[74273]: pgmap v219: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 2 objects/s recovering
Oct 11 03:40:16 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 11 03:40:16 compute-0 ceph-mon[74273]: osdmap e112: 3 total, 3 up, 3 in
Oct 11 03:40:16 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 113 pg[9.1e( v 41'577 (0'0,41'577] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=113) [0]/[2] r=0 lpr=113 pi=[67,113)/1 crt=41'577 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:40:16 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 113 pg[9.1e( v 41'577 (0'0,41'577] local-lis/les=67/68 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=113) [0]/[2] r=0 lpr=113 pi=[67,113)/1 crt=41'577 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:40:16 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 113 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[67,113)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:40:16 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 113 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[67,113)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:40:16 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Oct 11 03:40:16 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Oct 11 03:40:16 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v222: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 38 B/s, 2 objects/s recovering
Oct 11 03:40:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 11 03:40:16 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 03:40:17 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Oct 11 03:40:17 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Oct 11 03:40:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Oct 11 03:40:17 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 03:40:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Oct 11 03:40:17 compute-0 ceph-mon[74273]: osdmap e113: 3 total, 3 up, 3 in
Oct 11 03:40:17 compute-0 ceph-mon[74273]: 5.5 scrub starts
Oct 11 03:40:17 compute-0 ceph-mon[74273]: 5.5 scrub ok
Oct 11 03:40:17 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 03:40:17 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Oct 11 03:40:17 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 114 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=68/69 n=6 ec=49/34 lis/c=68/68 les/c/f=69/69/0 sis=114 pruub=10.154154778s) [1] r=-1 lpr=114 pi=[68,114)/1 crt=41'577 mlcod 0'0 active pruub 180.062896729s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:40:17 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 114 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=68/69 n=6 ec=49/34 lis/c=68/68 les/c/f=69/69/0 sis=114 pruub=10.154078484s) [1] r=-1 lpr=114 pi=[68,114)/1 crt=41'577 mlcod 0'0 unknown NOTIFY pruub 180.062896729s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:40:17 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 114 pg[9.1e( v 41'577 (0'0,41'577] local-lis/les=113/114 n=6 ec=49/34 lis/c=67/67 les/c/f=68/68/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[67,113)/1 crt=41'577 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:40:17 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 114 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=68/68 les/c/f=69/69/0 sis=114) [1] r=0 lpr=114 pi=[68,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:40:17 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 10.e scrub starts
Oct 11 03:40:17 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 10.e scrub ok
Oct 11 03:40:18 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Oct 11 03:40:18 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Oct 11 03:40:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Oct 11 03:40:18 compute-0 ceph-mon[74273]: pgmap v222: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 38 B/s, 2 objects/s recovering
Oct 11 03:40:18 compute-0 ceph-mon[74273]: 3.1d scrub starts
Oct 11 03:40:18 compute-0 ceph-mon[74273]: 3.1d scrub ok
Oct 11 03:40:18 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 03:40:18 compute-0 ceph-mon[74273]: osdmap e114: 3 total, 3 up, 3 in
Oct 11 03:40:18 compute-0 ceph-mon[74273]: 10.e scrub starts
Oct 11 03:40:18 compute-0 ceph-mon[74273]: 10.e scrub ok
Oct 11 03:40:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Oct 11 03:40:18 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Oct 11 03:40:18 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 115 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=68/69 n=6 ec=49/34 lis/c=68/68 les/c/f=69/69/0 sis=115) [1]/[2] r=0 lpr=115 pi=[68,115)/1 crt=41'577 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:40:18 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 115 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=68/69 n=6 ec=49/34 lis/c=68/68 les/c/f=69/69/0 sis=115) [1]/[2] r=0 lpr=115 pi=[68,115)/1 crt=41'577 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 03:40:18 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 115 pg[9.1e( v 41'577 (0'0,41'577] local-lis/les=113/114 n=6 ec=49/34 lis/c=113/67 les/c/f=114/68/0 sis=115 pruub=14.987086296s) [0] async=[0] r=-1 lpr=115 pi=[67,115)/1 crt=41'577 mlcod 41'577 active pruub 185.911666870s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:40:18 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 115 pg[9.1e( v 41'577 (0'0,41'577] local-lis/les=113/114 n=6 ec=49/34 lis/c=113/67 les/c/f=114/68/0 sis=115 pruub=14.986988068s) [0] r=-1 lpr=115 pi=[67,115)/1 crt=41'577 mlcod 0'0 unknown NOTIFY pruub 185.911666870s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:40:18 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 115 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=68/68 les/c/f=69/69/0 sis=115) [1]/[2] r=-1 lpr=115 pi=[68,115)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:40:18 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 115 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=68/68 les/c/f=69/69/0 sis=115) [1]/[2] r=-1 lpr=115 pi=[68,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 03:40:18 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 115 pg[9.1e( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=113/67 les/c/f=114/68/0 sis=115) [0] r=0 lpr=115 pi=[67,115)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:40:18 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 115 pg[9.1e( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=113/67 les/c/f=114/68/0 sis=115) [0] r=0 lpr=115 pi=[67,115)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:40:18 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v225: 305 pgs: 1 unknown, 1 active+remapped, 303 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:19 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Oct 11 03:40:19 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Oct 11 03:40:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Oct 11 03:40:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Oct 11 03:40:19 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Oct 11 03:40:19 compute-0 ceph-mon[74273]: 11.5 scrub starts
Oct 11 03:40:19 compute-0 ceph-mon[74273]: 11.5 scrub ok
Oct 11 03:40:19 compute-0 ceph-mon[74273]: osdmap e115: 3 total, 3 up, 3 in
Oct 11 03:40:19 compute-0 ceph-osd[87591]: osd.0 pg_epoch: 116 pg[9.1e( v 41'577 (0'0,41'577] local-lis/les=115/116 n=6 ec=49/34 lis/c=113/67 les/c/f=114/68/0 sis=115) [0] r=0 lpr=115 pi=[67,115)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:40:19 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Oct 11 03:40:19 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Oct 11 03:40:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:40:20 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 11.a scrub starts
Oct 11 03:40:20 compute-0 ceph-mon[74273]: pgmap v225: 305 pgs: 1 unknown, 1 active+remapped, 303 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:20 compute-0 ceph-mon[74273]: 11.7 scrub starts
Oct 11 03:40:20 compute-0 ceph-mon[74273]: 11.7 scrub ok
Oct 11 03:40:20 compute-0 ceph-mon[74273]: osdmap e116: 3 total, 3 up, 3 in
Oct 11 03:40:20 compute-0 ceph-mon[74273]: 10.1 scrub starts
Oct 11 03:40:20 compute-0 ceph-mon[74273]: 10.1 scrub ok
Oct 11 03:40:20 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 11.a scrub ok
Oct 11 03:40:20 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 116 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=115/116 n=6 ec=49/34 lis/c=68/68 les/c/f=69/69/0 sis=115) [1]/[2] async=[1] r=0 lpr=115 pi=[68,115)/1 crt=41'577 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:40:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_03:40:20
Oct 11 03:40:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 03:40:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Some PGs (0.003279) are unknown; try again later
Oct 11 03:40:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:40:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:40:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:40:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:40:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:40:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:40:20 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 1 unknown, 1 active+remapped, 303 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 03:40:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:40:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 03:40:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:40:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:40:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:40:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:40:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:40:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:40:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:40:21 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 11.c deep-scrub starts
Oct 11 03:40:21 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 11.c deep-scrub ok
Oct 11 03:40:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Oct 11 03:40:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Oct 11 03:40:21 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Oct 11 03:40:21 compute-0 ceph-mon[74273]: 11.a scrub starts
Oct 11 03:40:21 compute-0 ceph-mon[74273]: 11.a scrub ok
Oct 11 03:40:21 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 117 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=115/116 n=6 ec=49/34 lis/c=115/68 les/c/f=116/69/0 sis=117 pruub=14.990808487s) [1] async=[1] r=-1 lpr=117 pi=[68,117)/1 crt=41'577 mlcod 41'577 active pruub 188.942962646s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:40:21 compute-0 ceph-osd[89722]: osd.2 pg_epoch: 117 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=115/116 n=6 ec=49/34 lis/c=115/68 les/c/f=116/69/0 sis=117 pruub=14.990694046s) [1] r=-1 lpr=117 pi=[68,117)/1 crt=41'577 mlcod 0'0 unknown NOTIFY pruub 188.942962646s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 03:40:21 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 117 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=115/68 les/c/f=116/69/0 sis=117) [1] r=0 lpr=117 pi=[68,117)/1 luod=0'0 crt=41'577 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 03:40:21 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 117 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=0/0 n=6 ec=49/34 lis/c=115/68 les/c/f=116/69/0 sis=117) [1] r=0 lpr=117 pi=[68,117)/1 crt=41'577 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 03:40:21 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 2.b scrub starts
Oct 11 03:40:21 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 2.b scrub ok
Oct 11 03:40:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Oct 11 03:40:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Oct 11 03:40:22 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Oct 11 03:40:22 compute-0 ceph-mon[74273]: pgmap v227: 305 pgs: 1 unknown, 1 active+remapped, 303 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:22 compute-0 ceph-mon[74273]: 11.c deep-scrub starts
Oct 11 03:40:22 compute-0 ceph-mon[74273]: 11.c deep-scrub ok
Oct 11 03:40:22 compute-0 ceph-mon[74273]: osdmap e117: 3 total, 3 up, 3 in
Oct 11 03:40:22 compute-0 ceph-mon[74273]: 2.b scrub starts
Oct 11 03:40:22 compute-0 ceph-mon[74273]: 2.b scrub ok
Oct 11 03:40:22 compute-0 ceph-osd[88594]: osd.1 pg_epoch: 118 pg[9.1f( v 41'577 (0'0,41'577] local-lis/les=117/118 n=6 ec=49/34 lis/c=115/68 les/c/f=116/69/0 sis=117) [1] r=0 lpr=117 pi=[68,117)/1 crt=41'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 03:40:22 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 221 B/s wr, 10 op/s; 71 B/s, 3 objects/s recovering
Oct 11 03:40:23 compute-0 ceph-mon[74273]: osdmap e118: 3 total, 3 up, 3 in
Oct 11 03:40:24 compute-0 ceph-mon[74273]: pgmap v230: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 221 B/s wr, 10 op/s; 71 B/s, 3 objects/s recovering
Oct 11 03:40:24 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Oct 11 03:40:24 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Oct 11 03:40:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:40:24 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 170 B/s wr, 8 op/s; 54 B/s, 2 objects/s recovering
Oct 11 03:40:24 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.11 deep-scrub starts
Oct 11 03:40:24 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.11 deep-scrub ok
Oct 11 03:40:25 compute-0 ceph-mon[74273]: 2.16 scrub starts
Oct 11 03:40:25 compute-0 ceph-mon[74273]: 2.16 scrub ok
Oct 11 03:40:26 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Oct 11 03:40:26 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Oct 11 03:40:26 compute-0 ceph-mon[74273]: pgmap v231: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 170 B/s wr, 8 op/s; 54 B/s, 2 objects/s recovering
Oct 11 03:40:26 compute-0 ceph-mon[74273]: 11.11 deep-scrub starts
Oct 11 03:40:26 compute-0 ceph-mon[74273]: 11.11 deep-scrub ok
Oct 11 03:40:26 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 134 B/s wr, 6 op/s; 43 B/s, 2 objects/s recovering
Oct 11 03:40:27 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Oct 11 03:40:27 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Oct 11 03:40:27 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Oct 11 03:40:27 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Oct 11 03:40:27 compute-0 ceph-mon[74273]: 11.13 scrub starts
Oct 11 03:40:27 compute-0 ceph-mon[74273]: 11.13 scrub ok
Oct 11 03:40:28 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 11.1d deep-scrub starts
Oct 11 03:40:28 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 11.1d deep-scrub ok
Oct 11 03:40:28 compute-0 ceph-mon[74273]: pgmap v232: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 134 B/s wr, 6 op/s; 43 B/s, 2 objects/s recovering
Oct 11 03:40:28 compute-0 ceph-mon[74273]: 5.15 scrub starts
Oct 11 03:40:28 compute-0 ceph-mon[74273]: 5.15 scrub ok
Oct 11 03:40:28 compute-0 ceph-mon[74273]: 11.16 scrub starts
Oct 11 03:40:28 compute-0 ceph-mon[74273]: 11.16 scrub ok
Oct 11 03:40:28 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 127 B/s wr, 6 op/s; 41 B/s, 2 objects/s recovering
Oct 11 03:40:29 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Oct 11 03:40:29 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Oct 11 03:40:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:40:29 compute-0 ceph-mon[74273]: 11.1d deep-scrub starts
Oct 11 03:40:29 compute-0 ceph-mon[74273]: 11.1d deep-scrub ok
Oct 11 03:40:30 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Oct 11 03:40:30 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Oct 11 03:40:30 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Oct 11 03:40:30 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Oct 11 03:40:30 compute-0 ceph-mon[74273]: pgmap v233: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 127 B/s wr, 6 op/s; 41 B/s, 2 objects/s recovering
Oct 11 03:40:30 compute-0 ceph-mon[74273]: 10.1e scrub starts
Oct 11 03:40:30 compute-0 ceph-mon[74273]: 10.1e scrub ok
Oct 11 03:40:30 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v234: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 106 B/s wr, 5 op/s; 34 B/s, 1 objects/s recovering
Oct 11 03:40:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 03:40:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:40:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 03:40:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:40:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:40:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:40:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:40:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:40:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:40:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:40:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:40:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:40:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 03:40:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:40:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:40:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:40:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 03:40:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:40:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 03:40:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:40:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:40:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:40:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 03:40:30 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Oct 11 03:40:30 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Oct 11 03:40:31 compute-0 ceph-mon[74273]: 5.14 scrub starts
Oct 11 03:40:31 compute-0 ceph-mon[74273]: 5.14 scrub ok
Oct 11 03:40:31 compute-0 ceph-mon[74273]: 5.11 scrub starts
Oct 11 03:40:31 compute-0 ceph-mon[74273]: 5.11 scrub ok
Oct 11 03:40:32 compute-0 ceph-mon[74273]: pgmap v234: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 106 B/s wr, 5 op/s; 34 B/s, 1 objects/s recovering
Oct 11 03:40:32 compute-0 ceph-mon[74273]: 3.7 scrub starts
Oct 11 03:40:32 compute-0 ceph-mon[74273]: 3.7 scrub ok
Oct 11 03:40:32 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 4 op/s; 31 B/s, 1 objects/s recovering
Oct 11 03:40:33 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Oct 11 03:40:33 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Oct 11 03:40:33 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 8.d scrub starts
Oct 11 03:40:33 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 8.d scrub ok
Oct 11 03:40:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:40:34 compute-0 ceph-mon[74273]: pgmap v235: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 4 op/s; 31 B/s, 1 objects/s recovering
Oct 11 03:40:34 compute-0 ceph-mon[74273]: 2.17 scrub starts
Oct 11 03:40:34 compute-0 ceph-mon[74273]: 2.17 scrub ok
Oct 11 03:40:34 compute-0 ceph-mon[74273]: 8.d scrub starts
Oct 11 03:40:34 compute-0 ceph-mon[74273]: 8.d scrub ok
Oct 11 03:40:34 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:35 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Oct 11 03:40:35 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Oct 11 03:40:36 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Oct 11 03:40:36 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Oct 11 03:40:36 compute-0 ceph-mon[74273]: pgmap v236: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:36 compute-0 ceph-mon[74273]: 5.13 scrub starts
Oct 11 03:40:36 compute-0 ceph-mon[74273]: 5.13 scrub ok
Oct 11 03:40:36 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:37 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Oct 11 03:40:37 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Oct 11 03:40:37 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Oct 11 03:40:37 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Oct 11 03:40:37 compute-0 ceph-mon[74273]: 2.13 scrub starts
Oct 11 03:40:37 compute-0 ceph-mon[74273]: 2.13 scrub ok
Oct 11 03:40:38 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Oct 11 03:40:38 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Oct 11 03:40:38 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Oct 11 03:40:38 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Oct 11 03:40:38 compute-0 ceph-mon[74273]: pgmap v237: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:38 compute-0 ceph-mon[74273]: 2.15 scrub starts
Oct 11 03:40:38 compute-0 ceph-mon[74273]: 2.15 scrub ok
Oct 11 03:40:38 compute-0 ceph-mon[74273]: 2.8 scrub starts
Oct 11 03:40:38 compute-0 ceph-mon[74273]: 2.8 scrub ok
Oct 11 03:40:38 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:39 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Oct 11 03:40:39 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Oct 11 03:40:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:40:39 compute-0 ceph-mon[74273]: 10.19 scrub starts
Oct 11 03:40:39 compute-0 ceph-mon[74273]: 10.19 scrub ok
Oct 11 03:40:39 compute-0 ceph-mon[74273]: 10.16 scrub starts
Oct 11 03:40:39 compute-0 ceph-mon[74273]: 10.16 scrub ok
Oct 11 03:40:39 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 7.1c deep-scrub starts
Oct 11 03:40:39 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 7.1c deep-scrub ok
Oct 11 03:40:40 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Oct 11 03:40:40 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Oct 11 03:40:40 compute-0 sudo[107561]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:40 compute-0 ceph-mon[74273]: pgmap v238: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:40 compute-0 ceph-mon[74273]: 5.16 scrub starts
Oct 11 03:40:40 compute-0 ceph-mon[74273]: 5.16 scrub ok
Oct 11 03:40:40 compute-0 ceph-mon[74273]: 7.1c deep-scrub starts
Oct 11 03:40:40 compute-0 ceph-mon[74273]: 7.1c deep-scrub ok
Oct 11 03:40:40 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:40 compute-0 sudo[107856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxbnszgvscwnznxapirkebbsqvthfdmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154040.4353294-128-61119311212942/AnsiballZ_command.py'
Oct 11 03:40:40 compute-0 sudo[107856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:40:41 compute-0 python3.9[107858]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:40:41 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Oct 11 03:40:41 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Oct 11 03:40:41 compute-0 ceph-mon[74273]: 10.6 scrub starts
Oct 11 03:40:41 compute-0 ceph-mon[74273]: 10.6 scrub ok
Oct 11 03:40:41 compute-0 sudo[107856]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:41 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Oct 11 03:40:41 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Oct 11 03:40:42 compute-0 ceph-mon[74273]: pgmap v239: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:42 compute-0 ceph-mon[74273]: 10.1a scrub starts
Oct 11 03:40:42 compute-0 ceph-mon[74273]: 10.1a scrub ok
Oct 11 03:40:42 compute-0 ceph-mon[74273]: 7.2 scrub starts
Oct 11 03:40:42 compute-0 ceph-mon[74273]: 7.2 scrub ok
Oct 11 03:40:42 compute-0 sudo[108143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjmtkgxtyvgjzlqbpczyluvuenunqwlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154042.0815804-136-275263924704945/AnsiballZ_selinux.py'
Oct 11 03:40:42 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:42 compute-0 sudo[108143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:40:43 compute-0 python3.9[108145]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 11 03:40:43 compute-0 sudo[108143]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:43 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 2.11 deep-scrub starts
Oct 11 03:40:43 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 2.11 deep-scrub ok
Oct 11 03:40:43 compute-0 sudo[108210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:40:43 compute-0 sudo[108210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:40:43 compute-0 sudo[108210]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:43 compute-0 sudo[108259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:40:43 compute-0 sudo[108259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:40:43 compute-0 sudo[108259]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:43 compute-0 sudo[108315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:40:43 compute-0 sudo[108315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:40:43 compute-0 sudo[108315]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:43 compute-0 sudo[108375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scdbkvescmzjatgxmusppjyjunocevph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154043.4207325-147-269154744731557/AnsiballZ_command.py'
Oct 11 03:40:43 compute-0 sudo[108375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:40:43 compute-0 sudo[108368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 03:40:43 compute-0 sudo[108368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:40:43 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.d scrub starts
Oct 11 03:40:43 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.d scrub ok
Oct 11 03:40:43 compute-0 python3.9[108393]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 11 03:40:44 compute-0 sudo[108375]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:40:44 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 11.17 deep-scrub starts
Oct 11 03:40:44 compute-0 sudo[108368]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:44 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 11.17 deep-scrub ok
Oct 11 03:40:44 compute-0 ceph-mon[74273]: pgmap v240: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:44 compute-0 ceph-mon[74273]: 2.11 deep-scrub starts
Oct 11 03:40:44 compute-0 ceph-mon[74273]: 2.11 deep-scrub ok
Oct 11 03:40:44 compute-0 ceph-mon[74273]: 11.d scrub starts
Oct 11 03:40:44 compute-0 ceph-mon[74273]: 11.d scrub ok
Oct 11 03:40:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:40:44 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:40:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 03:40:44 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:40:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 03:40:44 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:40:44 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev ef64bf51-029a-4c51-872d-c316a180a45f does not exist
Oct 11 03:40:44 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 01164552-55f9-4715-bf07-9dfa4d18da71 does not exist
Oct 11 03:40:44 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev a68f10ad-332d-41b6-b98d-d93134e9c0c9 does not exist
Oct 11 03:40:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 03:40:44 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:40:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 03:40:44 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:40:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:40:44 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:40:44 compute-0 sudo[108601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dltfgwbkxevbgefrwgdhhmnjdmouytgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154044.2588208-155-17486122847355/AnsiballZ_file.py'
Oct 11 03:40:44 compute-0 sudo[108562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:40:44 compute-0 sudo[108601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:40:44 compute-0 sudo[108562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:40:44 compute-0 sudo[108562]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:44 compute-0 sudo[108606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:40:44 compute-0 sudo[108606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:40:44 compute-0 sudo[108606]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:44 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:44 compute-0 sudo[108631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:40:44 compute-0 sudo[108631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:40:44 compute-0 sudo[108631]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:44 compute-0 python3.9[108604]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:40:44 compute-0 sudo[108656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 03:40:44 compute-0 sudo[108656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:40:44 compute-0 sudo[108601]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:45 compute-0 podman[108797]: 2025-10-11 03:40:45.194308652 +0000 UTC m=+0.045491219 container create 700a406f298c42fbf2edee5a042ecea6adb95deac28279a78eab5eaabf168768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_gould, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:40:45 compute-0 systemd[1]: Started libpod-conmon-700a406f298c42fbf2edee5a042ecea6adb95deac28279a78eab5eaabf168768.scope.
Oct 11 03:40:45 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:40:45 compute-0 podman[108797]: 2025-10-11 03:40:45.170911889 +0000 UTC m=+0.022094476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:40:45 compute-0 podman[108797]: 2025-10-11 03:40:45.284853895 +0000 UTC m=+0.136036512 container init 700a406f298c42fbf2edee5a042ecea6adb95deac28279a78eab5eaabf168768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 03:40:45 compute-0 podman[108797]: 2025-10-11 03:40:45.296548286 +0000 UTC m=+0.147730853 container start 700a406f298c42fbf2edee5a042ecea6adb95deac28279a78eab5eaabf168768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_gould, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:40:45 compute-0 podman[108797]: 2025-10-11 03:40:45.300246964 +0000 UTC m=+0.151429581 container attach 700a406f298c42fbf2edee5a042ecea6adb95deac28279a78eab5eaabf168768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:40:45 compute-0 tender_gould[108814]: 167 167
Oct 11 03:40:45 compute-0 systemd[1]: libpod-700a406f298c42fbf2edee5a042ecea6adb95deac28279a78eab5eaabf168768.scope: Deactivated successfully.
Oct 11 03:40:45 compute-0 podman[108797]: 2025-10-11 03:40:45.30250341 +0000 UTC m=+0.153685937 container died 700a406f298c42fbf2edee5a042ecea6adb95deac28279a78eab5eaabf168768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_gould, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 03:40:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7a8ac9882e94734577bf4970e05a798cebc88aff61327357bd3f580cee66459-merged.mount: Deactivated successfully.
Oct 11 03:40:45 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Oct 11 03:40:45 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Oct 11 03:40:45 compute-0 podman[108797]: 2025-10-11 03:40:45.376118409 +0000 UTC m=+0.227300976 container remove 700a406f298c42fbf2edee5a042ecea6adb95deac28279a78eab5eaabf168768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_gould, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:40:45 compute-0 systemd[1]: libpod-conmon-700a406f298c42fbf2edee5a042ecea6adb95deac28279a78eab5eaabf168768.scope: Deactivated successfully.
Oct 11 03:40:45 compute-0 ceph-mon[74273]: 11.17 deep-scrub starts
Oct 11 03:40:45 compute-0 ceph-mon[74273]: 11.17 deep-scrub ok
Oct 11 03:40:45 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:40:45 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:40:45 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:40:45 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:40:45 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:40:45 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:40:45 compute-0 podman[108889]: 2025-10-11 03:40:45.571708068 +0000 UTC m=+0.036706492 container create 71318a61cf52cecfc6cf794a624fc9d3ef2780386801ba4fd1c758ec31a06394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:40:45 compute-0 sudo[108923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pudcojibfyyonabqoopuvqnjiujrxkej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154045.0828009-163-232541268801088/AnsiballZ_mount.py'
Oct 11 03:40:45 compute-0 sudo[108923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:40:45 compute-0 systemd[1]: Started libpod-conmon-71318a61cf52cecfc6cf794a624fc9d3ef2780386801ba4fd1c758ec31a06394.scope.
Oct 11 03:40:45 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:40:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9fed202349a6cde78359734c6f7d8ee14e0f1b9ab21e59fa5ee3941c61b481/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:40:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9fed202349a6cde78359734c6f7d8ee14e0f1b9ab21e59fa5ee3941c61b481/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:40:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9fed202349a6cde78359734c6f7d8ee14e0f1b9ab21e59fa5ee3941c61b481/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:40:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9fed202349a6cde78359734c6f7d8ee14e0f1b9ab21e59fa5ee3941c61b481/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:40:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9fed202349a6cde78359734c6f7d8ee14e0f1b9ab21e59fa5ee3941c61b481/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:40:45 compute-0 podman[108889]: 2025-10-11 03:40:45.555254588 +0000 UTC m=+0.020253032 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:40:45 compute-0 podman[108889]: 2025-10-11 03:40:45.657814902 +0000 UTC m=+0.122813326 container init 71318a61cf52cecfc6cf794a624fc9d3ef2780386801ba4fd1c758ec31a06394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilbur, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 03:40:45 compute-0 podman[108889]: 2025-10-11 03:40:45.663632392 +0000 UTC m=+0.128630816 container start 71318a61cf52cecfc6cf794a624fc9d3ef2780386801ba4fd1c758ec31a06394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 03:40:45 compute-0 podman[108889]: 2025-10-11 03:40:45.668179455 +0000 UTC m=+0.133177899 container attach 71318a61cf52cecfc6cf794a624fc9d3ef2780386801ba4fd1c758ec31a06394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 03:40:45 compute-0 python3.9[108928]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 11 03:40:45 compute-0 sudo[108923]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:46 compute-0 ceph-mon[74273]: pgmap v241: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:46 compute-0 ceph-mon[74273]: 7.1b scrub starts
Oct 11 03:40:46 compute-0 ceph-mon[74273]: 7.1b scrub ok
Oct 11 03:40:46 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:46 compute-0 sudo[109109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubkkcftadmbedcxagfxnzxzdzgqisbhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154046.450909-191-236270422163390/AnsiballZ_file.py'
Oct 11 03:40:46 compute-0 sad_wilbur[108931]: --> passed data devices: 0 physical, 3 LVM
Oct 11 03:40:46 compute-0 sad_wilbur[108931]: --> relative data size: 1.0
Oct 11 03:40:46 compute-0 sad_wilbur[108931]: --> All data devices are unavailable
Oct 11 03:40:46 compute-0 sudo[109109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:40:46 compute-0 systemd[1]: libpod-71318a61cf52cecfc6cf794a624fc9d3ef2780386801ba4fd1c758ec31a06394.scope: Deactivated successfully.
Oct 11 03:40:46 compute-0 systemd[1]: libpod-71318a61cf52cecfc6cf794a624fc9d3ef2780386801ba4fd1c758ec31a06394.scope: Consumed 1.105s CPU time.
Oct 11 03:40:46 compute-0 podman[109113]: 2025-10-11 03:40:46.902081693 +0000 UTC m=+0.028590506 container died 71318a61cf52cecfc6cf794a624fc9d3ef2780386801ba4fd1c758ec31a06394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilbur, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 03:40:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd9fed202349a6cde78359734c6f7d8ee14e0f1b9ab21e59fa5ee3941c61b481-merged.mount: Deactivated successfully.
Oct 11 03:40:46 compute-0 podman[109113]: 2025-10-11 03:40:46.955896694 +0000 UTC m=+0.082405507 container remove 71318a61cf52cecfc6cf794a624fc9d3ef2780386801ba4fd1c758ec31a06394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilbur, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:40:46 compute-0 systemd[1]: libpod-conmon-71318a61cf52cecfc6cf794a624fc9d3ef2780386801ba4fd1c758ec31a06394.scope: Deactivated successfully.
Oct 11 03:40:46 compute-0 sudo[108656]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:47 compute-0 python3.9[109112]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:40:47 compute-0 sudo[109109]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:47 compute-0 sudo[109128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:40:47 compute-0 sudo[109128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:40:47 compute-0 sudo[109128]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:47 compute-0 sudo[109156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:40:47 compute-0 sudo[109156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:40:47 compute-0 sudo[109156]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:47 compute-0 sudo[109202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:40:47 compute-0 sudo[109202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:40:47 compute-0 sudo[109202]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:47 compute-0 sudo[109250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 03:40:47 compute-0 sudo[109250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:40:47 compute-0 sudo[109407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pthpzbyclvmkxhglgqlrasqtwtyaickg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154047.2273498-199-76280620154629/AnsiballZ_stat.py'
Oct 11 03:40:47 compute-0 sudo[109407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:40:47 compute-0 podman[109422]: 2025-10-11 03:40:47.695266747 +0000 UTC m=+0.078893044 container create 9a6d47a6a23a340f2a91971262c699b9e302f18d21f7a47ae8f21a8a37e31f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mendeleev, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 03:40:47 compute-0 systemd[1]: Started libpod-conmon-9a6d47a6a23a340f2a91971262c699b9e302f18d21f7a47ae8f21a8a37e31f11.scope.
Oct 11 03:40:47 compute-0 podman[109422]: 2025-10-11 03:40:47.661220703 +0000 UTC m=+0.044847040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:40:47 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:40:47 compute-0 python3.9[109419]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:40:47 compute-0 podman[109422]: 2025-10-11 03:40:47.797498982 +0000 UTC m=+0.181125339 container init 9a6d47a6a23a340f2a91971262c699b9e302f18d21f7a47ae8f21a8a37e31f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 03:40:47 compute-0 podman[109422]: 2025-10-11 03:40:47.808221135 +0000 UTC m=+0.191847402 container start 9a6d47a6a23a340f2a91971262c699b9e302f18d21f7a47ae8f21a8a37e31f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:40:47 compute-0 practical_mendeleev[109438]: 167 167
Oct 11 03:40:47 compute-0 systemd[1]: libpod-9a6d47a6a23a340f2a91971262c699b9e302f18d21f7a47ae8f21a8a37e31f11.scope: Deactivated successfully.
Oct 11 03:40:47 compute-0 podman[109422]: 2025-10-11 03:40:47.815918789 +0000 UTC m=+0.199545156 container attach 9a6d47a6a23a340f2a91971262c699b9e302f18d21f7a47ae8f21a8a37e31f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mendeleev, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct 11 03:40:47 compute-0 podman[109422]: 2025-10-11 03:40:47.816746093 +0000 UTC m=+0.200372400 container died 9a6d47a6a23a340f2a91971262c699b9e302f18d21f7a47ae8f21a8a37e31f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:40:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-f40d3b945018639f589f7a7a0fc67f2d0ce263e7e524264b71ad7df1a1b01669-merged.mount: Deactivated successfully.
Oct 11 03:40:47 compute-0 podman[109422]: 2025-10-11 03:40:47.860173991 +0000 UTC m=+0.243800268 container remove 9a6d47a6a23a340f2a91971262c699b9e302f18d21f7a47ae8f21a8a37e31f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mendeleev, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:40:47 compute-0 sudo[109407]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:47 compute-0 systemd[1]: libpod-conmon-9a6d47a6a23a340f2a91971262c699b9e302f18d21f7a47ae8f21a8a37e31f11.scope: Deactivated successfully.
Oct 11 03:40:48 compute-0 podman[109490]: 2025-10-11 03:40:48.066213476 +0000 UTC m=+0.063781363 container create 88cef009d7d3a750841c4bca9dc95ef4945c2628d595be8b3d7af9bcd1a47395 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 03:40:48 compute-0 systemd[1]: Started libpod-conmon-88cef009d7d3a750841c4bca9dc95ef4945c2628d595be8b3d7af9bcd1a47395.scope.
Oct 11 03:40:48 compute-0 sudo[109551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evdennnzlnjffpyzjmqmxytyzuddcuvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154047.2273498-199-76280620154629/AnsiballZ_file.py'
Oct 11 03:40:48 compute-0 sudo[109551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:40:48 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Oct 11 03:40:48 compute-0 podman[109490]: 2025-10-11 03:40:48.043006258 +0000 UTC m=+0.040574135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:40:48 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Oct 11 03:40:48 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:40:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c32901fafef8ea0839c35d6fde96239195b0804baca10ba938e96b6af290cf0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:40:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c32901fafef8ea0839c35d6fde96239195b0804baca10ba938e96b6af290cf0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:40:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c32901fafef8ea0839c35d6fde96239195b0804baca10ba938e96b6af290cf0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:40:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c32901fafef8ea0839c35d6fde96239195b0804baca10ba938e96b6af290cf0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:40:48 compute-0 podman[109490]: 2025-10-11 03:40:48.182994805 +0000 UTC m=+0.180562762 container init 88cef009d7d3a750841c4bca9dc95ef4945c2628d595be8b3d7af9bcd1a47395 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_chatelet, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 03:40:48 compute-0 podman[109490]: 2025-10-11 03:40:48.200825225 +0000 UTC m=+0.198393132 container start 88cef009d7d3a750841c4bca9dc95ef4945c2628d595be8b3d7af9bcd1a47395 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:40:48 compute-0 podman[109490]: 2025-10-11 03:40:48.204531204 +0000 UTC m=+0.202099161 container attach 88cef009d7d3a750841c4bca9dc95ef4945c2628d595be8b3d7af9bcd1a47395 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_chatelet, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:40:48 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Oct 11 03:40:48 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Oct 11 03:40:48 compute-0 python3.9[109557]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:40:48 compute-0 sudo[109551]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:48 compute-0 ceph-mon[74273]: pgmap v242: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:48 compute-0 sshd-session[109316]: Connection closed by authenticating user root 78.128.112.74 port 44470 [preauth]
Oct 11 03:40:48 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:48 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Oct 11 03:40:48 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Oct 11 03:40:48 compute-0 zen_chatelet[109555]: {
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:     "0": [
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:         {
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "devices": [
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "/dev/loop3"
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             ],
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "lv_name": "ceph_lv0",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "lv_size": "21470642176",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "name": "ceph_lv0",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "tags": {
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.cluster_name": "ceph",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.crush_device_class": "",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.encrypted": "0",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.osd_id": "0",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.type": "block",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.vdo": "0"
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             },
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "type": "block",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "vg_name": "ceph_vg0"
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:         }
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:     ],
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:     "1": [
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:         {
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "devices": [
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "/dev/loop4"
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             ],
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "lv_name": "ceph_lv1",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "lv_size": "21470642176",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "name": "ceph_lv1",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "tags": {
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.cluster_name": "ceph",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.crush_device_class": "",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.encrypted": "0",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.osd_id": "1",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.type": "block",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.vdo": "0"
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             },
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "type": "block",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "vg_name": "ceph_vg1"
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:         }
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:     ],
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:     "2": [
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:         {
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "devices": [
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "/dev/loop5"
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             ],
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "lv_name": "ceph_lv2",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "lv_size": "21470642176",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "name": "ceph_lv2",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "tags": {
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.cluster_name": "ceph",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.crush_device_class": "",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.encrypted": "0",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.osd_id": "2",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.type": "block",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:                 "ceph.vdo": "0"
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             },
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "type": "block",
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:             "vg_name": "ceph_vg2"
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:         }
Oct 11 03:40:48 compute-0 zen_chatelet[109555]:     ]
Oct 11 03:40:48 compute-0 zen_chatelet[109555]: }
Oct 11 03:40:48 compute-0 systemd[1]: libpod-88cef009d7d3a750841c4bca9dc95ef4945c2628d595be8b3d7af9bcd1a47395.scope: Deactivated successfully.
Oct 11 03:40:48 compute-0 podman[109490]: 2025-10-11 03:40:48.989186939 +0000 UTC m=+0.986754896 container died 88cef009d7d3a750841c4bca9dc95ef4945c2628d595be8b3d7af9bcd1a47395 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_chatelet, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:40:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-c32901fafef8ea0839c35d6fde96239195b0804baca10ba938e96b6af290cf0d-merged.mount: Deactivated successfully.
Oct 11 03:40:49 compute-0 podman[109490]: 2025-10-11 03:40:49.061715466 +0000 UTC m=+1.059283333 container remove 88cef009d7d3a750841c4bca9dc95ef4945c2628d595be8b3d7af9bcd1a47395 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_chatelet, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 03:40:49 compute-0 systemd[1]: libpod-conmon-88cef009d7d3a750841c4bca9dc95ef4945c2628d595be8b3d7af9bcd1a47395.scope: Deactivated successfully.
Oct 11 03:40:49 compute-0 sudo[109250]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:49 compute-0 sudo[109653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:40:49 compute-0 sudo[109653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:40:49 compute-0 sudo[109653]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:40:49 compute-0 sudo[109678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:40:49 compute-0 sudo[109678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:40:49 compute-0 sudo[109678]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:49 compute-0 sudo[109727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:40:49 compute-0 sudo[109727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:40:49 compute-0 sudo[109727]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:49 compute-0 sudo[109779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 03:40:49 compute-0 sudo[109779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:40:49 compute-0 sudo[109824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asuozbvgacapxzbwpxkhlprgscfijyqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154048.952921-223-153440762795572/AnsiballZ_getent.py'
Oct 11 03:40:49 compute-0 sudo[109824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:40:49 compute-0 ceph-mon[74273]: 5.12 scrub starts
Oct 11 03:40:49 compute-0 ceph-mon[74273]: 5.12 scrub ok
Oct 11 03:40:49 compute-0 ceph-mon[74273]: 3.1f scrub starts
Oct 11 03:40:49 compute-0 ceph-mon[74273]: 3.1f scrub ok
Oct 11 03:40:49 compute-0 ceph-mon[74273]: 7.1 scrub starts
Oct 11 03:40:49 compute-0 ceph-mon[74273]: 7.1 scrub ok
Oct 11 03:40:49 compute-0 python3.9[109828]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 11 03:40:49 compute-0 sudo[109824]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:49 compute-0 podman[109890]: 2025-10-11 03:40:49.805055314 +0000 UTC m=+0.048414034 container create 940396bfa330392581aa98ee7501d1ab1bcb611f0da7c92c981ec229fe97d1f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 03:40:49 compute-0 systemd[1]: Started libpod-conmon-940396bfa330392581aa98ee7501d1ab1bcb611f0da7c92c981ec229fe97d1f5.scope.
Oct 11 03:40:49 compute-0 podman[109890]: 2025-10-11 03:40:49.784830174 +0000 UTC m=+0.028188924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:40:49 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:40:49 compute-0 podman[109890]: 2025-10-11 03:40:49.908849064 +0000 UTC m=+0.152207794 container init 940396bfa330392581aa98ee7501d1ab1bcb611f0da7c92c981ec229fe97d1f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_saha, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:40:49 compute-0 podman[109890]: 2025-10-11 03:40:49.914661924 +0000 UTC m=+0.158020664 container start 940396bfa330392581aa98ee7501d1ab1bcb611f0da7c92c981ec229fe97d1f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 03:40:49 compute-0 podman[109890]: 2025-10-11 03:40:49.91863406 +0000 UTC m=+0.161992790 container attach 940396bfa330392581aa98ee7501d1ab1bcb611f0da7c92c981ec229fe97d1f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 03:40:49 compute-0 keen_saha[109909]: 167 167
Oct 11 03:40:49 compute-0 systemd[1]: libpod-940396bfa330392581aa98ee7501d1ab1bcb611f0da7c92c981ec229fe97d1f5.scope: Deactivated successfully.
Oct 11 03:40:49 compute-0 podman[109890]: 2025-10-11 03:40:49.921741191 +0000 UTC m=+0.165099941 container died 940396bfa330392581aa98ee7501d1ab1bcb611f0da7c92c981ec229fe97d1f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 03:40:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cd50cd95d7600eb8477027432f9df7f15893153a5db167f62cef7baaa615e26-merged.mount: Deactivated successfully.
Oct 11 03:40:49 compute-0 podman[109890]: 2025-10-11 03:40:49.970494814 +0000 UTC m=+0.213853564 container remove 940396bfa330392581aa98ee7501d1ab1bcb611f0da7c92c981ec229fe97d1f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_saha, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:40:49 compute-0 systemd[1]: libpod-conmon-940396bfa330392581aa98ee7501d1ab1bcb611f0da7c92c981ec229fe97d1f5.scope: Deactivated successfully.
Oct 11 03:40:50 compute-0 podman[110010]: 2025-10-11 03:40:50.186365515 +0000 UTC m=+0.061718682 container create 54e4b7fcd1ee88b4a633bfdaa858db550053f7c119659f4c27f529883f766b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 03:40:50 compute-0 systemd[1]: Started libpod-conmon-54e4b7fcd1ee88b4a633bfdaa858db550053f7c119659f4c27f529883f766b80.scope.
Oct 11 03:40:50 compute-0 podman[110010]: 2025-10-11 03:40:50.166355131 +0000 UTC m=+0.041708338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:40:50 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:40:50 compute-0 sudo[110076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qljwbdjrguapnqcdmmwgjnjmcugppspr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154049.9176402-233-251066801761033/AnsiballZ_getent.py'
Oct 11 03:40:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49fe6f57ee6fa17a49048598dce5ffd08a5a98c31cb5e9dbad18cb3305ae7e9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:40:50 compute-0 sudo[110076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:40:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49fe6f57ee6fa17a49048598dce5ffd08a5a98c31cb5e9dbad18cb3305ae7e9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:40:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49fe6f57ee6fa17a49048598dce5ffd08a5a98c31cb5e9dbad18cb3305ae7e9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:40:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49fe6f57ee6fa17a49048598dce5ffd08a5a98c31cb5e9dbad18cb3305ae7e9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:40:50 compute-0 podman[110010]: 2025-10-11 03:40:50.296894602 +0000 UTC m=+0.172247779 container init 54e4b7fcd1ee88b4a633bfdaa858db550053f7c119659f4c27f529883f766b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_greider, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:40:50 compute-0 podman[110010]: 2025-10-11 03:40:50.308972484 +0000 UTC m=+0.184325641 container start 54e4b7fcd1ee88b4a633bfdaa858db550053f7c119659f4c27f529883f766b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_greider, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:40:50 compute-0 podman[110010]: 2025-10-11 03:40:50.311813397 +0000 UTC m=+0.187166554 container attach 54e4b7fcd1ee88b4a633bfdaa858db550053f7c119659f4c27f529883f766b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:40:50 compute-0 python3.9[110079]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 11 03:40:50 compute-0 sudo[110076]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:50 compute-0 ceph-mon[74273]: pgmap v243: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:40:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:40:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:40:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:40:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:40:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:40:50 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:51 compute-0 sudo[110252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bglccqlupmyjezmfqhxpppdnpsqhtgkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154050.7726822-241-48987605769843/AnsiballZ_group.py'
Oct 11 03:40:51 compute-0 sudo[110252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:40:51 compute-0 wizardly_greider[110074]: {
Oct 11 03:40:51 compute-0 wizardly_greider[110074]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 03:40:51 compute-0 wizardly_greider[110074]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:40:51 compute-0 wizardly_greider[110074]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 03:40:51 compute-0 wizardly_greider[110074]:         "osd_id": 1,
Oct 11 03:40:51 compute-0 wizardly_greider[110074]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:40:51 compute-0 wizardly_greider[110074]:         "type": "bluestore"
Oct 11 03:40:51 compute-0 wizardly_greider[110074]:     },
Oct 11 03:40:51 compute-0 wizardly_greider[110074]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 03:40:51 compute-0 wizardly_greider[110074]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:40:51 compute-0 wizardly_greider[110074]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 03:40:51 compute-0 wizardly_greider[110074]:         "osd_id": 2,
Oct 11 03:40:51 compute-0 wizardly_greider[110074]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:40:51 compute-0 wizardly_greider[110074]:         "type": "bluestore"
Oct 11 03:40:51 compute-0 wizardly_greider[110074]:     },
Oct 11 03:40:51 compute-0 wizardly_greider[110074]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 03:40:51 compute-0 wizardly_greider[110074]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:40:51 compute-0 wizardly_greider[110074]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 03:40:51 compute-0 wizardly_greider[110074]:         "osd_id": 0,
Oct 11 03:40:51 compute-0 wizardly_greider[110074]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:40:51 compute-0 wizardly_greider[110074]:         "type": "bluestore"
Oct 11 03:40:51 compute-0 wizardly_greider[110074]:     }
Oct 11 03:40:51 compute-0 wizardly_greider[110074]: }
Oct 11 03:40:51 compute-0 systemd[1]: libpod-54e4b7fcd1ee88b4a633bfdaa858db550053f7c119659f4c27f529883f766b80.scope: Deactivated successfully.
Oct 11 03:40:51 compute-0 systemd[1]: libpod-54e4b7fcd1ee88b4a633bfdaa858db550053f7c119659f4c27f529883f766b80.scope: Consumed 1.100s CPU time.
Oct 11 03:40:51 compute-0 podman[110010]: 2025-10-11 03:40:51.405690659 +0000 UTC m=+1.281043856 container died 54e4b7fcd1ee88b4a633bfdaa858db550053f7c119659f4c27f529883f766b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:40:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-49fe6f57ee6fa17a49048598dce5ffd08a5a98c31cb5e9dbad18cb3305ae7e9e-merged.mount: Deactivated successfully.
Oct 11 03:40:51 compute-0 podman[110010]: 2025-10-11 03:40:51.477085543 +0000 UTC m=+1.352438710 container remove 54e4b7fcd1ee88b4a633bfdaa858db550053f7c119659f4c27f529883f766b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 03:40:51 compute-0 systemd[1]: libpod-conmon-54e4b7fcd1ee88b4a633bfdaa858db550053f7c119659f4c27f529883f766b80.scope: Deactivated successfully.
Oct 11 03:40:51 compute-0 python3.9[110258]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 11 03:40:51 compute-0 sudo[109779]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:40:51 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:40:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:40:51 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:40:51 compute-0 sudo[110252]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:51 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev b7655577-5987-4b5e-861c-d06bbb5dc5aa does not exist
Oct 11 03:40:51 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 25521c79-c0cf-4ef9-9da3-dec9b0519585 does not exist
Oct 11 03:40:51 compute-0 sudo[110278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:40:51 compute-0 sudo[110278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:40:51 compute-0 sudo[110278]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:51 compute-0 sudo[110327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 03:40:51 compute-0 sudo[110327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:40:51 compute-0 sudo[110327]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:52 compute-0 sudo[110477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvnzgavxlqetmocwggzqafwngtybgrsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154051.7205925-250-58930087283759/AnsiballZ_file.py'
Oct 11 03:40:52 compute-0 sudo[110477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:40:52 compute-0 python3.9[110479]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 11 03:40:52 compute-0 sudo[110477]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:52 compute-0 ceph-mon[74273]: pgmap v244: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:52 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:40:52 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:40:52 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:52 compute-0 sudo[110629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgdxemclforpkvvpbuanvyvxpfyqfdmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154052.6362953-261-142494406550481/AnsiballZ_dnf.py'
Oct 11 03:40:52 compute-0 sudo[110629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:40:53 compute-0 python3.9[110631]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 03:40:53 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.b scrub starts
Oct 11 03:40:53 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.b scrub ok
Oct 11 03:40:54 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Oct 11 03:40:54 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Oct 11 03:40:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:40:54 compute-0 sudo[110629]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:54 compute-0 ceph-mon[74273]: pgmap v245: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:54 compute-0 ceph-mon[74273]: 11.b scrub starts
Oct 11 03:40:54 compute-0 ceph-mon[74273]: 11.b scrub ok
Oct 11 03:40:54 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:55 compute-0 sudo[110782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfhqfdxdunltisllklrmcusttoqlfhsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154054.510159-269-273133351663586/AnsiballZ_file.py'
Oct 11 03:40:55 compute-0 sudo[110782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:40:55 compute-0 python3.9[110784]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:40:55 compute-0 sudo[110782]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:55 compute-0 ceph-mon[74273]: 5.9 scrub starts
Oct 11 03:40:55 compute-0 ceph-mon[74273]: 5.9 scrub ok
Oct 11 03:40:55 compute-0 sudo[110934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhiczcdnbgvpapwmsdpponplzbzyuwpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154055.681187-277-156700395622511/AnsiballZ_stat.py'
Oct 11 03:40:55 compute-0 sudo[110934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:40:56 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 2.d deep-scrub starts
Oct 11 03:40:56 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 2.d deep-scrub ok
Oct 11 03:40:56 compute-0 python3.9[110936]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:40:56 compute-0 sudo[110934]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:56 compute-0 sudo[111012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgnpxngvpxdvmzrzzsbuquwylplkjzvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154055.681187-277-156700395622511/AnsiballZ_file.py'
Oct 11 03:40:56 compute-0 sudo[111012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:40:56 compute-0 ceph-mon[74273]: pgmap v246: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:56 compute-0 python3.9[111014]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:40:56 compute-0 sudo[111012]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:56 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:57 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 10.b deep-scrub starts
Oct 11 03:40:57 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 10.b deep-scrub ok
Oct 11 03:40:57 compute-0 sudo[111164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxfjgbsehrocsqdhhpjgteackpiodvrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154056.9196012-290-93572996432666/AnsiballZ_stat.py'
Oct 11 03:40:57 compute-0 sudo[111164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:40:57 compute-0 python3.9[111166]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:40:57 compute-0 sudo[111164]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:57 compute-0 ceph-mon[74273]: 2.d deep-scrub starts
Oct 11 03:40:57 compute-0 ceph-mon[74273]: 2.d deep-scrub ok
Oct 11 03:40:57 compute-0 sudo[111242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkyloladtbsyxpnmysmzwpymodrgzuiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154056.9196012-290-93572996432666/AnsiballZ_file.py'
Oct 11 03:40:57 compute-0 sudo[111242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:40:57 compute-0 python3.9[111244]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:40:57 compute-0 sudo[111242]: pam_unix(sudo:session): session closed for user root
Oct 11 03:40:58 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Oct 11 03:40:58 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Oct 11 03:40:58 compute-0 sudo[111394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgnfvcjwdaeescfcoiobuvydkvenpiux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154058.228852-305-80929132195649/AnsiballZ_dnf.py'
Oct 11 03:40:58 compute-0 sudo[111394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:40:58 compute-0 ceph-mon[74273]: pgmap v247: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:58 compute-0 ceph-mon[74273]: 10.b deep-scrub starts
Oct 11 03:40:58 compute-0 ceph-mon[74273]: 10.b deep-scrub ok
Oct 11 03:40:58 compute-0 python3.9[111396]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 03:40:58 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:40:59 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 5.f scrub starts
Oct 11 03:40:59 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 5.f scrub ok
Oct 11 03:40:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:40:59 compute-0 ceph-mon[74273]: 11.14 scrub starts
Oct 11 03:40:59 compute-0 ceph-mon[74273]: 11.14 scrub ok
Oct 11 03:40:59 compute-0 sudo[111394]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:00 compute-0 ceph-mon[74273]: pgmap v248: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:00 compute-0 ceph-mon[74273]: 5.f scrub starts
Oct 11 03:41:00 compute-0 ceph-mon[74273]: 5.f scrub ok
Oct 11 03:41:00 compute-0 python3.9[111547]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:41:00 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:01 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 2.a deep-scrub starts
Oct 11 03:41:01 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 2.a deep-scrub ok
Oct 11 03:41:01 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Oct 11 03:41:01 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Oct 11 03:41:01 compute-0 python3.9[111699]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 11 03:41:02 compute-0 python3.9[111849]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:41:02 compute-0 ceph-mon[74273]: pgmap v249: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:02 compute-0 ceph-mon[74273]: 2.a deep-scrub starts
Oct 11 03:41:02 compute-0 ceph-mon[74273]: 2.a deep-scrub ok
Oct 11 03:41:02 compute-0 ceph-mon[74273]: 7.1f scrub starts
Oct 11 03:41:02 compute-0 ceph-mon[74273]: 7.1f scrub ok
Oct 11 03:41:02 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:03 compute-0 sudo[111999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfynutdwjiedfkryalrqdjkautwxqtjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154062.7356145-346-184470982784602/AnsiballZ_systemd.py'
Oct 11 03:41:03 compute-0 sudo[111999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:41:03 compute-0 python3.9[112001]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:41:03 compute-0 ceph-mon[74273]: pgmap v250: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:03 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 11 03:41:03 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Oct 11 03:41:03 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 11 03:41:03 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 11 03:41:04 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 11 03:41:04 compute-0 sudo[111999]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:41:04 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Oct 11 03:41:04 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Oct 11 03:41:04 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:04 compute-0 python3.9[112163]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 11 03:41:05 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Oct 11 03:41:05 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Oct 11 03:41:05 compute-0 ceph-mon[74273]: 3.5 scrub starts
Oct 11 03:41:05 compute-0 ceph-mon[74273]: 3.5 scrub ok
Oct 11 03:41:05 compute-0 ceph-mon[74273]: pgmap v251: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:06 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Oct 11 03:41:06 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Oct 11 03:41:06 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Oct 11 03:41:06 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Oct 11 03:41:06 compute-0 sudo[112313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soznaaeytekuwcvkwlnjveguprkgwwts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154066.3013535-403-76722581752405/AnsiballZ_systemd.py'
Oct 11 03:41:06 compute-0 sudo[112313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:41:06 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:06 compute-0 ceph-mon[74273]: 8.10 scrub starts
Oct 11 03:41:06 compute-0 ceph-mon[74273]: 8.10 scrub ok
Oct 11 03:41:06 compute-0 python3.9[112315]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:41:07 compute-0 sudo[112313]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:07 compute-0 sudo[112467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfvtuhzdgnmszxsdknvckxtulafsyemj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154067.1815593-403-122813806279737/AnsiballZ_systemd.py'
Oct 11 03:41:07 compute-0 sudo[112467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:41:07 compute-0 python3.9[112469]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:41:07 compute-0 ceph-mon[74273]: 2.9 scrub starts
Oct 11 03:41:07 compute-0 ceph-mon[74273]: 2.9 scrub ok
Oct 11 03:41:07 compute-0 ceph-mon[74273]: 7.18 scrub starts
Oct 11 03:41:07 compute-0 ceph-mon[74273]: 7.18 scrub ok
Oct 11 03:41:07 compute-0 ceph-mon[74273]: pgmap v252: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:07 compute-0 sudo[112467]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:08 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 5.c scrub starts
Oct 11 03:41:08 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 5.c scrub ok
Oct 11 03:41:08 compute-0 sshd-session[105799]: Connection closed by 192.168.122.30 port 43700
Oct 11 03:41:08 compute-0 sshd-session[105796]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:41:08 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Oct 11 03:41:08 compute-0 systemd[1]: session-35.scope: Consumed 1min 6.254s CPU time.
Oct 11 03:41:08 compute-0 systemd-logind[820]: Session 35 logged out. Waiting for processes to exit.
Oct 11 03:41:08 compute-0 systemd-logind[820]: Removed session 35.
Oct 11 03:41:08 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:41:09 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Oct 11 03:41:09 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Oct 11 03:41:09 compute-0 ceph-mon[74273]: 5.c scrub starts
Oct 11 03:41:09 compute-0 ceph-mon[74273]: 5.c scrub ok
Oct 11 03:41:09 compute-0 ceph-mon[74273]: pgmap v253: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:10 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:10 compute-0 ceph-mon[74273]: 11.12 scrub starts
Oct 11 03:41:10 compute-0 ceph-mon[74273]: 11.12 scrub ok
Oct 11 03:41:11 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Oct 11 03:41:11 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Oct 11 03:41:11 compute-0 ceph-mon[74273]: pgmap v254: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:12 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 10.f scrub starts
Oct 11 03:41:12 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 10.f scrub ok
Oct 11 03:41:12 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Oct 11 03:41:12 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Oct 11 03:41:12 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:12 compute-0 ceph-mon[74273]: 11.3 scrub starts
Oct 11 03:41:12 compute-0 ceph-mon[74273]: 11.3 scrub ok
Oct 11 03:41:13 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Oct 11 03:41:13 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Oct 11 03:41:13 compute-0 sshd-session[112496]: Accepted publickey for zuul from 192.168.122.30 port 43604 ssh2: ECDSA SHA256:qo9+RMabHfLAOt2q/80W97JXaZUdeUCREBuTRaqgxBY
Oct 11 03:41:13 compute-0 systemd-logind[820]: New session 36 of user zuul.
Oct 11 03:41:13 compute-0 systemd[1]: Started Session 36 of User zuul.
Oct 11 03:41:13 compute-0 sshd-session[112496]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:41:13 compute-0 ceph-mon[74273]: 10.f scrub starts
Oct 11 03:41:13 compute-0 ceph-mon[74273]: 10.f scrub ok
Oct 11 03:41:13 compute-0 ceph-mon[74273]: 3.1b scrub starts
Oct 11 03:41:13 compute-0 ceph-mon[74273]: 3.1b scrub ok
Oct 11 03:41:13 compute-0 ceph-mon[74273]: pgmap v255: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:41:14 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 11.f scrub starts
Oct 11 03:41:14 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 11.f scrub ok
Oct 11 03:41:14 compute-0 python3.9[112649]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:41:14 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:14 compute-0 ceph-mon[74273]: 11.10 scrub starts
Oct 11 03:41:14 compute-0 ceph-mon[74273]: 11.10 scrub ok
Oct 11 03:41:15 compute-0 sudo[112803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yllvzqcxwjuhsamuvgtmnccsazkafqmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154075.0205834-36-118774038734016/AnsiballZ_getent.py'
Oct 11 03:41:15 compute-0 sudo[112803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:41:15 compute-0 python3.9[112805]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 11 03:41:15 compute-0 sudo[112803]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:15 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 3.8 deep-scrub starts
Oct 11 03:41:15 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 3.8 deep-scrub ok
Oct 11 03:41:15 compute-0 ceph-mon[74273]: 11.f scrub starts
Oct 11 03:41:15 compute-0 ceph-mon[74273]: 11.f scrub ok
Oct 11 03:41:15 compute-0 ceph-mon[74273]: pgmap v256: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:16 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 8.c scrub starts
Oct 11 03:41:16 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 8.c scrub ok
Oct 11 03:41:16 compute-0 sudo[112956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tczwjpeczewifsfvvqkngzrsdwpficqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154076.0158088-48-11633086458451/AnsiballZ_setup.py'
Oct 11 03:41:16 compute-0 sudo[112956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:41:16 compute-0 python3.9[112958]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 03:41:16 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:16 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Oct 11 03:41:16 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Oct 11 03:41:16 compute-0 sudo[112956]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:16 compute-0 ceph-mon[74273]: 3.8 deep-scrub starts
Oct 11 03:41:16 compute-0 ceph-mon[74273]: 3.8 deep-scrub ok
Oct 11 03:41:17 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Oct 11 03:41:17 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Oct 11 03:41:17 compute-0 sudo[113040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qviwzrjbecpkpeafqygcsxgkycowhrih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154076.0158088-48-11633086458451/AnsiballZ_dnf.py'
Oct 11 03:41:17 compute-0 sudo[113040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:41:17 compute-0 python3.9[113042]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 11 03:41:17 compute-0 ceph-mon[74273]: 8.c scrub starts
Oct 11 03:41:17 compute-0 ceph-mon[74273]: 8.c scrub ok
Oct 11 03:41:17 compute-0 ceph-mon[74273]: pgmap v257: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:17 compute-0 ceph-mon[74273]: 7.5 scrub starts
Oct 11 03:41:17 compute-0 ceph-mon[74273]: 7.5 scrub ok
Oct 11 03:41:18 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Oct 11 03:41:18 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Oct 11 03:41:18 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:18 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Oct 11 03:41:18 compute-0 sudo[113040]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:18 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Oct 11 03:41:18 compute-0 ceph-mon[74273]: 7.3 scrub starts
Oct 11 03:41:18 compute-0 ceph-mon[74273]: 7.3 scrub ok
Oct 11 03:41:19 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Oct 11 03:41:19 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Oct 11 03:41:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:41:19 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Oct 11 03:41:19 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Oct 11 03:41:19 compute-0 sudo[113193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmyvtzhutbpffhojxcyvnjbkgeipucuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154079.1133306-62-216924345093763/AnsiballZ_dnf.py'
Oct 11 03:41:19 compute-0 sudo[113193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:41:19 compute-0 python3.9[113195]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 03:41:20 compute-0 ceph-mon[74273]: 10.2 scrub starts
Oct 11 03:41:20 compute-0 ceph-mon[74273]: 10.2 scrub ok
Oct 11 03:41:20 compute-0 ceph-mon[74273]: pgmap v258: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:20 compute-0 ceph-mon[74273]: 11.8 scrub starts
Oct 11 03:41:20 compute-0 ceph-mon[74273]: 11.8 scrub ok
Oct 11 03:41:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_03:41:20
Oct 11 03:41:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 03:41:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 03:41:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'backups', 'cephfs.cephfs.data', '.mgr', 'volumes', 'default.rgw.control']
Oct 11 03:41:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 03:41:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:41:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:41:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:41:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:41:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:41:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:41:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 03:41:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:41:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 03:41:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:41:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:41:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:41:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:41:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:41:20 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:41:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:41:20 compute-0 sudo[113193]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:21 compute-0 ceph-mon[74273]: 2.1b scrub starts
Oct 11 03:41:21 compute-0 ceph-mon[74273]: 2.1b scrub ok
Oct 11 03:41:21 compute-0 ceph-mon[74273]: 3.6 scrub starts
Oct 11 03:41:21 compute-0 ceph-mon[74273]: 3.6 scrub ok
Oct 11 03:41:21 compute-0 sudo[113346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltazipozwagalvbzjmmbbpcjaykeljvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154081.0193532-70-9651907543660/AnsiballZ_systemd.py'
Oct 11 03:41:21 compute-0 sudo[113346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:41:21 compute-0 python3.9[113348]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 03:41:22 compute-0 ceph-mon[74273]: pgmap v259: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:22 compute-0 sudo[113346]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:22 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:22 compute-0 python3.9[113501]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:41:23 compute-0 sudo[113651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flonhaksshxcnyutoskcjkxwcogggqxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154083.1596725-88-4242112100363/AnsiballZ_sefcontext.py'
Oct 11 03:41:23 compute-0 sudo[113651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:41:23 compute-0 python3.9[113653]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 11 03:41:24 compute-0 ceph-mon[74273]: pgmap v260: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:24 compute-0 sudo[113651]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:24 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 11.e scrub starts
Oct 11 03:41:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:41:24 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 11.e scrub ok
Oct 11 03:41:24 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:25 compute-0 ceph-mon[74273]: 11.e scrub starts
Oct 11 03:41:25 compute-0 ceph-mon[74273]: 11.e scrub ok
Oct 11 03:41:25 compute-0 python3.9[113803]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:41:25 compute-0 sudo[113959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhkpyfobgvpjjpanopqhbvspbmrhnpxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154085.5460641-106-199811190867733/AnsiballZ_dnf.py'
Oct 11 03:41:25 compute-0 sudo[113959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:41:26 compute-0 ceph-mon[74273]: pgmap v261: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:26 compute-0 python3.9[113961]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 03:41:26 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 8.e scrub starts
Oct 11 03:41:26 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 8.e scrub ok
Oct 11 03:41:26 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:26 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 7.e scrub starts
Oct 11 03:41:26 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 7.e scrub ok
Oct 11 03:41:27 compute-0 ceph-mon[74273]: 8.e scrub starts
Oct 11 03:41:27 compute-0 ceph-mon[74273]: 8.e scrub ok
Oct 11 03:41:27 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Oct 11 03:41:27 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Oct 11 03:41:27 compute-0 sudo[113959]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:27 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Oct 11 03:41:27 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Oct 11 03:41:28 compute-0 sudo[114112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xplwqxqxcqliztmdnvwdrqntpxkaizny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154087.538339-114-175775211838207/AnsiballZ_command.py'
Oct 11 03:41:28 compute-0 sudo[114112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:41:28 compute-0 ceph-mon[74273]: pgmap v262: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:28 compute-0 ceph-mon[74273]: 7.e scrub starts
Oct 11 03:41:28 compute-0 ceph-mon[74273]: 7.e scrub ok
Oct 11 03:41:28 compute-0 ceph-mon[74273]: 10.11 scrub starts
Oct 11 03:41:28 compute-0 ceph-mon[74273]: 10.11 scrub ok
Oct 11 03:41:28 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Oct 11 03:41:28 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Oct 11 03:41:28 compute-0 python3.9[114114]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:41:28 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:28 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Oct 11 03:41:28 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Oct 11 03:41:29 compute-0 ceph-mon[74273]: 11.2 scrub starts
Oct 11 03:41:29 compute-0 ceph-mon[74273]: 11.2 scrub ok
Oct 11 03:41:29 compute-0 ceph-mon[74273]: 5.1 scrub starts
Oct 11 03:41:29 compute-0 ceph-mon[74273]: 5.1 scrub ok
Oct 11 03:41:29 compute-0 sudo[114112]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:41:29 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 7.c scrub starts
Oct 11 03:41:29 compute-0 sudo[114399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulrlkjybypsrseczeoxwazuwqkmlhxnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154089.355486-122-141255950232412/AnsiballZ_file.py'
Oct 11 03:41:29 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 7.c scrub ok
Oct 11 03:41:29 compute-0 sudo[114399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:41:30 compute-0 ceph-mon[74273]: pgmap v263: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:30 compute-0 ceph-mon[74273]: 11.9 scrub starts
Oct 11 03:41:30 compute-0 ceph-mon[74273]: 11.9 scrub ok
Oct 11 03:41:30 compute-0 python3.9[114401]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 11 03:41:30 compute-0 sudo[114399]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:30 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 03:41:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:41:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 03:41:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:41:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:41:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:41:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:41:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:41:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:41:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:41:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:41:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:41:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 03:41:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:41:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:41:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:41:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 03:41:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:41:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 03:41:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:41:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:41:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:41:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 03:41:30 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 7.a scrub starts
Oct 11 03:41:30 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 7.a scrub ok
Oct 11 03:41:30 compute-0 python3.9[114551]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:41:31 compute-0 ceph-mon[74273]: 7.c scrub starts
Oct 11 03:41:31 compute-0 ceph-mon[74273]: 7.c scrub ok
Oct 11 03:41:31 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 7.f deep-scrub starts
Oct 11 03:41:31 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 7.f deep-scrub ok
Oct 11 03:41:31 compute-0 sudo[114703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-felkvyhaezvfgitdehbdsrcuasexbhtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154091.240084-138-266823420947895/AnsiballZ_dnf.py'
Oct 11 03:41:31 compute-0 sudo[114703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:41:31 compute-0 python3.9[114705]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 03:41:31 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 3.e scrub starts
Oct 11 03:41:31 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 3.e scrub ok
Oct 11 03:41:32 compute-0 ceph-mon[74273]: pgmap v264: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:32 compute-0 ceph-mon[74273]: 7.a scrub starts
Oct 11 03:41:32 compute-0 ceph-mon[74273]: 7.a scrub ok
Oct 11 03:41:32 compute-0 ceph-mon[74273]: 7.f deep-scrub starts
Oct 11 03:41:32 compute-0 ceph-mon[74273]: 7.f deep-scrub ok
Oct 11 03:41:32 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:32 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Oct 11 03:41:32 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Oct 11 03:41:32 compute-0 sudo[114703]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:33 compute-0 ceph-mon[74273]: 3.e scrub starts
Oct 11 03:41:33 compute-0 ceph-mon[74273]: 3.e scrub ok
Oct 11 03:41:33 compute-0 ceph-mon[74273]: 5.1a scrub starts
Oct 11 03:41:33 compute-0 ceph-mon[74273]: 5.1a scrub ok
Oct 11 03:41:33 compute-0 sudo[114856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayyfhezgxbpkrvuqggusmdkyxaswozfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154093.218739-147-239259134251316/AnsiballZ_dnf.py'
Oct 11 03:41:33 compute-0 sudo[114856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:41:33 compute-0 python3.9[114858]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 03:41:33 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Oct 11 03:41:33 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Oct 11 03:41:34 compute-0 ceph-mon[74273]: pgmap v265: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:34 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 8.f scrub starts
Oct 11 03:41:34 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 8.f scrub ok
Oct 11 03:41:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:41:34 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:34 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 8.4 deep-scrub starts
Oct 11 03:41:34 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 8.4 deep-scrub ok
Oct 11 03:41:35 compute-0 sudo[114856]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:35 compute-0 ceph-mon[74273]: 11.18 scrub starts
Oct 11 03:41:35 compute-0 ceph-mon[74273]: 11.18 scrub ok
Oct 11 03:41:35 compute-0 ceph-mon[74273]: 8.f scrub starts
Oct 11 03:41:35 compute-0 ceph-mon[74273]: 8.f scrub ok
Oct 11 03:41:35 compute-0 sudo[115009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnykcomeimjjyyijumjnfefmppymhwku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154095.3443174-159-51730442753192/AnsiballZ_stat.py'
Oct 11 03:41:35 compute-0 sudo[115009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:41:35 compute-0 python3.9[115011]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:41:35 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Oct 11 03:41:35 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Oct 11 03:41:35 compute-0 sudo[115009]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:36 compute-0 ceph-mon[74273]: pgmap v266: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:36 compute-0 ceph-mon[74273]: 8.4 deep-scrub starts
Oct 11 03:41:36 compute-0 ceph-mon[74273]: 8.4 deep-scrub ok
Oct 11 03:41:36 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Oct 11 03:41:36 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Oct 11 03:41:36 compute-0 sudo[115163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfysfddpthpxskxoutrimqpqpssfcqxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154096.0794802-167-272968334899762/AnsiballZ_slurp.py'
Oct 11 03:41:36 compute-0 sudo[115163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:41:36 compute-0 python3.9[115165]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Oct 11 03:41:36 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:36 compute-0 sudo[115163]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:37 compute-0 ceph-mon[74273]: 8.1b scrub starts
Oct 11 03:41:37 compute-0 ceph-mon[74273]: 8.1b scrub ok
Oct 11 03:41:37 compute-0 ceph-mon[74273]: 7.4 scrub starts
Oct 11 03:41:37 compute-0 ceph-mon[74273]: 7.4 scrub ok
Oct 11 03:41:37 compute-0 sshd-session[112499]: Connection closed by 192.168.122.30 port 43604
Oct 11 03:41:37 compute-0 sshd-session[112496]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:41:37 compute-0 systemd-logind[820]: Session 36 logged out. Waiting for processes to exit.
Oct 11 03:41:37 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Oct 11 03:41:37 compute-0 systemd[1]: session-36.scope: Consumed 19.534s CPU time.
Oct 11 03:41:37 compute-0 systemd-logind[820]: Removed session 36.
Oct 11 03:41:37 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Oct 11 03:41:37 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Oct 11 03:41:38 compute-0 ceph-mon[74273]: pgmap v267: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:38 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 8.b deep-scrub starts
Oct 11 03:41:38 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 8.b deep-scrub ok
Oct 11 03:41:38 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:39 compute-0 ceph-mon[74273]: 11.1c scrub starts
Oct 11 03:41:39 compute-0 ceph-mon[74273]: 11.1c scrub ok
Oct 11 03:41:39 compute-0 ceph-mon[74273]: 8.b deep-scrub starts
Oct 11 03:41:39 compute-0 ceph-mon[74273]: 8.b deep-scrub ok
Oct 11 03:41:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:41:39 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Oct 11 03:41:39 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Oct 11 03:41:40 compute-0 ceph-mon[74273]: pgmap v268: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:40 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Oct 11 03:41:40 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:40 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Oct 11 03:41:40 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Oct 11 03:41:40 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Oct 11 03:41:41 compute-0 ceph-mon[74273]: 11.1a scrub starts
Oct 11 03:41:41 compute-0 ceph-mon[74273]: 11.1a scrub ok
Oct 11 03:41:41 compute-0 ceph-mon[74273]: 10.10 scrub starts
Oct 11 03:41:41 compute-0 ceph-mon[74273]: 10.10 scrub ok
Oct 11 03:41:41 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Oct 11 03:41:41 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Oct 11 03:41:42 compute-0 ceph-mon[74273]: pgmap v269: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:42 compute-0 ceph-mon[74273]: 11.1b scrub starts
Oct 11 03:41:42 compute-0 ceph-mon[74273]: 11.1b scrub ok
Oct 11 03:41:42 compute-0 sshd-session[115190]: Accepted publickey for zuul from 192.168.122.30 port 54718 ssh2: ECDSA SHA256:qo9+RMabHfLAOt2q/80W97JXaZUdeUCREBuTRaqgxBY
Oct 11 03:41:42 compute-0 systemd-logind[820]: New session 37 of user zuul.
Oct 11 03:41:42 compute-0 systemd[1]: Started Session 37 of User zuul.
Oct 11 03:41:42 compute-0 sshd-session[115190]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:41:42 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:43 compute-0 ceph-mon[74273]: 7.15 scrub starts
Oct 11 03:41:43 compute-0 ceph-mon[74273]: 7.15 scrub ok
Oct 11 03:41:43 compute-0 python3.9[115343]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:41:43 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Oct 11 03:41:43 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Oct 11 03:41:43 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Oct 11 03:41:43 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Oct 11 03:41:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:41:44 compute-0 ceph-mon[74273]: pgmap v270: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:44 compute-0 ceph-mon[74273]: 10.13 scrub starts
Oct 11 03:41:44 compute-0 ceph-mon[74273]: 10.13 scrub ok
Oct 11 03:41:44 compute-0 python3.9[115497]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 03:41:44 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:45 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Oct 11 03:41:45 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Oct 11 03:41:45 compute-0 ceph-mon[74273]: 8.2 scrub starts
Oct 11 03:41:45 compute-0 ceph-mon[74273]: 8.2 scrub ok
Oct 11 03:41:45 compute-0 python3.9[115690]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:41:46 compute-0 systemd[75886]: Created slice User Background Tasks Slice.
Oct 11 03:41:46 compute-0 systemd[75886]: Starting Cleanup of User's Temporary Files and Directories...
Oct 11 03:41:46 compute-0 systemd[75886]: Finished Cleanup of User's Temporary Files and Directories.
Oct 11 03:41:46 compute-0 sshd-session[115193]: Connection closed by 192.168.122.30 port 54718
Oct 11 03:41:46 compute-0 sshd-session[115190]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:41:46 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Oct 11 03:41:46 compute-0 systemd[1]: session-37.scope: Consumed 2.682s CPU time.
Oct 11 03:41:46 compute-0 systemd-logind[820]: Session 37 logged out. Waiting for processes to exit.
Oct 11 03:41:46 compute-0 systemd-logind[820]: Removed session 37.
Oct 11 03:41:46 compute-0 ceph-mon[74273]: pgmap v271: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:46 compute-0 ceph-mon[74273]: 11.1 scrub starts
Oct 11 03:41:46 compute-0 ceph-mon[74273]: 11.1 scrub ok
Oct 11 03:41:46 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:48 compute-0 ceph-mon[74273]: pgmap v272: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:48 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Oct 11 03:41:48 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Oct 11 03:41:48 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:41:49 compute-0 ceph-mon[74273]: 10.12 scrub starts
Oct 11 03:41:49 compute-0 ceph-mon[74273]: 10.12 scrub ok
Oct 11 03:41:49 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Oct 11 03:41:49 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Oct 11 03:41:50 compute-0 ceph-mon[74273]: pgmap v273: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:50 compute-0 ceph-mon[74273]: 7.11 scrub starts
Oct 11 03:41:50 compute-0 ceph-mon[74273]: 7.11 scrub ok
Oct 11 03:41:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:41:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:41:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:41:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:41:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:41:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:41:50 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:51 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Oct 11 03:41:51 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Oct 11 03:41:51 compute-0 sshd-session[115717]: Accepted publickey for zuul from 192.168.122.30 port 46812 ssh2: ECDSA SHA256:qo9+RMabHfLAOt2q/80W97JXaZUdeUCREBuTRaqgxBY
Oct 11 03:41:51 compute-0 systemd-logind[820]: New session 38 of user zuul.
Oct 11 03:41:51 compute-0 systemd[1]: Started Session 38 of User zuul.
Oct 11 03:41:51 compute-0 sshd-session[115717]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:41:51 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Oct 11 03:41:51 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Oct 11 03:41:51 compute-0 sudo[115767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:41:51 compute-0 sudo[115767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:41:51 compute-0 sudo[115767]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:51 compute-0 sudo[115798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:41:51 compute-0 sudo[115798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:41:51 compute-0 sudo[115798]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:51 compute-0 sudo[115823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:41:51 compute-0 sudo[115823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:41:51 compute-0 sudo[115823]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:51 compute-0 sudo[115848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 03:41:52 compute-0 sudo[115848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:41:52 compute-0 ceph-mon[74273]: pgmap v274: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:52 compute-0 ceph-mon[74273]: 3.9 scrub starts
Oct 11 03:41:52 compute-0 ceph-mon[74273]: 3.9 scrub ok
Oct 11 03:41:52 compute-0 ceph-mon[74273]: 5.18 scrub starts
Oct 11 03:41:52 compute-0 ceph-mon[74273]: 5.18 scrub ok
Oct 11 03:41:52 compute-0 sudo[115848]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:41:52 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:41:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 03:41:52 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:41:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 03:41:52 compute-0 python3.9[115984]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:41:52 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:41:52 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 75241e56-6698-40e2-bf65-82c6dca3caf6 does not exist
Oct 11 03:41:52 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 62bb4ccd-6c88-4671-8064-847ef5620d75 does not exist
Oct 11 03:41:52 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev f2b91e5a-c2c1-400c-ae4a-e5342a082d1d does not exist
Oct 11 03:41:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 03:41:52 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:41:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 03:41:52 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:41:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:41:52 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:41:52 compute-0 sudo[116006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:41:52 compute-0 sudo[116006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:41:52 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Oct 11 03:41:52 compute-0 sudo[116006]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:52 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Oct 11 03:41:52 compute-0 sudo[116031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:41:52 compute-0 sudo[116031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:41:52 compute-0 sudo[116031]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:52 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:52 compute-0 sudo[116056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:41:52 compute-0 sudo[116056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:41:52 compute-0 sudo[116056]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:52 compute-0 sudo[116105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 03:41:52 compute-0 sudo[116105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:41:53 compute-0 podman[116275]: 2025-10-11 03:41:53.278592876 +0000 UTC m=+0.047734134 container create b2ff60e8c1ba6840bae00ba059c0630ebe94af234b92741f6c92d5f40c507663 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 03:41:53 compute-0 systemd[1]: Started libpod-conmon-b2ff60e8c1ba6840bae00ba059c0630ebe94af234b92741f6c92d5f40c507663.scope.
Oct 11 03:41:53 compute-0 podman[116275]: 2025-10-11 03:41:53.259402356 +0000 UTC m=+0.028543654 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:41:53 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:41:53 compute-0 podman[116275]: 2025-10-11 03:41:53.387380232 +0000 UTC m=+0.156521490 container init b2ff60e8c1ba6840bae00ba059c0630ebe94af234b92741f6c92d5f40c507663 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chaum, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 03:41:53 compute-0 podman[116275]: 2025-10-11 03:41:53.398498047 +0000 UTC m=+0.167639345 container start b2ff60e8c1ba6840bae00ba059c0630ebe94af234b92741f6c92d5f40c507663 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:41:53 compute-0 podman[116275]: 2025-10-11 03:41:53.403168783 +0000 UTC m=+0.172310061 container attach b2ff60e8c1ba6840bae00ba059c0630ebe94af234b92741f6c92d5f40c507663 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chaum, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 03:41:53 compute-0 hardcore_chaum[116313]: 167 167
Oct 11 03:41:53 compute-0 systemd[1]: libpod-b2ff60e8c1ba6840bae00ba059c0630ebe94af234b92741f6c92d5f40c507663.scope: Deactivated successfully.
Oct 11 03:41:53 compute-0 podman[116275]: 2025-10-11 03:41:53.407629803 +0000 UTC m=+0.176771081 container died b2ff60e8c1ba6840bae00ba059c0630ebe94af234b92741f6c92d5f40c507663 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chaum, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:41:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:41:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:41:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:41:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:41:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:41:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:41:53 compute-0 ceph-mon[74273]: 5.1d scrub starts
Oct 11 03:41:53 compute-0 ceph-mon[74273]: 5.1d scrub ok
Oct 11 03:41:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-696388781c5f33717f7558d18554fe7c014d22be4c82120e57853f0c40b291c4-merged.mount: Deactivated successfully.
Oct 11 03:41:53 compute-0 podman[116275]: 2025-10-11 03:41:53.473419074 +0000 UTC m=+0.242560332 container remove b2ff60e8c1ba6840bae00ba059c0630ebe94af234b92741f6c92d5f40c507663 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chaum, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:41:53 compute-0 systemd[1]: libpod-conmon-b2ff60e8c1ba6840bae00ba059c0630ebe94af234b92741f6c92d5f40c507663.scope: Deactivated successfully.
Oct 11 03:41:53 compute-0 python3.9[116310]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:41:53 compute-0 podman[116341]: 2025-10-11 03:41:53.708689112 +0000 UTC m=+0.076614898 container create 6fe98f756267327f0693b58a516ab6aa851955324bfdcf6f4c3eb1415e7ceae0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:41:53 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Oct 11 03:41:53 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Oct 11 03:41:53 compute-0 systemd[1]: Started libpod-conmon-6fe98f756267327f0693b58a516ab6aa851955324bfdcf6f4c3eb1415e7ceae0.scope.
Oct 11 03:41:53 compute-0 podman[116341]: 2025-10-11 03:41:53.680682594 +0000 UTC m=+0.048608440 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:41:53 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:41:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d417c0c3b246686cc1383bd0e64762c2cbacb2fba5d011e02219b25ab8520a9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:41:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d417c0c3b246686cc1383bd0e64762c2cbacb2fba5d011e02219b25ab8520a9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:41:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d417c0c3b246686cc1383bd0e64762c2cbacb2fba5d011e02219b25ab8520a9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:41:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d417c0c3b246686cc1383bd0e64762c2cbacb2fba5d011e02219b25ab8520a9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:41:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d417c0c3b246686cc1383bd0e64762c2cbacb2fba5d011e02219b25ab8520a9a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:41:53 compute-0 podman[116341]: 2025-10-11 03:41:53.826070438 +0000 UTC m=+0.193996284 container init 6fe98f756267327f0693b58a516ab6aa851955324bfdcf6f4c3eb1415e7ceae0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:41:53 compute-0 podman[116341]: 2025-10-11 03:41:53.839233243 +0000 UTC m=+0.207159039 container start 6fe98f756267327f0693b58a516ab6aa851955324bfdcf6f4c3eb1415e7ceae0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_keldysh, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:41:53 compute-0 podman[116341]: 2025-10-11 03:41:53.911456611 +0000 UTC m=+0.279382427 container attach 6fe98f756267327f0693b58a516ab6aa851955324bfdcf6f4c3eb1415e7ceae0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 03:41:54 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Oct 11 03:41:54 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Oct 11 03:41:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:41:54 compute-0 sudo[116512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdjkkqzjismrgsuyanjxktkaavheqzlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154113.9955215-40-11576572542799/AnsiballZ_setup.py'
Oct 11 03:41:54 compute-0 sudo[116512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:41:54 compute-0 ceph-mon[74273]: pgmap v275: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:54 compute-0 ceph-mon[74273]: 5.19 scrub starts
Oct 11 03:41:54 compute-0 ceph-mon[74273]: 5.19 scrub ok
Oct 11 03:41:54 compute-0 ceph-mon[74273]: 3.11 scrub starts
Oct 11 03:41:54 compute-0 ceph-mon[74273]: 3.11 scrub ok
Oct 11 03:41:54 compute-0 python3.9[116514]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 03:41:54 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:54 compute-0 pedantic_keldysh[116358]: --> passed data devices: 0 physical, 3 LVM
Oct 11 03:41:54 compute-0 pedantic_keldysh[116358]: --> relative data size: 1.0
Oct 11 03:41:54 compute-0 pedantic_keldysh[116358]: --> All data devices are unavailable
Oct 11 03:41:54 compute-0 systemd[1]: libpod-6fe98f756267327f0693b58a516ab6aa851955324bfdcf6f4c3eb1415e7ceae0.scope: Deactivated successfully.
Oct 11 03:41:54 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 3.16 deep-scrub starts
Oct 11 03:41:54 compute-0 systemd[1]: libpod-6fe98f756267327f0693b58a516ab6aa851955324bfdcf6f4c3eb1415e7ceae0.scope: Consumed 1.083s CPU time.
Oct 11 03:41:54 compute-0 conmon[116358]: conmon 6fe98f756267327f0693 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6fe98f756267327f0693b58a516ab6aa851955324bfdcf6f4c3eb1415e7ceae0.scope/container/memory.events
Oct 11 03:41:54 compute-0 podman[116341]: 2025-10-11 03:41:54.982185806 +0000 UTC m=+1.350111572 container died 6fe98f756267327f0693b58a516ab6aa851955324bfdcf6f4c3eb1415e7ceae0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 03:41:54 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 3.16 deep-scrub ok
Oct 11 03:41:55 compute-0 sudo[116512]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-d417c0c3b246686cc1383bd0e64762c2cbacb2fba5d011e02219b25ab8520a9a-merged.mount: Deactivated successfully.
Oct 11 03:41:55 compute-0 podman[116341]: 2025-10-11 03:41:55.05013772 +0000 UTC m=+1.418063496 container remove 6fe98f756267327f0693b58a516ab6aa851955324bfdcf6f4c3eb1415e7ceae0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_keldysh, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 03:41:55 compute-0 systemd[1]: libpod-conmon-6fe98f756267327f0693b58a516ab6aa851955324bfdcf6f4c3eb1415e7ceae0.scope: Deactivated successfully.
Oct 11 03:41:55 compute-0 sudo[116105]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:55 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Oct 11 03:41:55 compute-0 sudo[116559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:41:55 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Oct 11 03:41:55 compute-0 sudo[116559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:41:55 compute-0 sudo[116559]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:55 compute-0 sudo[116584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:41:55 compute-0 sudo[116584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:41:55 compute-0 sudo[116584]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:55 compute-0 sudo[116610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:41:55 compute-0 sudo[116610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:41:55 compute-0 sudo[116610]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:55 compute-0 sudo[116661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 03:41:55 compute-0 sudo[116661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:41:55 compute-0 sudo[116732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usbfaxhanjutjkyvtizffmzsmruknelx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154113.9955215-40-11576572542799/AnsiballZ_dnf.py'
Oct 11 03:41:55 compute-0 sudo[116732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:41:55 compute-0 ceph-mon[74273]: 3.16 deep-scrub starts
Oct 11 03:41:55 compute-0 ceph-mon[74273]: 3.16 deep-scrub ok
Oct 11 03:41:55 compute-0 python3.9[116734]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 03:41:55 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Oct 11 03:41:55 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Oct 11 03:41:55 compute-0 podman[116778]: 2025-10-11 03:41:55.852600714 +0000 UTC m=+0.044226960 container create fcad228ba24fb392e591ac83777b6b06a31f1edc45e09e5db03afae761230ffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_noyce, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:41:55 compute-0 systemd[1]: Started libpod-conmon-fcad228ba24fb392e591ac83777b6b06a31f1edc45e09e5db03afae761230ffb.scope.
Oct 11 03:41:55 compute-0 podman[116778]: 2025-10-11 03:41:55.836985653 +0000 UTC m=+0.028611899 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:41:55 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:41:55 compute-0 podman[116778]: 2025-10-11 03:41:55.959637519 +0000 UTC m=+0.151263835 container init fcad228ba24fb392e591ac83777b6b06a31f1edc45e09e5db03afae761230ffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_noyce, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:41:55 compute-0 podman[116778]: 2025-10-11 03:41:55.971809253 +0000 UTC m=+0.163435509 container start fcad228ba24fb392e591ac83777b6b06a31f1edc45e09e5db03afae761230ffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:41:55 compute-0 podman[116778]: 2025-10-11 03:41:55.976023522 +0000 UTC m=+0.167649778 container attach fcad228ba24fb392e591ac83777b6b06a31f1edc45e09e5db03afae761230ffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct 11 03:41:55 compute-0 busy_noyce[116795]: 167 167
Oct 11 03:41:55 compute-0 systemd[1]: libpod-fcad228ba24fb392e591ac83777b6b06a31f1edc45e09e5db03afae761230ffb.scope: Deactivated successfully.
Oct 11 03:41:55 compute-0 podman[116778]: 2025-10-11 03:41:55.979603583 +0000 UTC m=+0.171229849 container died fcad228ba24fb392e591ac83777b6b06a31f1edc45e09e5db03afae761230ffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_noyce, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:41:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-c218acf0be76675d7f0d8946cd072440824d69b60dd1bd31f031be9f555a7967-merged.mount: Deactivated successfully.
Oct 11 03:41:56 compute-0 podman[116778]: 2025-10-11 03:41:56.028526605 +0000 UTC m=+0.220152871 container remove fcad228ba24fb392e591ac83777b6b06a31f1edc45e09e5db03afae761230ffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 03:41:56 compute-0 systemd[1]: libpod-conmon-fcad228ba24fb392e591ac83777b6b06a31f1edc45e09e5db03afae761230ffb.scope: Deactivated successfully.
Oct 11 03:41:56 compute-0 podman[116819]: 2025-10-11 03:41:56.246327199 +0000 UTC m=+0.062276800 container create a6070c37afc4f02f2051cc6e496af5d206ae0891e2703321adf699fe9e2d7245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:41:56 compute-0 systemd[1]: Started libpod-conmon-a6070c37afc4f02f2051cc6e496af5d206ae0891e2703321adf699fe9e2d7245.scope.
Oct 11 03:41:56 compute-0 podman[116819]: 2025-10-11 03:41:56.218439541 +0000 UTC m=+0.034389212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:41:56 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:41:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26abcec408b7e75a87d16aa12bbfde9d510d0d712608ec77091ab8d0bad3036a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:41:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26abcec408b7e75a87d16aa12bbfde9d510d0d712608ec77091ab8d0bad3036a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:41:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26abcec408b7e75a87d16aa12bbfde9d510d0d712608ec77091ab8d0bad3036a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:41:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26abcec408b7e75a87d16aa12bbfde9d510d0d712608ec77091ab8d0bad3036a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:41:56 compute-0 podman[116819]: 2025-10-11 03:41:56.375291033 +0000 UTC m=+0.191240634 container init a6070c37afc4f02f2051cc6e496af5d206ae0891e2703321adf699fe9e2d7245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shockley, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:41:56 compute-0 podman[116819]: 2025-10-11 03:41:56.383219987 +0000 UTC m=+0.199169588 container start a6070c37afc4f02f2051cc6e496af5d206ae0891e2703321adf699fe9e2d7245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shockley, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:41:56 compute-0 podman[116819]: 2025-10-11 03:41:56.397661985 +0000 UTC m=+0.213611646 container attach a6070c37afc4f02f2051cc6e496af5d206ae0891e2703321adf699fe9e2d7245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shockley, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 03:41:56 compute-0 ceph-mon[74273]: pgmap v276: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:56 compute-0 ceph-mon[74273]: 8.9 scrub starts
Oct 11 03:41:56 compute-0 ceph-mon[74273]: 8.9 scrub ok
Oct 11 03:41:56 compute-0 ceph-mon[74273]: 10.14 scrub starts
Oct 11 03:41:56 compute-0 ceph-mon[74273]: 10.14 scrub ok
Oct 11 03:41:56 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:56 compute-0 sudo[116732]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:56 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Oct 11 03:41:57 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Oct 11 03:41:57 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 3.c deep-scrub starts
Oct 11 03:41:57 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 3.c deep-scrub ok
Oct 11 03:41:57 compute-0 competent_shockley[116835]: {
Oct 11 03:41:57 compute-0 competent_shockley[116835]:     "0": [
Oct 11 03:41:57 compute-0 competent_shockley[116835]:         {
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "devices": [
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "/dev/loop3"
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             ],
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "lv_name": "ceph_lv0",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "lv_size": "21470642176",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "name": "ceph_lv0",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "tags": {
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.cluster_name": "ceph",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.crush_device_class": "",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.encrypted": "0",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.osd_id": "0",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.type": "block",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.vdo": "0"
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             },
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "type": "block",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "vg_name": "ceph_vg0"
Oct 11 03:41:57 compute-0 competent_shockley[116835]:         }
Oct 11 03:41:57 compute-0 competent_shockley[116835]:     ],
Oct 11 03:41:57 compute-0 competent_shockley[116835]:     "1": [
Oct 11 03:41:57 compute-0 competent_shockley[116835]:         {
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "devices": [
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "/dev/loop4"
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             ],
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "lv_name": "ceph_lv1",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "lv_size": "21470642176",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "name": "ceph_lv1",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "tags": {
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.cluster_name": "ceph",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.crush_device_class": "",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.encrypted": "0",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.osd_id": "1",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.type": "block",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.vdo": "0"
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             },
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "type": "block",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "vg_name": "ceph_vg1"
Oct 11 03:41:57 compute-0 competent_shockley[116835]:         }
Oct 11 03:41:57 compute-0 competent_shockley[116835]:     ],
Oct 11 03:41:57 compute-0 competent_shockley[116835]:     "2": [
Oct 11 03:41:57 compute-0 competent_shockley[116835]:         {
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "devices": [
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "/dev/loop5"
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             ],
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "lv_name": "ceph_lv2",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "lv_size": "21470642176",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "name": "ceph_lv2",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "tags": {
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.cluster_name": "ceph",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.crush_device_class": "",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.encrypted": "0",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.osd_id": "2",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.type": "block",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:                 "ceph.vdo": "0"
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             },
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "type": "block",
Oct 11 03:41:57 compute-0 competent_shockley[116835]:             "vg_name": "ceph_vg2"
Oct 11 03:41:57 compute-0 competent_shockley[116835]:         }
Oct 11 03:41:57 compute-0 competent_shockley[116835]:     ]
Oct 11 03:41:57 compute-0 competent_shockley[116835]: }
Oct 11 03:41:57 compute-0 systemd[1]: libpod-a6070c37afc4f02f2051cc6e496af5d206ae0891e2703321adf699fe9e2d7245.scope: Deactivated successfully.
Oct 11 03:41:57 compute-0 podman[116819]: 2025-10-11 03:41:57.305944679 +0000 UTC m=+1.121894260 container died a6070c37afc4f02f2051cc6e496af5d206ae0891e2703321adf699fe9e2d7245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shockley, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 03:41:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-26abcec408b7e75a87d16aa12bbfde9d510d0d712608ec77091ab8d0bad3036a-merged.mount: Deactivated successfully.
Oct 11 03:41:57 compute-0 podman[116819]: 2025-10-11 03:41:57.375056611 +0000 UTC m=+1.191006212 container remove a6070c37afc4f02f2051cc6e496af5d206ae0891e2703321adf699fe9e2d7245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shockley, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 03:41:57 compute-0 systemd[1]: libpod-conmon-a6070c37afc4f02f2051cc6e496af5d206ae0891e2703321adf699fe9e2d7245.scope: Deactivated successfully.
Oct 11 03:41:57 compute-0 sudo[116661]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:57 compute-0 sudo[117021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrekenthoibirmcwjbwanqfzypblnauj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154117.1490493-52-153196266794694/AnsiballZ_setup.py'
Oct 11 03:41:57 compute-0 sudo[117021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:41:57 compute-0 sudo[116998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:41:57 compute-0 sudo[116998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:41:57 compute-0 sudo[116998]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:57 compute-0 sudo[117036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:41:57 compute-0 sudo[117036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:41:57 compute-0 sudo[117036]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:57 compute-0 ceph-mon[74273]: 11.1f scrub starts
Oct 11 03:41:57 compute-0 ceph-mon[74273]: 11.1f scrub ok
Oct 11 03:41:57 compute-0 sudo[117061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:41:57 compute-0 sudo[117061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:41:57 compute-0 sudo[117061]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:57 compute-0 sudo[117086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 03:41:57 compute-0 sudo[117086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:41:57 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 4.f scrub starts
Oct 11 03:41:57 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 4.f scrub ok
Oct 11 03:41:57 compute-0 python3.9[117034]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 03:41:57 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Oct 11 03:41:58 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Oct 11 03:41:58 compute-0 podman[117172]: 2025-10-11 03:41:58.116115019 +0000 UTC m=+0.061547020 container create 54554b2aec791c231416041e5e4da648757df62ddc7d36088f0392ce4b70ff39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 03:41:58 compute-0 systemd[1]: Started libpod-conmon-54554b2aec791c231416041e5e4da648757df62ddc7d36088f0392ce4b70ff39.scope.
Oct 11 03:41:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:41:58 compute-0 podman[117172]: 2025-10-11 03:41:58.185679944 +0000 UTC m=+0.131111965 container init 54554b2aec791c231416041e5e4da648757df62ddc7d36088f0392ce4b70ff39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:41:58 compute-0 podman[117172]: 2025-10-11 03:41:58.091760551 +0000 UTC m=+0.037192582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:41:58 compute-0 sudo[117021]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:58 compute-0 podman[117172]: 2025-10-11 03:41:58.193840865 +0000 UTC m=+0.139272856 container start 54554b2aec791c231416041e5e4da648757df62ddc7d36088f0392ce4b70ff39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 03:41:58 compute-0 podman[117172]: 2025-10-11 03:41:58.197296153 +0000 UTC m=+0.142728164 container attach 54554b2aec791c231416041e5e4da648757df62ddc7d36088f0392ce4b70ff39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heisenberg, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:41:58 compute-0 dazzling_heisenberg[117205]: 167 167
Oct 11 03:41:58 compute-0 systemd[1]: libpod-54554b2aec791c231416041e5e4da648757df62ddc7d36088f0392ce4b70ff39.scope: Deactivated successfully.
Oct 11 03:41:58 compute-0 podman[117172]: 2025-10-11 03:41:58.199876985 +0000 UTC m=+0.145308977 container died 54554b2aec791c231416041e5e4da648757df62ddc7d36088f0392ce4b70ff39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:41:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4b62983a54280e22dacebf7657d60078f971fb6cb51eb92b23faf2b4a6650c6-merged.mount: Deactivated successfully.
Oct 11 03:41:58 compute-0 podman[117172]: 2025-10-11 03:41:58.332374069 +0000 UTC m=+0.277806060 container remove 54554b2aec791c231416041e5e4da648757df62ddc7d36088f0392ce4b70ff39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 03:41:58 compute-0 systemd[1]: libpod-conmon-54554b2aec791c231416041e5e4da648757df62ddc7d36088f0392ce4b70ff39.scope: Deactivated successfully.
Oct 11 03:41:58 compute-0 podman[117310]: 2025-10-11 03:41:58.57202691 +0000 UTC m=+0.062380003 container create 0389c8ca25142d1a6e5ebd491480bc4eac67eb120dbeec80e21ea365bf5c17f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct 11 03:41:58 compute-0 systemd[1]: Started libpod-conmon-0389c8ca25142d1a6e5ebd491480bc4eac67eb120dbeec80e21ea365bf5c17f8.scope.
Oct 11 03:41:58 compute-0 ceph-mon[74273]: pgmap v277: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:58 compute-0 ceph-mon[74273]: 3.c deep-scrub starts
Oct 11 03:41:58 compute-0 ceph-mon[74273]: 3.c deep-scrub ok
Oct 11 03:41:58 compute-0 ceph-mon[74273]: 4.f scrub starts
Oct 11 03:41:58 compute-0 ceph-mon[74273]: 4.f scrub ok
Oct 11 03:41:58 compute-0 ceph-mon[74273]: 11.1e scrub starts
Oct 11 03:41:58 compute-0 ceph-mon[74273]: 11.1e scrub ok
Oct 11 03:41:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:41:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8717346afd11f19e33f49d3c28813577f9dedcbde4af24b490e9deebc525bd57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:41:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8717346afd11f19e33f49d3c28813577f9dedcbde4af24b490e9deebc525bd57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:41:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8717346afd11f19e33f49d3c28813577f9dedcbde4af24b490e9deebc525bd57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:41:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8717346afd11f19e33f49d3c28813577f9dedcbde4af24b490e9deebc525bd57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:41:58 compute-0 podman[117310]: 2025-10-11 03:41:58.549119133 +0000 UTC m=+0.039472246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:41:58 compute-0 podman[117310]: 2025-10-11 03:41:58.679373334 +0000 UTC m=+0.169726517 container init 0389c8ca25142d1a6e5ebd491480bc4eac67eb120dbeec80e21ea365bf5c17f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ptolemy, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:41:58 compute-0 podman[117310]: 2025-10-11 03:41:58.691305171 +0000 UTC m=+0.181658264 container start 0389c8ca25142d1a6e5ebd491480bc4eac67eb120dbeec80e21ea365bf5c17f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ptolemy, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 03:41:58 compute-0 podman[117310]: 2025-10-11 03:41:58.69554039 +0000 UTC m=+0.185893533 container attach 0389c8ca25142d1a6e5ebd491480bc4eac67eb120dbeec80e21ea365bf5c17f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ptolemy, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:41:58 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:41:58 compute-0 sudo[117405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-satglqihiqwrrqsjncwfarhowzbwxymc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154118.4005919-63-42554214591771/AnsiballZ_file.py'
Oct 11 03:41:58 compute-0 sudo[117405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:41:59 compute-0 python3.9[117407]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:41:59 compute-0 sudo[117405]: pam_unix(sudo:session): session closed for user root
Oct 11 03:41:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:41:59 compute-0 gifted_ptolemy[117327]: {
Oct 11 03:41:59 compute-0 gifted_ptolemy[117327]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 03:41:59 compute-0 gifted_ptolemy[117327]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:41:59 compute-0 gifted_ptolemy[117327]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 03:41:59 compute-0 gifted_ptolemy[117327]:         "osd_id": 1,
Oct 11 03:41:59 compute-0 gifted_ptolemy[117327]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:41:59 compute-0 gifted_ptolemy[117327]:         "type": "bluestore"
Oct 11 03:41:59 compute-0 gifted_ptolemy[117327]:     },
Oct 11 03:41:59 compute-0 gifted_ptolemy[117327]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 03:41:59 compute-0 gifted_ptolemy[117327]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:41:59 compute-0 gifted_ptolemy[117327]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 03:41:59 compute-0 gifted_ptolemy[117327]:         "osd_id": 2,
Oct 11 03:41:59 compute-0 gifted_ptolemy[117327]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:41:59 compute-0 gifted_ptolemy[117327]:         "type": "bluestore"
Oct 11 03:41:59 compute-0 gifted_ptolemy[117327]:     },
Oct 11 03:41:59 compute-0 gifted_ptolemy[117327]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 03:41:59 compute-0 gifted_ptolemy[117327]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:41:59 compute-0 gifted_ptolemy[117327]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 03:41:59 compute-0 gifted_ptolemy[117327]:         "osd_id": 0,
Oct 11 03:41:59 compute-0 gifted_ptolemy[117327]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:41:59 compute-0 gifted_ptolemy[117327]:         "type": "bluestore"
Oct 11 03:41:59 compute-0 gifted_ptolemy[117327]:     }
Oct 11 03:41:59 compute-0 gifted_ptolemy[117327]: }
Oct 11 03:41:59 compute-0 systemd[1]: libpod-0389c8ca25142d1a6e5ebd491480bc4eac67eb120dbeec80e21ea365bf5c17f8.scope: Deactivated successfully.
Oct 11 03:41:59 compute-0 systemd[1]: libpod-0389c8ca25142d1a6e5ebd491480bc4eac67eb120dbeec80e21ea365bf5c17f8.scope: Consumed 1.063s CPU time.
Oct 11 03:41:59 compute-0 podman[117310]: 2025-10-11 03:41:59.750376735 +0000 UTC m=+1.240729858 container died 0389c8ca25142d1a6e5ebd491480bc4eac67eb120dbeec80e21ea365bf5c17f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:41:59 compute-0 sudo[117585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxqnyvgyqclpfygzklroligvgmnhtkww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154119.279495-71-175393671401315/AnsiballZ_command.py'
Oct 11 03:41:59 compute-0 sudo[117585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:41:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-8717346afd11f19e33f49d3c28813577f9dedcbde4af24b490e9deebc525bd57-merged.mount: Deactivated successfully.
Oct 11 03:41:59 compute-0 podman[117310]: 2025-10-11 03:41:59.947726071 +0000 UTC m=+1.438079154 container remove 0389c8ca25142d1a6e5ebd491480bc4eac67eb120dbeec80e21ea365bf5c17f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:41:59 compute-0 systemd[1]: libpod-conmon-0389c8ca25142d1a6e5ebd491480bc4eac67eb120dbeec80e21ea365bf5c17f8.scope: Deactivated successfully.
Oct 11 03:41:59 compute-0 python3.9[117589]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:42:00 compute-0 sudo[117086]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:00 compute-0 rsyslogd[1005]: imjournal from <np0005480847:sudo>: begin to drop messages due to rate-limiting
Oct 11 03:42:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:42:00 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:42:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:42:00 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:42:00 compute-0 sudo[117585]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:00 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 20d72ef8-3aa3-4c3b-b9b2-b1fff67ac2b9 does not exist
Oct 11 03:42:00 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev fbd08085-dcdb-4799-ae03-b9fd7cf1a03a does not exist
Oct 11 03:42:00 compute-0 sudo[117614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:42:00 compute-0 sudo[117614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:42:00 compute-0 sudo[117614]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:00 compute-0 sudo[117662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 03:42:00 compute-0 sudo[117662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:42:00 compute-0 sudo[117662]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:00 compute-0 ceph-mon[74273]: pgmap v278: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:42:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:42:00 compute-0 sudo[117813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thehapyzioikwzmmsbahxzbjiqsgpgcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154120.2361524-79-90224152236576/AnsiballZ_stat.py'
Oct 11 03:42:00 compute-0 sudo[117813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:00 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Oct 11 03:42:00 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Oct 11 03:42:00 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:00 compute-0 python3.9[117815]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:42:00 compute-0 sudo[117813]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:01 compute-0 sudo[117891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yixaicclusesqcoueybfapmserzzjjoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154120.2361524-79-90224152236576/AnsiballZ_file.py'
Oct 11 03:42:01 compute-0 sudo[117891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:01 compute-0 python3.9[117893]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:42:01 compute-0 sudo[117891]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:01 compute-0 ceph-mon[74273]: 4.12 scrub starts
Oct 11 03:42:01 compute-0 ceph-mon[74273]: 4.12 scrub ok
Oct 11 03:42:01 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 4.10 deep-scrub starts
Oct 11 03:42:01 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 4.10 deep-scrub ok
Oct 11 03:42:02 compute-0 sudo[118043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtpsfiahtotbnqyxcyymviphuspkzdlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154121.6725993-91-174446276639599/AnsiballZ_stat.py'
Oct 11 03:42:02 compute-0 sudo[118043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:02 compute-0 python3.9[118045]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:42:02 compute-0 sudo[118043]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:02 compute-0 sudo[118121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbkituhvqddebcsvhjiipbcxyivxjwyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154121.6725993-91-174446276639599/AnsiballZ_file.py'
Oct 11 03:42:02 compute-0 sudo[118121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:02 compute-0 ceph-mon[74273]: pgmap v279: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:02 compute-0 ceph-mon[74273]: 4.10 deep-scrub starts
Oct 11 03:42:02 compute-0 ceph-mon[74273]: 4.10 deep-scrub ok
Oct 11 03:42:02 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:02 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Oct 11 03:42:02 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Oct 11 03:42:02 compute-0 python3.9[118123]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:42:02 compute-0 sudo[118121]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:03 compute-0 sudo[118273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkimibiyjhtmvjbbqcgsfhlvmibfgeka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154123.1003864-104-210754671142189/AnsiballZ_ini_file.py'
Oct 11 03:42:03 compute-0 sudo[118273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:03 compute-0 ceph-mon[74273]: 4.14 scrub starts
Oct 11 03:42:03 compute-0 ceph-mon[74273]: 4.14 scrub ok
Oct 11 03:42:03 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 4.d scrub starts
Oct 11 03:42:03 compute-0 python3.9[118275]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:42:03 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 4.d scrub ok
Oct 11 03:42:03 compute-0 sudo[118273]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:04 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Oct 11 03:42:04 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Oct 11 03:42:04 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Oct 11 03:42:04 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Oct 11 03:42:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:42:04 compute-0 sudo[118425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoqzfxhbifxlpgezzhijodnbfpckgxxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154124.0742018-104-70132921182420/AnsiballZ_ini_file.py'
Oct 11 03:42:04 compute-0 sudo[118425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:04 compute-0 python3.9[118427]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:42:04 compute-0 sudo[118425]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:04 compute-0 ceph-mon[74273]: pgmap v280: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:04 compute-0 ceph-mon[74273]: 4.d scrub starts
Oct 11 03:42:04 compute-0 ceph-mon[74273]: 4.d scrub ok
Oct 11 03:42:04 compute-0 ceph-mon[74273]: 8.1c scrub starts
Oct 11 03:42:04 compute-0 ceph-mon[74273]: 8.1c scrub ok
Oct 11 03:42:04 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:05 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Oct 11 03:42:05 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 3.a scrub starts
Oct 11 03:42:05 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Oct 11 03:42:05 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 3.a scrub ok
Oct 11 03:42:05 compute-0 sudo[118577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tikfdnfefrrltjjdambrrutqdcznxuri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154124.7873232-104-46955842330824/AnsiballZ_ini_file.py'
Oct 11 03:42:05 compute-0 sudo[118577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:05 compute-0 python3.9[118579]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:42:05 compute-0 sudo[118577]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:05 compute-0 ceph-mon[74273]: 11.4 scrub starts
Oct 11 03:42:05 compute-0 ceph-mon[74273]: 11.4 scrub ok
Oct 11 03:42:05 compute-0 ceph-mon[74273]: pgmap v281: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:05 compute-0 ceph-mon[74273]: 7.8 scrub starts
Oct 11 03:42:05 compute-0 ceph-mon[74273]: 7.8 scrub ok
Oct 11 03:42:05 compute-0 sudo[118729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqphytmnwylsbucsrxrvmqbbxzugxjoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154125.5338714-104-229343901484725/AnsiballZ_ini_file.py'
Oct 11 03:42:05 compute-0 sudo[118729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:06 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Oct 11 03:42:06 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Oct 11 03:42:06 compute-0 python3.9[118731]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:42:06 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Oct 11 03:42:06 compute-0 sudo[118729]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:06 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Oct 11 03:42:06 compute-0 sudo[118881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjqdjtakjmoyiyjrvkzfruqepzrpngid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154126.3784506-135-55325661763128/AnsiballZ_dnf.py'
Oct 11 03:42:06 compute-0 sudo[118881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:06 compute-0 ceph-mon[74273]: 3.a scrub starts
Oct 11 03:42:06 compute-0 ceph-mon[74273]: 3.a scrub ok
Oct 11 03:42:06 compute-0 ceph-mon[74273]: 4.13 scrub starts
Oct 11 03:42:06 compute-0 ceph-mon[74273]: 4.13 scrub ok
Oct 11 03:42:06 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:06 compute-0 python3.9[118883]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 03:42:07 compute-0 ceph-mon[74273]: 8.6 scrub starts
Oct 11 03:42:07 compute-0 ceph-mon[74273]: 8.6 scrub ok
Oct 11 03:42:07 compute-0 ceph-mon[74273]: pgmap v282: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:08 compute-0 sudo[118881]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:08 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:08 compute-0 sudo[119034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjlvtorrciouhcnsycqqvquqegbhnbzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154128.4838393-146-14130862190028/AnsiballZ_setup.py'
Oct 11 03:42:08 compute-0 sudo[119034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:08 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Oct 11 03:42:08 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Oct 11 03:42:09 compute-0 python3.9[119036]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:42:09 compute-0 sudo[119034]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:42:09 compute-0 sudo[119188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygbvtcoamsgfedqbwxgtpghhpwwzkhqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154129.381205-154-267447557146607/AnsiballZ_stat.py'
Oct 11 03:42:09 compute-0 sudo[119188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:09 compute-0 ceph-mon[74273]: pgmap v283: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:09 compute-0 ceph-mon[74273]: 6.1 scrub starts
Oct 11 03:42:09 compute-0 ceph-mon[74273]: 6.1 scrub ok
Oct 11 03:42:09 compute-0 python3.9[119190]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:42:09 compute-0 sudo[119188]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:10 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Oct 11 03:42:10 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Oct 11 03:42:10 compute-0 sudo[119340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jupalmjjwjcbvtcqefgdjjyadvsjxawz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154130.1981223-163-271897780058285/AnsiballZ_stat.py'
Oct 11 03:42:10 compute-0 sudo[119340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:10 compute-0 python3.9[119342]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:42:10 compute-0 sudo[119340]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:10 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:11 compute-0 sudo[119492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vltwfasmxxxguorpztrifxeeqaaxkkmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154131.0291355-173-40144503264382/AnsiballZ_service_facts.py'
Oct 11 03:42:11 compute-0 sudo[119492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:11 compute-0 python3.9[119494]: ansible-service_facts Invoked
Oct 11 03:42:11 compute-0 network[119511]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 03:42:11 compute-0 network[119512]: 'network-scripts' will be removed from distribution in near future.
Oct 11 03:42:11 compute-0 network[119513]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 03:42:11 compute-0 ceph-mon[74273]: 11.6 scrub starts
Oct 11 03:42:11 compute-0 ceph-mon[74273]: 11.6 scrub ok
Oct 11 03:42:11 compute-0 ceph-mon[74273]: pgmap v284: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:12 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:13 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Oct 11 03:42:13 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Oct 11 03:42:13 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Oct 11 03:42:13 compute-0 ceph-mon[74273]: pgmap v285: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:13 compute-0 ceph-mon[74273]: 4.18 scrub starts
Oct 11 03:42:13 compute-0 ceph-mon[74273]: 4.18 scrub ok
Oct 11 03:42:13 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Oct 11 03:42:14 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 3.f scrub starts
Oct 11 03:42:14 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 3.f scrub ok
Oct 11 03:42:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:42:14 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:14 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Oct 11 03:42:14 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Oct 11 03:42:14 compute-0 ceph-mon[74273]: 4.9 scrub starts
Oct 11 03:42:14 compute-0 ceph-mon[74273]: 4.9 scrub ok
Oct 11 03:42:15 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Oct 11 03:42:15 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Oct 11 03:42:15 compute-0 ceph-mon[74273]: 3.f scrub starts
Oct 11 03:42:15 compute-0 ceph-mon[74273]: 3.f scrub ok
Oct 11 03:42:15 compute-0 ceph-mon[74273]: pgmap v286: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:15 compute-0 ceph-mon[74273]: 4.7 scrub starts
Oct 11 03:42:15 compute-0 ceph-mon[74273]: 4.7 scrub ok
Oct 11 03:42:15 compute-0 ceph-mon[74273]: 4.1a scrub starts
Oct 11 03:42:15 compute-0 ceph-mon[74273]: 4.1a scrub ok
Oct 11 03:42:16 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 4.a scrub starts
Oct 11 03:42:16 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 4.a scrub ok
Oct 11 03:42:16 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Oct 11 03:42:16 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Oct 11 03:42:16 compute-0 sudo[119492]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:16 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:16 compute-0 ceph-mon[74273]: 4.a scrub starts
Oct 11 03:42:16 compute-0 ceph-mon[74273]: 4.a scrub ok
Oct 11 03:42:17 compute-0 sudo[119799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzoqexuiwfszdpmzecwphwxujtorgsfb ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1760154137.145321-186-240538587158545/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1760154137.145321-186-240538587158545/args'
Oct 11 03:42:17 compute-0 sudo[119799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:17 compute-0 sudo[119799]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:17 compute-0 ceph-mon[74273]: 7.9 scrub starts
Oct 11 03:42:17 compute-0 ceph-mon[74273]: 7.9 scrub ok
Oct 11 03:42:17 compute-0 ceph-mon[74273]: pgmap v287: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:18 compute-0 sudo[119966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stqxftwsavgbsdqiodsbdtagagxapbrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154137.7988527-197-81630032830824/AnsiballZ_dnf.py'
Oct 11 03:42:18 compute-0 sudo[119966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:18 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Oct 11 03:42:18 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Oct 11 03:42:18 compute-0 python3.9[119968]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 03:42:18 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:42:19 compute-0 sudo[119966]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:19 compute-0 ceph-mon[74273]: 8.1a scrub starts
Oct 11 03:42:19 compute-0 ceph-mon[74273]: 8.1a scrub ok
Oct 11 03:42:19 compute-0 ceph-mon[74273]: pgmap v288: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:20 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 3.12 deep-scrub starts
Oct 11 03:42:20 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 3.12 deep-scrub ok
Oct 11 03:42:20 compute-0 sudo[120119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgxsorwqbjukhvwasjxkzmdhytclbylq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154139.911835-210-89799005745224/AnsiballZ_package_facts.py'
Oct 11 03:42:20 compute-0 sudo[120119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_03:42:20
Oct 11 03:42:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 03:42:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 03:42:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'vms', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'images', '.mgr']
Oct 11 03:42:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 03:42:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:42:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:42:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:42:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:42:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:42:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:42:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 03:42:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 03:42:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:42:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:42:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:42:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:42:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:42:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:42:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:42:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:42:20 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:20 compute-0 python3.9[120121]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 11 03:42:21 compute-0 ceph-mon[74273]: 3.12 deep-scrub starts
Oct 11 03:42:21 compute-0 ceph-mon[74273]: 3.12 deep-scrub ok
Oct 11 03:42:21 compute-0 sudo[120119]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:21 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Oct 11 03:42:21 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Oct 11 03:42:21 compute-0 sudo[120271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiiwxpodviqkerbcsgxptanzrxxlwvhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154141.5254045-220-106656152078046/AnsiballZ_stat.py'
Oct 11 03:42:21 compute-0 sudo[120271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:22 compute-0 ceph-mon[74273]: pgmap v289: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:22 compute-0 ceph-mon[74273]: 7.6 scrub starts
Oct 11 03:42:22 compute-0 ceph-mon[74273]: 7.6 scrub ok
Oct 11 03:42:22 compute-0 python3.9[120273]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:42:22 compute-0 sudo[120271]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:22 compute-0 sudo[120349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opfbjevsaczjmmsammmfcnotfuvncmki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154141.5254045-220-106656152078046/AnsiballZ_file.py'
Oct 11 03:42:22 compute-0 sudo[120349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:22 compute-0 python3.9[120351]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:42:22 compute-0 sudo[120349]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:22 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Oct 11 03:42:22 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Oct 11 03:42:22 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:23 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 4.e scrub starts
Oct 11 03:42:23 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 4.e scrub ok
Oct 11 03:42:23 compute-0 sudo[120501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppgymvawnjcmhtqanerwbtrvweklbqqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154142.922241-232-232842437095176/AnsiballZ_stat.py'
Oct 11 03:42:23 compute-0 sudo[120501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:23 compute-0 ceph-mon[74273]: 4.8 scrub starts
Oct 11 03:42:23 compute-0 ceph-mon[74273]: 4.8 scrub ok
Oct 11 03:42:23 compute-0 python3.9[120503]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:42:23 compute-0 sudo[120501]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:23 compute-0 sudo[120579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeelfhnznxombsoyrvhkrbwyielstruo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154142.922241-232-232842437095176/AnsiballZ_file.py'
Oct 11 03:42:23 compute-0 sudo[120579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:23 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Oct 11 03:42:23 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Oct 11 03:42:24 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Oct 11 03:42:24 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Oct 11 03:42:24 compute-0 python3.9[120581]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:42:24 compute-0 sudo[120579]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:42:24 compute-0 ceph-mon[74273]: pgmap v290: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:24 compute-0 ceph-mon[74273]: 4.e scrub starts
Oct 11 03:42:24 compute-0 ceph-mon[74273]: 4.e scrub ok
Oct 11 03:42:24 compute-0 ceph-mon[74273]: 6.6 scrub starts
Oct 11 03:42:24 compute-0 ceph-mon[74273]: 6.6 scrub ok
Oct 11 03:42:24 compute-0 ceph-mon[74273]: 4.1c scrub starts
Oct 11 03:42:24 compute-0 ceph-mon[74273]: 4.1c scrub ok
Oct 11 03:42:24 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:24 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 6.e deep-scrub starts
Oct 11 03:42:24 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 6.e deep-scrub ok
Oct 11 03:42:25 compute-0 sudo[120731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrbncmblkavbcwrjfpvdqiltdvyzlqow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154144.641948-250-3428868995707/AnsiballZ_lineinfile.py'
Oct 11 03:42:25 compute-0 sudo[120731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:25 compute-0 python3.9[120733]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:42:25 compute-0 sudo[120731]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:25 compute-0 ceph-mon[74273]: 6.e deep-scrub starts
Oct 11 03:42:25 compute-0 ceph-mon[74273]: 6.e deep-scrub ok
Oct 11 03:42:26 compute-0 sudo[120883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xexzyaxtzwfmforbxgcfgvkntranamau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154145.7650924-265-31676004061938/AnsiballZ_setup.py'
Oct 11 03:42:26 compute-0 sudo[120883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:26 compute-0 python3.9[120885]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 03:42:26 compute-0 ceph-mon[74273]: pgmap v291: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:26 compute-0 sudo[120883]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:26 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:27 compute-0 sudo[120967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pevgmkpngycdvhmrjxrhgyixgqnmrznr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154145.7650924-265-31676004061938/AnsiballZ_systemd.py'
Oct 11 03:42:27 compute-0 sudo[120967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:27 compute-0 python3.9[120969]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:42:27 compute-0 sudo[120967]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:28 compute-0 sshd-session[115720]: Connection closed by 192.168.122.30 port 46812
Oct 11 03:42:28 compute-0 sshd-session[115717]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:42:28 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Oct 11 03:42:28 compute-0 systemd[1]: session-38.scope: Consumed 27.000s CPU time.
Oct 11 03:42:28 compute-0 systemd-logind[820]: Session 38 logged out. Waiting for processes to exit.
Oct 11 03:42:28 compute-0 systemd-logind[820]: Removed session 38.
Oct 11 03:42:28 compute-0 ceph-mon[74273]: pgmap v292: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:42:28.491261) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154148491431, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7173, "num_deletes": 251, "total_data_size": 9202096, "memory_usage": 9486112, "flush_reason": "Manual Compaction"}
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154148564899, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7387084, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 141, "largest_seqno": 7311, "table_properties": {"data_size": 7360619, "index_size": 17262, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8133, "raw_key_size": 75132, "raw_average_key_size": 23, "raw_value_size": 7298287, "raw_average_value_size": 2257, "num_data_blocks": 758, "num_entries": 3233, "num_filter_entries": 3233, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153734, "oldest_key_time": 1760153734, "file_creation_time": 1760154148, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 73684 microseconds, and 25233 cpu microseconds.
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:42:28.564959) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7387084 bytes OK
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:42:28.564981) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:42:28.566912) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:42:28.566928) EVENT_LOG_v1 {"time_micros": 1760154148566923, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:42:28.566950) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9170942, prev total WAL file size 9170942, number of live WAL files 2.
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:42:28.569549) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7213KB) 13(53KB) 8(1944B)]
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154148569644, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7444289, "oldest_snapshot_seqno": -1}
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3049 keys, 7399618 bytes, temperature: kUnknown
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154148619300, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7399618, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7373562, "index_size": 17306, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7685, "raw_key_size": 73204, "raw_average_key_size": 24, "raw_value_size": 7312837, "raw_average_value_size": 2398, "num_data_blocks": 762, "num_entries": 3049, "num_filter_entries": 3049, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153731, "oldest_key_time": 0, "file_creation_time": 1760154148, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:42:28.619527) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7399618 bytes
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:42:28.620746) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.7 rd, 148.8 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.1, 0.0 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3339, records dropped: 290 output_compression: NoCompression
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:42:28.620767) EVENT_LOG_v1 {"time_micros": 1760154148620757, "job": 4, "event": "compaction_finished", "compaction_time_micros": 49740, "compaction_time_cpu_micros": 18610, "output_level": 6, "num_output_files": 1, "total_output_size": 7399618, "num_input_records": 3339, "num_output_records": 3049, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154148622106, "job": 4, "event": "table_file_deletion", "file_number": 19}
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154148622185, "job": 4, "event": "table_file_deletion", "file_number": 13}
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154148622218, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct 11 03:42:28 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:42:28.569455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:42:28 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Oct 11 03:42:28 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:28 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Oct 11 03:42:29 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Oct 11 03:42:29 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Oct 11 03:42:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:42:29 compute-0 ceph-mon[74273]: 6.2 scrub starts
Oct 11 03:42:29 compute-0 ceph-mon[74273]: 6.2 scrub ok
Oct 11 03:42:29 compute-0 ceph-mon[74273]: 4.1b scrub starts
Oct 11 03:42:29 compute-0 ceph-mon[74273]: 4.1b scrub ok
Oct 11 03:42:30 compute-0 ceph-mon[74273]: pgmap v293: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:30 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 03:42:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:42:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 03:42:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:42:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:42:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:42:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:42:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:42:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:42:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:42:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:42:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:42:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 03:42:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:42:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:42:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:42:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 03:42:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:42:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 03:42:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:42:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:42:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:42:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 03:42:31 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Oct 11 03:42:31 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Oct 11 03:42:32 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Oct 11 03:42:32 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Oct 11 03:42:32 compute-0 ceph-mon[74273]: pgmap v294: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:32 compute-0 ceph-mon[74273]: 3.15 scrub starts
Oct 11 03:42:32 compute-0 ceph-mon[74273]: 3.15 scrub ok
Oct 11 03:42:32 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:33 compute-0 sshd-session[120997]: Accepted publickey for zuul from 192.168.122.30 port 48718 ssh2: ECDSA SHA256:qo9+RMabHfLAOt2q/80W97JXaZUdeUCREBuTRaqgxBY
Oct 11 03:42:33 compute-0 systemd-logind[820]: New session 39 of user zuul.
Oct 11 03:42:33 compute-0 systemd[1]: Started Session 39 of User zuul.
Oct 11 03:42:33 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Oct 11 03:42:33 compute-0 sshd-session[120997]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:42:33 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Oct 11 03:42:33 compute-0 ceph-mon[74273]: 8.1f scrub starts
Oct 11 03:42:33 compute-0 ceph-mon[74273]: 8.1f scrub ok
Oct 11 03:42:33 compute-0 ceph-mon[74273]: 4.11 scrub starts
Oct 11 03:42:33 compute-0 ceph-mon[74273]: 4.11 scrub ok
Oct 11 03:42:33 compute-0 sudo[121150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgooogiljqqmidcerqefleehvdifbqzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154153.2766926-22-161577520518096/AnsiballZ_file.py'
Oct 11 03:42:33 compute-0 sudo[121150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:34 compute-0 python3.9[121152]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:42:34 compute-0 sudo[121150]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:42:34 compute-0 ceph-mon[74273]: pgmap v295: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:34 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:34 compute-0 sudo[121302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrzvzfglqempzmrloynmuhnhuyifitau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154154.4446929-34-142948849779418/AnsiballZ_stat.py'
Oct 11 03:42:34 compute-0 sudo[121302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:35 compute-0 python3.9[121304]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:42:35 compute-0 sudo[121302]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:35 compute-0 sudo[121380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxzeixalbvntqazuggkanejpilaogbeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154154.4446929-34-142948849779418/AnsiballZ_file.py'
Oct 11 03:42:35 compute-0 sudo[121380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:35 compute-0 python3.9[121382]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:42:35 compute-0 sudo[121380]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:35 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Oct 11 03:42:35 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Oct 11 03:42:36 compute-0 sshd-session[121000]: Connection closed by 192.168.122.30 port 48718
Oct 11 03:42:36 compute-0 sshd-session[120997]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:42:36 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Oct 11 03:42:36 compute-0 systemd[1]: session-39.scope: Consumed 1.933s CPU time.
Oct 11 03:42:36 compute-0 systemd-logind[820]: Session 39 logged out. Waiting for processes to exit.
Oct 11 03:42:36 compute-0 systemd-logind[820]: Removed session 39.
Oct 11 03:42:36 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 9.e scrub starts
Oct 11 03:42:36 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 9.e scrub ok
Oct 11 03:42:36 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Oct 11 03:42:36 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Oct 11 03:42:36 compute-0 ceph-mon[74273]: pgmap v296: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:36 compute-0 ceph-mon[74273]: 6.4 scrub starts
Oct 11 03:42:36 compute-0 ceph-mon[74273]: 6.4 scrub ok
Oct 11 03:42:36 compute-0 ceph-mon[74273]: 9.e scrub starts
Oct 11 03:42:36 compute-0 ceph-mon[74273]: 9.e scrub ok
Oct 11 03:42:36 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 6.c scrub starts
Oct 11 03:42:36 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 6.c scrub ok
Oct 11 03:42:36 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:37 compute-0 ceph-mon[74273]: 8.18 scrub starts
Oct 11 03:42:37 compute-0 ceph-mon[74273]: 8.18 scrub ok
Oct 11 03:42:37 compute-0 ceph-mon[74273]: 6.c scrub starts
Oct 11 03:42:37 compute-0 ceph-mon[74273]: 6.c scrub ok
Oct 11 03:42:38 compute-0 ceph-mon[74273]: pgmap v297: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:38 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 6.b deep-scrub starts
Oct 11 03:42:38 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 6.b deep-scrub ok
Oct 11 03:42:38 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:42:39 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 8.1d deep-scrub starts
Oct 11 03:42:39 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 8.1d deep-scrub ok
Oct 11 03:42:39 compute-0 ceph-mon[74273]: 6.b deep-scrub starts
Oct 11 03:42:39 compute-0 ceph-mon[74273]: 6.b deep-scrub ok
Oct 11 03:42:40 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 7.13 deep-scrub starts
Oct 11 03:42:40 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 7.13 deep-scrub ok
Oct 11 03:42:40 compute-0 ceph-mon[74273]: pgmap v298: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:40 compute-0 ceph-mon[74273]: 8.1d deep-scrub starts
Oct 11 03:42:40 compute-0 ceph-mon[74273]: 8.1d deep-scrub ok
Oct 11 03:42:40 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 6.d scrub starts
Oct 11 03:42:40 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 6.d scrub ok
Oct 11 03:42:40 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:41 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Oct 11 03:42:41 compute-0 sshd-session[121407]: Accepted publickey for zuul from 192.168.122.30 port 41496 ssh2: ECDSA SHA256:qo9+RMabHfLAOt2q/80W97JXaZUdeUCREBuTRaqgxBY
Oct 11 03:42:41 compute-0 systemd-logind[820]: New session 40 of user zuul.
Oct 11 03:42:41 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Oct 11 03:42:41 compute-0 systemd[1]: Started Session 40 of User zuul.
Oct 11 03:42:41 compute-0 sshd-session[121407]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:42:41 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Oct 11 03:42:41 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Oct 11 03:42:41 compute-0 ceph-mon[74273]: 7.13 deep-scrub starts
Oct 11 03:42:41 compute-0 ceph-mon[74273]: 7.13 deep-scrub ok
Oct 11 03:42:41 compute-0 ceph-mon[74273]: 6.d scrub starts
Oct 11 03:42:41 compute-0 ceph-mon[74273]: 6.d scrub ok
Oct 11 03:42:41 compute-0 ceph-mon[74273]: 9.6 scrub starts
Oct 11 03:42:41 compute-0 ceph-mon[74273]: 9.6 scrub ok
Oct 11 03:42:42 compute-0 python3.9[121560]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:42:42 compute-0 ceph-mon[74273]: pgmap v299: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:42 compute-0 ceph-mon[74273]: 11.19 scrub starts
Oct 11 03:42:42 compute-0 ceph-mon[74273]: 11.19 scrub ok
Oct 11 03:42:42 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:43 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Oct 11 03:42:43 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Oct 11 03:42:43 compute-0 sudo[121714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atyuauwdnuykeiavgdbmpxsrsjjwadtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154162.5923274-33-36191284811932/AnsiballZ_file.py'
Oct 11 03:42:43 compute-0 sudo[121714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:43 compute-0 python3.9[121716]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:42:43 compute-0 sudo[121714]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:43 compute-0 ceph-mon[74273]: 9.17 scrub starts
Oct 11 03:42:43 compute-0 ceph-mon[74273]: 9.17 scrub ok
Oct 11 03:42:43 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Oct 11 03:42:43 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Oct 11 03:42:44 compute-0 sudo[121889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuwyalamybrsuwyytwbyklwyfdklenpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154163.608195-41-28662667282817/AnsiballZ_stat.py'
Oct 11 03:42:44 compute-0 sudo[121889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:42:44 compute-0 python3.9[121891]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:42:44 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Oct 11 03:42:44 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Oct 11 03:42:44 compute-0 ceph-mon[74273]: pgmap v300: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:44 compute-0 ceph-mon[74273]: 9.15 scrub starts
Oct 11 03:42:44 compute-0 ceph-mon[74273]: 9.15 scrub ok
Oct 11 03:42:44 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:45 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Oct 11 03:42:45 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Oct 11 03:42:45 compute-0 sudo[121889]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:45 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Oct 11 03:42:45 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Oct 11 03:42:45 compute-0 sudo[121967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnzzklbjcvgdtdyzcgxqraavyzcoqmou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154163.608195-41-28662667282817/AnsiballZ_file.py'
Oct 11 03:42:45 compute-0 sudo[121967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:45 compute-0 ceph-mon[74273]: 8.14 scrub starts
Oct 11 03:42:45 compute-0 ceph-mon[74273]: 8.14 scrub ok
Oct 11 03:42:45 compute-0 ceph-mon[74273]: 9.7 scrub starts
Oct 11 03:42:45 compute-0 ceph-mon[74273]: 9.7 scrub ok
Oct 11 03:42:45 compute-0 python3.9[121969]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.6epqr6lc recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:42:45 compute-0 sudo[121967]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:46 compute-0 sudo[122119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbthjaydkwzcmnxjlpxxiqlsddahrocw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154166.1714027-61-109258858642071/AnsiballZ_stat.py'
Oct 11 03:42:46 compute-0 sudo[122119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:46 compute-0 ceph-mon[74273]: pgmap v301: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:46 compute-0 ceph-mon[74273]: 3.17 scrub starts
Oct 11 03:42:46 compute-0 ceph-mon[74273]: 3.17 scrub ok
Oct 11 03:42:46 compute-0 python3.9[122121]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:42:46 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Oct 11 03:42:46 compute-0 sudo[122119]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:46 compute-0 ceph-osd[88594]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Oct 11 03:42:46 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:47 compute-0 sudo[122197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijgzphreeluqrtmojlyisdphautlsopi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154166.1714027-61-109258858642071/AnsiballZ_file.py'
Oct 11 03:42:47 compute-0 sudo[122197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:47 compute-0 python3.9[122199]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.yntarhw0 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:42:47 compute-0 sudo[122197]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:47 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Oct 11 03:42:47 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Oct 11 03:42:47 compute-0 ceph-mon[74273]: 9.1f scrub starts
Oct 11 03:42:47 compute-0 ceph-mon[74273]: 9.1f scrub ok
Oct 11 03:42:47 compute-0 sudo[122349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnwqtpvlstcrdshfhnlhtmrecugofthp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154167.5786014-74-76979539720112/AnsiballZ_file.py'
Oct 11 03:42:47 compute-0 sudo[122349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:48 compute-0 python3.9[122351]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:42:48 compute-0 sudo[122349]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:48 compute-0 ceph-mon[74273]: pgmap v302: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:48 compute-0 ceph-mon[74273]: 9.1d scrub starts
Oct 11 03:42:48 compute-0 ceph-mon[74273]: 9.1d scrub ok
Oct 11 03:42:48 compute-0 sudo[122501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxohvfpsvcrgfrsysmceftwnnkbumaen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154168.3427687-82-229271683130664/AnsiballZ_stat.py'
Oct 11 03:42:48 compute-0 sudo[122501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:48 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:48 compute-0 python3.9[122503]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:42:48 compute-0 sudo[122501]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:49 compute-0 sudo[122579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzrehsgpdoxjlpqjzwgfvklumvybqnjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154168.3427687-82-229271683130664/AnsiballZ_file.py'
Oct 11 03:42:49 compute-0 sudo[122579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:42:49 compute-0 python3.9[122581]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:42:49 compute-0 sudo[122579]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:49 compute-0 sudo[122731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apqmyicxoratlcgkvqdlbpfyhrrqqzjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154169.6320667-82-66195961467840/AnsiballZ_stat.py'
Oct 11 03:42:49 compute-0 sudo[122731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:50 compute-0 python3.9[122733]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:42:50 compute-0 sudo[122731]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:50 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Oct 11 03:42:50 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Oct 11 03:42:50 compute-0 sudo[122809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iruogugbjhnhlzxbqwzucqchhdlmytzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154169.6320667-82-66195961467840/AnsiballZ_file.py'
Oct 11 03:42:50 compute-0 sudo[122809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:50 compute-0 python3.9[122811]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:42:50 compute-0 sudo[122809]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:50 compute-0 ceph-mon[74273]: pgmap v303: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:42:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:42:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:42:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:42:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:42:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:42:50 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:51 compute-0 sudo[122961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwgkbahbvzfytvfxifkysooynovlaesc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154170.9286606-105-85057241820989/AnsiballZ_file.py'
Oct 11 03:42:51 compute-0 sudo[122961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:51 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 9.11 deep-scrub starts
Oct 11 03:42:51 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 9.11 deep-scrub ok
Oct 11 03:42:51 compute-0 python3.9[122963]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:42:51 compute-0 sudo[122961]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:51 compute-0 ceph-mon[74273]: 9.1b scrub starts
Oct 11 03:42:51 compute-0 ceph-mon[74273]: 9.1b scrub ok
Oct 11 03:42:52 compute-0 sudo[123113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkdxfcytxkukzvopavkfojabzbtznnns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154171.6991742-113-120182791956033/AnsiballZ_stat.py'
Oct 11 03:42:52 compute-0 sudo[123113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:52 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 9.f scrub starts
Oct 11 03:42:52 compute-0 python3.9[123115]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:42:52 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 9.f scrub ok
Oct 11 03:42:52 compute-0 sudo[123113]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:52 compute-0 sudo[123191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxmphsyjpbtsyjacsophwowuhumimfkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154171.6991742-113-120182791956033/AnsiballZ_file.py'
Oct 11 03:42:52 compute-0 sudo[123191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:52 compute-0 python3.9[123193]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:42:52 compute-0 sudo[123191]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:52 compute-0 ceph-mon[74273]: pgmap v304: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:52 compute-0 ceph-mon[74273]: 9.11 deep-scrub starts
Oct 11 03:42:52 compute-0 ceph-mon[74273]: 9.11 deep-scrub ok
Oct 11 03:42:52 compute-0 ceph-mon[74273]: 9.f scrub starts
Oct 11 03:42:52 compute-0 ceph-mon[74273]: 9.f scrub ok
Oct 11 03:42:52 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:53 compute-0 sudo[123343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwdraydvauyfsngpjeczpxfnuffeeepy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154172.8571749-125-169555521129057/AnsiballZ_stat.py'
Oct 11 03:42:53 compute-0 sudo[123343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:53 compute-0 python3.9[123345]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:42:53 compute-0 sudo[123343]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:53 compute-0 sudo[123421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwujbvaggdvftfbrztejngutgwmuzfcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154172.8571749-125-169555521129057/AnsiballZ_file.py'
Oct 11 03:42:53 compute-0 sudo[123421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:53 compute-0 python3.9[123423]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:42:53 compute-0 sudo[123421]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:42:54 compute-0 sudo[123573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pamvrrqznqlvelzgjiovehlspitnplzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154174.0713685-137-176033418506987/AnsiballZ_systemd.py'
Oct 11 03:42:54 compute-0 ceph-mon[74273]: pgmap v305: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:54 compute-0 sudo[123573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:54 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:55 compute-0 python3.9[123575]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:42:55 compute-0 systemd[1]: Reloading.
Oct 11 03:42:55 compute-0 systemd-rc-local-generator[123605]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:42:55 compute-0 systemd-sysv-generator[123610]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:42:55 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 9.b scrub starts
Oct 11 03:42:55 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 9.b scrub ok
Oct 11 03:42:55 compute-0 sudo[123573]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:55 compute-0 ceph-mon[74273]: pgmap v306: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:55 compute-0 sudo[123763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilufxvwdlvlxtzgttfbnnimtwyxydfln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154175.6387215-145-278599893710283/AnsiballZ_stat.py'
Oct 11 03:42:55 compute-0 sudo[123763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:56 compute-0 python3.9[123765]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:42:56 compute-0 sudo[123763]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:56 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 6.8 deep-scrub starts
Oct 11 03:42:56 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 6.8 deep-scrub ok
Oct 11 03:42:56 compute-0 sudo[123841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzjzmatygrhkuspxhvzryqhzbknpmjqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154175.6387215-145-278599893710283/AnsiballZ_file.py'
Oct 11 03:42:56 compute-0 sudo[123841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:56 compute-0 python3.9[123843]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:42:56 compute-0 sudo[123841]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:56 compute-0 ceph-mon[74273]: 9.b scrub starts
Oct 11 03:42:56 compute-0 ceph-mon[74273]: 9.b scrub ok
Oct 11 03:42:56 compute-0 ceph-mon[74273]: 6.8 deep-scrub starts
Oct 11 03:42:56 compute-0 ceph-mon[74273]: 6.8 deep-scrub ok
Oct 11 03:42:56 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:57 compute-0 sudo[123993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiwybakgatjixtztckozucouxlughtqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154176.8531814-157-3612297826408/AnsiballZ_stat.py'
Oct 11 03:42:57 compute-0 sudo[123993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:57 compute-0 python3.9[123995]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:42:57 compute-0 sudo[123993]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:57 compute-0 sudo[124071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgjtahevvsdfwwhkwryqpwaclvoeclvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154176.8531814-157-3612297826408/AnsiballZ_file.py'
Oct 11 03:42:57 compute-0 sudo[124071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:57 compute-0 ceph-mon[74273]: pgmap v307: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:57 compute-0 python3.9[124073]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:42:57 compute-0 sudo[124071]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:58 compute-0 sudo[124223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hssvppyiaoksmlbcvupcwiogdggzmjke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154177.9978642-169-206733613121635/AnsiballZ_systemd.py'
Oct 11 03:42:58 compute-0 sudo[124223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:42:58 compute-0 python3.9[124225]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:42:58 compute-0 systemd[1]: Reloading.
Oct 11 03:42:58 compute-0 systemd-rc-local-generator[124253]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:42:58 compute-0 systemd-sysv-generator[124256]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:42:58 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:58 compute-0 systemd[1]: Starting Create netns directory...
Oct 11 03:42:59 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 11 03:42:59 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 11 03:42:59 compute-0 systemd[1]: Finished Create netns directory.
Oct 11 03:42:59 compute-0 sudo[124223]: pam_unix(sudo:session): session closed for user root
Oct 11 03:42:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:42:59 compute-0 python3.9[124417]: ansible-ansible.builtin.service_facts Invoked
Oct 11 03:42:59 compute-0 network[124434]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 03:42:59 compute-0 ceph-mon[74273]: pgmap v308: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:42:59 compute-0 network[124435]: 'network-scripts' will be removed from distribution in near future.
Oct 11 03:42:59 compute-0 network[124436]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 03:43:00 compute-0 sudo[124442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:43:00 compute-0 sudo[124442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:43:00 compute-0 sudo[124442]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:00 compute-0 sudo[124468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:43:00 compute-0 sudo[124468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:43:00 compute-0 sudo[124468]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:00 compute-0 sudo[124497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:43:00 compute-0 sudo[124497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:43:00 compute-0 sudo[124497]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:00 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:00 compute-0 sudo[124525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 03:43:00 compute-0 sudo[124525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:43:01 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 9.18 deep-scrub starts
Oct 11 03:43:01 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 9.18 deep-scrub ok
Oct 11 03:43:01 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Oct 11 03:43:01 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Oct 11 03:43:01 compute-0 sudo[124525]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:43:01 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:43:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 03:43:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:43:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 03:43:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:43:01 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev baaf837f-7b97-487f-a6c1-d0114e9bd8c3 does not exist
Oct 11 03:43:01 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 0b55d5c4-b4a1-477c-a87c-807b50b04ee5 does not exist
Oct 11 03:43:01 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev ef76836f-cc94-430b-966f-93a197e7c06d does not exist
Oct 11 03:43:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 03:43:01 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:43:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 03:43:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:43:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:43:01 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:43:01 compute-0 sudo[124587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:43:01 compute-0 sudo[124587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:43:01 compute-0 sudo[124587]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:01 compute-0 sudo[124612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:43:01 compute-0 sudo[124612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:43:01 compute-0 sudo[124612]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:01 compute-0 sudo[124637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:43:01 compute-0 sudo[124637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:43:01 compute-0 sudo[124637]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:01 compute-0 sudo[124662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 03:43:01 compute-0 sudo[124662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:43:01 compute-0 ceph-mon[74273]: pgmap v309: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:01 compute-0 ceph-mon[74273]: 9.18 deep-scrub starts
Oct 11 03:43:01 compute-0 ceph-mon[74273]: 9.18 deep-scrub ok
Oct 11 03:43:01 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:43:01 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:43:01 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:43:01 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:43:01 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:43:01 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:43:02 compute-0 podman[124735]: 2025-10-11 03:43:02.214858922 +0000 UTC m=+0.065384825 container create e412eee32940f9176001087b580d7cace3a8b9be6d79fc319dc065f9a0086812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_visvesvaraya, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:43:02 compute-0 systemd[1]: Started libpod-conmon-e412eee32940f9176001087b580d7cace3a8b9be6d79fc319dc065f9a0086812.scope.
Oct 11 03:43:02 compute-0 podman[124735]: 2025-10-11 03:43:02.181188688 +0000 UTC m=+0.031714581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:43:02 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:43:02 compute-0 podman[124735]: 2025-10-11 03:43:02.322023044 +0000 UTC m=+0.172548957 container init e412eee32940f9176001087b580d7cace3a8b9be6d79fc319dc065f9a0086812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:43:02 compute-0 podman[124735]: 2025-10-11 03:43:02.330119913 +0000 UTC m=+0.180645786 container start e412eee32940f9176001087b580d7cace3a8b9be6d79fc319dc065f9a0086812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_visvesvaraya, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 03:43:02 compute-0 podman[124735]: 2025-10-11 03:43:02.334121697 +0000 UTC m=+0.184647580 container attach e412eee32940f9176001087b580d7cace3a8b9be6d79fc319dc065f9a0086812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_visvesvaraya, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 03:43:02 compute-0 tender_visvesvaraya[124758]: 167 167
Oct 11 03:43:02 compute-0 podman[124735]: 2025-10-11 03:43:02.336481414 +0000 UTC m=+0.187007317 container died e412eee32940f9176001087b580d7cace3a8b9be6d79fc319dc065f9a0086812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:43:02 compute-0 systemd[1]: libpod-e412eee32940f9176001087b580d7cace3a8b9be6d79fc319dc065f9a0086812.scope: Deactivated successfully.
Oct 11 03:43:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1209617a0ac0bc4faf28201121a316948224a56c3ac57620d9d7ddcece87f41-merged.mount: Deactivated successfully.
Oct 11 03:43:02 compute-0 podman[124735]: 2025-10-11 03:43:02.390723963 +0000 UTC m=+0.241249856 container remove e412eee32940f9176001087b580d7cace3a8b9be6d79fc319dc065f9a0086812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_visvesvaraya, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 03:43:02 compute-0 systemd[1]: libpod-conmon-e412eee32940f9176001087b580d7cace3a8b9be6d79fc319dc065f9a0086812.scope: Deactivated successfully.
Oct 11 03:43:02 compute-0 podman[124793]: 2025-10-11 03:43:02.581583889 +0000 UTC m=+0.053990693 container create 454bf96813da7efeb38a65f9b61317dd0b678368a7716a96fd40321b07b8ac13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:43:02 compute-0 systemd[1]: Started libpod-conmon-454bf96813da7efeb38a65f9b61317dd0b678368a7716a96fd40321b07b8ac13.scope.
Oct 11 03:43:02 compute-0 podman[124793]: 2025-10-11 03:43:02.556451906 +0000 UTC m=+0.028858750 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:43:02 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:43:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86048ebf69b96038b0d26f49859c5b3fd86b5f37cbb8eeb2572c4a4e80cc5951/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:43:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86048ebf69b96038b0d26f49859c5b3fd86b5f37cbb8eeb2572c4a4e80cc5951/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:43:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86048ebf69b96038b0d26f49859c5b3fd86b5f37cbb8eeb2572c4a4e80cc5951/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:43:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86048ebf69b96038b0d26f49859c5b3fd86b5f37cbb8eeb2572c4a4e80cc5951/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:43:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86048ebf69b96038b0d26f49859c5b3fd86b5f37cbb8eeb2572c4a4e80cc5951/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:43:02 compute-0 podman[124793]: 2025-10-11 03:43:02.670494763 +0000 UTC m=+0.142901577 container init 454bf96813da7efeb38a65f9b61317dd0b678368a7716a96fd40321b07b8ac13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_haslett, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:43:02 compute-0 podman[124793]: 2025-10-11 03:43:02.67603256 +0000 UTC m=+0.148439354 container start 454bf96813da7efeb38a65f9b61317dd0b678368a7716a96fd40321b07b8ac13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_haslett, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 03:43:02 compute-0 podman[124793]: 2025-10-11 03:43:02.679864118 +0000 UTC m=+0.152270942 container attach 454bf96813da7efeb38a65f9b61317dd0b678368a7716a96fd40321b07b8ac13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_haslett, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:43:02 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:02 compute-0 ceph-mon[74273]: 9.9 scrub starts
Oct 11 03:43:02 compute-0 ceph-mon[74273]: 9.9 scrub ok
Oct 11 03:43:03 compute-0 amazing_haslett[124815]: --> passed data devices: 0 physical, 3 LVM
Oct 11 03:43:03 compute-0 amazing_haslett[124815]: --> relative data size: 1.0
Oct 11 03:43:03 compute-0 amazing_haslett[124815]: --> All data devices are unavailable
Oct 11 03:43:03 compute-0 systemd[1]: libpod-454bf96813da7efeb38a65f9b61317dd0b678368a7716a96fd40321b07b8ac13.scope: Deactivated successfully.
Oct 11 03:43:03 compute-0 systemd[1]: libpod-454bf96813da7efeb38a65f9b61317dd0b678368a7716a96fd40321b07b8ac13.scope: Consumed 1.072s CPU time.
Oct 11 03:43:03 compute-0 podman[124793]: 2025-10-11 03:43:03.809016602 +0000 UTC m=+1.281423416 container died 454bf96813da7efeb38a65f9b61317dd0b678368a7716a96fd40321b07b8ac13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_haslett, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:43:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-86048ebf69b96038b0d26f49859c5b3fd86b5f37cbb8eeb2572c4a4e80cc5951-merged.mount: Deactivated successfully.
Oct 11 03:43:03 compute-0 podman[124793]: 2025-10-11 03:43:03.877225727 +0000 UTC m=+1.349632521 container remove 454bf96813da7efeb38a65f9b61317dd0b678368a7716a96fd40321b07b8ac13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 03:43:03 compute-0 systemd[1]: libpod-conmon-454bf96813da7efeb38a65f9b61317dd0b678368a7716a96fd40321b07b8ac13.scope: Deactivated successfully.
Oct 11 03:43:03 compute-0 sudo[124662]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:03 compute-0 ceph-mon[74273]: pgmap v310: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:04 compute-0 sudo[124870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:43:04 compute-0 sudo[124870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:43:04 compute-0 sudo[124870]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:04 compute-0 sudo[124898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:43:04 compute-0 sudo[124898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:43:04 compute-0 sudo[124898]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:04 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Oct 11 03:43:04 compute-0 sudo[124926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:43:04 compute-0 sudo[124926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:43:04 compute-0 sudo[124926]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:04 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Oct 11 03:43:04 compute-0 sudo[124954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 03:43:04 compute-0 sudo[124954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:43:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:43:04 compute-0 podman[125038]: 2025-10-11 03:43:04.713237902 +0000 UTC m=+0.061401213 container create 2adbda6aa89288713f1487143d3d52fc70772012eabd809dacfa04528c629a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lederberg, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:43:04 compute-0 systemd[1]: Started libpod-conmon-2adbda6aa89288713f1487143d3d52fc70772012eabd809dacfa04528c629a34.scope.
Oct 11 03:43:04 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:43:04 compute-0 podman[125038]: 2025-10-11 03:43:04.687403809 +0000 UTC m=+0.035567200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:43:04 compute-0 podman[125038]: 2025-10-11 03:43:04.791417591 +0000 UTC m=+0.139580932 container init 2adbda6aa89288713f1487143d3d52fc70772012eabd809dacfa04528c629a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lederberg, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:43:04 compute-0 podman[125038]: 2025-10-11 03:43:04.801918638 +0000 UTC m=+0.150081949 container start 2adbda6aa89288713f1487143d3d52fc70772012eabd809dacfa04528c629a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:43:04 compute-0 agitated_lederberg[125061]: 167 167
Oct 11 03:43:04 compute-0 systemd[1]: libpod-2adbda6aa89288713f1487143d3d52fc70772012eabd809dacfa04528c629a34.scope: Deactivated successfully.
Oct 11 03:43:04 compute-0 podman[125038]: 2025-10-11 03:43:04.807301271 +0000 UTC m=+0.155464582 container attach 2adbda6aa89288713f1487143d3d52fc70772012eabd809dacfa04528c629a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lederberg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 03:43:04 compute-0 podman[125038]: 2025-10-11 03:43:04.807557609 +0000 UTC m=+0.155720920 container died 2adbda6aa89288713f1487143d3d52fc70772012eabd809dacfa04528c629a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lederberg, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 03:43:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-a86e6aff8f2bc695a099d016c78d72d54ff6cd714e8429abd6ee192f114d38f2-merged.mount: Deactivated successfully.
Oct 11 03:43:04 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:04 compute-0 podman[125038]: 2025-10-11 03:43:04.853856022 +0000 UTC m=+0.202019373 container remove 2adbda6aa89288713f1487143d3d52fc70772012eabd809dacfa04528c629a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 03:43:04 compute-0 systemd[1]: libpod-conmon-2adbda6aa89288713f1487143d3d52fc70772012eabd809dacfa04528c629a34.scope: Deactivated successfully.
Oct 11 03:43:04 compute-0 ceph-mon[74273]: 9.8 scrub starts
Oct 11 03:43:04 compute-0 ceph-mon[74273]: 9.8 scrub ok
Oct 11 03:43:05 compute-0 podman[125093]: 2025-10-11 03:43:05.036474035 +0000 UTC m=+0.048025754 container create eeb48290e98796797e273ac7ab0fed1ca73239025d9b7a41bfb075b77796dc0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 03:43:05 compute-0 systemd[1]: Started libpod-conmon-eeb48290e98796797e273ac7ab0fed1ca73239025d9b7a41bfb075b77796dc0e.scope.
Oct 11 03:43:05 compute-0 podman[125093]: 2025-10-11 03:43:05.01444341 +0000 UTC m=+0.025995159 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:43:05 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:43:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871688de9a43f827b5c9d1880549ca7380a9316705a8e7c05785f456aebfe4e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:43:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871688de9a43f827b5c9d1880549ca7380a9316705a8e7c05785f456aebfe4e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:43:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871688de9a43f827b5c9d1880549ca7380a9316705a8e7c05785f456aebfe4e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:43:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871688de9a43f827b5c9d1880549ca7380a9316705a8e7c05785f456aebfe4e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:43:05 compute-0 podman[125093]: 2025-10-11 03:43:05.148738751 +0000 UTC m=+0.160290490 container init eeb48290e98796797e273ac7ab0fed1ca73239025d9b7a41bfb075b77796dc0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 03:43:05 compute-0 podman[125093]: 2025-10-11 03:43:05.159503246 +0000 UTC m=+0.171054985 container start eeb48290e98796797e273ac7ab0fed1ca73239025d9b7a41bfb075b77796dc0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:43:05 compute-0 podman[125093]: 2025-10-11 03:43:05.16351003 +0000 UTC m=+0.175061769 container attach eeb48290e98796797e273ac7ab0fed1ca73239025d9b7a41bfb075b77796dc0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 03:43:05 compute-0 friendly_curie[125115]: {
Oct 11 03:43:05 compute-0 friendly_curie[125115]:     "0": [
Oct 11 03:43:05 compute-0 friendly_curie[125115]:         {
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "devices": [
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "/dev/loop3"
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             ],
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "lv_name": "ceph_lv0",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "lv_size": "21470642176",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "name": "ceph_lv0",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "tags": {
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.cluster_name": "ceph",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.crush_device_class": "",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.encrypted": "0",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.osd_id": "0",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.type": "block",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.vdo": "0"
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             },
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "type": "block",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "vg_name": "ceph_vg0"
Oct 11 03:43:05 compute-0 friendly_curie[125115]:         }
Oct 11 03:43:05 compute-0 friendly_curie[125115]:     ],
Oct 11 03:43:05 compute-0 friendly_curie[125115]:     "1": [
Oct 11 03:43:05 compute-0 friendly_curie[125115]:         {
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "devices": [
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "/dev/loop4"
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             ],
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "lv_name": "ceph_lv1",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "lv_size": "21470642176",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "name": "ceph_lv1",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "tags": {
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.cluster_name": "ceph",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.crush_device_class": "",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.encrypted": "0",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.osd_id": "1",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.type": "block",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.vdo": "0"
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             },
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "type": "block",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "vg_name": "ceph_vg1"
Oct 11 03:43:05 compute-0 friendly_curie[125115]:         }
Oct 11 03:43:05 compute-0 friendly_curie[125115]:     ],
Oct 11 03:43:05 compute-0 friendly_curie[125115]:     "2": [
Oct 11 03:43:05 compute-0 friendly_curie[125115]:         {
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "devices": [
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "/dev/loop5"
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             ],
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "lv_name": "ceph_lv2",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "lv_size": "21470642176",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "name": "ceph_lv2",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "tags": {
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.cluster_name": "ceph",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.crush_device_class": "",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.encrypted": "0",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.osd_id": "2",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.type": "block",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:                 "ceph.vdo": "0"
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             },
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "type": "block",
Oct 11 03:43:05 compute-0 friendly_curie[125115]:             "vg_name": "ceph_vg2"
Oct 11 03:43:05 compute-0 friendly_curie[125115]:         }
Oct 11 03:43:05 compute-0 friendly_curie[125115]:     ]
Oct 11 03:43:05 compute-0 friendly_curie[125115]: }
Oct 11 03:43:05 compute-0 systemd[1]: libpod-eeb48290e98796797e273ac7ab0fed1ca73239025d9b7a41bfb075b77796dc0e.scope: Deactivated successfully.
Oct 11 03:43:05 compute-0 podman[125093]: 2025-10-11 03:43:05.947864527 +0000 UTC m=+0.959416346 container died eeb48290e98796797e273ac7ab0fed1ca73239025d9b7a41bfb075b77796dc0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct 11 03:43:05 compute-0 ceph-mon[74273]: pgmap v311: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-871688de9a43f827b5c9d1880549ca7380a9316705a8e7c05785f456aebfe4e6-merged.mount: Deactivated successfully.
Oct 11 03:43:06 compute-0 podman[125093]: 2025-10-11 03:43:06.078729781 +0000 UTC m=+1.090281500 container remove eeb48290e98796797e273ac7ab0fed1ca73239025d9b7a41bfb075b77796dc0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 03:43:06 compute-0 systemd[1]: libpod-conmon-eeb48290e98796797e273ac7ab0fed1ca73239025d9b7a41bfb075b77796dc0e.scope: Deactivated successfully.
Oct 11 03:43:06 compute-0 sudo[124954]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:06 compute-0 sudo[125143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:43:06 compute-0 sudo[125143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:43:06 compute-0 sudo[125143]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:06 compute-0 sudo[125168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:43:06 compute-0 sudo[125168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:43:06 compute-0 sudo[125168]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:06 compute-0 sudo[125217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:43:06 compute-0 sudo[125217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:43:06 compute-0 sudo[125217]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:06 compute-0 sudo[125242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 03:43:06 compute-0 sudo[125242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:43:06 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 9.d scrub starts
Oct 11 03:43:06 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 9.d scrub ok
Oct 11 03:43:06 compute-0 podman[125307]: 2025-10-11 03:43:06.780177587 +0000 UTC m=+0.041292353 container create 84064a18eb05cf99b2b3f5cec7ff067f4f7f2625966358eed1cd12c315dc1239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:43:06 compute-0 systemd[1]: Started libpod-conmon-84064a18eb05cf99b2b3f5cec7ff067f4f7f2625966358eed1cd12c315dc1239.scope.
Oct 11 03:43:06 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:06 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:43:06 compute-0 podman[125307]: 2025-10-11 03:43:06.855783933 +0000 UTC m=+0.116898719 container init 84064a18eb05cf99b2b3f5cec7ff067f4f7f2625966358eed1cd12c315dc1239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 03:43:06 compute-0 podman[125307]: 2025-10-11 03:43:06.760729855 +0000 UTC m=+0.021844651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:43:06 compute-0 podman[125307]: 2025-10-11 03:43:06.864264413 +0000 UTC m=+0.125379169 container start 84064a18eb05cf99b2b3f5cec7ff067f4f7f2625966358eed1cd12c315dc1239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 03:43:06 compute-0 podman[125307]: 2025-10-11 03:43:06.867565297 +0000 UTC m=+0.128680113 container attach 84064a18eb05cf99b2b3f5cec7ff067f4f7f2625966358eed1cd12c315dc1239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lumiere, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:43:06 compute-0 nifty_lumiere[125359]: 167 167
Oct 11 03:43:06 compute-0 systemd[1]: libpod-84064a18eb05cf99b2b3f5cec7ff067f4f7f2625966358eed1cd12c315dc1239.scope: Deactivated successfully.
Oct 11 03:43:06 compute-0 podman[125307]: 2025-10-11 03:43:06.870551432 +0000 UTC m=+0.131666198 container died 84064a18eb05cf99b2b3f5cec7ff067f4f7f2625966358eed1cd12c315dc1239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:43:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fa9693d30f43b7186a82962e34345df2c4ca93e710e6f8afb480912975f5e27-merged.mount: Deactivated successfully.
Oct 11 03:43:06 compute-0 podman[125307]: 2025-10-11 03:43:06.906338247 +0000 UTC m=+0.167453013 container remove 84064a18eb05cf99b2b3f5cec7ff067f4f7f2625966358eed1cd12c315dc1239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:43:06 compute-0 systemd[1]: libpod-conmon-84064a18eb05cf99b2b3f5cec7ff067f4f7f2625966358eed1cd12c315dc1239.scope: Deactivated successfully.
Oct 11 03:43:07 compute-0 podman[125428]: 2025-10-11 03:43:07.076136086 +0000 UTC m=+0.053811828 container create f641af83d1edfab2c69a1c50bd781e3a8c473e8a319df97a0791a3ae89d38335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cray, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 03:43:07 compute-0 systemd[1]: Started libpod-conmon-f641af83d1edfab2c69a1c50bd781e3a8c473e8a319df97a0791a3ae89d38335.scope.
Oct 11 03:43:07 compute-0 sudo[125487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwqnmuzvdpbnlzotciiakqolwbbogqdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154186.7774627-195-23525684870851/AnsiballZ_stat.py'
Oct 11 03:43:07 compute-0 sudo[125487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:07 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:43:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5663d772ce8a428ce990fbec3f14aab7be28267433d739cc5de4be6a0ebe8ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:43:07 compute-0 podman[125428]: 2025-10-11 03:43:07.058179816 +0000 UTC m=+0.035855618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:43:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5663d772ce8a428ce990fbec3f14aab7be28267433d739cc5de4be6a0ebe8ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:43:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5663d772ce8a428ce990fbec3f14aab7be28267433d739cc5de4be6a0ebe8ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:43:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5663d772ce8a428ce990fbec3f14aab7be28267433d739cc5de4be6a0ebe8ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:43:07 compute-0 podman[125428]: 2025-10-11 03:43:07.186887729 +0000 UTC m=+0.164563491 container init f641af83d1edfab2c69a1c50bd781e3a8c473e8a319df97a0791a3ae89d38335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cray, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 03:43:07 compute-0 podman[125428]: 2025-10-11 03:43:07.195185884 +0000 UTC m=+0.172861676 container start f641af83d1edfab2c69a1c50bd781e3a8c473e8a319df97a0791a3ae89d38335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cray, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct 11 03:43:07 compute-0 podman[125428]: 2025-10-11 03:43:07.199325542 +0000 UTC m=+0.177001334 container attach f641af83d1edfab2c69a1c50bd781e3a8c473e8a319df97a0791a3ae89d38335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cray, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:43:07 compute-0 python3.9[125492]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:43:07 compute-0 sudo[125487]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:07 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Oct 11 03:43:07 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Oct 11 03:43:07 compute-0 sudo[125571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sscffaqdsannpnsdzwjchexwmkqxpcyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154186.7774627-195-23525684870851/AnsiballZ_file.py'
Oct 11 03:43:07 compute-0 sudo[125571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:07 compute-0 python3.9[125573]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:43:07 compute-0 sudo[125571]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:07 compute-0 ceph-mon[74273]: 9.d scrub starts
Oct 11 03:43:07 compute-0 ceph-mon[74273]: 9.d scrub ok
Oct 11 03:43:07 compute-0 ceph-mon[74273]: pgmap v312: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:08 compute-0 bold_cray[125488]: {
Oct 11 03:43:08 compute-0 bold_cray[125488]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 03:43:08 compute-0 bold_cray[125488]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:43:08 compute-0 bold_cray[125488]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 03:43:08 compute-0 bold_cray[125488]:         "osd_id": 1,
Oct 11 03:43:08 compute-0 bold_cray[125488]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:43:08 compute-0 bold_cray[125488]:         "type": "bluestore"
Oct 11 03:43:08 compute-0 bold_cray[125488]:     },
Oct 11 03:43:08 compute-0 bold_cray[125488]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 03:43:08 compute-0 bold_cray[125488]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:43:08 compute-0 bold_cray[125488]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 03:43:08 compute-0 bold_cray[125488]:         "osd_id": 2,
Oct 11 03:43:08 compute-0 bold_cray[125488]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:43:08 compute-0 bold_cray[125488]:         "type": "bluestore"
Oct 11 03:43:08 compute-0 bold_cray[125488]:     },
Oct 11 03:43:08 compute-0 bold_cray[125488]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 03:43:08 compute-0 bold_cray[125488]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:43:08 compute-0 bold_cray[125488]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 03:43:08 compute-0 bold_cray[125488]:         "osd_id": 0,
Oct 11 03:43:08 compute-0 bold_cray[125488]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:43:08 compute-0 bold_cray[125488]:         "type": "bluestore"
Oct 11 03:43:08 compute-0 bold_cray[125488]:     }
Oct 11 03:43:08 compute-0 bold_cray[125488]: }
Oct 11 03:43:08 compute-0 systemd[1]: libpod-f641af83d1edfab2c69a1c50bd781e3a8c473e8a319df97a0791a3ae89d38335.scope: Deactivated successfully.
Oct 11 03:43:08 compute-0 systemd[1]: libpod-f641af83d1edfab2c69a1c50bd781e3a8c473e8a319df97a0791a3ae89d38335.scope: Consumed 1.060s CPU time.
Oct 11 03:43:08 compute-0 podman[125428]: 2025-10-11 03:43:08.250901004 +0000 UTC m=+1.228576816 container died f641af83d1edfab2c69a1c50bd781e3a8c473e8a319df97a0791a3ae89d38335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cray, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:43:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5663d772ce8a428ce990fbec3f14aab7be28267433d739cc5de4be6a0ebe8ac-merged.mount: Deactivated successfully.
Oct 11 03:43:08 compute-0 podman[125428]: 2025-10-11 03:43:08.315020363 +0000 UTC m=+1.292696105 container remove f641af83d1edfab2c69a1c50bd781e3a8c473e8a319df97a0791a3ae89d38335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cray, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:43:08 compute-0 systemd[1]: libpod-conmon-f641af83d1edfab2c69a1c50bd781e3a8c473e8a319df97a0791a3ae89d38335.scope: Deactivated successfully.
Oct 11 03:43:08 compute-0 sudo[125242]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:43:08 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:43:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:43:08 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:43:08 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 10fa6752-4ef5-464a-b66c-bd15a5645973 does not exist
Oct 11 03:43:08 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 8c932049-ff70-4bb9-9cd6-b4cbc746f984 does not exist
Oct 11 03:43:08 compute-0 sudo[125788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvneuinwtgpfosuoxuoixnsdnctehdfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154188.0994816-208-47882066044085/AnsiballZ_file.py'
Oct 11 03:43:08 compute-0 sudo[125788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:08 compute-0 sudo[125746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:43:08 compute-0 sudo[125746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:43:08 compute-0 sudo[125746]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:08 compute-0 sudo[125793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 03:43:08 compute-0 sudo[125793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:43:08 compute-0 sudo[125793]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:08 compute-0 python3.9[125791]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:43:08 compute-0 sudo[125788]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:08 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:08 compute-0 ceph-mon[74273]: 9.5 scrub starts
Oct 11 03:43:08 compute-0 ceph-mon[74273]: 9.5 scrub ok
Oct 11 03:43:08 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:43:08 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:43:09 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 9.c scrub starts
Oct 11 03:43:09 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 9.c scrub ok
Oct 11 03:43:09 compute-0 sudo[125967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yojjublwvhaquiaqehhgivmcsbmnmoio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154188.8247252-216-118372776771862/AnsiballZ_stat.py'
Oct 11 03:43:09 compute-0 sudo[125967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:43:09 compute-0 python3.9[125969]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:43:09 compute-0 sudo[125967]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:09 compute-0 sudo[126045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuakpliwsbldueyrvnxobsbtmmlptpgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154188.8247252-216-118372776771862/AnsiballZ_file.py'
Oct 11 03:43:09 compute-0 sudo[126045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:09 compute-0 python3.9[126047]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:43:09 compute-0 sudo[126045]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:09 compute-0 ceph-mon[74273]: pgmap v313: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:09 compute-0 ceph-mon[74273]: 9.c scrub starts
Oct 11 03:43:09 compute-0 ceph-mon[74273]: 9.c scrub ok
Oct 11 03:43:10 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 6.f deep-scrub starts
Oct 11 03:43:10 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 6.f deep-scrub ok
Oct 11 03:43:10 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Oct 11 03:43:10 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Oct 11 03:43:10 compute-0 sudo[126197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmsquipcrlvxkyhxfqfjrnmkrenortro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154190.200976-231-89470942033716/AnsiballZ_timezone.py'
Oct 11 03:43:10 compute-0 sudo[126197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:10 compute-0 python3.9[126199]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 11 03:43:10 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:10 compute-0 systemd[1]: Starting Time & Date Service...
Oct 11 03:43:10 compute-0 systemd[1]: Started Time & Date Service.
Oct 11 03:43:11 compute-0 ceph-mon[74273]: 6.f deep-scrub starts
Oct 11 03:43:11 compute-0 ceph-mon[74273]: 6.f deep-scrub ok
Oct 11 03:43:11 compute-0 sudo[126197]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:11 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 9.3 deep-scrub starts
Oct 11 03:43:11 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 9.3 deep-scrub ok
Oct 11 03:43:11 compute-0 sudo[126353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtkuctsxjontrzodwihhwtdbqrhejnzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154191.2927902-240-57715875670634/AnsiballZ_file.py'
Oct 11 03:43:11 compute-0 sudo[126353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:11 compute-0 python3.9[126355]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:43:11 compute-0 sudo[126353]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:12 compute-0 ceph-mon[74273]: 9.1 scrub starts
Oct 11 03:43:12 compute-0 ceph-mon[74273]: 9.1 scrub ok
Oct 11 03:43:12 compute-0 ceph-mon[74273]: pgmap v314: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:12 compute-0 sudo[126505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmdmignmopqahawjqvytwajxuigwffqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154192.0517898-248-63601373029781/AnsiballZ_stat.py'
Oct 11 03:43:12 compute-0 sudo[126505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:12 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Oct 11 03:43:12 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Oct 11 03:43:12 compute-0 python3.9[126507]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:43:12 compute-0 sudo[126505]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:12 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:12 compute-0 sudo[126583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryshunnhrywfrhefgzbvxsnsosgthexh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154192.0517898-248-63601373029781/AnsiballZ_file.py'
Oct 11 03:43:12 compute-0 sudo[126583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:13 compute-0 ceph-mon[74273]: 9.3 deep-scrub starts
Oct 11 03:43:13 compute-0 ceph-mon[74273]: 9.3 deep-scrub ok
Oct 11 03:43:13 compute-0 python3.9[126585]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:43:13 compute-0 sudo[126583]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:13 compute-0 sudo[126735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydnuvitvbumglfbiholoprlhywheeyhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154193.258726-260-111013551330759/AnsiballZ_stat.py'
Oct 11 03:43:13 compute-0 sudo[126735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:13 compute-0 python3.9[126737]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:43:13 compute-0 sudo[126735]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:14 compute-0 ceph-mon[74273]: 6.3 scrub starts
Oct 11 03:43:14 compute-0 ceph-mon[74273]: 6.3 scrub ok
Oct 11 03:43:14 compute-0 ceph-mon[74273]: pgmap v315: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:14 compute-0 sudo[126813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azbyyglxscrjkiehuxttlzwfdinvvczd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154193.258726-260-111013551330759/AnsiballZ_file.py'
Oct 11 03:43:14 compute-0 sudo[126813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:14 compute-0 python3.9[126815]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.xlmpopnc recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:43:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:43:14 compute-0 sudo[126813]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:14 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Oct 11 03:43:14 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Oct 11 03:43:14 compute-0 sudo[126965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnwphazvxzhfysanvawatjfzydvlmqxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154194.4756784-272-70219181511995/AnsiballZ_stat.py'
Oct 11 03:43:14 compute-0 sudo[126965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:14 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:14 compute-0 python3.9[126967]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:43:15 compute-0 sudo[126965]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:15 compute-0 ceph-mon[74273]: 6.7 scrub starts
Oct 11 03:43:15 compute-0 ceph-mon[74273]: 6.7 scrub ok
Oct 11 03:43:15 compute-0 sudo[127043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eczuywlxyhlzsqqrjibflrxxulwmfyfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154194.4756784-272-70219181511995/AnsiballZ_file.py'
Oct 11 03:43:15 compute-0 sudo[127043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:15 compute-0 python3.9[127045]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:43:15 compute-0 sudo[127043]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:16 compute-0 ceph-mon[74273]: pgmap v316: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:16 compute-0 sudo[127195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aplorksugeombthrjunzrvpqlmrjmriv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154195.6747046-285-86297727955845/AnsiballZ_command.py'
Oct 11 03:43:16 compute-0 sudo[127195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:16 compute-0 python3.9[127197]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:43:16 compute-0 sudo[127195]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:16 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:17 compute-0 sudo[127348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhdbhknyyfkqxqkeyeousnerpzoiyhbe ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760154196.6884851-293-98397823326022/AnsiballZ_edpm_nftables_from_files.py'
Oct 11 03:43:17 compute-0 sudo[127348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:17 compute-0 python3[127350]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 11 03:43:17 compute-0 sudo[127348]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:17 compute-0 sudo[127500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbbvsydtjtixhrshkluzxlcjvkaavvqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154197.5467172-301-215057979012515/AnsiballZ_stat.py'
Oct 11 03:43:17 compute-0 sudo[127500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:18 compute-0 python3.9[127502]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:43:18 compute-0 ceph-mon[74273]: pgmap v317: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:18 compute-0 sudo[127500]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:18 compute-0 sudo[127578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzvjscjvjdopoewzwhnjcvehdedzkkin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154197.5467172-301-215057979012515/AnsiballZ_file.py'
Oct 11 03:43:18 compute-0 sudo[127578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:18 compute-0 python3.9[127580]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:43:18 compute-0 sudo[127578]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:18 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:19 compute-0 sudo[127730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvvnaawotcpxjudoylvvqurwoilulhkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154198.7486484-313-201316519212376/AnsiballZ_stat.py'
Oct 11 03:43:19 compute-0 sudo[127730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:43:19 compute-0 python3.9[127732]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:43:19 compute-0 sudo[127730]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:19 compute-0 sudo[127808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bamozatinnipixbegrbmcxadfhfmsaaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154198.7486484-313-201316519212376/AnsiballZ_file.py'
Oct 11 03:43:19 compute-0 sudo[127808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:19 compute-0 python3.9[127810]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:43:19 compute-0 sudo[127808]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:20 compute-0 ceph-mon[74273]: pgmap v318: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:20 compute-0 sudo[127960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rweyikgjixrdqlwjukpmjveemjfmjkeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154200.014404-325-178447941067499/AnsiballZ_stat.py'
Oct 11 03:43:20 compute-0 sudo[127960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:20 compute-0 python3.9[127962]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:43:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_03:43:20
Oct 11 03:43:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 03:43:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 03:43:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', '.rgw.root', '.mgr', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data']
Oct 11 03:43:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 03:43:20 compute-0 sudo[127960]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:43:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:43:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:43:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:43:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:43:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:43:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 03:43:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:43:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 03:43:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:43:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:43:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:43:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:43:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:43:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:43:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:43:20 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:20 compute-0 sudo[128038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ariuxxjcnqyjfiivfqqmuyicnqyyxdyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154200.014404-325-178447941067499/AnsiballZ_file.py'
Oct 11 03:43:20 compute-0 sudo[128038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:21 compute-0 python3.9[128040]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:43:21 compute-0 sudo[128038]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:21 compute-0 sudo[128190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fncyovuardhjcqgcapplskgphakjzoym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154201.3541205-337-157424099135160/AnsiballZ_stat.py'
Oct 11 03:43:21 compute-0 sudo[128190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:21 compute-0 python3.9[128192]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:43:21 compute-0 sudo[128190]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:22 compute-0 ceph-mon[74273]: pgmap v319: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:22 compute-0 sudo[128268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajwpnlzcwvsxrnutvjchddbfdsazsagx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154201.3541205-337-157424099135160/AnsiballZ_file.py'
Oct 11 03:43:22 compute-0 sudo[128268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:22 compute-0 python3.9[128270]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:43:22 compute-0 sudo[128268]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:22 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 6.5 deep-scrub starts
Oct 11 03:43:22 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 6.5 deep-scrub ok
Oct 11 03:43:22 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:22 compute-0 sudo[128420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yigjmowzwcjipnupntkbosevuiudaokt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154202.5644233-349-57768865096324/AnsiballZ_stat.py'
Oct 11 03:43:22 compute-0 sudo[128420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:23 compute-0 ceph-mon[74273]: 6.5 deep-scrub starts
Oct 11 03:43:23 compute-0 ceph-mon[74273]: 6.5 deep-scrub ok
Oct 11 03:43:23 compute-0 python3.9[128422]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:43:23 compute-0 sudo[128420]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:23 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Oct 11 03:43:23 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Oct 11 03:43:23 compute-0 sudo[128498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvlancmimhjlrfvqivkofeecquotfeib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154202.5644233-349-57768865096324/AnsiballZ_file.py'
Oct 11 03:43:23 compute-0 sudo[128498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:23 compute-0 python3.9[128500]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:43:23 compute-0 sudo[128498]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:24 compute-0 sudo[128650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asxwcqmdfphafbvqgiattcbwpvzejcaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154203.789917-362-102837013534640/AnsiballZ_command.py'
Oct 11 03:43:24 compute-0 sudo[128650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:24 compute-0 ceph-mon[74273]: pgmap v320: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:24 compute-0 ceph-mon[74273]: 9.13 scrub starts
Oct 11 03:43:24 compute-0 ceph-mon[74273]: 9.13 scrub ok
Oct 11 03:43:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:43:24 compute-0 python3.9[128652]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:43:24 compute-0 sudo[128650]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:24 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 6.9 deep-scrub starts
Oct 11 03:43:24 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 6.9 deep-scrub ok
Oct 11 03:43:24 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:25 compute-0 sudo[128805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilyekmixbeuncfucivhylcbahyxmlioi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154204.574848-370-159676246219095/AnsiballZ_blockinfile.py'
Oct 11 03:43:25 compute-0 sudo[128805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:25 compute-0 ceph-mon[74273]: 6.9 deep-scrub starts
Oct 11 03:43:25 compute-0 ceph-mon[74273]: 6.9 deep-scrub ok
Oct 11 03:43:25 compute-0 python3.9[128807]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:43:25 compute-0 sudo[128805]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:25 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 6.a scrub starts
Oct 11 03:43:25 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 6.a scrub ok
Oct 11 03:43:25 compute-0 sudo[128957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdgtqunnmgrounpzvrchpgjfjszjccfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154205.4698443-379-44408844521970/AnsiballZ_file.py'
Oct 11 03:43:25 compute-0 sudo[128957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:25 compute-0 python3.9[128959]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:43:25 compute-0 sudo[128957]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:26 compute-0 ceph-mon[74273]: pgmap v321: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:26 compute-0 ceph-mon[74273]: 6.a scrub starts
Oct 11 03:43:26 compute-0 ceph-mon[74273]: 6.a scrub ok
Oct 11 03:43:26 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Oct 11 03:43:26 compute-0 ceph-osd[89722]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Oct 11 03:43:26 compute-0 sudo[129109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ackdzayxznvpsaqdpvyuhmfgwijfohob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154206.1494832-379-220336194927069/AnsiballZ_file.py'
Oct 11 03:43:26 compute-0 sudo[129109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:26 compute-0 python3.9[129111]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:43:26 compute-0 sudo[129109]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:26 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:27 compute-0 ceph-mon[74273]: 9.19 scrub starts
Oct 11 03:43:27 compute-0 ceph-mon[74273]: 9.19 scrub ok
Oct 11 03:43:27 compute-0 sudo[129261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgpvpjaqskowzobxlgcfrtgorybntnaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154206.9485888-394-118986097828879/AnsiballZ_mount.py'
Oct 11 03:43:27 compute-0 sudo[129261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:27 compute-0 python3.9[129263]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 11 03:43:27 compute-0 sudo[129261]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:27 compute-0 sudo[129413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkiyihqgklkbldekoltlanicaldvzxrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154207.7570026-394-86939381932435/AnsiballZ_mount.py'
Oct 11 03:43:27 compute-0 sudo[129413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:28 compute-0 python3.9[129415]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 11 03:43:28 compute-0 sudo[129413]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:28 compute-0 ceph-mon[74273]: pgmap v322: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:28 compute-0 sshd-session[121410]: Connection closed by 192.168.122.30 port 41496
Oct 11 03:43:28 compute-0 sshd-session[121407]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:43:28 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Oct 11 03:43:28 compute-0 systemd[1]: session-40.scope: Consumed 34.103s CPU time.
Oct 11 03:43:28 compute-0 systemd-logind[820]: Session 40 logged out. Waiting for processes to exit.
Oct 11 03:43:28 compute-0 systemd-logind[820]: Removed session 40.
Oct 11 03:43:28 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:43:30 compute-0 ceph-mon[74273]: pgmap v323: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:30 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 03:43:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:43:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 03:43:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:43:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:43:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:43:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:43:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:43:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:43:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:43:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:43:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:43:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 03:43:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:43:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:43:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:43:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 03:43:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:43:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 03:43:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:43:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:43:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:43:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 03:43:32 compute-0 ceph-mon[74273]: pgmap v324: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:32 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Oct 11 03:43:32 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Oct 11 03:43:32 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:33 compute-0 sshd-session[129440]: Received disconnect from 91.224.92.108 port 49918:11:  [preauth]
Oct 11 03:43:33 compute-0 sshd-session[129440]: Disconnected from authenticating user root 91.224.92.108 port 49918 [preauth]
Oct 11 03:43:33 compute-0 ceph-mon[74273]: 9.16 scrub starts
Oct 11 03:43:33 compute-0 ceph-mon[74273]: 9.16 scrub ok
Oct 11 03:43:33 compute-0 sshd-session[129442]: Accepted publickey for zuul from 192.168.122.30 port 40948 ssh2: ECDSA SHA256:qo9+RMabHfLAOt2q/80W97JXaZUdeUCREBuTRaqgxBY
Oct 11 03:43:33 compute-0 systemd-logind[820]: New session 41 of user zuul.
Oct 11 03:43:33 compute-0 systemd[1]: Started Session 41 of User zuul.
Oct 11 03:43:33 compute-0 sshd-session[129442]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:43:34 compute-0 sudo[129595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahgrrialnbaiwtcxsczdasbctjtevcfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154213.6191752-16-261231008846315/AnsiballZ_tempfile.py'
Oct 11 03:43:34 compute-0 sudo[129595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:43:34 compute-0 ceph-mon[74273]: pgmap v325: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:34 compute-0 python3.9[129597]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 11 03:43:34 compute-0 sudo[129595]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:34 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:35 compute-0 sudo[129747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uietbolawoqsiyvlskpuhjswrvpiygky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154214.5543199-28-260994520814544/AnsiballZ_stat.py'
Oct 11 03:43:35 compute-0 sudo[129747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:35 compute-0 python3.9[129749]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:43:35 compute-0 sudo[129747]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:35 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Oct 11 03:43:35 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Oct 11 03:43:35 compute-0 sudo[129901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvcshiyttswwrejzlzpgcxvwkyduxcug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154215.4733932-36-6438960814898/AnsiballZ_slurp.py'
Oct 11 03:43:35 compute-0 sudo[129901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:36 compute-0 python3.9[129903]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Oct 11 03:43:36 compute-0 sudo[129901]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:36 compute-0 ceph-mon[74273]: pgmap v326: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:36 compute-0 ceph-mon[74273]: 9.1c scrub starts
Oct 11 03:43:36 compute-0 ceph-mon[74273]: 9.1c scrub ok
Oct 11 03:43:36 compute-0 sudo[130053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djqnmtbcqtmuklzomktuilxbpidxastl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154216.3998628-44-55073132801481/AnsiballZ_stat.py'
Oct 11 03:43:36 compute-0 sudo[130053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:36 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:37 compute-0 python3.9[130055]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.6419k7ft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:43:37 compute-0 sudo[130053]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:37 compute-0 sudo[130178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqxjpkjvrfuzdtmmxbgovcaqkuqjjcym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154216.3998628-44-55073132801481/AnsiballZ_copy.py'
Oct 11 03:43:37 compute-0 sudo[130178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:37 compute-0 python3.9[130180]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.6419k7ft mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760154216.3998628-44-55073132801481/.source.6419k7ft _original_basename=.x6zj9xxg follow=False checksum=1b0a6de5c56066fd9b3bf602d315a9bf1eef44c1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:43:37 compute-0 sudo[130178]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:38 compute-0 ceph-mon[74273]: pgmap v327: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:38 compute-0 sudo[130330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpecfoqjabmodvmskddffpnypkjnsomr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154218.0342968-59-166257542421616/AnsiballZ_setup.py'
Oct 11 03:43:38 compute-0 sudo[130330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:38 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:38 compute-0 python3.9[130332]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:43:39 compute-0 sudo[130330]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:43:39 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Oct 11 03:43:39 compute-0 ceph-osd[87591]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Oct 11 03:43:39 compute-0 sudo[130482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdjvbjybmpxlwcmdmbtliozwtbnujogw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154219.249076-68-92502381903723/AnsiballZ_blockinfile.py'
Oct 11 03:43:39 compute-0 sudo[130482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:39 compute-0 python3.9[130484]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCmVtx50w2Ce2BfePsAxe42wtfQuybkCFnQ+I2wKBdvA+hHDGHKq+DK0r0MLsjknW+B6oLz7z83ONuCSI5fnEYMb6H8z3rFIW9mdAsCheBoEcRPQSdsEr1zoV+Lv7A+HyKWCln0chhVjM/32sWu15LGXmQorZF/GWzY1NOxhihAQtcIqeMT/3Ua2PANdYB0fdrnFkb+3YzO84UBMzDk8jdHKd/7U3YMrD+kPoytRTEVSpo5OvNuBM1OtTrDNBt/j+ftF4YOc18YwJqu7X9wBLwb9xO071ScxcKpyHsBBrC0Mv75H6BF6LQH5rL1Un6T/ewz/3gkpzNbm+04c9OFAH44gTl6zh4XfklWhAbff0bb1vm3n/8G/NcKRmHB0qeM8UEmmrHKyTtqF41fpNChphqfswUDGB+9FLfONvHYzJeldie9EXYZFpbG3Ov0TnUSyQk9YAWPQzbqKMg7Cz2zKcExApU/ZMwSQ4tTzzNkHOxmfgkaDEyh1ByhBS2ocb/FoZc=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpKnRjzn6GUq2BdxYSnAaefVvunenomnLuP3H43+vw4
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPYbrjRTf9G+akEKWCGs7xCkq0HSionPcF1rxn4XZxvd/UFlbPUo5VosqUj/1lwDnQIVl+rXU6w4H/eH4SjxsN0=
                                              create=True mode=0644 path=/tmp/ansible.6419k7ft state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:43:39 compute-0 sudo[130482]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:40 compute-0 ceph-mon[74273]: pgmap v328: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:40 compute-0 ceph-mon[74273]: 9.1e scrub starts
Oct 11 03:43:40 compute-0 ceph-mon[74273]: 9.1e scrub ok
Oct 11 03:43:40 compute-0 sudo[130634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maaqnogmqigkaonohpfovwwvusjfjhws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154220.1165345-76-47779344125616/AnsiballZ_command.py'
Oct 11 03:43:40 compute-0 sudo[130634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:40 compute-0 python3.9[130636]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.6419k7ft' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:43:40 compute-0 sudo[130634]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:40 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:41 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 11 03:43:41 compute-0 sudo[130790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exapwglnkqdcurearmjntffavekpzqbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154220.9797218-84-145208482737542/AnsiballZ_file.py'
Oct 11 03:43:41 compute-0 sudo[130790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:41 compute-0 python3.9[130792]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.6419k7ft state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:43:41 compute-0 sudo[130790]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:42 compute-0 sshd-session[129445]: Connection closed by 192.168.122.30 port 40948
Oct 11 03:43:42 compute-0 sshd-session[129442]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:43:42 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Oct 11 03:43:42 compute-0 systemd[1]: session-41.scope: Consumed 6.188s CPU time.
Oct 11 03:43:42 compute-0 systemd-logind[820]: Session 41 logged out. Waiting for processes to exit.
Oct 11 03:43:42 compute-0 systemd-logind[820]: Removed session 41.
Oct 11 03:43:42 compute-0 ceph-mon[74273]: pgmap v329: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:42 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:43:44 compute-0 ceph-mon[74273]: pgmap v330: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:44 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:46 compute-0 ceph-mon[74273]: pgmap v331: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:46 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:47 compute-0 sshd-session[130817]: Accepted publickey for zuul from 192.168.122.30 port 56924 ssh2: ECDSA SHA256:qo9+RMabHfLAOt2q/80W97JXaZUdeUCREBuTRaqgxBY
Oct 11 03:43:47 compute-0 systemd-logind[820]: New session 42 of user zuul.
Oct 11 03:43:47 compute-0 systemd[1]: Started Session 42 of User zuul.
Oct 11 03:43:47 compute-0 sshd-session[130817]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:43:48 compute-0 ceph-mon[74273]: pgmap v332: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:48 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:49 compute-0 python3.9[130970]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:43:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:43:49 compute-0 sudo[131124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcmxfdeeeutkbqjkqoretrnllpswplqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154229.3807333-32-35127866731330/AnsiballZ_systemd.py'
Oct 11 03:43:49 compute-0 sudo[131124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:50 compute-0 ceph-mon[74273]: pgmap v333: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:50 compute-0 python3.9[131126]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 11 03:43:50 compute-0 sudo[131124]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:43:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:43:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:43:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:43:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:43:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:43:50 compute-0 sudo[131278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqeopaeinhaborbsbuzpkdkqtovahjsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154230.4776087-40-153514452877356/AnsiballZ_systemd.py'
Oct 11 03:43:50 compute-0 sudo[131278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:50 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:51 compute-0 python3.9[131280]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 03:43:51 compute-0 sudo[131278]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:51 compute-0 sudo[131431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-womarokxaqevdpokaypwaltyashbidqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154231.3904867-49-268327474227060/AnsiballZ_command.py'
Oct 11 03:43:51 compute-0 sudo[131431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:52 compute-0 python3.9[131433]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:43:52 compute-0 ceph-mon[74273]: pgmap v334: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:52 compute-0 sudo[131431]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:52 compute-0 sudo[131584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edvufzurceebjdeyryvskfroqishvpcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154232.1924033-57-31194526955153/AnsiballZ_stat.py'
Oct 11 03:43:52 compute-0 sudo[131584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:52 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:52 compute-0 python3.9[131586]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:43:52 compute-0 sudo[131584]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:53 compute-0 sudo[131736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkadxnxnqznzfopnexraumtfhhbpgkoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154233.1347954-66-194173552696045/AnsiballZ_file.py'
Oct 11 03:43:53 compute-0 sudo[131736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:43:53 compute-0 python3.9[131738]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:43:53 compute-0 sudo[131736]: pam_unix(sudo:session): session closed for user root
Oct 11 03:43:54 compute-0 ceph-mon[74273]: pgmap v335: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:54 compute-0 sshd-session[130820]: Connection closed by 192.168.122.30 port 56924
Oct 11 03:43:54 compute-0 sshd-session[130817]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:43:54 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Oct 11 03:43:54 compute-0 systemd[1]: session-42.scope: Consumed 4.397s CPU time.
Oct 11 03:43:54 compute-0 systemd-logind[820]: Session 42 logged out. Waiting for processes to exit.
Oct 11 03:43:54 compute-0 systemd-logind[820]: Removed session 42.
Oct 11 03:43:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:43:54 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:56 compute-0 ceph-mon[74273]: pgmap v336: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:56 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:58 compute-0 ceph-mon[74273]: pgmap v337: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:58 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:43:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:43:59 compute-0 sshd-session[131763]: Accepted publickey for zuul from 192.168.122.30 port 45248 ssh2: ECDSA SHA256:qo9+RMabHfLAOt2q/80W97JXaZUdeUCREBuTRaqgxBY
Oct 11 03:43:59 compute-0 systemd-logind[820]: New session 43 of user zuul.
Oct 11 03:43:59 compute-0 systemd[1]: Started Session 43 of User zuul.
Oct 11 03:43:59 compute-0 sshd-session[131763]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:44:00 compute-0 ceph-mon[74273]: pgmap v338: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:00 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:00 compute-0 python3.9[131916]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:44:01 compute-0 sudo[132070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndvehbxpqvvytvkcremxooekokyiuvte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154241.385684-34-276895563686745/AnsiballZ_setup.py'
Oct 11 03:44:01 compute-0 sudo[132070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:01 compute-0 python3.9[132072]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 03:44:02 compute-0 ceph-mon[74273]: pgmap v339: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:02 compute-0 sudo[132070]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:02 compute-0 sudo[132154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghjvpeomhvrrbwuyfberiyevvumftdbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154241.385684-34-276895563686745/AnsiballZ_dnf.py'
Oct 11 03:44:02 compute-0 sudo[132154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:02 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:02 compute-0 python3.9[132156]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 11 03:44:04 compute-0 sudo[132154]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:04 compute-0 ceph-mon[74273]: pgmap v340: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:44:04 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:04 compute-0 python3.9[132307]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:44:05 compute-0 sshd-session[70527]: Received disconnect from 38.102.83.159 port 48286:11: disconnected by user
Oct 11 03:44:05 compute-0 sshd-session[70527]: Disconnected from user zuul 38.102.83.159 port 48286
Oct 11 03:44:05 compute-0 sshd-session[70524]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:44:05 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Oct 11 03:44:05 compute-0 systemd[1]: session-18.scope: Consumed 1min 31.296s CPU time.
Oct 11 03:44:05 compute-0 systemd-logind[820]: Session 18 logged out. Waiting for processes to exit.
Oct 11 03:44:05 compute-0 systemd-logind[820]: Removed session 18.
Oct 11 03:44:06 compute-0 ceph-mon[74273]: pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:06 compute-0 python3.9[132458]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 11 03:44:06 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:07 compute-0 python3.9[132608]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:44:07 compute-0 python3.9[132758]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:44:08 compute-0 ceph-mon[74273]: pgmap v342: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:08 compute-0 sshd-session[131766]: Connection closed by 192.168.122.30 port 45248
Oct 11 03:44:08 compute-0 sshd-session[131763]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:44:08 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Oct 11 03:44:08 compute-0 systemd[1]: session-43.scope: Consumed 6.579s CPU time.
Oct 11 03:44:08 compute-0 systemd-logind[820]: Session 43 logged out. Waiting for processes to exit.
Oct 11 03:44:08 compute-0 systemd-logind[820]: Removed session 43.
Oct 11 03:44:08 compute-0 sudo[132783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:44:08 compute-0 sudo[132783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:44:08 compute-0 sudo[132783]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:08 compute-0 sudo[132808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:44:08 compute-0 sudo[132808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:44:08 compute-0 sudo[132808]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:08 compute-0 sudo[132833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:44:08 compute-0 sudo[132833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:44:08 compute-0 sudo[132833]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:08 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:08 compute-0 sudo[132858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 03:44:08 compute-0 sudo[132858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:44:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:44:09 compute-0 sudo[132858]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:44:09 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:44:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 03:44:09 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:44:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 03:44:09 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:44:09 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev e8b69686-e42d-4fa5-89c7-4f370bb19872 does not exist
Oct 11 03:44:09 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 14c22e9c-7a54-4446-b14e-43983ef18da8 does not exist
Oct 11 03:44:09 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 126f8988-40d4-4e0a-8cf7-092486514179 does not exist
Oct 11 03:44:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 03:44:09 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:44:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 03:44:09 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:44:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:44:09 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:44:09 compute-0 sudo[132914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:44:09 compute-0 sudo[132914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:44:09 compute-0 sudo[132914]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:09 compute-0 sudo[132939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:44:09 compute-0 sudo[132939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:44:09 compute-0 sudo[132939]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:09 compute-0 sudo[132964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:44:09 compute-0 sudo[132964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:44:09 compute-0 sudo[132964]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:09 compute-0 sudo[132989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 03:44:09 compute-0 sudo[132989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:44:10 compute-0 ceph-mon[74273]: pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:44:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:44:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:44:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:44:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:44:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:44:10 compute-0 podman[133055]: 2025-10-11 03:44:10.284206909 +0000 UTC m=+0.069839663 container create 16ae15188f693d06a42d44eb09e67bca631a7775b72e8a62d14f0d4fc5dda8b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:44:10 compute-0 systemd[1]: Started libpod-conmon-16ae15188f693d06a42d44eb09e67bca631a7775b72e8a62d14f0d4fc5dda8b3.scope.
Oct 11 03:44:10 compute-0 podman[133055]: 2025-10-11 03:44:10.252948672 +0000 UTC m=+0.038581466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:44:10 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:44:10 compute-0 podman[133055]: 2025-10-11 03:44:10.394072277 +0000 UTC m=+0.179705051 container init 16ae15188f693d06a42d44eb09e67bca631a7775b72e8a62d14f0d4fc5dda8b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_morse, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:44:10 compute-0 podman[133055]: 2025-10-11 03:44:10.405758548 +0000 UTC m=+0.191391292 container start 16ae15188f693d06a42d44eb09e67bca631a7775b72e8a62d14f0d4fc5dda8b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 03:44:10 compute-0 podman[133055]: 2025-10-11 03:44:10.410802941 +0000 UTC m=+0.196435745 container attach 16ae15188f693d06a42d44eb09e67bca631a7775b72e8a62d14f0d4fc5dda8b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_morse, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 03:44:10 compute-0 vigorous_morse[133071]: 167 167
Oct 11 03:44:10 compute-0 systemd[1]: libpod-16ae15188f693d06a42d44eb09e67bca631a7775b72e8a62d14f0d4fc5dda8b3.scope: Deactivated successfully.
Oct 11 03:44:10 compute-0 conmon[133071]: conmon 16ae15188f693d06a42d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-16ae15188f693d06a42d44eb09e67bca631a7775b72e8a62d14f0d4fc5dda8b3.scope/container/memory.events
Oct 11 03:44:10 compute-0 podman[133055]: 2025-10-11 03:44:10.418567212 +0000 UTC m=+0.204199926 container died 16ae15188f693d06a42d44eb09e67bca631a7775b72e8a62d14f0d4fc5dda8b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 03:44:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4a688cd1284f5979c8067c035779a39897db6609943d47f0dd485d03b49b195-merged.mount: Deactivated successfully.
Oct 11 03:44:10 compute-0 podman[133055]: 2025-10-11 03:44:10.474560871 +0000 UTC m=+0.260193595 container remove 16ae15188f693d06a42d44eb09e67bca631a7775b72e8a62d14f0d4fc5dda8b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:44:10 compute-0 systemd[1]: libpod-conmon-16ae15188f693d06a42d44eb09e67bca631a7775b72e8a62d14f0d4fc5dda8b3.scope: Deactivated successfully.
Oct 11 03:44:10 compute-0 podman[133097]: 2025-10-11 03:44:10.678855188 +0000 UTC m=+0.050792202 container create 42fcd6200f52711c95990f5af4243b26297e231780942c3e1549584c555c2ec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_engelbart, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:44:10 compute-0 systemd[1]: Started libpod-conmon-42fcd6200f52711c95990f5af4243b26297e231780942c3e1549584c555c2ec6.scope.
Oct 11 03:44:10 compute-0 podman[133097]: 2025-10-11 03:44:10.65319029 +0000 UTC m=+0.025127274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:44:10 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/503f6c100822c6700443f585294a6f87ca0d1c17ad3a84ceffeee6cac6aaa32f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/503f6c100822c6700443f585294a6f87ca0d1c17ad3a84ceffeee6cac6aaa32f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/503f6c100822c6700443f585294a6f87ca0d1c17ad3a84ceffeee6cac6aaa32f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/503f6c100822c6700443f585294a6f87ca0d1c17ad3a84ceffeee6cac6aaa32f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/503f6c100822c6700443f585294a6f87ca0d1c17ad3a84ceffeee6cac6aaa32f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:44:10 compute-0 podman[133097]: 2025-10-11 03:44:10.794697496 +0000 UTC m=+0.166634500 container init 42fcd6200f52711c95990f5af4243b26297e231780942c3e1549584c555c2ec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:44:10 compute-0 podman[133097]: 2025-10-11 03:44:10.813080067 +0000 UTC m=+0.185017081 container start 42fcd6200f52711c95990f5af4243b26297e231780942c3e1549584c555c2ec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 03:44:10 compute-0 podman[133097]: 2025-10-11 03:44:10.817368879 +0000 UTC m=+0.189305883 container attach 42fcd6200f52711c95990f5af4243b26297e231780942c3e1549584c555c2ec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_engelbart, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:44:10 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:11 compute-0 dreamy_engelbart[133114]: --> passed data devices: 0 physical, 3 LVM
Oct 11 03:44:11 compute-0 dreamy_engelbart[133114]: --> relative data size: 1.0
Oct 11 03:44:11 compute-0 dreamy_engelbart[133114]: --> All data devices are unavailable
Oct 11 03:44:12 compute-0 systemd[1]: libpod-42fcd6200f52711c95990f5af4243b26297e231780942c3e1549584c555c2ec6.scope: Deactivated successfully.
Oct 11 03:44:12 compute-0 systemd[1]: libpod-42fcd6200f52711c95990f5af4243b26297e231780942c3e1549584c555c2ec6.scope: Consumed 1.171s CPU time.
Oct 11 03:44:12 compute-0 podman[133097]: 2025-10-11 03:44:12.039836961 +0000 UTC m=+1.411773965 container died 42fcd6200f52711c95990f5af4243b26297e231780942c3e1549584c555c2ec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 03:44:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-503f6c100822c6700443f585294a6f87ca0d1c17ad3a84ceffeee6cac6aaa32f-merged.mount: Deactivated successfully.
Oct 11 03:44:12 compute-0 podman[133097]: 2025-10-11 03:44:12.121847948 +0000 UTC m=+1.493784962 container remove 42fcd6200f52711c95990f5af4243b26297e231780942c3e1549584c555c2ec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_engelbart, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 03:44:12 compute-0 systemd[1]: libpod-conmon-42fcd6200f52711c95990f5af4243b26297e231780942c3e1549584c555c2ec6.scope: Deactivated successfully.
Oct 11 03:44:12 compute-0 sudo[132989]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:12 compute-0 ceph-mon[74273]: pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:12 compute-0 sudo[133157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:44:12 compute-0 sudo[133157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:44:12 compute-0 sudo[133157]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:12 compute-0 sudo[133182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:44:12 compute-0 sudo[133182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:44:12 compute-0 sudo[133182]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:12 compute-0 sudo[133207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:44:12 compute-0 sudo[133207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:44:12 compute-0 sudo[133207]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:12 compute-0 sudo[133232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 03:44:12 compute-0 sudo[133232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:44:12 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:12 compute-0 podman[133297]: 2025-10-11 03:44:12.981537625 +0000 UTC m=+0.069736750 container create bdd030e039441e6b9d20b239e078f449c3d5350afc286b01b8c82860188931a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nobel, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 03:44:13 compute-0 systemd[1]: Started libpod-conmon-bdd030e039441e6b9d20b239e078f449c3d5350afc286b01b8c82860188931a1.scope.
Oct 11 03:44:13 compute-0 podman[133297]: 2025-10-11 03:44:12.949499066 +0000 UTC m=+0.037698261 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:44:13 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:44:13 compute-0 podman[133297]: 2025-10-11 03:44:13.090302561 +0000 UTC m=+0.178501706 container init bdd030e039441e6b9d20b239e078f449c3d5350afc286b01b8c82860188931a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:44:13 compute-0 podman[133297]: 2025-10-11 03:44:13.101022526 +0000 UTC m=+0.189221651 container start bdd030e039441e6b9d20b239e078f449c3d5350afc286b01b8c82860188931a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:44:13 compute-0 podman[133297]: 2025-10-11 03:44:13.104923856 +0000 UTC m=+0.193123011 container attach bdd030e039441e6b9d20b239e078f449c3d5350afc286b01b8c82860188931a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nobel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 03:44:13 compute-0 crazy_nobel[133313]: 167 167
Oct 11 03:44:13 compute-0 systemd[1]: libpod-bdd030e039441e6b9d20b239e078f449c3d5350afc286b01b8c82860188931a1.scope: Deactivated successfully.
Oct 11 03:44:13 compute-0 podman[133297]: 2025-10-11 03:44:13.109521767 +0000 UTC m=+0.197720912 container died bdd030e039441e6b9d20b239e078f449c3d5350afc286b01b8c82860188931a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nobel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 03:44:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b27a8844f6efe6606a6cb8a87f04632cd1e967a95d32ea3e2c0692e7d29b5e3-merged.mount: Deactivated successfully.
Oct 11 03:44:13 compute-0 podman[133297]: 2025-10-11 03:44:13.167892033 +0000 UTC m=+0.256091188 container remove bdd030e039441e6b9d20b239e078f449c3d5350afc286b01b8c82860188931a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nobel, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 03:44:13 compute-0 systemd[1]: libpod-conmon-bdd030e039441e6b9d20b239e078f449c3d5350afc286b01b8c82860188931a1.scope: Deactivated successfully.
Oct 11 03:44:13 compute-0 podman[133337]: 2025-10-11 03:44:13.412710621 +0000 UTC m=+0.068905657 container create 6c24118f76107df5f963a9d22ad720da5ca32705f84f3058e2e684ae51377395 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_curie, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:44:13 compute-0 systemd[1]: Started libpod-conmon-6c24118f76107df5f963a9d22ad720da5ca32705f84f3058e2e684ae51377395.scope.
Oct 11 03:44:13 compute-0 podman[133337]: 2025-10-11 03:44:13.38485004 +0000 UTC m=+0.041045116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:44:13 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:44:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97cb653430541c568aa40908defa9ec2cda64a9a165a3ee10e00870166f50c09/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:44:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97cb653430541c568aa40908defa9ec2cda64a9a165a3ee10e00870166f50c09/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:44:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97cb653430541c568aa40908defa9ec2cda64a9a165a3ee10e00870166f50c09/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:44:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97cb653430541c568aa40908defa9ec2cda64a9a165a3ee10e00870166f50c09/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:44:13 compute-0 podman[133337]: 2025-10-11 03:44:13.519076719 +0000 UTC m=+0.175271775 container init 6c24118f76107df5f963a9d22ad720da5ca32705f84f3058e2e684ae51377395 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_curie, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 03:44:13 compute-0 podman[133337]: 2025-10-11 03:44:13.536344749 +0000 UTC m=+0.192539775 container start 6c24118f76107df5f963a9d22ad720da5ca32705f84f3058e2e684ae51377395 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 03:44:13 compute-0 podman[133337]: 2025-10-11 03:44:13.540441875 +0000 UTC m=+0.196636981 container attach 6c24118f76107df5f963a9d22ad720da5ca32705f84f3058e2e684ae51377395 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_curie, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:44:13 compute-0 sshd-session[133358]: Accepted publickey for zuul from 192.168.122.30 port 37938 ssh2: ECDSA SHA256:qo9+RMabHfLAOt2q/80W97JXaZUdeUCREBuTRaqgxBY
Oct 11 03:44:13 compute-0 systemd-logind[820]: New session 44 of user zuul.
Oct 11 03:44:13 compute-0 systemd[1]: Started Session 44 of User zuul.
Oct 11 03:44:13 compute-0 sshd-session[133358]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:44:14 compute-0 intelligent_curie[133353]: {
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:     "0": [
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:         {
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "devices": [
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "/dev/loop3"
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             ],
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "lv_name": "ceph_lv0",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "lv_size": "21470642176",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "name": "ceph_lv0",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "tags": {
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.cluster_name": "ceph",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.crush_device_class": "",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.encrypted": "0",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.osd_id": "0",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.type": "block",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.vdo": "0"
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             },
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "type": "block",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "vg_name": "ceph_vg0"
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:         }
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:     ],
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:     "1": [
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:         {
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "devices": [
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "/dev/loop4"
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             ],
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "lv_name": "ceph_lv1",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "lv_size": "21470642176",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "name": "ceph_lv1",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "tags": {
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.cluster_name": "ceph",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.crush_device_class": "",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.encrypted": "0",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.osd_id": "1",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.type": "block",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.vdo": "0"
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             },
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "type": "block",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "vg_name": "ceph_vg1"
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:         }
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:     ],
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:     "2": [
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:         {
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "devices": [
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "/dev/loop5"
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             ],
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "lv_name": "ceph_lv2",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "lv_size": "21470642176",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "name": "ceph_lv2",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "tags": {
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.cluster_name": "ceph",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.crush_device_class": "",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.encrypted": "0",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.osd_id": "2",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.type": "block",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:                 "ceph.vdo": "0"
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             },
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "type": "block",
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:             "vg_name": "ceph_vg2"
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:         }
Oct 11 03:44:14 compute-0 intelligent_curie[133353]:     ]
Oct 11 03:44:14 compute-0 intelligent_curie[133353]: }
Oct 11 03:44:14 compute-0 systemd[1]: libpod-6c24118f76107df5f963a9d22ad720da5ca32705f84f3058e2e684ae51377395.scope: Deactivated successfully.
Oct 11 03:44:14 compute-0 podman[133337]: 2025-10-11 03:44:14.333067298 +0000 UTC m=+0.989262294 container died 6c24118f76107df5f963a9d22ad720da5ca32705f84f3058e2e684ae51377395 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_curie, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 03:44:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:44:14 compute-0 ceph-mon[74273]: pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-97cb653430541c568aa40908defa9ec2cda64a9a165a3ee10e00870166f50c09-merged.mount: Deactivated successfully.
Oct 11 03:44:14 compute-0 podman[133337]: 2025-10-11 03:44:14.601837195 +0000 UTC m=+1.258032181 container remove 6c24118f76107df5f963a9d22ad720da5ca32705f84f3058e2e684ae51377395 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:44:14 compute-0 sudo[133232]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:14 compute-0 systemd[1]: libpod-conmon-6c24118f76107df5f963a9d22ad720da5ca32705f84f3058e2e684ae51377395.scope: Deactivated successfully.
Oct 11 03:44:14 compute-0 sudo[133528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:44:14 compute-0 sudo[133528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:44:14 compute-0 sudo[133528]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:14 compute-0 sudo[133554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:44:14 compute-0 sudo[133554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:44:14 compute-0 sudo[133554]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:14 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:14 compute-0 sudo[133579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:44:14 compute-0 sudo[133579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:44:14 compute-0 sudo[133579]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:14 compute-0 sudo[133604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 03:44:14 compute-0 sudo[133604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:44:14 compute-0 python3.9[133530]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:44:15 compute-0 podman[133675]: 2025-10-11 03:44:15.309872008 +0000 UTC m=+0.051012769 container create 2ea388186144ca360fbc99df36e4c636969d4a3b3f6a91326736cc9cb37fcf47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_banzai, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:44:15 compute-0 systemd[1]: Started libpod-conmon-2ea388186144ca360fbc99df36e4c636969d4a3b3f6a91326736cc9cb37fcf47.scope.
Oct 11 03:44:15 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:44:15 compute-0 podman[133675]: 2025-10-11 03:44:15.389570509 +0000 UTC m=+0.130711370 container init 2ea388186144ca360fbc99df36e4c636969d4a3b3f6a91326736cc9cb37fcf47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_banzai, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 03:44:15 compute-0 podman[133675]: 2025-10-11 03:44:15.296291662 +0000 UTC m=+0.037432443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:44:15 compute-0 podman[133675]: 2025-10-11 03:44:15.398192454 +0000 UTC m=+0.139333255 container start 2ea388186144ca360fbc99df36e4c636969d4a3b3f6a91326736cc9cb37fcf47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 03:44:15 compute-0 podman[133675]: 2025-10-11 03:44:15.401867808 +0000 UTC m=+0.143008619 container attach 2ea388186144ca360fbc99df36e4c636969d4a3b3f6a91326736cc9cb37fcf47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_banzai, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:44:15 compute-0 angry_banzai[133713]: 167 167
Oct 11 03:44:15 compute-0 podman[133675]: 2025-10-11 03:44:15.404438511 +0000 UTC m=+0.145579312 container died 2ea388186144ca360fbc99df36e4c636969d4a3b3f6a91326736cc9cb37fcf47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_banzai, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 03:44:15 compute-0 systemd[1]: libpod-2ea388186144ca360fbc99df36e4c636969d4a3b3f6a91326736cc9cb37fcf47.scope: Deactivated successfully.
Oct 11 03:44:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1e7052af2f82087383e349c0391a0b6107ac2c32234ebf46a3ab473395e4e11-merged.mount: Deactivated successfully.
Oct 11 03:44:15 compute-0 podman[133675]: 2025-10-11 03:44:15.451361723 +0000 UTC m=+0.192502524 container remove 2ea388186144ca360fbc99df36e4c636969d4a3b3f6a91326736cc9cb37fcf47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 03:44:15 compute-0 systemd[1]: libpod-conmon-2ea388186144ca360fbc99df36e4c636969d4a3b3f6a91326736cc9cb37fcf47.scope: Deactivated successfully.
Oct 11 03:44:15 compute-0 podman[133737]: 2025-10-11 03:44:15.689024647 +0000 UTC m=+0.062643608 container create 7cd7bc01fdcfa9cdf98b4b18235b2acbdf57a077bbeba014425d71645a569f73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jackson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 03:44:15 compute-0 systemd[1]: Started libpod-conmon-7cd7bc01fdcfa9cdf98b4b18235b2acbdf57a077bbeba014425d71645a569f73.scope.
Oct 11 03:44:15 compute-0 podman[133737]: 2025-10-11 03:44:15.669412391 +0000 UTC m=+0.043031392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:44:15 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:44:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4b2e98b5a946f258b0eff7e554effb2092ab0df56cca09e5d20ae04838b56b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:44:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4b2e98b5a946f258b0eff7e554effb2092ab0df56cca09e5d20ae04838b56b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:44:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4b2e98b5a946f258b0eff7e554effb2092ab0df56cca09e5d20ae04838b56b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:44:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4b2e98b5a946f258b0eff7e554effb2092ab0df56cca09e5d20ae04838b56b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:44:15 compute-0 podman[133737]: 2025-10-11 03:44:15.803940328 +0000 UTC m=+0.177559359 container init 7cd7bc01fdcfa9cdf98b4b18235b2acbdf57a077bbeba014425d71645a569f73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jackson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 03:44:15 compute-0 podman[133737]: 2025-10-11 03:44:15.822011281 +0000 UTC m=+0.195630262 container start 7cd7bc01fdcfa9cdf98b4b18235b2acbdf57a077bbeba014425d71645a569f73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jackson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 03:44:15 compute-0 podman[133737]: 2025-10-11 03:44:15.826965372 +0000 UTC m=+0.200584363 container attach 7cd7bc01fdcfa9cdf98b4b18235b2acbdf57a077bbeba014425d71645a569f73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jackson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 03:44:16 compute-0 ceph-mon[74273]: pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:16 compute-0 sudo[133891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdjwwgpupkiliyujgwlezunpevcanssb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154256.0767977-50-159165915032993/AnsiballZ_file.py'
Oct 11 03:44:16 compute-0 sudo[133891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:16 compute-0 python3.9[133896]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:44:16 compute-0 sudo[133891]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:16 compute-0 elastic_jackson[133753]: {
Oct 11 03:44:16 compute-0 elastic_jackson[133753]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 03:44:16 compute-0 elastic_jackson[133753]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:44:16 compute-0 elastic_jackson[133753]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 03:44:16 compute-0 elastic_jackson[133753]:         "osd_id": 1,
Oct 11 03:44:16 compute-0 elastic_jackson[133753]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:44:16 compute-0 elastic_jackson[133753]:         "type": "bluestore"
Oct 11 03:44:16 compute-0 elastic_jackson[133753]:     },
Oct 11 03:44:16 compute-0 elastic_jackson[133753]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 03:44:16 compute-0 elastic_jackson[133753]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:44:16 compute-0 elastic_jackson[133753]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 03:44:16 compute-0 elastic_jackson[133753]:         "osd_id": 2,
Oct 11 03:44:16 compute-0 elastic_jackson[133753]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:44:16 compute-0 elastic_jackson[133753]:         "type": "bluestore"
Oct 11 03:44:16 compute-0 elastic_jackson[133753]:     },
Oct 11 03:44:16 compute-0 elastic_jackson[133753]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 03:44:16 compute-0 elastic_jackson[133753]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:44:16 compute-0 elastic_jackson[133753]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 03:44:16 compute-0 elastic_jackson[133753]:         "osd_id": 0,
Oct 11 03:44:16 compute-0 elastic_jackson[133753]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:44:16 compute-0 elastic_jackson[133753]:         "type": "bluestore"
Oct 11 03:44:16 compute-0 elastic_jackson[133753]:     }
Oct 11 03:44:16 compute-0 elastic_jackson[133753]: }
Oct 11 03:44:16 compute-0 systemd[1]: libpod-7cd7bc01fdcfa9cdf98b4b18235b2acbdf57a077bbeba014425d71645a569f73.scope: Deactivated successfully.
Oct 11 03:44:16 compute-0 systemd[1]: libpod-7cd7bc01fdcfa9cdf98b4b18235b2acbdf57a077bbeba014425d71645a569f73.scope: Consumed 1.004s CPU time.
Oct 11 03:44:16 compute-0 podman[133737]: 2025-10-11 03:44:16.816810302 +0000 UTC m=+1.190429293 container died 7cd7bc01fdcfa9cdf98b4b18235b2acbdf57a077bbeba014425d71645a569f73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jackson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 03:44:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4b2e98b5a946f258b0eff7e554effb2092ab0df56cca09e5d20ae04838b56b2-merged.mount: Deactivated successfully.
Oct 11 03:44:16 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:16 compute-0 podman[133737]: 2025-10-11 03:44:16.882673611 +0000 UTC m=+1.256292582 container remove 7cd7bc01fdcfa9cdf98b4b18235b2acbdf57a077bbeba014425d71645a569f73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:44:16 compute-0 systemd[1]: libpod-conmon-7cd7bc01fdcfa9cdf98b4b18235b2acbdf57a077bbeba014425d71645a569f73.scope: Deactivated successfully.
Oct 11 03:44:16 compute-0 sudo[133604]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:44:16 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:44:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:44:16 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:44:16 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 93046184-9876-4cba-8565-8c9bc6fc1bbf does not exist
Oct 11 03:44:16 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 2ac3ec58-723f-4371-8e88-c78c5cead210 does not exist
Oct 11 03:44:17 compute-0 sudo[133981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:44:17 compute-0 sudo[133981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:44:17 compute-0 sudo[133981]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:17 compute-0 sudo[134033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 03:44:17 compute-0 sudo[134033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:44:17 compute-0 sudo[134033]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:17 compute-0 sudo[134126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfrhzdjetswgocicdsrrlbxjgihudliy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154256.9127498-50-203643511556477/AnsiballZ_file.py'
Oct 11 03:44:17 compute-0 sudo[134126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:17 compute-0 python3.9[134128]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:44:17 compute-0 sudo[134126]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:17 compute-0 ceph-mon[74273]: pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:17 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:44:17 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:44:18 compute-0 sudo[134278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgsyymzjlfmvsxdaelkdycfqgtxlirnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154257.5998557-65-62269962227793/AnsiballZ_stat.py'
Oct 11 03:44:18 compute-0 sudo[134278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:18 compute-0 python3.9[134280]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:44:18 compute-0 sudo[134278]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:18 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:18 compute-0 sudo[134401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhxpnnhydptqyondoyxtugdhkuvaxtii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154257.5998557-65-62269962227793/AnsiballZ_copy.py'
Oct 11 03:44:18 compute-0 sudo[134401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:19 compute-0 python3.9[134403]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154257.5998557-65-62269962227793/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=d5696c54a39dcb59ac916a1797fec08ae30fa209 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:44:19 compute-0 sudo[134401]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:44:19 compute-0 sudo[134553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svijbukvcysgzqpilpkdbhjehyfieila ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154259.4000337-65-107763815477421/AnsiballZ_stat.py'
Oct 11 03:44:19 compute-0 sudo[134553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:19 compute-0 python3.9[134555]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:44:19 compute-0 ceph-mon[74273]: pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:19 compute-0 sudo[134553]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:20 compute-0 sudo[134676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sizvgxwcqmbnyqwgmqdsnzmhkkrddsot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154259.4000337-65-107763815477421/AnsiballZ_copy.py'
Oct 11 03:44:20 compute-0 sudo[134676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:20 compute-0 python3.9[134678]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154259.4000337-65-107763815477421/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=c6b383be76e8f46f684d42936f59097859908b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:44:20 compute-0 sudo[134676]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_03:44:20
Oct 11 03:44:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 03:44:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 03:44:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'images', 'vms', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'volumes', 'default.rgw.log']
Oct 11 03:44:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 03:44:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:44:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:44:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:44:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:44:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:44:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:44:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 03:44:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:44:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 03:44:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:44:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:44:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:44:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:44:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:44:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:44:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:44:20 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:20 compute-0 sudo[134828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhuqkdyvznyuveruszjcljcfidaveoek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154260.6701093-65-184285707705842/AnsiballZ_stat.py'
Oct 11 03:44:20 compute-0 sudo[134828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:21 compute-0 python3.9[134830]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:44:21 compute-0 sudo[134828]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:21 compute-0 rsyslogd[1005]: imjournal: 1711 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 11 03:44:21 compute-0 sudo[134951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neawbneocaeaqqvemqqwgpohuzijddln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154260.6701093-65-184285707705842/AnsiballZ_copy.py'
Oct 11 03:44:21 compute-0 sudo[134951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:21 compute-0 python3.9[134953]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154260.6701093-65-184285707705842/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=78a870eb994f33707482a02a0fc570f759970a65 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:44:21 compute-0 sudo[134951]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:21 compute-0 ceph-mon[74273]: pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:22 compute-0 sudo[135103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjeawyhhnyswolamqpyrnhprxouwxviz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154262.110808-109-39069865118964/AnsiballZ_file.py'
Oct 11 03:44:22 compute-0 sudo[135103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:22 compute-0 python3.9[135105]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:44:22 compute-0 sudo[135103]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:22 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:23 compute-0 sudo[135255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mumrwczpuitibuyqdyydeniruvgblezx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154262.935704-109-54818365637487/AnsiballZ_file.py'
Oct 11 03:44:23 compute-0 sudo[135255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:23 compute-0 python3.9[135257]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:44:23 compute-0 sudo[135255]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:23 compute-0 ceph-mon[74273]: pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:24 compute-0 sudo[135407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alqxzuryvwbcxbwvbsrpurespbexluhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154263.7342541-124-67982573962684/AnsiballZ_stat.py'
Oct 11 03:44:24 compute-0 sudo[135407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:24 compute-0 python3.9[135409]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:44:24 compute-0 sudo[135407]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:44:24 compute-0 sudo[135530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zngloxgecnbjernszjuwcpgcdwsdtgkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154263.7342541-124-67982573962684/AnsiballZ_copy.py'
Oct 11 03:44:24 compute-0 sudo[135530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:24 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:24 compute-0 python3.9[135532]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154263.7342541-124-67982573962684/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=32974676e25f1be592f0378f22045ce758c9f3d4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:44:24 compute-0 sudo[135530]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:25 compute-0 sudo[135682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcjfijujnsqqvndywlicmjvhifxbyigr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154265.1174283-124-164306788538917/AnsiballZ_stat.py'
Oct 11 03:44:25 compute-0 sudo[135682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:25 compute-0 python3.9[135684]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:44:25 compute-0 sudo[135682]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:25 compute-0 ceph-mon[74273]: pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:26 compute-0 sudo[135805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twvxecunfvfmbypxoeyqvjdzgsyyylja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154265.1174283-124-164306788538917/AnsiballZ_copy.py'
Oct 11 03:44:26 compute-0 sudo[135805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:26 compute-0 python3.9[135807]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154265.1174283-124-164306788538917/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=257e58b2fa33c9fdb482c6978fd1474b0c483fd5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:44:26 compute-0 sudo[135805]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:26 compute-0 sudo[135957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysylelhrjlrlpevpxisjbvhnfdsrpkiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154266.4147487-124-1093532854118/AnsiballZ_stat.py'
Oct 11 03:44:26 compute-0 sudo[135957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:26 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:26 compute-0 python3.9[135959]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:44:26 compute-0 sudo[135957]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:27 compute-0 sudo[136080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwinxkbqxubblejmxfqoagqdamfgyhqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154266.4147487-124-1093532854118/AnsiballZ_copy.py'
Oct 11 03:44:27 compute-0 sudo[136080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:27 compute-0 python3.9[136082]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154266.4147487-124-1093532854118/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=41d1ed627ed4673ba9b4ed70fe2d2094968af0de backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:44:27 compute-0 sudo[136080]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:27 compute-0 ceph-mon[74273]: pgmap v352: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:28 compute-0 sudo[136232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bznlzqmlvreghjwgynsfeahysxkzqrfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154267.846215-168-213901808435270/AnsiballZ_file.py'
Oct 11 03:44:28 compute-0 sudo[136232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:28 compute-0 python3.9[136234]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:44:28 compute-0 sudo[136232]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:28 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:28 compute-0 sudo[136384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awplwsgrfftxrwickjfkmeaezdjtfiqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154268.5833538-168-156360090965747/AnsiballZ_file.py'
Oct 11 03:44:28 compute-0 sudo[136384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:29 compute-0 python3.9[136386]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:44:29 compute-0 sudo[136384]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:44:29 compute-0 sudo[136536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkpnrlmzmnzhdfyhfbmcbfyweephkfnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154269.3902667-183-141958616525407/AnsiballZ_stat.py'
Oct 11 03:44:29 compute-0 sudo[136536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:29 compute-0 python3.9[136538]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:44:29 compute-0 sudo[136536]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:29 compute-0 ceph-mon[74273]: pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:30 compute-0 sudo[136659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaazkyelrcnmjgzhhzuscgnktxhpwbwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154269.3902667-183-141958616525407/AnsiballZ_copy.py'
Oct 11 03:44:30 compute-0 sudo[136659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:30 compute-0 python3.9[136661]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154269.3902667-183-141958616525407/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=f9ffeb603ba01cefb7a313910fe25f3ee91fc734 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:44:30 compute-0 sudo[136659]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 03:44:30 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:44:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 03:44:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:44:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:44:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:44:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:44:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:44:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:44:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:44:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:44:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:44:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 03:44:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:44:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:44:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:44:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 03:44:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:44:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 03:44:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:44:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:44:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:44:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 03:44:30 compute-0 sudo[136811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgghggtitbnkwhfydpkvhjaucyyenqow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154270.6455262-183-95053821865514/AnsiballZ_stat.py'
Oct 11 03:44:30 compute-0 sudo[136811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:31 compute-0 python3.9[136813]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:44:31 compute-0 sudo[136811]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:31 compute-0 sudo[136934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omfpjuludpjkhsyzzfaqnjserzysmhww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154270.6455262-183-95053821865514/AnsiballZ_copy.py'
Oct 11 03:44:31 compute-0 sudo[136934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:31 compute-0 python3.9[136936]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154270.6455262-183-95053821865514/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=257e58b2fa33c9fdb482c6978fd1474b0c483fd5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:44:31 compute-0 sudo[136934]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:32 compute-0 ceph-mon[74273]: pgmap v354: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:32 compute-0 sudo[137086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lffmwhxlccqjzyaxxfvzfvznasyfjueg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154271.955194-183-193244732222404/AnsiballZ_stat.py'
Oct 11 03:44:32 compute-0 sudo[137086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:32 compute-0 python3.9[137088]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:44:32 compute-0 sudo[137086]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:32 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:32 compute-0 sudo[137209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbsahtruebeoupolpnmskwnavofzfxzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154271.955194-183-193244732222404/AnsiballZ_copy.py'
Oct 11 03:44:32 compute-0 sudo[137209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:33 compute-0 python3.9[137211]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154271.955194-183-193244732222404/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=a558b17d8c7256da766598faaf327f34d495386d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:44:33 compute-0 sudo[137209]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:34 compute-0 ceph-mon[74273]: pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:34 compute-0 sudo[137361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgovipbkezddivawscdkvowunjgnocec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154273.8547924-243-186396511919089/AnsiballZ_file.py'
Oct 11 03:44:34 compute-0 sudo[137361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:34 compute-0 python3.9[137363]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:44:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:44:34 compute-0 sudo[137361]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:34 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:34 compute-0 sudo[137513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxkqimhayzwuoqbgiccnuvsjssdkgrpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154274.580691-251-128488880188788/AnsiballZ_stat.py'
Oct 11 03:44:34 compute-0 sudo[137513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:35 compute-0 python3.9[137515]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:44:35 compute-0 sudo[137513]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:35 compute-0 sudo[137636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxvpxquhixtuxbjskpqvrabcggsaquil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154274.580691-251-128488880188788/AnsiballZ_copy.py'
Oct 11 03:44:35 compute-0 sudo[137636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:35 compute-0 python3.9[137638]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154274.580691-251-128488880188788/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8245a904210c3962a63879d763ded8fcd136bfb2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:44:35 compute-0 sudo[137636]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:36 compute-0 ceph-mon[74273]: pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:36 compute-0 sudo[137788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aygldcgbfcsvrblxqelzhagvrnnggvyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154275.9567888-267-232288555302023/AnsiballZ_file.py'
Oct 11 03:44:36 compute-0 sudo[137788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:36 compute-0 python3.9[137790]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:44:36 compute-0 sudo[137788]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:36 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:37 compute-0 sudo[137940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrzkuittmgnubobzpbsbhtkbygaajqqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154276.7579286-275-234219305930840/AnsiballZ_stat.py'
Oct 11 03:44:37 compute-0 sudo[137940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:37 compute-0 python3.9[137942]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:44:37 compute-0 sudo[137940]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:37 compute-0 sudo[138063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdiitmevabrpgkugzucwpmdhwkdatwah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154276.7579286-275-234219305930840/AnsiballZ_copy.py'
Oct 11 03:44:37 compute-0 sudo[138063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:37 compute-0 python3.9[138065]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154276.7579286-275-234219305930840/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8245a904210c3962a63879d763ded8fcd136bfb2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:44:38 compute-0 sudo[138063]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:38 compute-0 ceph-mon[74273]: pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:38 compute-0 sudo[138215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxpiqfybfhbzahtckraysbjmuwflncup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154278.2232773-291-42984338030728/AnsiballZ_file.py'
Oct 11 03:44:38 compute-0 sudo[138215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:38 compute-0 python3.9[138217]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:44:38 compute-0 sudo[138215]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:38 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:39 compute-0 sudo[138368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwkxxieroqlzfceyzogeumhadtzuurjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154279.0489383-299-70993509751182/AnsiballZ_stat.py'
Oct 11 03:44:39 compute-0 sudo[138368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:44:39 compute-0 python3.9[138370]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:44:39 compute-0 sudo[138368]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:40 compute-0 ceph-mon[74273]: pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:40 compute-0 sudo[138491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrsaedbtgeerasszgpfkfbfrgjdxpbxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154279.0489383-299-70993509751182/AnsiballZ_copy.py'
Oct 11 03:44:40 compute-0 sudo[138491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:40 compute-0 python3.9[138493]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154279.0489383-299-70993509751182/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8245a904210c3962a63879d763ded8fcd136bfb2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:44:40 compute-0 sudo[138491]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:40 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:40 compute-0 sudo[138643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tylvkihwvdhpjyvyuabjhxvifkzupiua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154280.6008842-315-259309859023922/AnsiballZ_file.py'
Oct 11 03:44:40 compute-0 sudo[138643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:41 compute-0 python3.9[138645]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:44:41 compute-0 sudo[138643]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:41 compute-0 sudo[138795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmglhirjskxbuubrgfylegazxtelfzft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154281.3303714-323-34076269100666/AnsiballZ_stat.py'
Oct 11 03:44:41 compute-0 sudo[138795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:41 compute-0 python3.9[138797]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:44:41 compute-0 sudo[138795]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:42 compute-0 ceph-mon[74273]: pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:42 compute-0 sudo[138918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbsqpcwejlqigxiimqqgblcvoeldxltm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154281.3303714-323-34076269100666/AnsiballZ_copy.py'
Oct 11 03:44:42 compute-0 sudo[138918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:42 compute-0 python3.9[138920]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154281.3303714-323-34076269100666/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8245a904210c3962a63879d763ded8fcd136bfb2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:44:42 compute-0 sudo[138918]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:42 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:43 compute-0 sudo[139070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inkjrjqwymlhcvahbmupsfbmqmetuhcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154282.852197-339-89204415912069/AnsiballZ_file.py'
Oct 11 03:44:43 compute-0 sudo[139070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:43 compute-0 python3.9[139072]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:44:43 compute-0 sudo[139070]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:43 compute-0 sudo[139222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfvdpmwyuecyptlgnyrzqsqjntvkthzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154283.624914-347-254310360256354/AnsiballZ_stat.py'
Oct 11 03:44:43 compute-0 sudo[139222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:44 compute-0 ceph-mon[74273]: pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:44 compute-0 python3.9[139224]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:44:44 compute-0 sudo[139222]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:44:44 compute-0 sudo[139345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhhaybfxqdqguhoywomggbsolnmtucwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154283.624914-347-254310360256354/AnsiballZ_copy.py'
Oct 11 03:44:44 compute-0 sudo[139345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:44 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:44 compute-0 python3.9[139347]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154283.624914-347-254310360256354/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8245a904210c3962a63879d763ded8fcd136bfb2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:44:44 compute-0 sudo[139345]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:45 compute-0 sudo[139497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xonaagpxgejxcmottwnraybipzewqywt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154285.2004242-363-167199001702221/AnsiballZ_file.py'
Oct 11 03:44:45 compute-0 sudo[139497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:45 compute-0 python3.9[139499]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:44:45 compute-0 sudo[139497]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:46 compute-0 ceph-mon[74273]: pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:46 compute-0 sudo[139649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skluymlfavejolhvgpajkwcayoonxiau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154285.9430792-371-104520948201069/AnsiballZ_stat.py'
Oct 11 03:44:46 compute-0 sudo[139649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:46 compute-0 python3.9[139651]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:44:46 compute-0 sudo[139649]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:46 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:46 compute-0 sudo[139772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfbyfkwtsmcbbkugsxkagukjnhdtwltk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154285.9430792-371-104520948201069/AnsiballZ_copy.py'
Oct 11 03:44:46 compute-0 sudo[139772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:47 compute-0 python3.9[139774]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154285.9430792-371-104520948201069/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8245a904210c3962a63879d763ded8fcd136bfb2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:44:47 compute-0 sudo[139772]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:47 compute-0 sshd-session[133361]: Connection closed by 192.168.122.30 port 37938
Oct 11 03:44:47 compute-0 sshd-session[133358]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:44:47 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Oct 11 03:44:47 compute-0 systemd[1]: session-44.scope: Consumed 28.071s CPU time.
Oct 11 03:44:47 compute-0 systemd-logind[820]: Session 44 logged out. Waiting for processes to exit.
Oct 11 03:44:47 compute-0 systemd-logind[820]: Removed session 44.
Oct 11 03:44:48 compute-0 ceph-mon[74273]: pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:48 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:44:50 compute-0 ceph-mon[74273]: pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:44:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:44:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:44:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:44:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:44:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:44:50 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:52 compute-0 ceph-mon[74273]: pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:52 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:53 compute-0 sshd-session[139799]: Accepted publickey for zuul from 192.168.122.30 port 48720 ssh2: ECDSA SHA256:qo9+RMabHfLAOt2q/80W97JXaZUdeUCREBuTRaqgxBY
Oct 11 03:44:53 compute-0 systemd-logind[820]: New session 45 of user zuul.
Oct 11 03:44:53 compute-0 systemd[1]: Started Session 45 of User zuul.
Oct 11 03:44:53 compute-0 sshd-session[139799]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:44:54 compute-0 sudo[139952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwiykiyavzasmqwtvexldztncantjiwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154293.4244926-22-45351897166303/AnsiballZ_file.py'
Oct 11 03:44:54 compute-0 sudo[139952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:54 compute-0 ceph-mon[74273]: pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:54 compute-0 python3.9[139954]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:44:54 compute-0 sudo[139952]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:44:54 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:55 compute-0 sudo[140104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqvkeljlnsgvmdcorblgplxytyxxjipb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154294.4907424-34-36070341832998/AnsiballZ_stat.py'
Oct 11 03:44:55 compute-0 sudo[140104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:55 compute-0 python3.9[140106]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:44:55 compute-0 sudo[140104]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:55 compute-0 sudo[140227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnxwieypsaxmxglbfrbpysfkyigvhksh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154294.4907424-34-36070341832998/AnsiballZ_copy.py'
Oct 11 03:44:55 compute-0 sudo[140227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:56 compute-0 python3.9[140229]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760154294.4907424-34-36070341832998/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=5ffab1b62c7b96c69504627db7d5c17b04f06e25 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:44:56 compute-0 sudo[140227]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:56 compute-0 ceph-mon[74273]: pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:56 compute-0 sudo[140379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuaauqtkfaioxirqfoiayaeifnaigskb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154296.2180812-34-77090659102372/AnsiballZ_stat.py'
Oct 11 03:44:56 compute-0 sudo[140379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:56 compute-0 python3.9[140381]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:44:56 compute-0 sudo[140379]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:56 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:57 compute-0 sudo[140502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsdqktfaoliwksqyqzezmnwcerjrroof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154296.2180812-34-77090659102372/AnsiballZ_copy.py'
Oct 11 03:44:57 compute-0 sudo[140502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:44:57 compute-0 python3.9[140504]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760154296.2180812-34-77090659102372/.source.conf _original_basename=ceph.conf follow=False checksum=918cb73c7acfe55ba8e9160812037e6f722da776 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:44:57 compute-0 sudo[140502]: pam_unix(sudo:session): session closed for user root
Oct 11 03:44:57 compute-0 sshd-session[139802]: Connection closed by 192.168.122.30 port 48720
Oct 11 03:44:57 compute-0 sshd-session[139799]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:44:57 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Oct 11 03:44:57 compute-0 systemd[1]: session-45.scope: Consumed 3.169s CPU time.
Oct 11 03:44:57 compute-0 systemd-logind[820]: Session 45 logged out. Waiting for processes to exit.
Oct 11 03:44:57 compute-0 systemd-logind[820]: Removed session 45.
Oct 11 03:44:58 compute-0 ceph-mon[74273]: pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:58 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:44:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:45:00 compute-0 ceph-mon[74273]: pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:00 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:02 compute-0 ceph-mon[74273]: pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:02 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:03 compute-0 sshd-session[140529]: Accepted publickey for zuul from 192.168.122.30 port 47958 ssh2: ECDSA SHA256:qo9+RMabHfLAOt2q/80W97JXaZUdeUCREBuTRaqgxBY
Oct 11 03:45:03 compute-0 systemd-logind[820]: New session 46 of user zuul.
Oct 11 03:45:03 compute-0 systemd[1]: Started Session 46 of User zuul.
Oct 11 03:45:03 compute-0 sshd-session[140529]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:45:04 compute-0 ceph-mon[74273]: pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:04 compute-0 python3.9[140682]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:45:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:45:04 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:05 compute-0 sudo[140836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soxhhycnzxlbdtqenqdljvjgfrvklctt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154304.7463794-34-141521944478104/AnsiballZ_file.py'
Oct 11 03:45:05 compute-0 sudo[140836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:05 compute-0 python3.9[140838]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:45:05 compute-0 sudo[140836]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:06 compute-0 sudo[140988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txhkkftkmvkbqiczlmodeolkvzwyisnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154305.6760964-34-238072514382999/AnsiballZ_file.py'
Oct 11 03:45:06 compute-0 sudo[140988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:06 compute-0 ceph-mon[74273]: pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:06 compute-0 python3.9[140990]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:45:06 compute-0 sudo[140988]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:06 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:07 compute-0 python3.9[141140]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:45:07 compute-0 sudo[141290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cadidwdvxdidxuqgqrguhxqeannzmxtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154307.1747699-57-250001354700991/AnsiballZ_seboolean.py'
Oct 11 03:45:07 compute-0 sudo[141290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:07 compute-0 python3.9[141292]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 11 03:45:08 compute-0 ceph-mon[74273]: pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:08 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:45:10 compute-0 sudo[141290]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:10 compute-0 ceph-mon[74273]: pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:10 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:10 compute-0 sudo[141446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuwhonqdunbchbekvsupkdaztwjrywqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154310.5743704-67-217875498185521/AnsiballZ_setup.py'
Oct 11 03:45:10 compute-0 dbus-broker-launch[810]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Oct 11 03:45:10 compute-0 sudo[141446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:11 compute-0 python3.9[141448]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 03:45:11 compute-0 sudo[141446]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:12 compute-0 sudo[141530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqfowoakkfmfkzyzctmhaknyhyyfmdxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154310.5743704-67-217875498185521/AnsiballZ_dnf.py'
Oct 11 03:45:12 compute-0 sudo[141530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:12 compute-0 python3.9[141532]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 03:45:12 compute-0 ceph-mon[74273]: pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:12 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:13 compute-0 sudo[141530]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:14 compute-0 sudo[141683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pexxuzcwbmffypjljnhkenurknmyzcgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154313.7492802-79-164845609794874/AnsiballZ_systemd.py'
Oct 11 03:45:14 compute-0 sudo[141683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:45:14.429062) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154314429091, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1686, "num_deletes": 252, "total_data_size": 2488657, "memory_usage": 2534536, "flush_reason": "Manual Compaction"}
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154314441232, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1440765, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7312, "largest_seqno": 8997, "table_properties": {"data_size": 1435281, "index_size": 2496, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15687, "raw_average_key_size": 20, "raw_value_size": 1422375, "raw_average_value_size": 1866, "num_data_blocks": 118, "num_entries": 762, "num_filter_entries": 762, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760154149, "oldest_key_time": 1760154149, "file_creation_time": 1760154314, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 12242 microseconds, and 5320 cpu microseconds.
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:45:14.441295) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1440765 bytes OK
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:45:14.441322) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:45:14.443230) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:45:14.443259) EVENT_LOG_v1 {"time_micros": 1760154314443250, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:45:14.443284) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2481211, prev total WAL file size 2481211, number of live WAL files 2.
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:45:14.444534) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323533' seq:0, type:0; will stop at (end)
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1406KB)], [20(7226KB)]
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154314444591, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8840383, "oldest_snapshot_seqno": -1}
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3372 keys, 6944348 bytes, temperature: kUnknown
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154314518411, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6944348, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6918483, "index_size": 16348, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 80601, "raw_average_key_size": 23, "raw_value_size": 6854239, "raw_average_value_size": 2032, "num_data_blocks": 725, "num_entries": 3372, "num_filter_entries": 3372, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153731, "oldest_key_time": 0, "file_creation_time": 1760154314, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:45:14.518743) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6944348 bytes
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:45:14.520227) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.6 rd, 93.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 7.1 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(11.0) write-amplify(4.8) OK, records in: 3811, records dropped: 439 output_compression: NoCompression
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:45:14.520267) EVENT_LOG_v1 {"time_micros": 1760154314520248, "job": 6, "event": "compaction_finished", "compaction_time_micros": 73935, "compaction_time_cpu_micros": 23102, "output_level": 6, "num_output_files": 1, "total_output_size": 6944348, "num_input_records": 3811, "num_output_records": 3372, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154314521405, "job": 6, "event": "table_file_deletion", "file_number": 22}
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154314523952, "job": 6, "event": "table_file_deletion", "file_number": 20}
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:45:14.444449) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:45:14.524010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:45:14.524015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:45:14.524017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:45:14.524019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:45:14 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:45:14.524020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:45:14 compute-0 ceph-mon[74273]: pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:14 compute-0 python3.9[141685]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 03:45:14 compute-0 sudo[141683]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:14 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:15 compute-0 sudo[141838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opjdnqbyydwrhfxlnuqvgvqfxohxegtx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760154315.040655-87-31651814828571/AnsiballZ_edpm_nftables_snippet.py'
Oct 11 03:45:15 compute-0 sudo[141838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:15 compute-0 python3[141840]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct 11 03:45:15 compute-0 sudo[141838]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:15 compute-0 ceph-mon[74273]: pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:16 compute-0 sudo[141990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnpapcjfywqvodrdqfxyjmxagqxeskwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154315.9991343-96-219511934026592/AnsiballZ_file.py'
Oct 11 03:45:16 compute-0 sudo[141990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:16 compute-0 python3.9[141992]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:45:16 compute-0 sudo[141990]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:16 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:17 compute-0 sudo[142100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:45:17 compute-0 sudo[142100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:45:17 compute-0 sudo[142100]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:17 compute-0 sudo[142174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvmncwraudsyfjbnwstrwzubwdgpfutp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154316.7197676-104-224410303778132/AnsiballZ_stat.py'
Oct 11 03:45:17 compute-0 sudo[142174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:17 compute-0 sudo[142161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:45:17 compute-0 sudo[142161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:45:17 compute-0 sudo[142161]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:17 compute-0 sudo[142195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:45:17 compute-0 sudo[142195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:45:17 compute-0 sudo[142195]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:17 compute-0 sudo[142220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 03:45:17 compute-0 sudo[142220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:45:17 compute-0 python3.9[142192]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:45:17 compute-0 sudo[142174]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:17 compute-0 sudo[142337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceclvtdenovgaiwbkofyxtwvugmnsarc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154316.7197676-104-224410303778132/AnsiballZ_file.py'
Oct 11 03:45:17 compute-0 sudo[142337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:17 compute-0 sudo[142220]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:17 compute-0 python3.9[142341]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:45:17 compute-0 sudo[142337]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:45:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:45:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 03:45:17 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:45:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 03:45:17 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:45:17 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 2ec2191a-00c2-463a-8810-cce0c6663429 does not exist
Oct 11 03:45:17 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev a0e675af-c38a-43ed-9a62-27878f0de367 does not exist
Oct 11 03:45:17 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev b551833b-5773-475e-b7b7-ab4c20e3e304 does not exist
Oct 11 03:45:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 03:45:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:45:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 03:45:17 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:45:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:45:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:45:17 compute-0 sudo[142358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:45:17 compute-0 sudo[142358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:45:17 compute-0 sudo[142358]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:17 compute-0 ceph-mon[74273]: pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:17 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:45:17 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:45:17 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:45:17 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:45:17 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:45:17 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:45:18 compute-0 sudo[142403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:45:18 compute-0 sudo[142403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:45:18 compute-0 sudo[142403]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:18 compute-0 sudo[142451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:45:18 compute-0 sudo[142451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:45:18 compute-0 sudo[142451]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:18 compute-0 sudo[142505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 03:45:18 compute-0 sudo[142505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:45:18 compute-0 sudo[142615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nntpzpzmoihsdvluauttqzkwphlifvve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154318.011321-116-16994598936375/AnsiballZ_stat.py'
Oct 11 03:45:18 compute-0 sudo[142615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:18 compute-0 python3.9[142619]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:45:18 compute-0 sudo[142615]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:18 compute-0 podman[142645]: 2025-10-11 03:45:18.550026785 +0000 UTC m=+0.066381029 container create adb9b937dd9b32feff26f31e51568412a15c6bb62c7933528272b45add0dcb56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:45:18 compute-0 podman[142645]: 2025-10-11 03:45:18.521882495 +0000 UTC m=+0.038236799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:45:18 compute-0 systemd[1]: Started libpod-conmon-adb9b937dd9b32feff26f31e51568412a15c6bb62c7933528272b45add0dcb56.scope.
Oct 11 03:45:18 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:45:18 compute-0 podman[142645]: 2025-10-11 03:45:18.684509399 +0000 UTC m=+0.200863653 container init adb9b937dd9b32feff26f31e51568412a15c6bb62c7933528272b45add0dcb56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pascal, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 03:45:18 compute-0 podman[142645]: 2025-10-11 03:45:18.693563969 +0000 UTC m=+0.209918163 container start adb9b937dd9b32feff26f31e51568412a15c6bb62c7933528272b45add0dcb56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 03:45:18 compute-0 podman[142645]: 2025-10-11 03:45:18.698194208 +0000 UTC m=+0.214548442 container attach adb9b937dd9b32feff26f31e51568412a15c6bb62c7933528272b45add0dcb56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 03:45:18 compute-0 gracious_pascal[142687]: 167 167
Oct 11 03:45:18 compute-0 systemd[1]: libpod-adb9b937dd9b32feff26f31e51568412a15c6bb62c7933528272b45add0dcb56.scope: Deactivated successfully.
Oct 11 03:45:18 compute-0 podman[142645]: 2025-10-11 03:45:18.703771502 +0000 UTC m=+0.220125726 container died adb9b937dd9b32feff26f31e51568412a15c6bb62c7933528272b45add0dcb56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pascal, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:45:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-e666be36e5720e6569d0dbb58bd10ecb4008ff0477a1133ed1f31c09cc947af1-merged.mount: Deactivated successfully.
Oct 11 03:45:18 compute-0 podman[142645]: 2025-10-11 03:45:18.768037432 +0000 UTC m=+0.284391646 container remove adb9b937dd9b32feff26f31e51568412a15c6bb62c7933528272b45add0dcb56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:45:18 compute-0 systemd[1]: libpod-conmon-adb9b937dd9b32feff26f31e51568412a15c6bb62c7933528272b45add0dcb56.scope: Deactivated successfully.
Oct 11 03:45:18 compute-0 sudo[142751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvzwmclsnbhinyibggxnzjlqykmgfztj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154318.011321-116-16994598936375/AnsiballZ_file.py'
Oct 11 03:45:18 compute-0 sudo[142751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:18 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:18 compute-0 podman[142762]: 2025-10-11 03:45:18.9517903 +0000 UTC m=+0.052688190 container create 75ea902d896dd0334adc8d47c51ad400f9e023cae8b80ce8aac09c9ee1e958cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 03:45:18 compute-0 systemd[1]: Started libpod-conmon-75ea902d896dd0334adc8d47c51ad400f9e023cae8b80ce8aac09c9ee1e958cc.scope.
Oct 11 03:45:19 compute-0 python3.9[142756]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.sjpjk4rn recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:45:19 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:45:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6de85c07bd5027a859dcef934daa72c519f8c15598d42611e72d9fae8f7b27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:45:19 compute-0 podman[142762]: 2025-10-11 03:45:18.928705031 +0000 UTC m=+0.029602961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:45:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6de85c07bd5027a859dcef934daa72c519f8c15598d42611e72d9fae8f7b27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:45:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6de85c07bd5027a859dcef934daa72c519f8c15598d42611e72d9fae8f7b27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:45:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6de85c07bd5027a859dcef934daa72c519f8c15598d42611e72d9fae8f7b27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:45:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6de85c07bd5027a859dcef934daa72c519f8c15598d42611e72d9fae8f7b27/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:45:19 compute-0 sudo[142751]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:19 compute-0 podman[142762]: 2025-10-11 03:45:19.037871184 +0000 UTC m=+0.138769154 container init 75ea902d896dd0334adc8d47c51ad400f9e023cae8b80ce8aac09c9ee1e958cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bartik, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Oct 11 03:45:19 compute-0 podman[142762]: 2025-10-11 03:45:19.047612313 +0000 UTC m=+0.148510203 container start 75ea902d896dd0334adc8d47c51ad400f9e023cae8b80ce8aac09c9ee1e958cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:45:19 compute-0 podman[142762]: 2025-10-11 03:45:19.154244486 +0000 UTC m=+0.255142456 container attach 75ea902d896dd0334adc8d47c51ad400f9e023cae8b80ce8aac09c9ee1e958cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bartik, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 03:45:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:45:19 compute-0 sudo[142932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hugsseuaefnlebzzdztfeqlpcakrqemp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154319.2102425-128-4290448434107/AnsiballZ_stat.py'
Oct 11 03:45:19 compute-0 sudo[142932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:19 compute-0 python3.9[142934]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:45:19 compute-0 sudo[142932]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:19 compute-0 ceph-mon[74273]: pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:20 compute-0 affectionate_bartik[142778]: --> passed data devices: 0 physical, 3 LVM
Oct 11 03:45:20 compute-0 affectionate_bartik[142778]: --> relative data size: 1.0
Oct 11 03:45:20 compute-0 affectionate_bartik[142778]: --> All data devices are unavailable
Oct 11 03:45:20 compute-0 systemd[1]: libpod-75ea902d896dd0334adc8d47c51ad400f9e023cae8b80ce8aac09c9ee1e958cc.scope: Deactivated successfully.
Oct 11 03:45:20 compute-0 podman[142762]: 2025-10-11 03:45:20.083669542 +0000 UTC m=+1.184567452 container died 75ea902d896dd0334adc8d47c51ad400f9e023cae8b80ce8aac09c9ee1e958cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bartik, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:45:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f6de85c07bd5027a859dcef934daa72c519f8c15598d42611e72d9fae8f7b27-merged.mount: Deactivated successfully.
Oct 11 03:45:20 compute-0 sudo[143045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgqjzdawqhqamxwhytjojjxzrxtgwpdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154319.2102425-128-4290448434107/AnsiballZ_file.py'
Oct 11 03:45:20 compute-0 sudo[143045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:20 compute-0 podman[142762]: 2025-10-11 03:45:20.14063904 +0000 UTC m=+1.241536940 container remove 75ea902d896dd0334adc8d47c51ad400f9e023cae8b80ce8aac09c9ee1e958cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:45:20 compute-0 systemd[1]: libpod-conmon-75ea902d896dd0334adc8d47c51ad400f9e023cae8b80ce8aac09c9ee1e958cc.scope: Deactivated successfully.
Oct 11 03:45:20 compute-0 sudo[142505]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:20 compute-0 sudo[143050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:45:20 compute-0 sudo[143050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:45:20 compute-0 sudo[143050]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:20 compute-0 sudo[143075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:45:20 compute-0 sudo[143075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:45:20 compute-0 sudo[143075]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:20 compute-0 python3.9[143049]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:45:20 compute-0 sudo[143100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:45:20 compute-0 sudo[143100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:45:20 compute-0 sudo[143045]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:20 compute-0 sudo[143100]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:20 compute-0 sudo[143125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 03:45:20 compute-0 sudo[143125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:45:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_03:45:20
Oct 11 03:45:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 03:45:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 03:45:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'default.rgw.control', 'images', '.mgr', 'vms', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta']
Oct 11 03:45:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 03:45:20 compute-0 podman[143267]: 2025-10-11 03:45:20.742563127 +0000 UTC m=+0.035642768 container create 1627934b318541c97dd797b509213eb7bc6917650036e745cdeef0168e692aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meninsky, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 03:45:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:45:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:45:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:45:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:45:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:45:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:45:20 compute-0 systemd[1]: Started libpod-conmon-1627934b318541c97dd797b509213eb7bc6917650036e745cdeef0168e692aba.scope.
Oct 11 03:45:20 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:45:20 compute-0 podman[143267]: 2025-10-11 03:45:20.809596294 +0000 UTC m=+0.102675935 container init 1627934b318541c97dd797b509213eb7bc6917650036e745cdeef0168e692aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meninsky, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 03:45:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 03:45:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:45:20 compute-0 podman[143267]: 2025-10-11 03:45:20.817504513 +0000 UTC m=+0.110584154 container start 1627934b318541c97dd797b509213eb7bc6917650036e745cdeef0168e692aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meninsky, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 03:45:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:45:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 03:45:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:45:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:45:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:45:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:45:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:45:20 compute-0 podman[143267]: 2025-10-11 03:45:20.821309538 +0000 UTC m=+0.114389209 container attach 1627934b318541c97dd797b509213eb7bc6917650036e745cdeef0168e692aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meninsky, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:45:20 compute-0 unruffled_meninsky[143284]: 167 167
Oct 11 03:45:20 compute-0 podman[143267]: 2025-10-11 03:45:20.726971296 +0000 UTC m=+0.020050937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:45:20 compute-0 systemd[1]: libpod-1627934b318541c97dd797b509213eb7bc6917650036e745cdeef0168e692aba.scope: Deactivated successfully.
Oct 11 03:45:20 compute-0 conmon[143284]: conmon 1627934b318541c97dd7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1627934b318541c97dd797b509213eb7bc6917650036e745cdeef0168e692aba.scope/container/memory.events
Oct 11 03:45:20 compute-0 podman[143267]: 2025-10-11 03:45:20.824232209 +0000 UTC m=+0.117311870 container died 1627934b318541c97dd797b509213eb7bc6917650036e745cdeef0168e692aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meninsky, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 03:45:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:45:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-9228af3ea91f960bca71f9614b11010c7b21f27ffc42e09369c156a51058e83a-merged.mount: Deactivated successfully.
Oct 11 03:45:20 compute-0 podman[143267]: 2025-10-11 03:45:20.863330622 +0000 UTC m=+0.156410253 container remove 1627934b318541c97dd797b509213eb7bc6917650036e745cdeef0168e692aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 03:45:20 compute-0 systemd[1]: libpod-conmon-1627934b318541c97dd797b509213eb7bc6917650036e745cdeef0168e692aba.scope: Deactivated successfully.
Oct 11 03:45:20 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:21 compute-0 podman[143338]: 2025-10-11 03:45:21.014045935 +0000 UTC m=+0.036291156 container create b02a08125614be1b3176cc33f7d71920452cf869fe0b8497e6ecaaaa75e8572b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hamilton, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:45:21 compute-0 systemd[1]: Started libpod-conmon-b02a08125614be1b3176cc33f7d71920452cf869fe0b8497e6ecaaaa75e8572b.scope.
Oct 11 03:45:21 compute-0 sudo[143398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmmwzhtoesvlqexwwpvxoagzxgsyhbms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154320.5325956-141-18910320491667/AnsiballZ_command.py'
Oct 11 03:45:21 compute-0 sudo[143398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:45:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20b0411cf7fa80a68e90ad6457194387822d13374cb15eb8db74bce2b3548c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:45:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20b0411cf7fa80a68e90ad6457194387822d13374cb15eb8db74bce2b3548c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:45:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20b0411cf7fa80a68e90ad6457194387822d13374cb15eb8db74bce2b3548c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:45:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20b0411cf7fa80a68e90ad6457194387822d13374cb15eb8db74bce2b3548c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:45:21 compute-0 podman[143338]: 2025-10-11 03:45:20.998475774 +0000 UTC m=+0.020721005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:45:21 compute-0 podman[143338]: 2025-10-11 03:45:21.110061274 +0000 UTC m=+0.132306545 container init b02a08125614be1b3176cc33f7d71920452cf869fe0b8497e6ecaaaa75e8572b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:45:21 compute-0 podman[143338]: 2025-10-11 03:45:21.118635591 +0000 UTC m=+0.140880812 container start b02a08125614be1b3176cc33f7d71920452cf869fe0b8497e6ecaaaa75e8572b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hamilton, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:45:21 compute-0 podman[143338]: 2025-10-11 03:45:21.121615704 +0000 UTC m=+0.143860995 container attach b02a08125614be1b3176cc33f7d71920452cf869fe0b8497e6ecaaaa75e8572b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hamilton, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:45:21 compute-0 python3.9[143403]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:45:21 compute-0 sudo[143398]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]: {
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:     "0": [
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:         {
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "devices": [
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "/dev/loop3"
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             ],
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "lv_name": "ceph_lv0",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "lv_size": "21470642176",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "name": "ceph_lv0",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "tags": {
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.cluster_name": "ceph",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.crush_device_class": "",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.encrypted": "0",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.osd_id": "0",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.type": "block",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.vdo": "0"
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             },
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "type": "block",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "vg_name": "ceph_vg0"
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:         }
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:     ],
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:     "1": [
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:         {
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "devices": [
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "/dev/loop4"
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             ],
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "lv_name": "ceph_lv1",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "lv_size": "21470642176",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "name": "ceph_lv1",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "tags": {
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.cluster_name": "ceph",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.crush_device_class": "",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.encrypted": "0",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.osd_id": "1",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.type": "block",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.vdo": "0"
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             },
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "type": "block",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "vg_name": "ceph_vg1"
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:         }
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:     ],
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:     "2": [
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:         {
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "devices": [
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "/dev/loop5"
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             ],
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "lv_name": "ceph_lv2",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "lv_size": "21470642176",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "name": "ceph_lv2",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "tags": {
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.cluster_name": "ceph",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.crush_device_class": "",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.encrypted": "0",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.osd_id": "2",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.type": "block",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:                 "ceph.vdo": "0"
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             },
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "type": "block",
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:             "vg_name": "ceph_vg2"
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:         }
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]:     ]
Oct 11 03:45:21 compute-0 infallible_hamilton[143400]: }
Oct 11 03:45:21 compute-0 sudo[143560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyuuqivrtfcelwgyvfitzdzozmuiymyj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760154321.4618871-149-58967957522743/AnsiballZ_edpm_nftables_from_files.py'
Oct 11 03:45:21 compute-0 sudo[143560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:21 compute-0 systemd[1]: libpod-b02a08125614be1b3176cc33f7d71920452cf869fe0b8497e6ecaaaa75e8572b.scope: Deactivated successfully.
Oct 11 03:45:21 compute-0 podman[143338]: 2025-10-11 03:45:21.93011652 +0000 UTC m=+0.952361751 container died b02a08125614be1b3176cc33f7d71920452cf869fe0b8497e6ecaaaa75e8572b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 03:45:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-a20b0411cf7fa80a68e90ad6457194387822d13374cb15eb8db74bce2b3548c3-merged.mount: Deactivated successfully.
Oct 11 03:45:21 compute-0 ceph-mon[74273]: pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:21 compute-0 podman[143338]: 2025-10-11 03:45:21.994116593 +0000 UTC m=+1.016361824 container remove b02a08125614be1b3176cc33f7d71920452cf869fe0b8497e6ecaaaa75e8572b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:45:22 compute-0 systemd[1]: libpod-conmon-b02a08125614be1b3176cc33f7d71920452cf869fe0b8497e6ecaaaa75e8572b.scope: Deactivated successfully.
Oct 11 03:45:22 compute-0 sudo[143125]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:22 compute-0 sudo[143575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:45:22 compute-0 sudo[143575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:45:22 compute-0 sudo[143575]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:22 compute-0 python3[143562]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 11 03:45:22 compute-0 sudo[143600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:45:22 compute-0 sudo[143560]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:22 compute-0 sudo[143600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:45:22 compute-0 sudo[143600]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:22 compute-0 sudo[143625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:45:22 compute-0 sudo[143625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:45:22 compute-0 sudo[143625]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:22 compute-0 sudo[143674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 03:45:22 compute-0 sudo[143674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:45:22 compute-0 sudo[143873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfgpeafwgrwsbqivjdzfluzzvvkfmjsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154322.3300815-157-219796483058600/AnsiballZ_stat.py'
Oct 11 03:45:22 compute-0 sudo[143873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:22 compute-0 podman[143854]: 2025-10-11 03:45:22.675348606 +0000 UTC m=+0.039745011 container create 2c4153de7bb526143ae3c60822e12aba8f7a4573291008c55a719c860f96dd9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_kirch, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:45:22 compute-0 systemd[1]: Started libpod-conmon-2c4153de7bb526143ae3c60822e12aba8f7a4573291008c55a719c860f96dd9f.scope.
Oct 11 03:45:22 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:45:22 compute-0 podman[143854]: 2025-10-11 03:45:22.750060986 +0000 UTC m=+0.114457441 container init 2c4153de7bb526143ae3c60822e12aba8f7a4573291008c55a719c860f96dd9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_kirch, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 03:45:22 compute-0 podman[143854]: 2025-10-11 03:45:22.656514605 +0000 UTC m=+0.020911050 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:45:22 compute-0 podman[143854]: 2025-10-11 03:45:22.757368568 +0000 UTC m=+0.121764993 container start 2c4153de7bb526143ae3c60822e12aba8f7a4573291008c55a719c860f96dd9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 03:45:22 compute-0 podman[143854]: 2025-10-11 03:45:22.761120962 +0000 UTC m=+0.125517387 container attach 2c4153de7bb526143ae3c60822e12aba8f7a4573291008c55a719c860f96dd9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_kirch, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 03:45:22 compute-0 epic_kirch[143885]: 167 167
Oct 11 03:45:22 compute-0 systemd[1]: libpod-2c4153de7bb526143ae3c60822e12aba8f7a4573291008c55a719c860f96dd9f.scope: Deactivated successfully.
Oct 11 03:45:22 compute-0 podman[143854]: 2025-10-11 03:45:22.762935692 +0000 UTC m=+0.127332117 container died 2c4153de7bb526143ae3c60822e12aba8f7a4573291008c55a719c860f96dd9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_kirch, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 03:45:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d8baf513ecdb0d53ac24f333f5402d26382a1de56b8f25b4a1f5eec04e0d1c7-merged.mount: Deactivated successfully.
Oct 11 03:45:22 compute-0 podman[143854]: 2025-10-11 03:45:22.810378916 +0000 UTC m=+0.174775331 container remove 2c4153de7bb526143ae3c60822e12aba8f7a4573291008c55a719c860f96dd9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 03:45:22 compute-0 systemd[1]: libpod-conmon-2c4153de7bb526143ae3c60822e12aba8f7a4573291008c55a719c860f96dd9f.scope: Deactivated successfully.
Oct 11 03:45:22 compute-0 python3.9[143880]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:45:22 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:22 compute-0 sudo[143873]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:23 compute-0 podman[143913]: 2025-10-11 03:45:23.013194402 +0000 UTC m=+0.047646090 container create 0a106e115d2bb51175f3e9e11fecaf5c53de83b225e6b5dde07dbf47fb4f1a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:45:23 compute-0 systemd[1]: Started libpod-conmon-0a106e115d2bb51175f3e9e11fecaf5c53de83b225e6b5dde07dbf47fb4f1a33.scope.
Oct 11 03:45:23 compute-0 podman[143913]: 2025-10-11 03:45:22.994038102 +0000 UTC m=+0.028489800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:45:23 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:45:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2544e59fb57bbf83210487cb2c21aa01a7798a864059504d92bb63009d03a464/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:45:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2544e59fb57bbf83210487cb2c21aa01a7798a864059504d92bb63009d03a464/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:45:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2544e59fb57bbf83210487cb2c21aa01a7798a864059504d92bb63009d03a464/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:45:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2544e59fb57bbf83210487cb2c21aa01a7798a864059504d92bb63009d03a464/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:45:23 compute-0 podman[143913]: 2025-10-11 03:45:23.13555172 +0000 UTC m=+0.170003428 container init 0a106e115d2bb51175f3e9e11fecaf5c53de83b225e6b5dde07dbf47fb4f1a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hermann, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:45:23 compute-0 podman[143913]: 2025-10-11 03:45:23.145608859 +0000 UTC m=+0.180060537 container start 0a106e115d2bb51175f3e9e11fecaf5c53de83b225e6b5dde07dbf47fb4f1a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hermann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:45:23 compute-0 podman[143913]: 2025-10-11 03:45:23.149082755 +0000 UTC m=+0.183534443 container attach 0a106e115d2bb51175f3e9e11fecaf5c53de83b225e6b5dde07dbf47fb4f1a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 03:45:23 compute-0 sudo[144054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnumfqfpmpzcwcgbprjdabyvjjsirbfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154322.3300815-157-219796483058600/AnsiballZ_copy.py'
Oct 11 03:45:23 compute-0 sudo[144054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:23 compute-0 python3.9[144056]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154322.3300815-157-219796483058600/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:45:23 compute-0 sudo[144054]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:23 compute-0 ceph-mon[74273]: pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:24 compute-0 xenodochial_hermann[143976]: {
Oct 11 03:45:24 compute-0 xenodochial_hermann[143976]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 03:45:24 compute-0 xenodochial_hermann[143976]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:45:24 compute-0 xenodochial_hermann[143976]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 03:45:24 compute-0 xenodochial_hermann[143976]:         "osd_id": 1,
Oct 11 03:45:24 compute-0 xenodochial_hermann[143976]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:45:24 compute-0 xenodochial_hermann[143976]:         "type": "bluestore"
Oct 11 03:45:24 compute-0 xenodochial_hermann[143976]:     },
Oct 11 03:45:24 compute-0 xenodochial_hermann[143976]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 03:45:24 compute-0 xenodochial_hermann[143976]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:45:24 compute-0 xenodochial_hermann[143976]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 03:45:24 compute-0 xenodochial_hermann[143976]:         "osd_id": 2,
Oct 11 03:45:24 compute-0 xenodochial_hermann[143976]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:45:24 compute-0 xenodochial_hermann[143976]:         "type": "bluestore"
Oct 11 03:45:24 compute-0 xenodochial_hermann[143976]:     },
Oct 11 03:45:24 compute-0 xenodochial_hermann[143976]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 03:45:24 compute-0 xenodochial_hermann[143976]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:45:24 compute-0 xenodochial_hermann[143976]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 03:45:24 compute-0 xenodochial_hermann[143976]:         "osd_id": 0,
Oct 11 03:45:24 compute-0 xenodochial_hermann[143976]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:45:24 compute-0 xenodochial_hermann[143976]:         "type": "bluestore"
Oct 11 03:45:24 compute-0 xenodochial_hermann[143976]:     }
Oct 11 03:45:24 compute-0 xenodochial_hermann[143976]: }
Oct 11 03:45:24 compute-0 sudo[144234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzuhpebmdvakfspipjaljxkhxqoaxqvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154323.9528172-172-220584867180195/AnsiballZ_stat.py'
Oct 11 03:45:24 compute-0 sudo[144234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:24 compute-0 systemd[1]: libpod-0a106e115d2bb51175f3e9e11fecaf5c53de83b225e6b5dde07dbf47fb4f1a33.scope: Deactivated successfully.
Oct 11 03:45:24 compute-0 podman[143913]: 2025-10-11 03:45:24.285082912 +0000 UTC m=+1.319534610 container died 0a106e115d2bb51175f3e9e11fecaf5c53de83b225e6b5dde07dbf47fb4f1a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hermann, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 03:45:24 compute-0 systemd[1]: libpod-0a106e115d2bb51175f3e9e11fecaf5c53de83b225e6b5dde07dbf47fb4f1a33.scope: Consumed 1.146s CPU time.
Oct 11 03:45:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-2544e59fb57bbf83210487cb2c21aa01a7798a864059504d92bb63009d03a464-merged.mount: Deactivated successfully.
Oct 11 03:45:24 compute-0 podman[143913]: 2025-10-11 03:45:24.354914645 +0000 UTC m=+1.389366323 container remove 0a106e115d2bb51175f3e9e11fecaf5c53de83b225e6b5dde07dbf47fb4f1a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hermann, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 03:45:24 compute-0 systemd[1]: libpod-conmon-0a106e115d2bb51175f3e9e11fecaf5c53de83b225e6b5dde07dbf47fb4f1a33.scope: Deactivated successfully.
Oct 11 03:45:24 compute-0 sudo[143674]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:45:24 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:45:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:45:24 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:45:24 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 7c717ab9-8052-43f0-b3d7-2657e7a63081 does not exist
Oct 11 03:45:24 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 7521f18a-942a-401a-a08e-9968282c2597 does not exist
Oct 11 03:45:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:45:24 compute-0 sudo[144251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:45:24 compute-0 sudo[144251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:45:24 compute-0 sudo[144251]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:24 compute-0 python3.9[144236]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:45:24 compute-0 sudo[144276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 03:45:24 compute-0 sudo[144276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:45:24 compute-0 sudo[144234]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:24 compute-0 sudo[144276]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:24 compute-0 sudo[144423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgafhgkvuxfnxtfgoavuihrzddwvdycy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154323.9528172-172-220584867180195/AnsiballZ_copy.py'
Oct 11 03:45:24 compute-0 sudo[144423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:24 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:25 compute-0 python3.9[144425]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154323.9528172-172-220584867180195/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:45:25 compute-0 sudo[144423]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:25 compute-0 sudo[144575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtkjgxrnfsxsimsbowflxdcrpaefmnvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154325.2658257-187-58019956352395/AnsiballZ_stat.py'
Oct 11 03:45:25 compute-0 sudo[144575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:25 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:45:25 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:45:25 compute-0 python3.9[144577]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:45:25 compute-0 sudo[144575]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:26 compute-0 sudo[144700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okbqttefghhuexeomzlihoscxtlowdaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154325.2658257-187-58019956352395/AnsiballZ_copy.py'
Oct 11 03:45:26 compute-0 sudo[144700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:26 compute-0 python3.9[144702]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154325.2658257-187-58019956352395/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:45:26 compute-0 sudo[144700]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:26 compute-0 ceph-mon[74273]: pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:26 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:27 compute-0 sudo[144852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aljfjswzyiiufkvyslxepdevodypjqpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154326.802344-202-53051866919987/AnsiballZ_stat.py'
Oct 11 03:45:27 compute-0 sudo[144852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:27 compute-0 python3.9[144854]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:45:27 compute-0 sudo[144852]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:27 compute-0 sudo[144977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elkudhgculvytutpgbhkuytsfxjbyuwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154326.802344-202-53051866919987/AnsiballZ_copy.py'
Oct 11 03:45:27 compute-0 sudo[144977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:28 compute-0 python3.9[144979]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154326.802344-202-53051866919987/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:45:28 compute-0 sudo[144977]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:28 compute-0 ceph-mon[74273]: pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:28 compute-0 sudo[145129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqzuqxofhtoyakewlekcqbbtlgkknpxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154328.2772248-217-177737794027887/AnsiballZ_stat.py'
Oct 11 03:45:28 compute-0 sudo[145129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:28 compute-0 python3.9[145131]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:45:28 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:28 compute-0 sudo[145129]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:29 compute-0 sudo[145254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpynuhzvgxqmiicibkgxgnvgqfjlijmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154328.2772248-217-177737794027887/AnsiballZ_copy.py'
Oct 11 03:45:29 compute-0 sudo[145254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:45:29 compute-0 python3.9[145256]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154328.2772248-217-177737794027887/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:45:29 compute-0 sudo[145254]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:30 compute-0 sudo[145406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsywjufrgguqoyzejnhhbnzknfgwpqlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154329.7669454-232-222195161929149/AnsiballZ_file.py'
Oct 11 03:45:30 compute-0 sudo[145406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:30 compute-0 python3.9[145408]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:45:30 compute-0 sudo[145406]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:30 compute-0 ceph-mon[74273]: pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:30 compute-0 sudo[145558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqupflvmeknrjdmbquubhsckkmqjdsfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154330.5365148-240-94036630236467/AnsiballZ_command.py'
Oct 11 03:45:30 compute-0 sudo[145558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 03:45:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:45:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 03:45:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:45:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:45:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:45:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:45:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:45:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:45:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:45:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:45:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:45:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 03:45:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:45:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:45:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:45:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 03:45:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:45:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 03:45:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:45:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:45:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:45:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 03:45:30 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:31 compute-0 python3.9[145560]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:45:31 compute-0 sudo[145558]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:31 compute-0 sudo[145713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qenkqrmrycgjdlczvrywppxbnluqtqru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154331.3155766-248-46750798408743/AnsiballZ_blockinfile.py'
Oct 11 03:45:31 compute-0 sudo[145713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:31 compute-0 python3.9[145715]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:45:32 compute-0 sudo[145713]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:32 compute-0 sudo[145865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuyqlihknpmsksfhmynemjrgrvleyvzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154332.215273-257-51294629705592/AnsiballZ_command.py'
Oct 11 03:45:32 compute-0 sudo[145865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:32 compute-0 ceph-mon[74273]: pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:32 compute-0 python3.9[145867]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:45:32 compute-0 sudo[145865]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:32 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:33 compute-0 sudo[146018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opkeathwbrtkyjqqktgbvphfguhdovcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154333.017273-265-153381841387252/AnsiballZ_stat.py'
Oct 11 03:45:33 compute-0 sudo[146018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:33 compute-0 python3.9[146020]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:45:33 compute-0 sudo[146018]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:34 compute-0 sudo[146172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djialmqzvitjhdhtybjkmkzcmvnzpwgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154333.8191962-273-226275905629309/AnsiballZ_command.py'
Oct 11 03:45:34 compute-0 sudo[146172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:34 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 03:45:34 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2041 writes, 9036 keys, 2041 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 2041 writes, 2041 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2041 writes, 9036 keys, 2041 commit groups, 1.0 writes per commit group, ingest: 11.43 MB, 0.02 MB/s
                                           Interval WAL: 2041 writes, 2041 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     94.2      0.09              0.03         3    0.030       0      0       0.0       0.0
                                             L6      1/0    6.62 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    125.6    110.6      0.12              0.04         2    0.062    7150    729       0.0       0.0
                                            Sum      1/0    6.62 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     72.7    103.7      0.21              0.07         5    0.043    7150    729       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     74.1    105.4      0.21              0.07         4    0.052    7150    729       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    125.6    110.6      0.12              0.04         2    0.062    7150    729       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     98.0      0.09              0.03         2    0.043       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.2      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.008, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds
                                           Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558495a5d1f0#2 capacity: 308.00 MB usage: 640.83 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 8.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(36,554.11 KB,0.175689%) FilterBlock(6,27.55 KB,0.00873417%) IndexBlock(6,59.17 KB,0.0187614%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 11 03:45:34 compute-0 python3.9[146174]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:45:34 compute-0 sudo[146172]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:45:34 compute-0 ceph-mon[74273]: pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:34 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:34 compute-0 sudo[146327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucajybjipmbjcmvhxmurxszxtpbiyben ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154334.585603-281-262987640809569/AnsiballZ_file.py'
Oct 11 03:45:34 compute-0 sudo[146327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:35 compute-0 python3.9[146329]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:45:35 compute-0 sudo[146327]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:36 compute-0 python3.9[146479]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:45:36 compute-0 ceph-mon[74273]: pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:36 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:37 compute-0 sudo[146630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbmqpcjchuvdpidcmhioamwhetdvhsnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154336.8089075-321-267512206544879/AnsiballZ_command.py'
Oct 11 03:45:37 compute-0 sudo[146630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:37 compute-0 python3.9[146632]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:c0:16:5a:16" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:45:37 compute-0 ovs-vsctl[146633]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:c0:16:5a:16 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct 11 03:45:37 compute-0 sudo[146630]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:37 compute-0 sudo[146783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wffoqnaefldehguhpcidqbzsypxvkgns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154337.6270146-330-77379121005019/AnsiballZ_command.py'
Oct 11 03:45:37 compute-0 sudo[146783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:38 compute-0 python3.9[146785]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:45:38 compute-0 sudo[146783]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:38 compute-0 ceph-mon[74273]: pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:38 compute-0 sudo[146938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grqvmbkihbekneytrtavqelqruguybaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154338.4370663-338-57480756269640/AnsiballZ_command.py'
Oct 11 03:45:38 compute-0 sudo[146938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:38 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:39 compute-0 python3.9[146940]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:45:39 compute-0 ovs-vsctl[146941]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct 11 03:45:39 compute-0 sudo[146938]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:45:39 compute-0 python3.9[147091]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:45:40 compute-0 sudo[147243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjvmcbkykfoxjjkkuvyexjbjyopaxnqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154340.1134675-355-112844080000012/AnsiballZ_file.py'
Oct 11 03:45:40 compute-0 sudo[147243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:40 compute-0 python3.9[147245]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:45:40 compute-0 sudo[147243]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:40 compute-0 ceph-mon[74273]: pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:40 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:41 compute-0 sudo[147395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ullgapxbmjazaxeqjsvitnjijiuxrdvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154340.9266918-363-180854346550980/AnsiballZ_stat.py'
Oct 11 03:45:41 compute-0 sudo[147395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:41 compute-0 python3.9[147397]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:45:41 compute-0 sudo[147395]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:41 compute-0 sudo[147473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olgpxkbppqfnmqadqmkvxvyyqgnhwpdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154340.9266918-363-180854346550980/AnsiballZ_file.py'
Oct 11 03:45:41 compute-0 sudo[147473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:42 compute-0 python3.9[147475]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:45:42 compute-0 sudo[147473]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:42 compute-0 sudo[147625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmcdsbenqxcdkwsbhpqjajbvwjuutzoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154342.1664772-363-45272459284176/AnsiballZ_stat.py'
Oct 11 03:45:42 compute-0 sudo[147625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:42 compute-0 python3.9[147627]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:45:42 compute-0 sudo[147625]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:42 compute-0 ceph-mon[74273]: pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:42 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:43 compute-0 sudo[147703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urpqgdpitqfaspkthszhhrqskyadkxga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154342.1664772-363-45272459284176/AnsiballZ_file.py'
Oct 11 03:45:43 compute-0 sudo[147703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:43 compute-0 python3.9[147705]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:45:43 compute-0 sudo[147703]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:43 compute-0 sudo[147855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tobpojjzgbzdsbgrapivhpkyegqywioa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154343.4407923-386-222456336090480/AnsiballZ_file.py'
Oct 11 03:45:43 compute-0 sudo[147855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:44 compute-0 python3.9[147857]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:45:44 compute-0 sudo[147855]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:45:44 compute-0 sudo[148007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpkoaxklfbalqvfpfmmqmnjcggbwbmta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154344.1972625-394-260806675394305/AnsiballZ_stat.py'
Oct 11 03:45:44 compute-0 sudo[148007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:44 compute-0 python3.9[148009]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:45:44 compute-0 ceph-mon[74273]: pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:44 compute-0 sudo[148007]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:44 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:45 compute-0 sudo[148085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zckcpngnhmbbumhfsdrkrxjnqbsntooo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154344.1972625-394-260806675394305/AnsiballZ_file.py'
Oct 11 03:45:45 compute-0 sudo[148085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:45 compute-0 python3.9[148087]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:45:45 compute-0 sudo[148085]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:45 compute-0 sudo[148237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcjyheajepjlsxklbpqfhrturnmzlxci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154345.4246697-406-44083253747635/AnsiballZ_stat.py'
Oct 11 03:45:45 compute-0 sudo[148237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:45 compute-0 python3.9[148239]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:45:45 compute-0 sudo[148237]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:46 compute-0 sudo[148315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grxadsybqnhwqxdxpnefosdqbyujiact ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154345.4246697-406-44083253747635/AnsiballZ_file.py'
Oct 11 03:45:46 compute-0 sudo[148315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:46 compute-0 python3.9[148317]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:45:46 compute-0 sudo[148315]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:46 compute-0 ceph-mon[74273]: pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:46 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:46 compute-0 sudo[148467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llllhtxhifahkptnmmfdcgmzkwxgtzrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154346.6767435-418-179873904277626/AnsiballZ_systemd.py'
Oct 11 03:45:47 compute-0 sudo[148467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:47 compute-0 python3.9[148469]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:45:47 compute-0 systemd[1]: Reloading.
Oct 11 03:45:47 compute-0 systemd-rc-local-generator[148493]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:45:47 compute-0 systemd-sysv-generator[148498]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:45:47 compute-0 sudo[148467]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:47 compute-0 ceph-mon[74273]: pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:48 compute-0 sudo[148655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iodlwvpzabifinifvubwwqzngnhmcixb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154347.847686-426-277380380926840/AnsiballZ_stat.py'
Oct 11 03:45:48 compute-0 sudo[148655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:48 compute-0 python3.9[148657]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:45:48 compute-0 sudo[148655]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:48 compute-0 sudo[148733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aafrrfaacatcsqeqdgkcdnivqtjqivct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154347.847686-426-277380380926840/AnsiballZ_file.py'
Oct 11 03:45:48 compute-0 sudo[148733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:48 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:49 compute-0 python3.9[148735]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:45:49 compute-0 sudo[148733]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:45:49 compute-0 sudo[148885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nybseewffkmkoifheotphljkjtsaxqay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154349.24291-438-184971732042781/AnsiballZ_stat.py'
Oct 11 03:45:49 compute-0 sudo[148885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:49 compute-0 python3.9[148887]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:45:49 compute-0 sudo[148885]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:50 compute-0 ceph-mon[74273]: pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:50 compute-0 sudo[148963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcwotntxrgsbdxqtalqvootznrsqwhro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154349.24291-438-184971732042781/AnsiballZ_file.py'
Oct 11 03:45:50 compute-0 sudo[148963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:50 compute-0 python3.9[148965]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:45:50 compute-0 sudo[148963]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:45:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:45:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:45:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:45:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:45:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:45:50 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:50 compute-0 sudo[149115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jviagpvbzuzfeobwhlbljttbyeveghmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154350.5659528-450-128711533801046/AnsiballZ_systemd.py'
Oct 11 03:45:50 compute-0 sudo[149115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:51 compute-0 python3.9[149117]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:45:51 compute-0 systemd[1]: Reloading.
Oct 11 03:45:51 compute-0 systemd-rc-local-generator[149143]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:45:51 compute-0 systemd-sysv-generator[149148]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:45:51 compute-0 systemd[1]: Starting Create netns directory...
Oct 11 03:45:51 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 11 03:45:51 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 11 03:45:51 compute-0 systemd[1]: Finished Create netns directory.
Oct 11 03:45:51 compute-0 sudo[149115]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:52 compute-0 ceph-mon[74273]: pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:52 compute-0 sudo[149309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itpncjsbewzkjxseankijfokhkxjguew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154351.9611256-460-252544961961566/AnsiballZ_file.py'
Oct 11 03:45:52 compute-0 sudo[149309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:52 compute-0 python3.9[149311]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:45:52 compute-0 sudo[149309]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:52 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:52 compute-0 sudo[149461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diszjkdrfrcxcfripagbyiswvqpoygup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154352.6874812-468-38084042956205/AnsiballZ_stat.py'
Oct 11 03:45:52 compute-0 sudo[149461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:53 compute-0 python3.9[149463]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:45:53 compute-0 sudo[149461]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:53 compute-0 sudo[149584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opuzztiegnfiuidxyqparwvcqeurwxhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154352.6874812-468-38084042956205/AnsiballZ_copy.py'
Oct 11 03:45:53 compute-0 sudo[149584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:53 compute-0 python3.9[149586]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760154352.6874812-468-38084042956205/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:45:53 compute-0 sudo[149584]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:54 compute-0 ceph-mon[74273]: pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:45:54 compute-0 sudo[149736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoqopzwpmfmaonrrlcebljxaisnfvfyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154354.227635-485-20689959886070/AnsiballZ_file.py'
Oct 11 03:45:54 compute-0 sudo[149736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:54 compute-0 python3.9[149738]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:45:54 compute-0 sudo[149736]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:54 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:55 compute-0 sudo[149888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naedhswinvllnohlpjexoqpphhghjcsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154355.0094738-493-130678889113776/AnsiballZ_stat.py'
Oct 11 03:45:55 compute-0 sudo[149888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:55 compute-0 python3.9[149890]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:45:55 compute-0 sudo[149888]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:56 compute-0 sudo[150011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-difsojjpgxgtqhuphzwttrulotyumacw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154355.0094738-493-130678889113776/AnsiballZ_copy.py'
Oct 11 03:45:56 compute-0 sudo[150011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:56 compute-0 ceph-mon[74273]: pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:56 compute-0 python3.9[150013]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760154355.0094738-493-130678889113776/.source.json _original_basename=.bpykn47o follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:45:56 compute-0 sudo[150011]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:56 compute-0 sudo[150163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yayygcgehyayhlgwcyziwmquqwpcqxmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154356.4279191-508-218923047059116/AnsiballZ_file.py'
Oct 11 03:45:56 compute-0 sudo[150163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:56 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:56 compute-0 python3.9[150165]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:45:57 compute-0 sudo[150163]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:57 compute-0 sudo[150315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnjqndxxuyhgrhcreoulelugclctvlxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154357.2245755-516-108260079921260/AnsiballZ_stat.py'
Oct 11 03:45:57 compute-0 sudo[150315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:57 compute-0 sudo[150315]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:58 compute-0 sudo[150438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnjykrsawyosgftowneuvfyrfikzmbkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154357.2245755-516-108260079921260/AnsiballZ_copy.py'
Oct 11 03:45:58 compute-0 sudo[150438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:58 compute-0 ceph-mon[74273]: pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:58 compute-0 sudo[150438]: pam_unix(sudo:session): session closed for user root
Oct 11 03:45:58 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:45:59 compute-0 sudo[150590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwjmkdrfikasaopxdqgblwualjnoklta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154358.744092-533-122312976574058/AnsiballZ_container_config_data.py'
Oct 11 03:45:59 compute-0 sudo[150590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:45:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:45:59 compute-0 python3.9[150592]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct 11 03:45:59 compute-0 sudo[150590]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:00 compute-0 sudo[150742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmazfgsmbdmleatziwjksvdhnuoblyks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154359.6763976-542-198823194813619/AnsiballZ_container_config_hash.py'
Oct 11 03:46:00 compute-0 sudo[150742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:00 compute-0 ceph-mon[74273]: pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:00 compute-0 python3.9[150744]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 11 03:46:00 compute-0 sudo[150742]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:00 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:01 compute-0 sudo[150894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apxsyayuqogprcrvoakubatcwvzimhcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154360.7478783-551-156612783682387/AnsiballZ_podman_container_info.py'
Oct 11 03:46:01 compute-0 sudo[150894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:01 compute-0 python3.9[150896]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 11 03:46:01 compute-0 sudo[150894]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:02 compute-0 ceph-mon[74273]: pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:02 compute-0 sudo[151072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yciztpwneoiicolenmrgvoylzohnhqdf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760154362.300062-564-37767660586811/AnsiballZ_edpm_container_manage.py'
Oct 11 03:46:02 compute-0 sudo[151072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:02 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:03 compute-0 python3[151074]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 11 03:46:03 compute-0 ceph-mon[74273]: pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:46:04 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:06 compute-0 ceph-mon[74273]: pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:06 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:46:07.103644) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154367103690, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 671, "num_deletes": 251, "total_data_size": 816058, "memory_usage": 828344, "flush_reason": "Manual Compaction"}
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154367109491, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 808933, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8998, "largest_seqno": 9668, "table_properties": {"data_size": 805374, "index_size": 1403, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7689, "raw_average_key_size": 18, "raw_value_size": 798299, "raw_average_value_size": 1918, "num_data_blocks": 65, "num_entries": 416, "num_filter_entries": 416, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760154314, "oldest_key_time": 1760154314, "file_creation_time": 1760154367, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 5884 microseconds, and 3472 cpu microseconds.
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:46:07.109534) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 808933 bytes OK
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:46:07.109551) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:46:07.110901) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:46:07.110917) EVENT_LOG_v1 {"time_micros": 1760154367110912, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:46:07.110932) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 812526, prev total WAL file size 812526, number of live WAL files 2.
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:46:07.111416) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(789KB)], [23(6781KB)]
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154367111473, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7753281, "oldest_snapshot_seqno": -1}
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3274 keys, 6050667 bytes, temperature: kUnknown
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154367164123, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6050667, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6027025, "index_size": 14381, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8197, "raw_key_size": 79365, "raw_average_key_size": 24, "raw_value_size": 5966013, "raw_average_value_size": 1822, "num_data_blocks": 627, "num_entries": 3274, "num_filter_entries": 3274, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153731, "oldest_key_time": 0, "file_creation_time": 1760154367, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:46:07.164325) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6050667 bytes
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:46:07.165365) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.1 rd, 114.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 6.6 +0.0 blob) out(5.8 +0.0 blob), read-write-amplify(17.1) write-amplify(7.5) OK, records in: 3788, records dropped: 514 output_compression: NoCompression
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:46:07.165380) EVENT_LOG_v1 {"time_micros": 1760154367165372, "job": 8, "event": "compaction_finished", "compaction_time_micros": 52703, "compaction_time_cpu_micros": 27853, "output_level": 6, "num_output_files": 1, "total_output_size": 6050667, "num_input_records": 3788, "num_output_records": 3274, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154367165558, "job": 8, "event": "table_file_deletion", "file_number": 25}
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154367166532, "job": 8, "event": "table_file_deletion", "file_number": 23}
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:46:07.111332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:46:07.166619) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:46:07.166627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:46:07.166630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:46:07.166633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:46:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:46:07.166636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:46:08 compute-0 ceph-mon[74273]: pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:08 compute-0 podman[151089]: 2025-10-11 03:46:08.536016042 +0000 UTC m=+5.278230780 image pull 3b86aea1acd0e80af91d8a3efa79cc99f54489e3c22377193c4282a256797350 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 11 03:46:08 compute-0 podman[151207]: 2025-10-11 03:46:08.72222595 +0000 UTC m=+0.059276588 container create 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251009)
Oct 11 03:46:08 compute-0 podman[151207]: 2025-10-11 03:46:08.691360972 +0000 UTC m=+0.028411650 image pull 3b86aea1acd0e80af91d8a3efa79cc99f54489e3c22377193c4282a256797350 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 11 03:46:08 compute-0 python3[151074]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 11 03:46:08 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:08 compute-0 sudo[151072]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:46:09 compute-0 sudo[151396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeeqwyqsnxpytcuxlgdjhcylczwjyjxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154369.117567-572-58850486310801/AnsiballZ_stat.py'
Oct 11 03:46:09 compute-0 sudo[151396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:09 compute-0 python3.9[151398]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:46:09 compute-0 sudo[151396]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:10 compute-0 sudo[151550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icisoyhahapfocljulyjapzvqsdivgxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154369.9820256-581-203772676661597/AnsiballZ_file.py'
Oct 11 03:46:10 compute-0 sudo[151550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:10 compute-0 ceph-mon[74273]: pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:10 compute-0 python3.9[151552]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:46:10 compute-0 sudo[151550]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:10 compute-0 sudo[151627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tslycrzmaglvvxfmatcueyyynaazrzyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154369.9820256-581-203772676661597/AnsiballZ_stat.py'
Oct 11 03:46:10 compute-0 sudo[151627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:10 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:10 compute-0 python3.9[151629]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:46:10 compute-0 sudo[151627]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:11 compute-0 sudo[151778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwgqkmafuaaiwxxqpovjsqjztangjezr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154371.041791-581-140125126631057/AnsiballZ_copy.py'
Oct 11 03:46:11 compute-0 sudo[151778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:11 compute-0 python3.9[151780]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760154371.041791-581-140125126631057/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:46:11 compute-0 sudo[151778]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:12 compute-0 sudo[151854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmnkwifwfzdcdyolspychszqvreputya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154371.041791-581-140125126631057/AnsiballZ_systemd.py'
Oct 11 03:46:12 compute-0 sudo[151854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:12 compute-0 ceph-mon[74273]: pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:12 compute-0 python3.9[151856]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 03:46:12 compute-0 systemd[1]: Reloading.
Oct 11 03:46:12 compute-0 systemd-sysv-generator[151889]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:46:12 compute-0 systemd-rc-local-generator[151886]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:46:12 compute-0 sudo[151854]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:12 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:13 compute-0 sudo[151967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjlhxbjvbdmiybocuemjwfslbiiwrlqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154371.041791-581-140125126631057/AnsiballZ_systemd.py'
Oct 11 03:46:13 compute-0 sudo[151967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:13 compute-0 python3.9[151969]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:46:13 compute-0 systemd[1]: Reloading.
Oct 11 03:46:13 compute-0 systemd-rc-local-generator[152000]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:46:13 compute-0 systemd-sysv-generator[152004]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:46:13 compute-0 systemd[1]: Starting ovn_controller container...
Oct 11 03:46:13 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb5146f27841890763964370753df94ce404c5d284c0bbd3d6fb482fedc0e593/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 11 03:46:14 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f.
Oct 11 03:46:14 compute-0 podman[152010]: 2025-10-11 03:46:14.017402247 +0000 UTC m=+0.189021989 container init 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 03:46:14 compute-0 ovn_controller[152025]: + sudo -E kolla_set_configs
Oct 11 03:46:14 compute-0 podman[152010]: 2025-10-11 03:46:14.059618804 +0000 UTC m=+0.231238536 container start 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 11 03:46:14 compute-0 edpm-start-podman-container[152010]: ovn_controller
Oct 11 03:46:14 compute-0 systemd[1]: Created slice User Slice of UID 0.
Oct 11 03:46:14 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 11 03:46:14 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 11 03:46:14 compute-0 systemd[1]: Starting User Manager for UID 0...
Oct 11 03:46:14 compute-0 systemd[152062]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 11 03:46:14 compute-0 edpm-start-podman-container[152009]: Creating additional drop-in dependency for "ovn_controller" (648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f)
Oct 11 03:46:14 compute-0 podman[152032]: 2025-10-11 03:46:14.17641859 +0000 UTC m=+0.099898261 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 03:46:14 compute-0 systemd[1]: 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f-5cce63a8672e76d3.service: Main process exited, code=exited, status=1/FAILURE
Oct 11 03:46:14 compute-0 systemd[1]: 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f-5cce63a8672e76d3.service: Failed with result 'exit-code'.
Oct 11 03:46:14 compute-0 systemd[1]: Reloading.
Oct 11 03:46:14 compute-0 systemd[152062]: Queued start job for default target Main User Target.
Oct 11 03:46:14 compute-0 systemd[152062]: Created slice User Application Slice.
Oct 11 03:46:14 compute-0 systemd[152062]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 11 03:46:14 compute-0 systemd[152062]: Started Daily Cleanup of User's Temporary Directories.
Oct 11 03:46:14 compute-0 systemd[152062]: Reached target Paths.
Oct 11 03:46:14 compute-0 systemd[152062]: Reached target Timers.
Oct 11 03:46:14 compute-0 systemd-rc-local-generator[152110]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:46:14 compute-0 systemd-sysv-generator[152116]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:46:14 compute-0 systemd[152062]: Starting D-Bus User Message Bus Socket...
Oct 11 03:46:14 compute-0 systemd[152062]: Starting Create User's Volatile Files and Directories...
Oct 11 03:46:14 compute-0 systemd[152062]: Listening on D-Bus User Message Bus Socket.
Oct 11 03:46:14 compute-0 systemd[152062]: Reached target Sockets.
Oct 11 03:46:14 compute-0 systemd[152062]: Finished Create User's Volatile Files and Directories.
Oct 11 03:46:14 compute-0 systemd[152062]: Reached target Basic System.
Oct 11 03:46:14 compute-0 systemd[152062]: Reached target Main User Target.
Oct 11 03:46:14 compute-0 systemd[152062]: Startup finished in 165ms.
Oct 11 03:46:14 compute-0 ceph-mon[74273]: pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:46:14 compute-0 systemd[1]: Started User Manager for UID 0.
Oct 11 03:46:14 compute-0 systemd[1]: Started ovn_controller container.
Oct 11 03:46:14 compute-0 systemd[1]: Started Session c1 of User root.
Oct 11 03:46:14 compute-0 sudo[151967]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:14 compute-0 ovn_controller[152025]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 11 03:46:14 compute-0 ovn_controller[152025]: INFO:__main__:Validating config file
Oct 11 03:46:14 compute-0 ovn_controller[152025]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 11 03:46:14 compute-0 ovn_controller[152025]: INFO:__main__:Writing out command to execute
Oct 11 03:46:14 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Oct 11 03:46:14 compute-0 ovn_controller[152025]: ++ cat /run_command
Oct 11 03:46:14 compute-0 ovn_controller[152025]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 11 03:46:14 compute-0 ovn_controller[152025]: + ARGS=
Oct 11 03:46:14 compute-0 ovn_controller[152025]: + sudo kolla_copy_cacerts
Oct 11 03:46:14 compute-0 systemd[1]: Started Session c2 of User root.
Oct 11 03:46:14 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Oct 11 03:46:14 compute-0 ovn_controller[152025]: + [[ ! -n '' ]]
Oct 11 03:46:14 compute-0 ovn_controller[152025]: + . kolla_extend_start
Oct 11 03:46:14 compute-0 ovn_controller[152025]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 11 03:46:14 compute-0 ovn_controller[152025]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct 11 03:46:14 compute-0 ovn_controller[152025]: + umask 0022
Oct 11 03:46:14 compute-0 ovn_controller[152025]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct 11 03:46:14 compute-0 NetworkManager[44920]: <info>  [1760154374.7044] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct 11 03:46:14 compute-0 NetworkManager[44920]: <info>  [1760154374.7058] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 03:46:14 compute-0 NetworkManager[44920]: <info>  [1760154374.7085] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct 11 03:46:14 compute-0 NetworkManager[44920]: <info>  [1760154374.7099] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct 11 03:46:14 compute-0 NetworkManager[44920]: <info>  [1760154374.7110] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 11 03:46:14 compute-0 kernel: br-int: entered promiscuous mode
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00014|main|INFO|OVS feature set changed, force recompute.
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00017|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00018|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00019|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00022|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00023|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00024|main|INFO|OVS feature set changed, force recompute.
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 11 03:46:14 compute-0 ovn_controller[152025]: 2025-10-11T03:46:14Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 11 03:46:14 compute-0 NetworkManager[44920]: <info>  [1760154374.7344] manager: (ovn-8cedc9-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct 11 03:46:14 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Oct 11 03:46:14 compute-0 NetworkManager[44920]: <info>  [1760154374.7586] device (genev_sys_6081): carrier: link connected
Oct 11 03:46:14 compute-0 NetworkManager[44920]: <info>  [1760154374.7589] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Oct 11 03:46:14 compute-0 systemd-udevd[152206]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 03:46:14 compute-0 systemd-udevd[152213]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 03:46:14 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:15 compute-0 sudo[152292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgozpchgnzewkhxvnxehtwdsogdwprsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154374.6831264-609-41245376657192/AnsiballZ_command.py'
Oct 11 03:46:15 compute-0 sudo[152292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:15 compute-0 python3.9[152294]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:46:15 compute-0 ovs-vsctl[152295]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct 11 03:46:15 compute-0 sudo[152292]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:15 compute-0 sudo[152445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhccueleulsslczaynurkxbceagkbufs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154375.4736538-617-170334743283231/AnsiballZ_command.py'
Oct 11 03:46:15 compute-0 sudo[152445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:16 compute-0 python3.9[152447]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:46:16 compute-0 ovs-vsctl[152449]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct 11 03:46:16 compute-0 sudo[152445]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:16 compute-0 ceph-mon[74273]: pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:16 compute-0 sudo[152602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxrdhhavzvvfeeaupdxrudjjumflvicq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154376.4513986-631-80025937488839/AnsiballZ_command.py'
Oct 11 03:46:16 compute-0 sudo[152602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:16 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:16 compute-0 python3.9[152604]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:46:16 compute-0 ovs-vsctl[152605]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct 11 03:46:17 compute-0 sudo[152602]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:17 compute-0 sshd-session[140532]: Connection closed by 192.168.122.30 port 47958
Oct 11 03:46:17 compute-0 sshd-session[140529]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:46:17 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Oct 11 03:46:17 compute-0 systemd-logind[820]: Session 46 logged out. Waiting for processes to exit.
Oct 11 03:46:17 compute-0 systemd[1]: session-46.scope: Consumed 1min 5.555s CPU time.
Oct 11 03:46:17 compute-0 systemd-logind[820]: Removed session 46.
Oct 11 03:46:18 compute-0 ceph-mon[74273]: pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:18 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:46:20 compute-0 ceph-mon[74273]: pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_03:46:20
Oct 11 03:46:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 03:46:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 03:46:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'backups', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'volumes']
Oct 11 03:46:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 03:46:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:46:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:46:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:46:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:46:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:46:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:46:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 03:46:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:46:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:46:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:46:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:46:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 03:46:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:46:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:46:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:46:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:46:20 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:22 compute-0 ceph-mon[74273]: pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:22 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:23 compute-0 sshd-session[152631]: Accepted publickey for zuul from 192.168.122.30 port 52088 ssh2: ECDSA SHA256:qo9+RMabHfLAOt2q/80W97JXaZUdeUCREBuTRaqgxBY
Oct 11 03:46:23 compute-0 systemd-logind[820]: New session 48 of user zuul.
Oct 11 03:46:23 compute-0 systemd[1]: Started Session 48 of User zuul.
Oct 11 03:46:23 compute-0 sshd-session[152631]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:46:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:46:24 compute-0 sudo[152785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:46:24 compute-0 sudo[152785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:46:24 compute-0 sudo[152785]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:24 compute-0 sudo[152810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:46:24 compute-0 sudo[152810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:46:24 compute-0 ceph-mon[74273]: pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:24 compute-0 systemd[1]: Stopping User Manager for UID 0...
Oct 11 03:46:24 compute-0 systemd[152062]: Activating special unit Exit the Session...
Oct 11 03:46:24 compute-0 sudo[152810]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:24 compute-0 systemd[152062]: Stopped target Main User Target.
Oct 11 03:46:24 compute-0 systemd[152062]: Stopped target Basic System.
Oct 11 03:46:24 compute-0 systemd[152062]: Stopped target Paths.
Oct 11 03:46:24 compute-0 systemd[152062]: Stopped target Sockets.
Oct 11 03:46:24 compute-0 systemd[152062]: Stopped target Timers.
Oct 11 03:46:24 compute-0 systemd[152062]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 11 03:46:24 compute-0 systemd[152062]: Closed D-Bus User Message Bus Socket.
Oct 11 03:46:24 compute-0 systemd[152062]: Stopped Create User's Volatile Files and Directories.
Oct 11 03:46:24 compute-0 systemd[152062]: Removed slice User Application Slice.
Oct 11 03:46:24 compute-0 systemd[152062]: Reached target Shutdown.
Oct 11 03:46:24 compute-0 systemd[152062]: Finished Exit the Session.
Oct 11 03:46:24 compute-0 systemd[152062]: Reached target Exit the Session.
Oct 11 03:46:24 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Oct 11 03:46:24 compute-0 systemd[1]: Stopped User Manager for UID 0.
Oct 11 03:46:24 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 11 03:46:24 compute-0 python3.9[152784]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:46:24 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 11 03:46:24 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 11 03:46:24 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 11 03:46:24 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Oct 11 03:46:24 compute-0 sudo[152835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:46:24 compute-0 sudo[152835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:46:24 compute-0 sudo[152835]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:24 compute-0 sudo[152866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 03:46:24 compute-0 sudo[152866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:46:24 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:25 compute-0 sudo[152866]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:46:25 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:46:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 03:46:25 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:46:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 03:46:25 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:46:25 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev a9f05142-e5a3-4888-b774-182028b38001 does not exist
Oct 11 03:46:25 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 279af303-0caf-4b31-ab96-0a3c643bec59 does not exist
Oct 11 03:46:25 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev d58362bb-d12b-4b49-89fe-8710a6d81256 does not exist
Oct 11 03:46:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 03:46:25 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:46:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 03:46:25 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:46:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:46:25 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:46:25 compute-0 sudo[153015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:46:25 compute-0 sudo[153015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:46:25 compute-0 sudo[153015]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:25 compute-0 sudo[153060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:46:25 compute-0 sudo[153060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:46:25 compute-0 sudo[153060]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:25 compute-0 sudo[153124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpsliwpnqlxuehcwqnesphtsbrgpbxns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154385.2650056-34-136496861800085/AnsiballZ_file.py'
Oct 11 03:46:25 compute-0 sudo[153124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:25 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:46:25 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:46:25 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:46:25 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:46:25 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:46:25 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:46:25 compute-0 sudo[153115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:46:25 compute-0 sudo[153115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:46:25 compute-0 sudo[153115]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:25 compute-0 sudo[153148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 03:46:25 compute-0 sudo[153148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:46:25 compute-0 python3.9[153139]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:46:25 compute-0 sudo[153124]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:26 compute-0 podman[153242]: 2025-10-11 03:46:26.136399223 +0000 UTC m=+0.058622940 container create 3db020ff49e5aca92324d104c63f41b09e394059877881b8a693dac609353f25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 03:46:26 compute-0 systemd[1]: Started libpod-conmon-3db020ff49e5aca92324d104c63f41b09e394059877881b8a693dac609353f25.scope.
Oct 11 03:46:26 compute-0 podman[153242]: 2025-10-11 03:46:26.106941705 +0000 UTC m=+0.029165502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:46:26 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:46:26 compute-0 podman[153242]: 2025-10-11 03:46:26.227070763 +0000 UTC m=+0.149294480 container init 3db020ff49e5aca92324d104c63f41b09e394059877881b8a693dac609353f25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldstine, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:46:26 compute-0 podman[153242]: 2025-10-11 03:46:26.234441641 +0000 UTC m=+0.156665358 container start 3db020ff49e5aca92324d104c63f41b09e394059877881b8a693dac609353f25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldstine, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 03:46:26 compute-0 podman[153242]: 2025-10-11 03:46:26.238052072 +0000 UTC m=+0.160275819 container attach 3db020ff49e5aca92324d104c63f41b09e394059877881b8a693dac609353f25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldstine, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 03:46:26 compute-0 quizzical_goldstine[153307]: 167 167
Oct 11 03:46:26 compute-0 systemd[1]: libpod-3db020ff49e5aca92324d104c63f41b09e394059877881b8a693dac609353f25.scope: Deactivated successfully.
Oct 11 03:46:26 compute-0 podman[153242]: 2025-10-11 03:46:26.240034268 +0000 UTC m=+0.162257975 container died 3db020ff49e5aca92324d104c63f41b09e394059877881b8a693dac609353f25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 03:46:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbd4f97648c784684f68f74fa561d657903742164cac86488b2c880cdf501dd6-merged.mount: Deactivated successfully.
Oct 11 03:46:26 compute-0 podman[153242]: 2025-10-11 03:46:26.291090504 +0000 UTC m=+0.213314211 container remove 3db020ff49e5aca92324d104c63f41b09e394059877881b8a693dac609353f25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldstine, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:46:26 compute-0 systemd[1]: libpod-conmon-3db020ff49e5aca92324d104c63f41b09e394059877881b8a693dac609353f25.scope: Deactivated successfully.
Oct 11 03:46:26 compute-0 sudo[153398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxpnwjgwwulolmpwncaybmvphwojcfqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154386.1070855-34-24281771449297/AnsiballZ_file.py'
Oct 11 03:46:26 compute-0 sudo[153398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:26 compute-0 podman[153405]: 2025-10-11 03:46:26.525519109 +0000 UTC m=+0.061942373 container create ae79fb60d23cd0751def60cc8551c2784f577b20f9db10e865e0a42fca9644d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_carson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:46:26 compute-0 systemd[1]: Started libpod-conmon-ae79fb60d23cd0751def60cc8551c2784f577b20f9db10e865e0a42fca9644d1.scope.
Oct 11 03:46:26 compute-0 podman[153405]: 2025-10-11 03:46:26.496096302 +0000 UTC m=+0.032519626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:46:26 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/371be94e29c5e8938d7ca5dc3875bc1375049698762df9df79b3bc37718e39a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/371be94e29c5e8938d7ca5dc3875bc1375049698762df9df79b3bc37718e39a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/371be94e29c5e8938d7ca5dc3875bc1375049698762df9df79b3bc37718e39a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/371be94e29c5e8938d7ca5dc3875bc1375049698762df9df79b3bc37718e39a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/371be94e29c5e8938d7ca5dc3875bc1375049698762df9df79b3bc37718e39a4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:46:26 compute-0 python3.9[153406]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:46:26 compute-0 podman[153405]: 2025-10-11 03:46:26.686240441 +0000 UTC m=+0.222663695 container init ae79fb60d23cd0751def60cc8551c2784f577b20f9db10e865e0a42fca9644d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:46:26 compute-0 podman[153405]: 2025-10-11 03:46:26.695065299 +0000 UTC m=+0.231488523 container start ae79fb60d23cd0751def60cc8551c2784f577b20f9db10e865e0a42fca9644d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_carson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 03:46:26 compute-0 sudo[153398]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:26 compute-0 podman[153405]: 2025-10-11 03:46:26.699455813 +0000 UTC m=+0.235879077 container attach ae79fb60d23cd0751def60cc8551c2784f577b20f9db10e865e0a42fca9644d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 03:46:26 compute-0 ceph-mon[74273]: pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:26 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:27 compute-0 sudo[153577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awmdljxfkzxckzntawdgpogbbsxwiqve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154386.8423967-34-206550067858775/AnsiballZ_file.py'
Oct 11 03:46:27 compute-0 sudo[153577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:27 compute-0 python3.9[153579]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:46:27 compute-0 sudo[153577]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:27 compute-0 busy_carson[153423]: --> passed data devices: 0 physical, 3 LVM
Oct 11 03:46:27 compute-0 busy_carson[153423]: --> relative data size: 1.0
Oct 11 03:46:27 compute-0 busy_carson[153423]: --> All data devices are unavailable
Oct 11 03:46:27 compute-0 systemd[1]: libpod-ae79fb60d23cd0751def60cc8551c2784f577b20f9db10e865e0a42fca9644d1.scope: Deactivated successfully.
Oct 11 03:46:27 compute-0 podman[153405]: 2025-10-11 03:46:27.791246888 +0000 UTC m=+1.327670112 container died ae79fb60d23cd0751def60cc8551c2784f577b20f9db10e865e0a42fca9644d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_carson, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 03:46:27 compute-0 systemd[1]: libpod-ae79fb60d23cd0751def60cc8551c2784f577b20f9db10e865e0a42fca9644d1.scope: Consumed 1.023s CPU time.
Oct 11 03:46:27 compute-0 sudo[153753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfzeawhcyzplliejaaagtogdahgajrze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154387.4979866-34-266308164068529/AnsiballZ_file.py'
Oct 11 03:46:27 compute-0 sudo[153753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-371be94e29c5e8938d7ca5dc3875bc1375049698762df9df79b3bc37718e39a4-merged.mount: Deactivated successfully.
Oct 11 03:46:27 compute-0 podman[153405]: 2025-10-11 03:46:27.897461596 +0000 UTC m=+1.433884840 container remove ae79fb60d23cd0751def60cc8551c2784f577b20f9db10e865e0a42fca9644d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_carson, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 03:46:27 compute-0 systemd[1]: libpod-conmon-ae79fb60d23cd0751def60cc8551c2784f577b20f9db10e865e0a42fca9644d1.scope: Deactivated successfully.
Oct 11 03:46:27 compute-0 sudo[153148]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:28 compute-0 sudo[153770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:46:28 compute-0 sudo[153770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:46:28 compute-0 sudo[153770]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:28 compute-0 python3.9[153761]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:46:28 compute-0 sudo[153753]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:28 compute-0 sudo[153795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:46:28 compute-0 sudo[153795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:46:28 compute-0 sudo[153795]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:28 compute-0 sudo[153822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:46:28 compute-0 sudo[153822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:46:28 compute-0 sudo[153822]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:28 compute-0 sudo[153870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 03:46:28 compute-0 sudo[153870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:46:28 compute-0 sudo[154052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djgmzwhcgdwgrdgyspcspefezogavbli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154388.2288647-34-205574126470915/AnsiballZ_file.py'
Oct 11 03:46:28 compute-0 sudo[154052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:28 compute-0 podman[154062]: 2025-10-11 03:46:28.633816671 +0000 UTC m=+0.046692754 container create 35c2cedd6ee411bf1fa7fbe9a00d0eedf77b1577a57e893cca32a83ec2f4dd94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_bohr, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:46:28 compute-0 systemd[1]: Started libpod-conmon-35c2cedd6ee411bf1fa7fbe9a00d0eedf77b1577a57e893cca32a83ec2f4dd94.scope.
Oct 11 03:46:28 compute-0 podman[154062]: 2025-10-11 03:46:28.613537841 +0000 UTC m=+0.026413884 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:46:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:46:28 compute-0 podman[154062]: 2025-10-11 03:46:28.734428792 +0000 UTC m=+0.147304855 container init 35c2cedd6ee411bf1fa7fbe9a00d0eedf77b1577a57e893cca32a83ec2f4dd94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_bohr, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 03:46:28 compute-0 ceph-mon[74273]: pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:28 compute-0 podman[154062]: 2025-10-11 03:46:28.74788975 +0000 UTC m=+0.160765783 container start 35c2cedd6ee411bf1fa7fbe9a00d0eedf77b1577a57e893cca32a83ec2f4dd94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_bohr, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 03:46:28 compute-0 podman[154062]: 2025-10-11 03:46:28.752524811 +0000 UTC m=+0.165400874 container attach 35c2cedd6ee411bf1fa7fbe9a00d0eedf77b1577a57e893cca32a83ec2f4dd94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 03:46:28 compute-0 wizardly_bohr[154079]: 167 167
Oct 11 03:46:28 compute-0 systemd[1]: libpod-35c2cedd6ee411bf1fa7fbe9a00d0eedf77b1577a57e893cca32a83ec2f4dd94.scope: Deactivated successfully.
Oct 11 03:46:28 compute-0 podman[154062]: 2025-10-11 03:46:28.758344344 +0000 UTC m=+0.171220407 container died 35c2cedd6ee411bf1fa7fbe9a00d0eedf77b1577a57e893cca32a83ec2f4dd94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 03:46:28 compute-0 python3.9[154058]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:46:28 compute-0 sudo[154052]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a193d14dfb7b18930bcbdace40afc0353862d702470f3d27c959c8e0c80e1793-merged.mount: Deactivated successfully.
Oct 11 03:46:28 compute-0 podman[154062]: 2025-10-11 03:46:28.808552607 +0000 UTC m=+0.221428640 container remove 35c2cedd6ee411bf1fa7fbe9a00d0eedf77b1577a57e893cca32a83ec2f4dd94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:46:28 compute-0 systemd[1]: libpod-conmon-35c2cedd6ee411bf1fa7fbe9a00d0eedf77b1577a57e893cca32a83ec2f4dd94.scope: Deactivated successfully.
Oct 11 03:46:28 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:29 compute-0 podman[154154]: 2025-10-11 03:46:29.038728572 +0000 UTC m=+0.055804451 container create 5482cceec110bac2712b287b77cd7c413c10363142a379bf3618d45d60c37059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 03:46:29 compute-0 systemd[1]: Started libpod-conmon-5482cceec110bac2712b287b77cd7c413c10363142a379bf3618d45d60c37059.scope.
Oct 11 03:46:29 compute-0 podman[154154]: 2025-10-11 03:46:29.014945733 +0000 UTC m=+0.032021632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:46:29 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:46:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ffef9482e58f3b7ac68c957c344e890e50c5642b002dc33ccfc10468d8c5a3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:46:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ffef9482e58f3b7ac68c957c344e890e50c5642b002dc33ccfc10468d8c5a3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:46:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ffef9482e58f3b7ac68c957c344e890e50c5642b002dc33ccfc10468d8c5a3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:46:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ffef9482e58f3b7ac68c957c344e890e50c5642b002dc33ccfc10468d8c5a3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:46:29 compute-0 podman[154154]: 2025-10-11 03:46:29.142930474 +0000 UTC m=+0.160006373 container init 5482cceec110bac2712b287b77cd7c413c10363142a379bf3618d45d60c37059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lamarr, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 03:46:29 compute-0 podman[154154]: 2025-10-11 03:46:29.158626375 +0000 UTC m=+0.175702244 container start 5482cceec110bac2712b287b77cd7c413c10363142a379bf3618d45d60c37059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lamarr, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:46:29 compute-0 podman[154154]: 2025-10-11 03:46:29.161786234 +0000 UTC m=+0.178862153 container attach 5482cceec110bac2712b287b77cd7c413c10363142a379bf3618d45d60c37059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lamarr, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:46:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:46:29 compute-0 python3.9[154272]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]: {
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:     "0": [
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:         {
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "devices": [
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "/dev/loop3"
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             ],
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "lv_name": "ceph_lv0",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "lv_size": "21470642176",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "name": "ceph_lv0",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "tags": {
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.cluster_name": "ceph",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.crush_device_class": "",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.encrypted": "0",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.osd_id": "0",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.type": "block",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.vdo": "0"
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             },
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "type": "block",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "vg_name": "ceph_vg0"
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:         }
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:     ],
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:     "1": [
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:         {
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "devices": [
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "/dev/loop4"
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             ],
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "lv_name": "ceph_lv1",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "lv_size": "21470642176",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "name": "ceph_lv1",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "tags": {
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.cluster_name": "ceph",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.crush_device_class": "",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.encrypted": "0",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.osd_id": "1",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.type": "block",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.vdo": "0"
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             },
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "type": "block",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "vg_name": "ceph_vg1"
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:         }
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:     ],
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:     "2": [
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:         {
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "devices": [
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "/dev/loop5"
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             ],
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "lv_name": "ceph_lv2",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "lv_size": "21470642176",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "name": "ceph_lv2",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "tags": {
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.cluster_name": "ceph",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.crush_device_class": "",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.encrypted": "0",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.osd_id": "2",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.type": "block",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:                 "ceph.vdo": "0"
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             },
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "type": "block",
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:             "vg_name": "ceph_vg2"
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:         }
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]:     ]
Oct 11 03:46:29 compute-0 agitated_lamarr[154214]: }
Oct 11 03:46:29 compute-0 systemd[1]: libpod-5482cceec110bac2712b287b77cd7c413c10363142a379bf3618d45d60c37059.scope: Deactivated successfully.
Oct 11 03:46:29 compute-0 podman[154154]: 2025-10-11 03:46:29.992323559 +0000 UTC m=+1.009399438 container died 5482cceec110bac2712b287b77cd7c413c10363142a379bf3618d45d60c37059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lamarr, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 03:46:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ffef9482e58f3b7ac68c957c344e890e50c5642b002dc33ccfc10468d8c5a3d-merged.mount: Deactivated successfully.
Oct 11 03:46:30 compute-0 podman[154154]: 2025-10-11 03:46:30.041332347 +0000 UTC m=+1.058408226 container remove 5482cceec110bac2712b287b77cd7c413c10363142a379bf3618d45d60c37059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lamarr, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 03:46:30 compute-0 systemd[1]: libpod-conmon-5482cceec110bac2712b287b77cd7c413c10363142a379bf3618d45d60c37059.scope: Deactivated successfully.
Oct 11 03:46:30 compute-0 sudo[153870]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:30 compute-0 sudo[154367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:46:30 compute-0 sudo[154367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:46:30 compute-0 sudo[154367]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:30 compute-0 sudo[154415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:46:30 compute-0 sudo[154415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:46:30 compute-0 sudo[154415]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:30 compute-0 sudo[154464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:46:30 compute-0 sudo[154464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:46:30 compute-0 sudo[154464]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:30 compute-0 sudo[154514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfmrocvovftmoekcrcdunuvhgaehpetb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154389.8293364-78-114971473037002/AnsiballZ_seboolean.py'
Oct 11 03:46:30 compute-0 sudo[154514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:30 compute-0 sudo[154517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 03:46:30 compute-0 sudo[154517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:46:30 compute-0 python3.9[154521]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 11 03:46:30 compute-0 podman[154584]: 2025-10-11 03:46:30.702496908 +0000 UTC m=+0.065606897 container create b3c10067a32bb4754dbe1d63f15d87befabc086f42a67b00a3ea88340750f9d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chandrasekhar, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:46:30 compute-0 ceph-mon[74273]: pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:30 compute-0 systemd[1]: Started libpod-conmon-b3c10067a32bb4754dbe1d63f15d87befabc086f42a67b00a3ea88340750f9d6.scope.
Oct 11 03:46:30 compute-0 podman[154584]: 2025-10-11 03:46:30.667657588 +0000 UTC m=+0.030767597 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:46:30 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:46:30 compute-0 podman[154584]: 2025-10-11 03:46:30.809032245 +0000 UTC m=+0.172142274 container init b3c10067a32bb4754dbe1d63f15d87befabc086f42a67b00a3ea88340750f9d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chandrasekhar, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 03:46:30 compute-0 podman[154584]: 2025-10-11 03:46:30.82059443 +0000 UTC m=+0.183704429 container start b3c10067a32bb4754dbe1d63f15d87befabc086f42a67b00a3ea88340750f9d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chandrasekhar, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:46:30 compute-0 podman[154584]: 2025-10-11 03:46:30.826027373 +0000 UTC m=+0.189137382 container attach b3c10067a32bb4754dbe1d63f15d87befabc086f42a67b00a3ea88340750f9d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:46:30 compute-0 sweet_chandrasekhar[154600]: 167 167
Oct 11 03:46:30 compute-0 systemd[1]: libpod-b3c10067a32bb4754dbe1d63f15d87befabc086f42a67b00a3ea88340750f9d6.scope: Deactivated successfully.
Oct 11 03:46:30 compute-0 conmon[154600]: conmon b3c10067a32bb4754dbe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b3c10067a32bb4754dbe1d63f15d87befabc086f42a67b00a3ea88340750f9d6.scope/container/memory.events
Oct 11 03:46:30 compute-0 podman[154584]: 2025-10-11 03:46:30.830435037 +0000 UTC m=+0.193545036 container died b3c10067a32bb4754dbe1d63f15d87befabc086f42a67b00a3ea88340750f9d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Oct 11 03:46:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-06db40323cbc904b7b9de07f0b7e04238de0c83c41b99fe8f9a72ca70e63f1ed-merged.mount: Deactivated successfully.
Oct 11 03:46:30 compute-0 podman[154584]: 2025-10-11 03:46:30.879283231 +0000 UTC m=+0.242393200 container remove b3c10067a32bb4754dbe1d63f15d87befabc086f42a67b00a3ea88340750f9d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 03:46:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 03:46:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:46:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 03:46:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:46:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:46:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:46:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:46:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:46:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:46:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:46:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:46:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:46:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 03:46:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:46:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:46:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:46:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 03:46:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:46:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 03:46:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:46:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:46:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:46:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 03:46:30 compute-0 systemd[1]: libpod-conmon-b3c10067a32bb4754dbe1d63f15d87befabc086f42a67b00a3ea88340750f9d6.scope: Deactivated successfully.
Oct 11 03:46:30 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:31 compute-0 podman[154624]: 2025-10-11 03:46:31.080036749 +0000 UTC m=+0.047495367 container create 521cd5e9cef7b6eb3670a391af4d6e72ae994cfed3c7a8bcd16ca29ce8552f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dijkstra, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:46:31 compute-0 systemd[1]: Started libpod-conmon-521cd5e9cef7b6eb3670a391af4d6e72ae994cfed3c7a8bcd16ca29ce8552f5f.scope.
Oct 11 03:46:31 compute-0 sudo[154514]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:46:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4fbb1c4da5bbfaac86bc857002e0ac8b66f3bf78fb6202f53b39a4fc36fc2c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:46:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4fbb1c4da5bbfaac86bc857002e0ac8b66f3bf78fb6202f53b39a4fc36fc2c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:46:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4fbb1c4da5bbfaac86bc857002e0ac8b66f3bf78fb6202f53b39a4fc36fc2c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:46:31 compute-0 podman[154624]: 2025-10-11 03:46:31.058820592 +0000 UTC m=+0.026279200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:46:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4fbb1c4da5bbfaac86bc857002e0ac8b66f3bf78fb6202f53b39a4fc36fc2c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:46:31 compute-0 podman[154624]: 2025-10-11 03:46:31.175184126 +0000 UTC m=+0.142642804 container init 521cd5e9cef7b6eb3670a391af4d6e72ae994cfed3c7a8bcd16ca29ce8552f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dijkstra, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:46:31 compute-0 podman[154624]: 2025-10-11 03:46:31.190889547 +0000 UTC m=+0.158348155 container start 521cd5e9cef7b6eb3670a391af4d6e72ae994cfed3c7a8bcd16ca29ce8552f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dijkstra, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:46:31 compute-0 podman[154624]: 2025-10-11 03:46:31.195023094 +0000 UTC m=+0.162481782 container attach 521cd5e9cef7b6eb3670a391af4d6e72ae994cfed3c7a8bcd16ca29ce8552f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 03:46:31 compute-0 python3.9[154795]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:46:32 compute-0 angry_dijkstra[154641]: {
Oct 11 03:46:32 compute-0 angry_dijkstra[154641]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 03:46:32 compute-0 angry_dijkstra[154641]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:46:32 compute-0 angry_dijkstra[154641]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 03:46:32 compute-0 angry_dijkstra[154641]:         "osd_id": 1,
Oct 11 03:46:32 compute-0 angry_dijkstra[154641]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:46:32 compute-0 angry_dijkstra[154641]:         "type": "bluestore"
Oct 11 03:46:32 compute-0 angry_dijkstra[154641]:     },
Oct 11 03:46:32 compute-0 angry_dijkstra[154641]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 03:46:32 compute-0 angry_dijkstra[154641]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:46:32 compute-0 angry_dijkstra[154641]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 03:46:32 compute-0 angry_dijkstra[154641]:         "osd_id": 2,
Oct 11 03:46:32 compute-0 angry_dijkstra[154641]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:46:32 compute-0 angry_dijkstra[154641]:         "type": "bluestore"
Oct 11 03:46:32 compute-0 angry_dijkstra[154641]:     },
Oct 11 03:46:32 compute-0 angry_dijkstra[154641]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 03:46:32 compute-0 angry_dijkstra[154641]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:46:32 compute-0 angry_dijkstra[154641]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 03:46:32 compute-0 angry_dijkstra[154641]:         "osd_id": 0,
Oct 11 03:46:32 compute-0 angry_dijkstra[154641]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:46:32 compute-0 angry_dijkstra[154641]:         "type": "bluestore"
Oct 11 03:46:32 compute-0 angry_dijkstra[154641]:     }
Oct 11 03:46:32 compute-0 angry_dijkstra[154641]: }
Oct 11 03:46:32 compute-0 systemd[1]: libpod-521cd5e9cef7b6eb3670a391af4d6e72ae994cfed3c7a8bcd16ca29ce8552f5f.scope: Deactivated successfully.
Oct 11 03:46:32 compute-0 systemd[1]: libpod-521cd5e9cef7b6eb3670a391af4d6e72ae994cfed3c7a8bcd16ca29ce8552f5f.scope: Consumed 1.085s CPU time.
Oct 11 03:46:32 compute-0 podman[154624]: 2025-10-11 03:46:32.269822971 +0000 UTC m=+1.237281609 container died 521cd5e9cef7b6eb3670a391af4d6e72ae994cfed3c7a8bcd16ca29ce8552f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Oct 11 03:46:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4fbb1c4da5bbfaac86bc857002e0ac8b66f3bf78fb6202f53b39a4fc36fc2c5-merged.mount: Deactivated successfully.
Oct 11 03:46:32 compute-0 podman[154624]: 2025-10-11 03:46:32.348396701 +0000 UTC m=+1.315855329 container remove 521cd5e9cef7b6eb3670a391af4d6e72ae994cfed3c7a8bcd16ca29ce8552f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dijkstra, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 03:46:32 compute-0 systemd[1]: libpod-conmon-521cd5e9cef7b6eb3670a391af4d6e72ae994cfed3c7a8bcd16ca29ce8552f5f.scope: Deactivated successfully.
Oct 11 03:46:32 compute-0 sudo[154517]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:46:32 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:46:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:46:32 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:46:32 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 7ed33752-d71f-4dc5-9baa-b7b135d9fdb6 does not exist
Oct 11 03:46:32 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 08e435da-2e14-40df-938f-229b4578bf77 does not exist
Oct 11 03:46:32 compute-0 sudo[154911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:46:32 compute-0 sudo[154911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:46:32 compute-0 sudo[154911]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:32 compute-0 sudo[154960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 03:46:32 compute-0 sudo[154960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:46:32 compute-0 sudo[154960]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:32 compute-0 ceph-mon[74273]: pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:46:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:46:32 compute-0 python3.9[155010]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760154391.3091633-86-158404096287600/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:46:32 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:33 compute-0 python3.9[155162]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:46:34 compute-0 python3.9[155283]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760154392.974565-101-120819328316526/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:46:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:46:34 compute-0 sudo[155433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxmprpudnqrklxbywjbqwvwtuvghbgsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154394.3738494-118-28128573712027/AnsiballZ_setup.py'
Oct 11 03:46:34 compute-0 sudo[155433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:34 compute-0 ceph-mon[74273]: pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:34 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:34 compute-0 python3.9[155435]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 03:46:35 compute-0 sudo[155433]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:35 compute-0 sudo[155517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeiujoitwwixgiwtvwshsxgmxxxnyrfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154394.3738494-118-28128573712027/AnsiballZ_dnf.py'
Oct 11 03:46:35 compute-0 sudo[155517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:35 compute-0 python3.9[155519]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 03:46:36 compute-0 ceph-mon[74273]: pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:36 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:37 compute-0 sudo[155517]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:38 compute-0 sudo[155670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neogmahntvsdbcinwbcxdvppfndhycnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154397.3715644-130-222091218491596/AnsiballZ_systemd.py'
Oct 11 03:46:38 compute-0 sudo[155670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:38 compute-0 python3.9[155672]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 03:46:38 compute-0 sudo[155670]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:38 compute-0 ceph-mon[74273]: pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:38 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:39 compute-0 python3.9[155825]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:46:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:46:39 compute-0 python3.9[155946]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760154398.660527-138-189182950890708/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:46:40 compute-0 python3.9[156096]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:46:40 compute-0 ceph-mon[74273]: pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:40 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:41 compute-0 python3.9[156217]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760154400.0418005-138-247713734885342/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:46:41 compute-0 ceph-mon[74273]: pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:42 compute-0 python3.9[156367]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:46:42 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:43 compute-0 python3.9[156488]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760154401.8911066-182-25621099572624/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:46:43 compute-0 python3.9[156638]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:46:43 compute-0 ceph-mon[74273]: pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:44 compute-0 python3.9[156759]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760154403.199454-182-105434298086569/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:46:44 compute-0 ovn_controller[152025]: 2025-10-11T03:46:44Z|00025|memory|INFO|16256 kB peak resident set size after 29.7 seconds
Oct 11 03:46:44 compute-0 ovn_controller[152025]: 2025-10-11T03:46:44Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Oct 11 03:46:44 compute-0 podman[156760]: 2025-10-11 03:46:44.415851097 +0000 UTC m=+0.117960160 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 11 03:46:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:46:44 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:44 compute-0 python3.9[156936]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:46:45 compute-0 sudo[157088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldpmmlyieisilgpasrddiayjtcugaeng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154405.2584627-220-180701875409797/AnsiballZ_file.py'
Oct 11 03:46:45 compute-0 sudo[157088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:45 compute-0 python3.9[157090]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:46:45 compute-0 sudo[157088]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:46 compute-0 ceph-mon[74273]: pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:46 compute-0 sudo[157240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htmjmvyrinseodeswxnxwbafdipqlxcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154405.969823-228-206583115066587/AnsiballZ_stat.py'
Oct 11 03:46:46 compute-0 sudo[157240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:46 compute-0 python3.9[157242]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:46:46 compute-0 sudo[157240]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:46 compute-0 sudo[157318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baxoympjcmovzslxsyynaudcxhmfgmut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154405.969823-228-206583115066587/AnsiballZ_file.py'
Oct 11 03:46:46 compute-0 sudo[157318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:46 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:47 compute-0 python3.9[157320]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:46:47 compute-0 sudo[157318]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:47 compute-0 sudo[157470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woxjcqetbnotastjrlktjkgammmeahdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154407.1822433-228-146458048191743/AnsiballZ_stat.py'
Oct 11 03:46:47 compute-0 sudo[157470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:47 compute-0 python3.9[157472]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:46:47 compute-0 sudo[157470]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:48 compute-0 sudo[157548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbiiuyeuayiypugqcwzkrmkhfetygpbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154407.1822433-228-146458048191743/AnsiballZ_file.py'
Oct 11 03:46:48 compute-0 sudo[157548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:48 compute-0 ceph-mon[74273]: pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:48 compute-0 python3.9[157550]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:46:48 compute-0 sudo[157548]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:48 compute-0 sudo[157700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmccxmhqxqlbnofxfmmqfmdlntkzqudq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154408.4216075-251-159785458060352/AnsiballZ_file.py'
Oct 11 03:46:48 compute-0 sudo[157700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:48 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:49 compute-0 python3.9[157702]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:46:49 compute-0 sudo[157700]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:46:49 compute-0 sudo[157852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hunhkzablbydkudteiaenmfvlqltoltc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154409.2213225-259-64601647098065/AnsiballZ_stat.py'
Oct 11 03:46:49 compute-0 sudo[157852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:49 compute-0 python3.9[157854]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:46:49 compute-0 sudo[157852]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:50 compute-0 ceph-mon[74273]: pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:50 compute-0 sudo[157930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dznowbbwnphacqisavnffoselyobvnwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154409.2213225-259-64601647098065/AnsiballZ_file.py'
Oct 11 03:46:50 compute-0 sudo[157930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:50 compute-0 python3.9[157932]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:46:50 compute-0 sudo[157930]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:46:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:46:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:46:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:46:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:46:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:46:50 compute-0 sudo[158082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyepymwhoxqrbngwcsvzwrpolxgobnhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154410.411396-271-196186570405628/AnsiballZ_stat.py'
Oct 11 03:46:50 compute-0 sudo[158082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:50 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:51 compute-0 python3.9[158084]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:46:51 compute-0 sudo[158082]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:51 compute-0 sudo[158160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfmwrjqyczyjlwcdvjcontpcxckhjhnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154410.411396-271-196186570405628/AnsiballZ_file.py'
Oct 11 03:46:51 compute-0 sudo[158160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:51 compute-0 python3.9[158162]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:46:51 compute-0 sudo[158160]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:52 compute-0 ceph-mon[74273]: pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:52 compute-0 sudo[158312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plmfubybiemzjqbvnowxsslmjfmmciif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154411.7599573-283-231237591516742/AnsiballZ_systemd.py'
Oct 11 03:46:52 compute-0 sudo[158312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:52 compute-0 python3.9[158314]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:46:52 compute-0 systemd[1]: Reloading.
Oct 11 03:46:52 compute-0 systemd-sysv-generator[158346]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:46:52 compute-0 systemd-rc-local-generator[158343]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:46:52 compute-0 sudo[158312]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:52 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:53 compute-0 sudo[158502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfnrpzsiukrjimregdutjtgxtkyinqtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154412.9881582-291-112808048514091/AnsiballZ_stat.py'
Oct 11 03:46:53 compute-0 sudo[158502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:53 compute-0 python3.9[158504]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:46:53 compute-0 sudo[158502]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:53 compute-0 sudo[158580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saeizpvbinsgeiynqeynmxzmsdxqimmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154412.9881582-291-112808048514091/AnsiballZ_file.py'
Oct 11 03:46:53 compute-0 sudo[158580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:53 compute-0 python3.9[158582]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:46:53 compute-0 sudo[158580]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:54 compute-0 ceph-mon[74273]: pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:54 compute-0 sudo[158732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaayxyikcbxjlwzphtjxxkkimhjaegez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154414.1231492-303-209978515015731/AnsiballZ_stat.py'
Oct 11 03:46:54 compute-0 sudo[158732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:46:54 compute-0 python3.9[158734]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:46:54 compute-0 sudo[158732]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:54 compute-0 sudo[158810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xweeqrjhdlfgmculrsanonuvswplxpqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154414.1231492-303-209978515015731/AnsiballZ_file.py'
Oct 11 03:46:54 compute-0 sudo[158810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:54 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:55 compute-0 python3.9[158812]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:46:55 compute-0 sudo[158810]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:55 compute-0 sudo[158962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufnmspkcxdqlmosbmxvyovyffpowbovy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154415.3249876-315-200861085889100/AnsiballZ_systemd.py'
Oct 11 03:46:55 compute-0 sudo[158962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:55 compute-0 python3.9[158964]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:46:55 compute-0 systemd[1]: Reloading.
Oct 11 03:46:56 compute-0 systemd-rc-local-generator[158992]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:46:56 compute-0 systemd-sysv-generator[158997]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:46:56 compute-0 ceph-mon[74273]: pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:56 compute-0 systemd[1]: Starting Create netns directory...
Oct 11 03:46:56 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 11 03:46:56 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 11 03:46:56 compute-0 systemd[1]: Finished Create netns directory.
Oct 11 03:46:56 compute-0 sudo[158962]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:56 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:56 compute-0 sudo[159155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjqqsdcliyicllhfuvxjzuvkyxodgcfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154416.636354-325-192775822233353/AnsiballZ_file.py'
Oct 11 03:46:56 compute-0 sudo[159155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:57 compute-0 python3.9[159157]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:46:57 compute-0 sudo[159155]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:57 compute-0 sudo[159307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vftntbokprditwetedwhhcakjrcsitam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154417.375065-333-37124435515654/AnsiballZ_stat.py'
Oct 11 03:46:57 compute-0 sudo[159307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:57 compute-0 python3.9[159309]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:46:57 compute-0 sudo[159307]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:58 compute-0 ceph-mon[74273]: pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:58 compute-0 sudo[159430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbsbaekulfozrfioclyiofkbioosmsgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154417.375065-333-37124435515654/AnsiballZ_copy.py'
Oct 11 03:46:58 compute-0 sudo[159430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:58 compute-0 python3.9[159432]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760154417.375065-333-37124435515654/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:46:58 compute-0 sudo[159430]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:58 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:46:59 compute-0 sudo[159582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srluzogqfvshhrnljcckmjjmyjjbssmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154418.8467321-350-137643126027730/AnsiballZ_file.py'
Oct 11 03:46:59 compute-0 sudo[159582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:46:59 compute-0 python3.9[159584]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:46:59 compute-0 sudo[159582]: pam_unix(sudo:session): session closed for user root
Oct 11 03:46:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:46:59 compute-0 sudo[159734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlsrrpfcumvmegskenqzqpkaavwcujge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154419.5725944-358-92880697948116/AnsiballZ_stat.py'
Oct 11 03:46:59 compute-0 sudo[159734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:00 compute-0 python3.9[159736]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:47:00 compute-0 sudo[159734]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:00 compute-0 ceph-mon[74273]: pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:00 compute-0 sudo[159857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tymanhuovtwgentyitglfmdrfwncebuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154419.5725944-358-92880697948116/AnsiballZ_copy.py'
Oct 11 03:47:00 compute-0 sudo[159857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:00 compute-0 python3.9[159859]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760154419.5725944-358-92880697948116/.source.json _original_basename=.9it_nbnr follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:47:00 compute-0 sudo[159857]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:00 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:01 compute-0 sudo[160009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyjemvmvowsxicplqtavjkldjpvjkfjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154420.9680808-373-185882529343148/AnsiballZ_file.py'
Oct 11 03:47:01 compute-0 sudo[160009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:01 compute-0 python3.9[160011]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:47:01 compute-0 sudo[160009]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:01 compute-0 sudo[160161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnkktvmbkunqpddliafpucyotkyugmvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154421.6943989-381-76097111272203/AnsiballZ_stat.py'
Oct 11 03:47:02 compute-0 sudo[160161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:02 compute-0 sudo[160161]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:02 compute-0 ceph-mon[74273]: pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:02 compute-0 sudo[160284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hssrlixtsifhiichbeehdrygldkkensh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154421.6943989-381-76097111272203/AnsiballZ_copy.py'
Oct 11 03:47:02 compute-0 sudo[160284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:02 compute-0 sudo[160284]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:02 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:03 compute-0 sudo[160436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvccyapippuwfmbhymmlsbtkfuwyzgka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154423.1159017-398-223721750007888/AnsiballZ_container_config_data.py'
Oct 11 03:47:03 compute-0 sudo[160436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:03 compute-0 python3.9[160438]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct 11 03:47:03 compute-0 sudo[160436]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:04 compute-0 ceph-mon[74273]: pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:47:04 compute-0 sudo[160588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzlgceinrbtpgzmxpdbwjlemnaqwipbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154423.9777434-407-76951800467748/AnsiballZ_container_config_hash.py'
Oct 11 03:47:04 compute-0 sudo[160588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:04 compute-0 python3.9[160590]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 11 03:47:04 compute-0 sudo[160588]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:04 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:05 compute-0 sudo[160740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npfhygutjwebxsuaftsokphjxiceggzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154424.930097-416-16362204645656/AnsiballZ_podman_container_info.py'
Oct 11 03:47:05 compute-0 sudo[160740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:05 compute-0 python3.9[160742]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 11 03:47:05 compute-0 sudo[160740]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:06 compute-0 ceph-mon[74273]: pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:06 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:06 compute-0 sudo[160920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uelbmnqlwqlemfnlsrpxjrwdrlonuuie ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760154426.446393-429-111287297003532/AnsiballZ_edpm_container_manage.py'
Oct 11 03:47:06 compute-0 sudo[160920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:07 compute-0 python3[160922]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 11 03:47:08 compute-0 ceph-mon[74273]: pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:08 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:47:10 compute-0 ceph-mon[74273]: pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:10 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:12 compute-0 ceph-mon[74273]: pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:12 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:47:14 compute-0 ceph-mon[74273]: pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:14 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:15 compute-0 podman[161033]: 2025-10-11 03:47:15.656244154 +0000 UTC m=+0.353436880 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Oct 11 03:47:15 compute-0 podman[160935]: 2025-10-11 03:47:15.774006099 +0000 UTC m=+8.445351792 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 03:47:15 compute-0 podman[161083]: 2025-10-11 03:47:15.969976571 +0000 UTC m=+0.073840965 container create aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Oct 11 03:47:15 compute-0 podman[161083]: 2025-10-11 03:47:15.929132278 +0000 UTC m=+0.032996732 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 03:47:15 compute-0 python3[160922]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 03:47:16 compute-0 sudo[160920]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:16 compute-0 ceph-mon[74273]: pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:16 compute-0 sudo[161271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncscfhdxfnnrazfvysueclhfkrjdlhff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154436.359269-437-56478531713/AnsiballZ_stat.py'
Oct 11 03:47:16 compute-0 sudo[161271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:16 compute-0 python3.9[161273]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:47:16 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 03:47:16 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5564 writes, 24K keys, 5564 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5564 writes, 861 syncs, 6.46 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5564 writes, 24K keys, 5564 commit groups, 1.0 writes per commit group, ingest: 18.60 MB, 0.03 MB/s
                                           Interval WAL: 5564 writes, 861 syncs, 6.46 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 03:47:16 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:16 compute-0 sudo[161271]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:17 compute-0 sudo[161425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiifuvbugmpkprgxfsgzxfqfgncnasya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154437.1819286-446-267378137768874/AnsiballZ_file.py'
Oct 11 03:47:17 compute-0 sudo[161425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:17 compute-0 python3.9[161427]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:47:17 compute-0 sudo[161425]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:17 compute-0 sudo[161501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjjuyzkygnyqjahdofqjtjtlpdspmmls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154437.1819286-446-267378137768874/AnsiballZ_stat.py'
Oct 11 03:47:17 compute-0 sudo[161501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:18 compute-0 python3.9[161503]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:47:18 compute-0 sudo[161501]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:18 compute-0 ceph-mon[74273]: pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:18 compute-0 sudo[161652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifzyrnthyvzcxhiksxakaoipzucgjlcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154438.2776096-446-162010415384853/AnsiballZ_copy.py'
Oct 11 03:47:18 compute-0 sudo[161652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:18 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:18 compute-0 python3.9[161654]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760154438.2776096-446-162010415384853/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:47:19 compute-0 sudo[161652]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:19 compute-0 sudo[161728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hshdwbcuifmfhuibywfulenweghbqqaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154438.2776096-446-162010415384853/AnsiballZ_systemd.py'
Oct 11 03:47:19 compute-0 sudo[161728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:47:19 compute-0 python3.9[161730]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 03:47:19 compute-0 systemd[1]: Reloading.
Oct 11 03:47:19 compute-0 systemd-rc-local-generator[161754]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:47:19 compute-0 systemd-sysv-generator[161758]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:47:19 compute-0 sudo[161728]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:20 compute-0 sudo[161838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tftuddivgphplwdcvvjsduqxyvtgpumo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154438.2776096-446-162010415384853/AnsiballZ_systemd.py'
Oct 11 03:47:20 compute-0 sudo[161838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:20 compute-0 python3.9[161840]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:47:20 compute-0 systemd[1]: Reloading.
Oct 11 03:47:20 compute-0 systemd-rc-local-generator[161870]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:47:20 compute-0 systemd-sysv-generator[161874]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:47:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_03:47:20
Oct 11 03:47:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 03:47:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 03:47:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', '.rgw.root', 'images', 'backups', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'default.rgw.log']
Oct 11 03:47:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 03:47:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:47:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:47:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:47:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:47:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:47:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:47:20 compute-0 ceph-mon[74273]: pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:20 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Oct 11 03:47:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 03:47:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:47:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:47:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:47:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 03:47:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:47:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:47:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:47:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:47:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:47:20 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:20 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:47:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77fef0d4a412dbcc60039b6a86609bc6560f0b86e2f2a34bd6980430b578d284/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 11 03:47:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77fef0d4a412dbcc60039b6a86609bc6560f0b86e2f2a34bd6980430b578d284/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 03:47:21 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab.
Oct 11 03:47:21 compute-0 podman[161881]: 2025-10-11 03:47:21.019474618 +0000 UTC m=+0.169409123 container init aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 03:47:21 compute-0 ovn_metadata_agent[161897]: + sudo -E kolla_set_configs
Oct 11 03:47:21 compute-0 podman[161881]: 2025-10-11 03:47:21.056640108 +0000 UTC m=+0.206574583 container start aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 11 03:47:21 compute-0 edpm-start-podman-container[161881]: ovn_metadata_agent
Oct 11 03:47:21 compute-0 ovn_metadata_agent[161897]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 11 03:47:21 compute-0 ovn_metadata_agent[161897]: INFO:__main__:Validating config file
Oct 11 03:47:21 compute-0 ovn_metadata_agent[161897]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 11 03:47:21 compute-0 ovn_metadata_agent[161897]: INFO:__main__:Copying service configuration files
Oct 11 03:47:21 compute-0 ovn_metadata_agent[161897]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 11 03:47:21 compute-0 ovn_metadata_agent[161897]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 11 03:47:21 compute-0 ovn_metadata_agent[161897]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 11 03:47:21 compute-0 ovn_metadata_agent[161897]: INFO:__main__:Writing out command to execute
Oct 11 03:47:21 compute-0 ovn_metadata_agent[161897]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 11 03:47:21 compute-0 ovn_metadata_agent[161897]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 11 03:47:21 compute-0 ovn_metadata_agent[161897]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 11 03:47:21 compute-0 ovn_metadata_agent[161897]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 11 03:47:21 compute-0 ovn_metadata_agent[161897]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 11 03:47:21 compute-0 ovn_metadata_agent[161897]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 11 03:47:21 compute-0 ovn_metadata_agent[161897]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 11 03:47:21 compute-0 ovn_metadata_agent[161897]: ++ cat /run_command
Oct 11 03:47:21 compute-0 ovn_metadata_agent[161897]: + CMD=neutron-ovn-metadata-agent
Oct 11 03:47:21 compute-0 ovn_metadata_agent[161897]: + ARGS=
Oct 11 03:47:21 compute-0 ovn_metadata_agent[161897]: + sudo kolla_copy_cacerts
Oct 11 03:47:21 compute-0 ovn_metadata_agent[161897]: + [[ ! -n '' ]]
Oct 11 03:47:21 compute-0 ovn_metadata_agent[161897]: + . kolla_extend_start
Oct 11 03:47:21 compute-0 ovn_metadata_agent[161897]: Running command: 'neutron-ovn-metadata-agent'
Oct 11 03:47:21 compute-0 ovn_metadata_agent[161897]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct 11 03:47:21 compute-0 ovn_metadata_agent[161897]: + umask 0022
Oct 11 03:47:21 compute-0 ovn_metadata_agent[161897]: + exec neutron-ovn-metadata-agent
Oct 11 03:47:21 compute-0 edpm-start-podman-container[161880]: Creating additional drop-in dependency for "ovn_metadata_agent" (aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab)
Oct 11 03:47:21 compute-0 podman[161904]: 2025-10-11 03:47:21.15272261 +0000 UTC m=+0.083412716 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 11 03:47:21 compute-0 systemd[1]: Reloading.
Oct 11 03:47:21 compute-0 systemd-sysv-generator[161979]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:47:21 compute-0 systemd-rc-local-generator[161975]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:47:21 compute-0 systemd[1]: Started ovn_metadata_agent container.
Oct 11 03:47:21 compute-0 sudo[161838]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:21 compute-0 sshd-session[152634]: Connection closed by 192.168.122.30 port 52088
Oct 11 03:47:21 compute-0 sshd-session[152631]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:47:21 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Oct 11 03:47:21 compute-0 systemd[1]: session-48.scope: Consumed 1min 261ms CPU time.
Oct 11 03:47:21 compute-0 systemd-logind[820]: Session 48 logged out. Waiting for processes to exit.
Oct 11 03:47:21 compute-0 systemd-logind[820]: Removed session 48.
Oct 11 03:47:21 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 03:47:21 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 6813 writes, 28K keys, 6813 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6813 writes, 1189 syncs, 5.73 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6813 writes, 28K keys, 6813 commit groups, 1.0 writes per commit group, ingest: 19.62 MB, 0.03 MB/s
                                           Interval WAL: 6813 writes, 1189 syncs, 5.73 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a3531f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a3531f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a3531f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a3531f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a3531f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a3531f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a3531f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a353090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a353090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a353090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a3531f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a3531f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 03:47:22 compute-0 ceph-mon[74273]: pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.878 161902 INFO neutron.common.config [-] Logging enabled!
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.880 161902 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.880 161902 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.880 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.880 161902 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.880 161902 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.881 161902 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.881 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.881 161902 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.881 161902 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.881 161902 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.882 161902 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.882 161902 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.882 161902 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.882 161902 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.882 161902 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.882 161902 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.882 161902 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.883 161902 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.883 161902 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.883 161902 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.883 161902 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.883 161902 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.883 161902 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.884 161902 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.884 161902 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.884 161902 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.884 161902 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.884 161902 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.884 161902 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.884 161902 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.885 161902 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.885 161902 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.885 161902 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.885 161902 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.885 161902 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.885 161902 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.886 161902 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.886 161902 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.886 161902 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.886 161902 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.886 161902 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.886 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.887 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.887 161902 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.887 161902 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.887 161902 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.887 161902 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.887 161902 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.887 161902 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.888 161902 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.888 161902 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.888 161902 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.888 161902 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.888 161902 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.888 161902 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.888 161902 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.889 161902 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.889 161902 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.889 161902 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.889 161902 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.889 161902 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.889 161902 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.890 161902 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.890 161902 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.890 161902 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.890 161902 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.890 161902 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.890 161902 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.891 161902 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.891 161902 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.891 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.891 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.891 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.891 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.892 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.892 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.892 161902 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.892 161902 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.892 161902 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.892 161902 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.892 161902 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.893 161902 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.893 161902 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.893 161902 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.893 161902 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.893 161902 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.893 161902 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.894 161902 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.894 161902 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.894 161902 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.894 161902 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.894 161902 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.894 161902 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.894 161902 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.895 161902 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.895 161902 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.895 161902 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.895 161902 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.895 161902 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.895 161902 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.895 161902 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.896 161902 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.896 161902 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.896 161902 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.896 161902 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.896 161902 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.896 161902 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.897 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.897 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.897 161902 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.897 161902 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.897 161902 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.897 161902 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.898 161902 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.898 161902 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.898 161902 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.898 161902 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.898 161902 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.898 161902 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.899 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.899 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.899 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.899 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.899 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.899 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.899 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.900 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.900 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.900 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.900 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.900 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.901 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.901 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.901 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.901 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.901 161902 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.901 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.901 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.902 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.902 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.902 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.902 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.902 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.902 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.903 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.903 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.903 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.903 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.903 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.903 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.904 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.904 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.904 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.904 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.904 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.904 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.905 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.905 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.905 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.905 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.905 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.905 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.906 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.906 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.906 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.906 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.906 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.906 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.907 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.907 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.907 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.907 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.907 161902 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.907 161902 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.908 161902 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.908 161902 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.908 161902 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.908 161902 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.908 161902 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.908 161902 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.909 161902 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.909 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.909 161902 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.909 161902 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.910 161902 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.910 161902 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.910 161902 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.910 161902 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.911 161902 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.911 161902 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.911 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.911 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.911 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.911 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.911 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.912 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.912 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.912 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.912 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.912 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.912 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.913 161902 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.913 161902 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.913 161902 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.913 161902 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.913 161902 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.913 161902 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.914 161902 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.914 161902 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.914 161902 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.914 161902 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.914 161902 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.914 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.915 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.915 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.915 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.915 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.915 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.915 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.916 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.916 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.916 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.916 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.916 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.916 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.916 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.917 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.917 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.917 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.917 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.917 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.917 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.918 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.918 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.918 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.918 161902 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.918 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.918 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.918 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.919 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.919 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.919 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.919 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.919 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.919 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.920 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.920 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.920 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.920 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.920 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.920 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.920 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.921 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.921 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.921 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.921 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.921 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.921 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.922 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.922 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.922 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.922 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.922 161902 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.922 161902 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.922 161902 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.923 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.923 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.923 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.923 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.923 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.923 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.924 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.924 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.924 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.924 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.924 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.924 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.924 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.925 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.925 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.925 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.925 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.925 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.925 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.926 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.926 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.926 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.926 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.926 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.926 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.926 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.927 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.927 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.927 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.927 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.927 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.927 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.928 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.928 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.928 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.928 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.928 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.928 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.938 161902 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.938 161902 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.939 161902 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.939 161902 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.939 161902 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.952 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 8a473e03-2208-47ae-afcd-05ad744a5969 (UUID: 8a473e03-2208-47ae-afcd-05ad744a5969) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Oct 11 03:47:22 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.972 161902 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.973 161902 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.973 161902 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.973 161902 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.977 161902 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.982 161902 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.987 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '8a473e03-2208-47ae-afcd-05ad744a5969'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], external_ids={}, name=8a473e03-2208-47ae-afcd-05ad744a5969, nb_cfg_timestamp=1760154382736, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.987 161902 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f22fddfde20>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.988 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.988 161902 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.989 161902 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.989 161902 INFO oslo_service.service [-] Starting 1 workers
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.992 161902 DEBUG oslo_service.service [-] Started child 162010 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.995 162010 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-4032756'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Oct 11 03:47:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:22.995 161902 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpv8680r5x/privsep.sock']
Oct 11 03:47:23 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:23.012 162010 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 11 03:47:23 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:23.013 162010 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 11 03:47:23 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:23.013 162010 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 11 03:47:23 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:23.015 162010 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 11 03:47:23 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:23.021 162010 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 11 03:47:23 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:23.026 162010 INFO eventlet.wsgi.server [-] (162010) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Oct 11 03:47:23 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct 11 03:47:23 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:23.690 161902 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 11 03:47:23 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:23.691 161902 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpv8680r5x/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 11 03:47:23 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:23.560 162015 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 11 03:47:23 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:23.565 162015 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 11 03:47:23 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:23.570 162015 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct 11 03:47:23 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:23.571 162015 INFO oslo.privsep.daemon [-] privsep daemon running as pid 162015
Oct 11 03:47:23 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:23.697 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[94cc5031-b3f1-4d0f-afd0-07db1ac7a6b4]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 03:47:23 compute-0 ceph-mon[74273]: pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.190 162015 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.190 162015 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.190 162015 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 03:47:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.659 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[5d886344-8e90-403a-a7fb-db2f0d5136f1]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.663 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=8a473e03-2208-47ae-afcd-05ad744a5969, column=external_ids, values=({'neutron:ovn-metadata-id': 'c972daed-22bc-5b49-9e46-3ac41a822599'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.677 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8a473e03-2208-47ae-afcd-05ad744a5969, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.684 161902 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.685 161902 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.685 161902 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.685 161902 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.686 161902 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.686 161902 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.686 161902 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.687 161902 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.687 161902 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.688 161902 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.688 161902 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.688 161902 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.689 161902 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.689 161902 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.689 161902 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.690 161902 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.690 161902 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.691 161902 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.691 161902 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.691 161902 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.692 161902 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.692 161902 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.692 161902 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.693 161902 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.693 161902 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.694 161902 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.694 161902 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.695 161902 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.695 161902 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.695 161902 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.696 161902 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.696 161902 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.696 161902 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.697 161902 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.697 161902 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.698 161902 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.698 161902 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.699 161902 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.699 161902 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.699 161902 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.700 161902 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.700 161902 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.700 161902 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.701 161902 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.701 161902 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.701 161902 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.702 161902 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.702 161902 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.702 161902 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.703 161902 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.703 161902 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.704 161902 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.704 161902 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.704 161902 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.704 161902 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.705 161902 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.705 161902 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.706 161902 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.706 161902 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.706 161902 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.707 161902 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.707 161902 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.707 161902 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.708 161902 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.708 161902 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.708 161902 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.709 161902 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.709 161902 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.709 161902 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.710 161902 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.710 161902 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.711 161902 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.711 161902 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.711 161902 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.712 161902 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.712 161902 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.712 161902 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.713 161902 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.713 161902 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.713 161902 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.714 161902 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.714 161902 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.714 161902 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.715 161902 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.715 161902 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.715 161902 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.716 161902 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.716 161902 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.716 161902 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.717 161902 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.717 161902 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.717 161902 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.718 161902 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.719 161902 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.719 161902 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.719 161902 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.720 161902 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.720 161902 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.720 161902 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.721 161902 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.721 161902 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.721 161902 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.722 161902 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.722 161902 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.722 161902 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.723 161902 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.723 161902 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.724 161902 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.724 161902 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.724 161902 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.725 161902 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.725 161902 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.726 161902 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.726 161902 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.726 161902 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.727 161902 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.727 161902 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.727 161902 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.728 161902 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.728 161902 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.729 161902 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.729 161902 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.729 161902 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.730 161902 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.730 161902 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.730 161902 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.731 161902 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.731 161902 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.731 161902 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.732 161902 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.732 161902 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.732 161902 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.733 161902 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.733 161902 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.734 161902 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.734 161902 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.734 161902 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.735 161902 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.735 161902 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.735 161902 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.736 161902 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.736 161902 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.736 161902 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.737 161902 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.737 161902 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.737 161902 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.738 161902 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.738 161902 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.738 161902 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.739 161902 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.739 161902 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.739 161902 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.740 161902 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.740 161902 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.740 161902 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.741 161902 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.741 161902 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.741 161902 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.742 161902 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.742 161902 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.742 161902 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.743 161902 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.743 161902 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.743 161902 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.744 161902 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.744 161902 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.744 161902 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.745 161902 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.745 161902 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.745 161902 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.746 161902 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.746 161902 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.746 161902 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.747 161902 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.747 161902 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.747 161902 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.748 161902 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.748 161902 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.749 161902 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.749 161902 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.749 161902 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.749 161902 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.750 161902 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.750 161902 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.751 161902 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.751 161902 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.751 161902 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.752 161902 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.752 161902 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.752 161902 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.753 161902 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.753 161902 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.753 161902 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.754 161902 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.754 161902 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.754 161902 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.755 161902 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.755 161902 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.755 161902 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.756 161902 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.757 161902 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.757 161902 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.757 161902 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.758 161902 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.758 161902 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.758 161902 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.758 161902 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.758 161902 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.759 161902 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.759 161902 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.759 161902 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.759 161902 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.759 161902 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.760 161902 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.760 161902 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.760 161902 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.760 161902 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.760 161902 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.760 161902 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.761 161902 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.761 161902 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.761 161902 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.761 161902 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.761 161902 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.762 161902 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.762 161902 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.762 161902 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.762 161902 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.762 161902 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.763 161902 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.763 161902 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.763 161902 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.763 161902 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.763 161902 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.763 161902 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.764 161902 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.764 161902 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.764 161902 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.764 161902 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.764 161902 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.765 161902 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.765 161902 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.765 161902 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.765 161902 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.765 161902 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.766 161902 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.766 161902 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.766 161902 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.766 161902 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.766 161902 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.767 161902 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.767 161902 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.767 161902 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.767 161902 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.767 161902 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.768 161902 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.768 161902 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.768 161902 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.768 161902 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.768 161902 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.768 161902 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.769 161902 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.769 161902 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.769 161902 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.769 161902 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.769 161902 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.770 161902 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.770 161902 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.770 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.770 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.770 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.771 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.771 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.771 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.771 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.771 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.772 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.772 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.772 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.772 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.772 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.773 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.773 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.773 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.773 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.774 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.774 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.774 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.774 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.775 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.775 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.775 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.775 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.775 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.776 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.776 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.776 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.776 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.776 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.777 161902 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.777 161902 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.777 161902 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.777 161902 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:47:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:47:24.777 161902 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 11 03:47:24 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:26 compute-0 ceph-mon[74273]: pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:26 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:27 compute-0 ceph-osd[89722]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 03:47:27 compute-0 ceph-osd[89722]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5485 writes, 23K keys, 5485 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5485 writes, 797 syncs, 6.88 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5485 writes, 23K keys, 5485 commit groups, 1.0 writes per commit group, ingest: 18.30 MB, 0.03 MB/s
                                           Interval WAL: 5485 writes, 797 syncs, 6.88 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041ccdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041ccdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041ccdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041ccdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041ccdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041ccdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041ccdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041cc430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041cc430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041cc430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041ccdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041ccdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 03:47:27 compute-0 sshd-session[162021]: Accepted publickey for zuul from 192.168.122.30 port 43250 ssh2: ECDSA SHA256:qo9+RMabHfLAOt2q/80W97JXaZUdeUCREBuTRaqgxBY
Oct 11 03:47:27 compute-0 systemd-logind[820]: New session 49 of user zuul.
Oct 11 03:47:27 compute-0 systemd[1]: Started Session 49 of User zuul.
Oct 11 03:47:27 compute-0 sshd-session[162021]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:47:27 compute-0 ceph-mgr[74563]: [devicehealth INFO root] Check health
Oct 11 03:47:28 compute-0 ceph-mon[74273]: pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:28 compute-0 python3.9[162174]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:47:28 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:47:29 compute-0 sudo[162328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moppzvotukkawdsimhdiajdcasjgnxur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154449.3957617-34-82569481617467/AnsiballZ_command.py'
Oct 11 03:47:29 compute-0 sudo[162328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:30 compute-0 ceph-mon[74273]: pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:30 compute-0 python3.9[162330]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:47:30 compute-0 sudo[162328]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 03:47:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:47:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 03:47:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:47:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:47:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:47:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:47:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:47:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:47:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:47:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:47:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:47:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 03:47:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:47:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:47:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:47:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 03:47:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:47:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 03:47:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:47:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:47:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:47:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 03:47:30 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:31 compute-0 sudo[162493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjhivgztkjamanfpmjsdqgzscgjcjxyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154450.5846786-45-109686550831528/AnsiballZ_systemd_service.py'
Oct 11 03:47:31 compute-0 sudo[162493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:31 compute-0 python3.9[162495]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 03:47:31 compute-0 systemd[1]: Reloading.
Oct 11 03:47:31 compute-0 systemd-rc-local-generator[162521]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:47:31 compute-0 systemd-sysv-generator[162525]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:47:32 compute-0 sudo[162493]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:32 compute-0 ceph-mon[74273]: pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:32 compute-0 sudo[162643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:47:32 compute-0 sudo[162643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:47:32 compute-0 sudo[162643]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:32 compute-0 sudo[162694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:47:32 compute-0 sudo[162694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:47:32 compute-0 sudo[162694]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:32 compute-0 sudo[162731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:47:32 compute-0 sudo[162731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:47:32 compute-0 sudo[162731]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:32 compute-0 sudo[162756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 11 03:47:32 compute-0 python3.9[162718]: ansible-ansible.builtin.service_facts Invoked
Oct 11 03:47:32 compute-0 sudo[162756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:47:32 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:32 compute-0 network[162797]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 03:47:32 compute-0 network[162798]: 'network-scripts' will be removed from distribution in near future.
Oct 11 03:47:32 compute-0 network[162799]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 03:47:33 compute-0 sudo[162756]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:47:33 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:47:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:47:33 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:47:33 compute-0 sudo[162833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:47:33 compute-0 sudo[162833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:47:33 compute-0 sudo[162833]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:34 compute-0 sudo[162860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:47:34 compute-0 sudo[162860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:47:34 compute-0 sudo[162860]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:34 compute-0 ceph-mon[74273]: pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:47:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:47:34 compute-0 sudo[162890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:47:34 compute-0 sudo[162890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:47:34 compute-0 sudo[162890]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:34 compute-0 sudo[162919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 03:47:34 compute-0 sudo[162919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:47:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:47:34 compute-0 sudo[162919]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 03:47:34 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 03:47:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:47:34 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:47:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 03:47:34 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:47:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 03:47:34 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:47:34 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 0d4386d2-fc24-4884-8c21-2a8846d53f53 does not exist
Oct 11 03:47:34 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev a2862f6c-191a-495a-bd3e-8d26be2679c0 does not exist
Oct 11 03:47:34 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev ea535835-06a3-43cf-a1ce-ce6e50129b46 does not exist
Oct 11 03:47:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 03:47:34 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:47:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 03:47:34 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:47:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:47:34 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:47:34 compute-0 sudo[163000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:47:34 compute-0 sudo[163000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:47:34 compute-0 sudo[163000]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:34 compute-0 sudo[163027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:47:34 compute-0 sudo[163027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:47:34 compute-0 sudo[163027]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:34 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:34 compute-0 sudo[163055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:47:34 compute-0 sudo[163055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:47:34 compute-0 sudo[163055]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:35 compute-0 sudo[163083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 03:47:35 compute-0 sudo[163083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:47:35 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 03:47:35 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:47:35 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:47:35 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:47:35 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:47:35 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:47:35 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:47:35 compute-0 podman[163163]: 2025-10-11 03:47:35.416430365 +0000 UTC m=+0.052061341 container create 1882eee152488af3b5d7d94ee9d5db5db8ee78e4ff0ba7ee07ddc66c5ec15492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 03:47:35 compute-0 systemd[1]: Started libpod-conmon-1882eee152488af3b5d7d94ee9d5db5db8ee78e4ff0ba7ee07ddc66c5ec15492.scope.
Oct 11 03:47:35 compute-0 podman[163163]: 2025-10-11 03:47:35.392350565 +0000 UTC m=+0.027981561 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:47:35 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:47:35 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 03:47:35 compute-0 podman[163163]: 2025-10-11 03:47:35.516722297 +0000 UTC m=+0.152353323 container init 1882eee152488af3b5d7d94ee9d5db5db8ee78e4ff0ba7ee07ddc66c5ec15492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_vaughan, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:47:35 compute-0 podman[163163]: 2025-10-11 03:47:35.528109378 +0000 UTC m=+0.163740354 container start 1882eee152488af3b5d7d94ee9d5db5db8ee78e4ff0ba7ee07ddc66c5ec15492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:47:35 compute-0 podman[163163]: 2025-10-11 03:47:35.532614305 +0000 UTC m=+0.168245331 container attach 1882eee152488af3b5d7d94ee9d5db5db8ee78e4ff0ba7ee07ddc66c5ec15492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 03:47:35 compute-0 strange_vaughan[163184]: 167 167
Oct 11 03:47:35 compute-0 podman[163163]: 2025-10-11 03:47:35.537083591 +0000 UTC m=+0.172714567 container died 1882eee152488af3b5d7d94ee9d5db5db8ee78e4ff0ba7ee07ddc66c5ec15492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_vaughan, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 03:47:35 compute-0 systemd[1]: libpod-1882eee152488af3b5d7d94ee9d5db5db8ee78e4ff0ba7ee07ddc66c5ec15492.scope: Deactivated successfully.
Oct 11 03:47:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7ed3dbe918ef57e4993bb3bf85f6bd8f8effae89ee3c73f81442c8ce956dbd7-merged.mount: Deactivated successfully.
Oct 11 03:47:35 compute-0 podman[163163]: 2025-10-11 03:47:35.598699291 +0000 UTC m=+0.234330257 container remove 1882eee152488af3b5d7d94ee9d5db5db8ee78e4ff0ba7ee07ddc66c5ec15492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_vaughan, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 03:47:35 compute-0 systemd[1]: libpod-conmon-1882eee152488af3b5d7d94ee9d5db5db8ee78e4ff0ba7ee07ddc66c5ec15492.scope: Deactivated successfully.
Oct 11 03:47:35 compute-0 podman[163218]: 2025-10-11 03:47:35.827635154 +0000 UTC m=+0.105880680 container create c719d63ebbd50a4ff1b74e08b669d0f3c687fd9d78ca42a3749d208c90bf0305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 03:47:35 compute-0 podman[163218]: 2025-10-11 03:47:35.76479641 +0000 UTC m=+0.043042036 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:47:35 compute-0 systemd[1]: Started libpod-conmon-c719d63ebbd50a4ff1b74e08b669d0f3c687fd9d78ca42a3749d208c90bf0305.scope.
Oct 11 03:47:35 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:47:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fa4cb973a58a2580cc6cb9b40f236ef39b48379321d3357efadff8298ab9974/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:47:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fa4cb973a58a2580cc6cb9b40f236ef39b48379321d3357efadff8298ab9974/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:47:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fa4cb973a58a2580cc6cb9b40f236ef39b48379321d3357efadff8298ab9974/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:47:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fa4cb973a58a2580cc6cb9b40f236ef39b48379321d3357efadff8298ab9974/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:47:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fa4cb973a58a2580cc6cb9b40f236ef39b48379321d3357efadff8298ab9974/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:47:35 compute-0 podman[163218]: 2025-10-11 03:47:35.972237787 +0000 UTC m=+0.250483383 container init c719d63ebbd50a4ff1b74e08b669d0f3c687fd9d78ca42a3749d208c90bf0305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:47:35 compute-0 podman[163218]: 2025-10-11 03:47:35.984805172 +0000 UTC m=+0.263050728 container start c719d63ebbd50a4ff1b74e08b669d0f3c687fd9d78ca42a3749d208c90bf0305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_johnson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 03:47:36 compute-0 podman[163218]: 2025-10-11 03:47:36.00423455 +0000 UTC m=+0.282480166 container attach c719d63ebbd50a4ff1b74e08b669d0f3c687fd9d78ca42a3749d208c90bf0305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 03:47:36 compute-0 ceph-mon[74273]: pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:36 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:37 compute-0 nifty_johnson[163235]: --> passed data devices: 0 physical, 3 LVM
Oct 11 03:47:37 compute-0 nifty_johnson[163235]: --> relative data size: 1.0
Oct 11 03:47:37 compute-0 nifty_johnson[163235]: --> All data devices are unavailable
Oct 11 03:47:37 compute-0 systemd[1]: libpod-c719d63ebbd50a4ff1b74e08b669d0f3c687fd9d78ca42a3749d208c90bf0305.scope: Deactivated successfully.
Oct 11 03:47:37 compute-0 podman[163218]: 2025-10-11 03:47:37.146248392 +0000 UTC m=+1.424493918 container died c719d63ebbd50a4ff1b74e08b669d0f3c687fd9d78ca42a3749d208c90bf0305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_johnson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 03:47:37 compute-0 systemd[1]: libpod-c719d63ebbd50a4ff1b74e08b669d0f3c687fd9d78ca42a3749d208c90bf0305.scope: Consumed 1.083s CPU time.
Oct 11 03:47:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fa4cb973a58a2580cc6cb9b40f236ef39b48379321d3357efadff8298ab9974-merged.mount: Deactivated successfully.
Oct 11 03:47:37 compute-0 podman[163218]: 2025-10-11 03:47:37.634379293 +0000 UTC m=+1.912624829 container remove c719d63ebbd50a4ff1b74e08b669d0f3c687fd9d78ca42a3749d208c90bf0305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:47:37 compute-0 sudo[163083]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:37 compute-0 systemd[1]: libpod-conmon-c719d63ebbd50a4ff1b74e08b669d0f3c687fd9d78ca42a3749d208c90bf0305.scope: Deactivated successfully.
Oct 11 03:47:37 compute-0 sudo[163347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:47:37 compute-0 sudo[163347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:47:37 compute-0 sudo[163347]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:37 compute-0 sudo[163407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:47:37 compute-0 sudo[163407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:47:37 compute-0 sudo[163407]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:37 compute-0 sudo[163455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:47:37 compute-0 sudo[163455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:47:37 compute-0 sudo[163455]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:38 compute-0 sudo[163544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvqhybrhsupurgqolnyudpuuvraxzzts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154457.6926177-64-228442327206115/AnsiballZ_systemd_service.py'
Oct 11 03:47:38 compute-0 sudo[163544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:38 compute-0 sudo[163505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 03:47:38 compute-0 sudo[163505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:47:38 compute-0 ceph-mon[74273]: pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:38 compute-0 python3.9[163549]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:47:38 compute-0 sudo[163544]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:38 compute-0 podman[163594]: 2025-10-11 03:47:38.502300946 +0000 UTC m=+0.037925462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:47:38 compute-0 podman[163594]: 2025-10-11 03:47:38.620281196 +0000 UTC m=+0.155905702 container create ccd94f4757fa304347643975c16ed5f48f4a810204eb0e80a1e2c5a1ea646e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 03:47:38 compute-0 systemd[1]: Started libpod-conmon-ccd94f4757fa304347643975c16ed5f48f4a810204eb0e80a1e2c5a1ea646e5f.scope.
Oct 11 03:47:38 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:47:38 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:39 compute-0 sudo[163763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mevnpftfrzpxnqcaebnzrrajztoyuqrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154458.6554613-64-69600766060087/AnsiballZ_systemd_service.py'
Oct 11 03:47:39 compute-0 sudo[163763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:39 compute-0 podman[163594]: 2025-10-11 03:47:39.194070776 +0000 UTC m=+0.729695332 container init ccd94f4757fa304347643975c16ed5f48f4a810204eb0e80a1e2c5a1ea646e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ishizaka, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:47:39 compute-0 podman[163594]: 2025-10-11 03:47:39.206526138 +0000 UTC m=+0.742150654 container start ccd94f4757fa304347643975c16ed5f48f4a810204eb0e80a1e2c5a1ea646e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 03:47:39 compute-0 nervous_ishizaka[163733]: 167 167
Oct 11 03:47:39 compute-0 systemd[1]: libpod-ccd94f4757fa304347643975c16ed5f48f4a810204eb0e80a1e2c5a1ea646e5f.scope: Deactivated successfully.
Oct 11 03:47:39 compute-0 podman[163594]: 2025-10-11 03:47:39.261769277 +0000 UTC m=+0.797393853 container attach ccd94f4757fa304347643975c16ed5f48f4a810204eb0e80a1e2c5a1ea646e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ishizaka, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 03:47:39 compute-0 podman[163594]: 2025-10-11 03:47:39.262895349 +0000 UTC m=+0.798519825 container died ccd94f4757fa304347643975c16ed5f48f4a810204eb0e80a1e2c5a1ea646e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ishizaka, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:47:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ee6cc10b9ed712a17344d797112164030780bf688a9661ad045eab3ce3d3e84-merged.mount: Deactivated successfully.
Oct 11 03:47:39 compute-0 python3.9[163765]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:47:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:47:39 compute-0 sudo[163763]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:39 compute-0 podman[163594]: 2025-10-11 03:47:39.483391374 +0000 UTC m=+1.019015870 container remove ccd94f4757fa304347643975c16ed5f48f4a810204eb0e80a1e2c5a1ea646e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ishizaka, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 03:47:39 compute-0 systemd[1]: libpod-conmon-ccd94f4757fa304347643975c16ed5f48f4a810204eb0e80a1e2c5a1ea646e5f.scope: Deactivated successfully.
Oct 11 03:47:39 compute-0 podman[163837]: 2025-10-11 03:47:39.792180222 +0000 UTC m=+0.113791984 container create ebfc67cd5c2277f6329626384f3da31898c76bc4256e1eb3e48b40a85e193341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_archimedes, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 03:47:39 compute-0 podman[163837]: 2025-10-11 03:47:39.717512084 +0000 UTC m=+0.039123846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:47:39 compute-0 systemd[1]: Started libpod-conmon-ebfc67cd5c2277f6329626384f3da31898c76bc4256e1eb3e48b40a85e193341.scope.
Oct 11 03:47:39 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e7e62d361e27ea4787e028a42c8e1ba218e61c690380a03a9bff5c0621c0daa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e7e62d361e27ea4787e028a42c8e1ba218e61c690380a03a9bff5c0621c0daa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e7e62d361e27ea4787e028a42c8e1ba218e61c690380a03a9bff5c0621c0daa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e7e62d361e27ea4787e028a42c8e1ba218e61c690380a03a9bff5c0621c0daa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:47:39 compute-0 podman[163837]: 2025-10-11 03:47:39.941062115 +0000 UTC m=+0.262673917 container init ebfc67cd5c2277f6329626384f3da31898c76bc4256e1eb3e48b40a85e193341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_archimedes, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:47:39 compute-0 podman[163837]: 2025-10-11 03:47:39.953510267 +0000 UTC m=+0.275122029 container start ebfc67cd5c2277f6329626384f3da31898c76bc4256e1eb3e48b40a85e193341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:47:39 compute-0 podman[163837]: 2025-10-11 03:47:39.963267812 +0000 UTC m=+0.284879634 container attach ebfc67cd5c2277f6329626384f3da31898c76bc4256e1eb3e48b40a85e193341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 03:47:40 compute-0 sudo[163958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikvhuhpcqexzkuvhwtburxxjonofonsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154459.641079-64-54088632670335/AnsiballZ_systemd_service.py'
Oct 11 03:47:40 compute-0 sudo[163958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:40 compute-0 ceph-mon[74273]: pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:40 compute-0 python3.9[163960]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:47:40 compute-0 sudo[163958]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:40 compute-0 competent_archimedes[163903]: {
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:     "0": [
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:         {
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "devices": [
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "/dev/loop3"
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             ],
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "lv_name": "ceph_lv0",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "lv_size": "21470642176",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "name": "ceph_lv0",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "tags": {
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.cluster_name": "ceph",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.crush_device_class": "",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.encrypted": "0",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.osd_id": "0",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.type": "block",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.vdo": "0"
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             },
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "type": "block",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "vg_name": "ceph_vg0"
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:         }
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:     ],
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:     "1": [
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:         {
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "devices": [
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "/dev/loop4"
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             ],
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "lv_name": "ceph_lv1",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "lv_size": "21470642176",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "name": "ceph_lv1",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "tags": {
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.cluster_name": "ceph",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.crush_device_class": "",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.encrypted": "0",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.osd_id": "1",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.type": "block",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.vdo": "0"
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             },
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "type": "block",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "vg_name": "ceph_vg1"
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:         }
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:     ],
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:     "2": [
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:         {
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "devices": [
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "/dev/loop5"
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             ],
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "lv_name": "ceph_lv2",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "lv_size": "21470642176",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "name": "ceph_lv2",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "tags": {
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.cluster_name": "ceph",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.crush_device_class": "",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.encrypted": "0",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.osd_id": "2",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.type": "block",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:                 "ceph.vdo": "0"
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             },
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "type": "block",
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:             "vg_name": "ceph_vg2"
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:         }
Oct 11 03:47:40 compute-0 competent_archimedes[163903]:     ]
Oct 11 03:47:40 compute-0 competent_archimedes[163903]: }
Oct 11 03:47:40 compute-0 systemd[1]: libpod-ebfc67cd5c2277f6329626384f3da31898c76bc4256e1eb3e48b40a85e193341.scope: Deactivated successfully.
Oct 11 03:47:40 compute-0 podman[163837]: 2025-10-11 03:47:40.694532267 +0000 UTC m=+1.016144019 container died ebfc67cd5c2277f6329626384f3da31898c76bc4256e1eb3e48b40a85e193341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_archimedes, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 03:47:40 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:41 compute-0 sudo[164126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymdutyffwcgxithgckeqvamnegtnfeab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154460.729998-64-225483355269045/AnsiballZ_systemd_service.py'
Oct 11 03:47:41 compute-0 sudo[164126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e7e62d361e27ea4787e028a42c8e1ba218e61c690380a03a9bff5c0621c0daa-merged.mount: Deactivated successfully.
Oct 11 03:47:41 compute-0 podman[163837]: 2025-10-11 03:47:41.320479439 +0000 UTC m=+1.642091201 container remove ebfc67cd5c2277f6329626384f3da31898c76bc4256e1eb3e48b40a85e193341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:47:41 compute-0 systemd[1]: libpod-conmon-ebfc67cd5c2277f6329626384f3da31898c76bc4256e1eb3e48b40a85e193341.scope: Deactivated successfully.
Oct 11 03:47:41 compute-0 sudo[163505]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:41 compute-0 sudo[164130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:47:41 compute-0 sudo[164130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:47:41 compute-0 sudo[164130]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:41 compute-0 python3.9[164128]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:47:41 compute-0 sudo[164126]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:41 compute-0 sudo[164155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:47:41 compute-0 sudo[164155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:47:41 compute-0 sudo[164155]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:41 compute-0 sudo[164198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:47:41 compute-0 sudo[164198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:47:41 compute-0 sudo[164198]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:41 compute-0 sudo[164238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 03:47:41 compute-0 sudo[164238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:47:41 compute-0 sudo[164406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sytmkrpznsprcityeblrqrgjjsboworp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154461.660861-64-280752210087671/AnsiballZ_systemd_service.py'
Oct 11 03:47:41 compute-0 sudo[164406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:42 compute-0 podman[164423]: 2025-10-11 03:47:42.139564153 +0000 UTC m=+0.047128151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:47:42 compute-0 python3.9[164410]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:47:42 compute-0 sudo[164406]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:42 compute-0 podman[164423]: 2025-10-11 03:47:42.613958916 +0000 UTC m=+0.521522854 container create f32ffccd69e9216a024c04a40c26818c48b0e8b798e342effb72b900d348ba34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bouman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:47:42 compute-0 ceph-mon[74273]: pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:42 compute-0 systemd[1]: Started libpod-conmon-f32ffccd69e9216a024c04a40c26818c48b0e8b798e342effb72b900d348ba34.scope.
Oct 11 03:47:42 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:47:42 compute-0 podman[164423]: 2025-10-11 03:47:42.808736175 +0000 UTC m=+0.716300163 container init f32ffccd69e9216a024c04a40c26818c48b0e8b798e342effb72b900d348ba34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bouman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 03:47:42 compute-0 podman[164423]: 2025-10-11 03:47:42.822302538 +0000 UTC m=+0.729866486 container start f32ffccd69e9216a024c04a40c26818c48b0e8b798e342effb72b900d348ba34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bouman, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 03:47:42 compute-0 eloquent_bouman[164526]: 167 167
Oct 11 03:47:42 compute-0 systemd[1]: libpod-f32ffccd69e9216a024c04a40c26818c48b0e8b798e342effb72b900d348ba34.scope: Deactivated successfully.
Oct 11 03:47:42 compute-0 podman[164423]: 2025-10-11 03:47:42.876365015 +0000 UTC m=+0.783929013 container attach f32ffccd69e9216a024c04a40c26818c48b0e8b798e342effb72b900d348ba34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bouman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:47:42 compute-0 podman[164423]: 2025-10-11 03:47:42.876843558 +0000 UTC m=+0.784407506 container died f32ffccd69e9216a024c04a40c26818c48b0e8b798e342effb72b900d348ba34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bouman, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:47:42 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:42 compute-0 sudo[164606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aogxyddlwmitxiogkntgdjaqymsachvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154462.5652533-64-74545976871488/AnsiballZ_systemd_service.py'
Oct 11 03:47:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-97f9b6a212318f678a8e811b132a33fbb9af6f968125fbf621c013fd4dabb293-merged.mount: Deactivated successfully.
Oct 11 03:47:42 compute-0 sudo[164606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:43 compute-0 podman[164423]: 2025-10-11 03:47:43.229801233 +0000 UTC m=+1.137365191 container remove f32ffccd69e9216a024c04a40c26818c48b0e8b798e342effb72b900d348ba34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:47:43 compute-0 systemd[1]: libpod-conmon-f32ffccd69e9216a024c04a40c26818c48b0e8b798e342effb72b900d348ba34.scope: Deactivated successfully.
Oct 11 03:47:43 compute-0 python3.9[164612]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:47:43 compute-0 sudo[164606]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:43 compute-0 podman[164623]: 2025-10-11 03:47:43.432323111 +0000 UTC m=+0.034269379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:47:43 compute-0 podman[164623]: 2025-10-11 03:47:43.552101002 +0000 UTC m=+0.154047250 container create 7eaa071f693a54c01d0ca1df57d115a3ccebf6cd22a7c78d76b6394586773096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 03:47:43 compute-0 systemd[1]: Started libpod-conmon-7eaa071f693a54c01d0ca1df57d115a3ccebf6cd22a7c78d76b6394586773096.scope.
Oct 11 03:47:43 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:47:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99308eef9788160370771b9908b4592740d6883cd4fa9ad6e84b2662abfa80f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:47:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99308eef9788160370771b9908b4592740d6883cd4fa9ad6e84b2662abfa80f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:47:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99308eef9788160370771b9908b4592740d6883cd4fa9ad6e84b2662abfa80f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:47:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99308eef9788160370771b9908b4592740d6883cd4fa9ad6e84b2662abfa80f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:47:43 compute-0 podman[164623]: 2025-10-11 03:47:43.793262641 +0000 UTC m=+0.395208919 container init 7eaa071f693a54c01d0ca1df57d115a3ccebf6cd22a7c78d76b6394586773096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:47:43 compute-0 sudo[164789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydhhzxpcpvgbyhbtivexlelsinfjpdwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154463.471835-64-223989375481256/AnsiballZ_systemd_service.py'
Oct 11 03:47:43 compute-0 podman[164623]: 2025-10-11 03:47:43.807621686 +0000 UTC m=+0.409567964 container start 7eaa071f693a54c01d0ca1df57d115a3ccebf6cd22a7c78d76b6394586773096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct 11 03:47:43 compute-0 sudo[164789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:43 compute-0 podman[164623]: 2025-10-11 03:47:43.833683492 +0000 UTC m=+0.435629840 container attach 7eaa071f693a54c01d0ca1df57d115a3ccebf6cd22a7c78d76b6394586773096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 03:47:44 compute-0 python3.9[164793]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:47:44 compute-0 sudo[164789]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:47:44 compute-0 ceph-mon[74273]: pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:44 compute-0 quirky_napier[164743]: {
Oct 11 03:47:44 compute-0 quirky_napier[164743]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 03:47:44 compute-0 quirky_napier[164743]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:47:44 compute-0 quirky_napier[164743]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 03:47:44 compute-0 quirky_napier[164743]:         "osd_id": 1,
Oct 11 03:47:44 compute-0 quirky_napier[164743]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:47:44 compute-0 quirky_napier[164743]:         "type": "bluestore"
Oct 11 03:47:44 compute-0 quirky_napier[164743]:     },
Oct 11 03:47:44 compute-0 quirky_napier[164743]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 03:47:44 compute-0 quirky_napier[164743]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:47:44 compute-0 quirky_napier[164743]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 03:47:44 compute-0 quirky_napier[164743]:         "osd_id": 2,
Oct 11 03:47:44 compute-0 quirky_napier[164743]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:47:44 compute-0 quirky_napier[164743]:         "type": "bluestore"
Oct 11 03:47:44 compute-0 quirky_napier[164743]:     },
Oct 11 03:47:44 compute-0 quirky_napier[164743]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 03:47:44 compute-0 quirky_napier[164743]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:47:44 compute-0 quirky_napier[164743]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 03:47:44 compute-0 quirky_napier[164743]:         "osd_id": 0,
Oct 11 03:47:44 compute-0 quirky_napier[164743]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:47:44 compute-0 quirky_napier[164743]:         "type": "bluestore"
Oct 11 03:47:44 compute-0 quirky_napier[164743]:     }
Oct 11 03:47:44 compute-0 quirky_napier[164743]: }
Oct 11 03:47:44 compute-0 systemd[1]: libpod-7eaa071f693a54c01d0ca1df57d115a3ccebf6cd22a7c78d76b6394586773096.scope: Deactivated successfully.
Oct 11 03:47:44 compute-0 systemd[1]: libpod-7eaa071f693a54c01d0ca1df57d115a3ccebf6cd22a7c78d76b6394586773096.scope: Consumed 1.109s CPU time.
Oct 11 03:47:44 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:44 compute-0 podman[164950]: 2025-10-11 03:47:44.992981181 +0000 UTC m=+0.040014291 container died 7eaa071f693a54c01d0ca1df57d115a3ccebf6cd22a7c78d76b6394586773096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 03:47:45 compute-0 sudo[164983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laowjrcugogewpnkvmhwxdqjbvzfcuwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154464.4359217-116-112122580016748/AnsiballZ_file.py'
Oct 11 03:47:45 compute-0 sudo[164983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-99308eef9788160370771b9908b4592740d6883cd4fa9ad6e84b2662abfa80f2-merged.mount: Deactivated successfully.
Oct 11 03:47:45 compute-0 python3.9[164985]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:47:45 compute-0 sudo[164983]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:45 compute-0 podman[164950]: 2025-10-11 03:47:45.399723413 +0000 UTC m=+0.446756423 container remove 7eaa071f693a54c01d0ca1df57d115a3ccebf6cd22a7c78d76b6394586773096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_napier, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 03:47:45 compute-0 systemd[1]: libpod-conmon-7eaa071f693a54c01d0ca1df57d115a3ccebf6cd22a7c78d76b6394586773096.scope: Deactivated successfully.
Oct 11 03:47:45 compute-0 sudo[164238]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:47:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:47:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:47:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:47:45 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 88ff187d-26a5-466e-8e36-a14e7387c9ac does not exist
Oct 11 03:47:45 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev c76262f4-14a8-428e-8c2d-ef94e36d3ee3 does not exist
Oct 11 03:47:45 compute-0 sudo[165086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:47:45 compute-0 sudo[165086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:47:45 compute-0 sudo[165086]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:45 compute-0 sudo[165132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 03:47:45 compute-0 sudo[165132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:47:45 compute-0 sudo[165132]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:45 compute-0 sudo[165187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvfvhekthscwqraxtjlqoukuxpdxivwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154465.3784318-116-276169294498563/AnsiballZ_file.py'
Oct 11 03:47:45 compute-0 sudo[165187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:45 compute-0 podman[165189]: 2025-10-11 03:47:45.836844034 +0000 UTC m=+0.140917269 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team)
Oct 11 03:47:45 compute-0 python3.9[165190]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:47:45 compute-0 sudo[165187]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:46 compute-0 sudo[165365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdqaznnisbyvgwnpzzlyirppurvlkocv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154466.0523465-116-205591076354915/AnsiballZ_file.py'
Oct 11 03:47:46 compute-0 sudo[165365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:46 compute-0 ceph-mon[74273]: pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:47:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:47:46 compute-0 python3.9[165367]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:47:46 compute-0 sudo[165365]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:46 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:47 compute-0 sudo[165517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdkiaqpwhmnlmfrxsumvozagwgbvevwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154466.8420866-116-189256816872589/AnsiballZ_file.py'
Oct 11 03:47:47 compute-0 sudo[165517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:47 compute-0 python3.9[165519]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:47:47 compute-0 sudo[165517]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:47 compute-0 sudo[165669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqdeknncgmhnrbwopowbisztfmttfeqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154467.5380442-116-74367880270411/AnsiballZ_file.py'
Oct 11 03:47:47 compute-0 sudo[165669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:48 compute-0 python3.9[165671]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:47:48 compute-0 sudo[165669]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:48 compute-0 ceph-mon[74273]: pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:48 compute-0 sudo[165822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvuoaerwxnrklrwpeqcllkdiuybfiypp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154468.508177-116-127167047386545/AnsiballZ_file.py'
Oct 11 03:47:48 compute-0 sudo[165822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:48 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:49 compute-0 python3.9[165824]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:47:49 compute-0 sudo[165822]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:47:49 compute-0 sudo[165974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krqwaryvrzvbicsvhvizcxsuxgmrqqcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154469.3143418-116-159883400422851/AnsiballZ_file.py'
Oct 11 03:47:49 compute-0 sudo[165974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:49 compute-0 python3.9[165976]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:47:49 compute-0 sudo[165974]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:50 compute-0 sudo[166126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eojsjbeoyvsheljyftbatvcipxnzerdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154470.1080596-166-268146180442679/AnsiballZ_file.py'
Oct 11 03:47:50 compute-0 sudo[166126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:50 compute-0 python3.9[166128]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:47:50 compute-0 sudo[166126]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:50 compute-0 ceph-mon[74273]: pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:47:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:47:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:47:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:47:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:47:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:47:50 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:51 compute-0 sudo[166278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pznjgjslxdfbezeusvsekhbflswfqvla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154470.8443675-166-48369048140896/AnsiballZ_file.py'
Oct 11 03:47:51 compute-0 sudo[166278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:51 compute-0 podman[166280]: 2025-10-11 03:47:51.344653271 +0000 UTC m=+0.085382229 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 11 03:47:51 compute-0 python3.9[166281]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:47:51 compute-0 sudo[166278]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:51 compute-0 sudo[166449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evhlvisllqgtckgnfbnhomfpmycepbxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154471.6167912-166-47731335482013/AnsiballZ_file.py'
Oct 11 03:47:51 compute-0 sudo[166449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:52 compute-0 python3.9[166451]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:47:52 compute-0 sudo[166449]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:52 compute-0 sudo[166601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmejrulzlmzlfruhpeoajtabffhdzgkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154472.3420663-166-27347053872723/AnsiballZ_file.py'
Oct 11 03:47:52 compute-0 sudo[166601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:52 compute-0 ceph-mon[74273]: pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:52 compute-0 python3.9[166603]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:47:52 compute-0 sudo[166601]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:52 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:53 compute-0 sudo[166753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvzvnzxvugonbuozxcnincrabujnxdzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154473.042805-166-119181720602591/AnsiballZ_file.py'
Oct 11 03:47:53 compute-0 sudo[166753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:53 compute-0 python3.9[166755]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:47:53 compute-0 sudo[166753]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:53 compute-0 ceph-mon[74273]: pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:54 compute-0 sudo[166905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgtjzwubnbfctydqiioktvuqqpmduuav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154473.7553947-166-100685514640804/AnsiballZ_file.py'
Oct 11 03:47:54 compute-0 sudo[166905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:54 compute-0 python3.9[166907]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:47:54 compute-0 sudo[166905]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:47:54 compute-0 sudo[167057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jleygzvqmtoqklddizitobrrayhlbzeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154474.5346105-166-73977792296333/AnsiballZ_file.py'
Oct 11 03:47:54 compute-0 sudo[167057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:54 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:55 compute-0 python3.9[167059]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:47:55 compute-0 sudo[167057]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:55 compute-0 sudo[167209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrqueobdpiqhrznxejcvjwtueiqkakxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154475.2908456-217-70409888519484/AnsiballZ_command.py'
Oct 11 03:47:55 compute-0 sudo[167209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:55 compute-0 python3.9[167211]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:47:55 compute-0 sudo[167209]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:56 compute-0 ceph-mon[74273]: pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:56 compute-0 python3.9[167363]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 11 03:47:56 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:57 compute-0 sudo[167513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhimdfdtczgxplqqtqhhtbwyvxynuxnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154477.0181644-235-108420966637171/AnsiballZ_systemd_service.py'
Oct 11 03:47:57 compute-0 sudo[167513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:57 compute-0 python3.9[167515]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 03:47:57 compute-0 systemd[1]: Reloading.
Oct 11 03:47:57 compute-0 systemd-rc-local-generator[167542]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:47:57 compute-0 systemd-sysv-generator[167546]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:47:58 compute-0 ceph-mon[74273]: pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:58 compute-0 sudo[167513]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:58 compute-0 sudo[167700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvcflfmjrhjzdlmebcjqtaozextajluz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154478.234054-243-174053104592458/AnsiballZ_command.py'
Oct 11 03:47:58 compute-0 sudo[167700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:58 compute-0 python3.9[167702]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:47:58 compute-0 sudo[167700]: pam_unix(sudo:session): session closed for user root
Oct 11 03:47:58 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:47:59 compute-0 sudo[167853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrowditnizhqeenjbsaptmvirnhzevmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154478.9850981-243-250907109133155/AnsiballZ_command.py'
Oct 11 03:47:59 compute-0 sudo[167853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:47:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:47:59 compute-0 python3.9[167855]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:47:59 compute-0 sudo[167853]: pam_unix(sudo:session): session closed for user root
Oct 11 03:48:00 compute-0 sudo[168006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clmzvvreyfgbvonabireedyyiilrsapc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154479.705217-243-56905773412407/AnsiballZ_command.py'
Oct 11 03:48:00 compute-0 sudo[168006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:48:00 compute-0 ceph-mon[74273]: pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:00 compute-0 python3.9[168008]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:48:00 compute-0 sudo[168006]: pam_unix(sudo:session): session closed for user root
Oct 11 03:48:00 compute-0 sudo[168159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaoijtpaeihngvlesgqqisdvpmhxobcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154480.4455476-243-56348187668536/AnsiballZ_command.py'
Oct 11 03:48:00 compute-0 sudo[168159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:48:00 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:01 compute-0 python3.9[168161]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:48:01 compute-0 sudo[168159]: pam_unix(sudo:session): session closed for user root
Oct 11 03:48:01 compute-0 sudo[168312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jandhsmjrqyxtocawsvingdyqlwplsly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154481.2592833-243-142204277441702/AnsiballZ_command.py'
Oct 11 03:48:01 compute-0 sudo[168312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:48:01 compute-0 python3.9[168314]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:48:01 compute-0 sudo[168312]: pam_unix(sudo:session): session closed for user root
Oct 11 03:48:02 compute-0 ceph-mon[74273]: pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:02 compute-0 sudo[168465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slfaexwylkurkwmytcbbpmviobuxymdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154482.0074263-243-147619608750151/AnsiballZ_command.py'
Oct 11 03:48:02 compute-0 sudo[168465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:48:02 compute-0 python3.9[168467]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:48:02 compute-0 sudo[168465]: pam_unix(sudo:session): session closed for user root
Oct 11 03:48:02 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:03 compute-0 sudo[168618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xglyiwaqmrpyrumzcbbgzvrummlpjtrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154482.8075662-243-34125857163127/AnsiballZ_command.py'
Oct 11 03:48:03 compute-0 sudo[168618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:48:03 compute-0 python3.9[168620]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:48:03 compute-0 sudo[168618]: pam_unix(sudo:session): session closed for user root
Oct 11 03:48:04 compute-0 ceph-mon[74273]: pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:04 compute-0 sudo[168771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzfjyjvzwusqzlecspavwufcjlmtwmfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154483.910604-297-28885154647261/AnsiballZ_getent.py'
Oct 11 03:48:04 compute-0 sudo[168771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:48:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:48:04 compute-0 python3.9[168773]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct 11 03:48:04 compute-0 sudo[168771]: pam_unix(sudo:session): session closed for user root
Oct 11 03:48:04 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:05 compute-0 sudo[168924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvqgqrdocldklnlfgxtljcvgcyhuglbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154484.7888124-305-45297895591483/AnsiballZ_group.py'
Oct 11 03:48:05 compute-0 sudo[168924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:48:05 compute-0 python3.9[168926]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 11 03:48:05 compute-0 groupadd[168927]: group added to /etc/group: name=libvirt, GID=42473
Oct 11 03:48:05 compute-0 groupadd[168927]: group added to /etc/gshadow: name=libvirt
Oct 11 03:48:05 compute-0 groupadd[168927]: new group: name=libvirt, GID=42473
Oct 11 03:48:05 compute-0 sudo[168924]: pam_unix(sudo:session): session closed for user root
Oct 11 03:48:06 compute-0 ceph-mon[74273]: pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:06 compute-0 sudo[169082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvsurkbxulgonrprdzyzuuepnwssljep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154485.8932862-313-42533071517010/AnsiballZ_user.py'
Oct 11 03:48:06 compute-0 sudo[169082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:48:06 compute-0 python3.9[169084]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 11 03:48:06 compute-0 useradd[169086]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Oct 11 03:48:06 compute-0 sudo[169082]: pam_unix(sudo:session): session closed for user root
Oct 11 03:48:06 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:07 compute-0 sudo[169242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrlikjnwzerkwhqidnylvyxkcblpdzjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154487.1583579-324-105959929296378/AnsiballZ_setup.py'
Oct 11 03:48:07 compute-0 sudo[169242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:48:07 compute-0 python3.9[169244]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 03:48:08 compute-0 sudo[169242]: pam_unix(sudo:session): session closed for user root
Oct 11 03:48:08 compute-0 ceph-mon[74273]: pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:08 compute-0 sudo[169326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftdnwjltfxibefmroiaqylyladtryzmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154487.1583579-324-105959929296378/AnsiballZ_dnf.py'
Oct 11 03:48:08 compute-0 sudo[169326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:48:08 compute-0 python3.9[169328]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 03:48:08 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:48:10 compute-0 ceph-mon[74273]: pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:10 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:12 compute-0 ceph-mon[74273]: pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:12 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:14 compute-0 ceph-mon[74273]: pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:48:14 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:16 compute-0 ceph-mon[74273]: pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:16 compute-0 podman[169340]: 2025-10-11 03:48:16.450881075 +0000 UTC m=+0.149098423 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller)
Oct 11 03:48:16 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:18 compute-0 ceph-mon[74273]: pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:18 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:48:20 compute-0 ceph-mon[74273]: pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_03:48:20
Oct 11 03:48:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 03:48:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 03:48:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['images', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'backups']
Oct 11 03:48:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 03:48:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:48:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:48:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:48:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:48:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:48:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:48:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 03:48:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 03:48:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:48:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:48:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:48:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:48:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:48:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:48:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:48:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:48:20 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:22 compute-0 ceph-mon[74273]: pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:22 compute-0 podman[169501]: 2025-10-11 03:48:22.351261412 +0000 UTC m=+0.060122113 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 11 03:48:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:48:22.930 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 03:48:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:48:22.931 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 03:48:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:48:22.931 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 03:48:22 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 37 op/s
Oct 11 03:48:24 compute-0 ceph-mon[74273]: pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 37 op/s
Oct 11 03:48:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:48:24 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 37 op/s
Oct 11 03:48:26 compute-0 ceph-mon[74273]: pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 37 op/s
Oct 11 03:48:26 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 37 op/s
Oct 11 03:48:28 compute-0 ceph-mon[74273]: pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 37 op/s
Oct 11 03:48:28 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 03:48:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:48:30 compute-0 ceph-mon[74273]: pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 03:48:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 03:48:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:48:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 03:48:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:48:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:48:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:48:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:48:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:48:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:48:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:48:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:48:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:48:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 03:48:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:48:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:48:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:48:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 03:48:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:48:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 03:48:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:48:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:48:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:48:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 03:48:30 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 03:48:32 compute-0 ceph-mon[74273]: pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 03:48:32 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 03:48:34 compute-0 ceph-mon[74273]: pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 03:48:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:48:34 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Oct 11 03:48:36 compute-0 ceph-mon[74273]: pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Oct 11 03:48:36 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Oct 11 03:48:38 compute-0 ceph-mon[74273]: pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Oct 11 03:48:38 compute-0 kernel: SELinux:  Converting 2766 SID table entries...
Oct 11 03:48:38 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 03:48:38 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 11 03:48:38 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 03:48:38 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 11 03:48:38 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 03:48:38 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 03:48:38 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 03:48:38 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Oct 11 03:48:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:48:40 compute-0 ceph-mon[74273]: pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Oct 11 03:48:40 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:42 compute-0 ceph-mon[74273]: pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:42 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:44 compute-0 ceph-mon[74273]: pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:48:44 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:45 compute-0 sudo[169573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:48:45 compute-0 dbus-broker-launch[810]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Oct 11 03:48:45 compute-0 sudo[169573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:48:45 compute-0 sudo[169573]: pam_unix(sudo:session): session closed for user root
Oct 11 03:48:45 compute-0 sudo[169598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:48:45 compute-0 sudo[169598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:48:45 compute-0 sudo[169598]: pam_unix(sudo:session): session closed for user root
Oct 11 03:48:45 compute-0 sudo[169623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:48:45 compute-0 sudo[169623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:48:45 compute-0 sudo[169623]: pam_unix(sudo:session): session closed for user root
Oct 11 03:48:45 compute-0 sudo[169648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 03:48:45 compute-0 sudo[169648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:48:46 compute-0 ceph-mon[74273]: pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:46 compute-0 sudo[169648]: pam_unix(sudo:session): session closed for user root
Oct 11 03:48:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:48:46 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:48:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 03:48:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:48:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 03:48:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:48:46 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev f5546635-060d-4d30-9e37-7c30d01b5145 does not exist
Oct 11 03:48:46 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev fb4aac1d-cb53-40f2-8635-2335affd3a03 does not exist
Oct 11 03:48:46 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 4689db8b-e3b8-4921-9578-d10d20edc43c does not exist
Oct 11 03:48:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 03:48:46 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:48:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 03:48:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:48:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:48:46 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:48:46 compute-0 sudo[169705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:48:46 compute-0 sudo[169705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:48:46 compute-0 sudo[169705]: pam_unix(sudo:session): session closed for user root
Oct 11 03:48:46 compute-0 sudo[169736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:48:46 compute-0 sudo[169736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:48:46 compute-0 sudo[169736]: pam_unix(sudo:session): session closed for user root
Oct 11 03:48:46 compute-0 podman[169729]: 2025-10-11 03:48:46.89298733 +0000 UTC m=+0.127248985 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 11 03:48:46 compute-0 sudo[169779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:48:46 compute-0 sudo[169779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:48:46 compute-0 sudo[169779]: pam_unix(sudo:session): session closed for user root
Oct 11 03:48:46 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:47 compute-0 sudo[169808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 03:48:47 compute-0 sudo[169808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:48:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:48:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:48:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:48:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:48:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:48:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:48:47 compute-0 podman[169875]: 2025-10-11 03:48:47.443519079 +0000 UTC m=+0.061495492 container create 558950f66ddc27145d5c3080193be2ce7d609ad474f0eb63b7c1140810d42fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_easley, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 03:48:47 compute-0 systemd[1]: Started libpod-conmon-558950f66ddc27145d5c3080193be2ce7d609ad474f0eb63b7c1140810d42fea.scope.
Oct 11 03:48:47 compute-0 podman[169875]: 2025-10-11 03:48:47.409225028 +0000 UTC m=+0.027201441 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:48:47 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:48:47 compute-0 kernel: SELinux:  Converting 2766 SID table entries...
Oct 11 03:48:47 compute-0 podman[169875]: 2025-10-11 03:48:47.539341312 +0000 UTC m=+0.157317715 container init 558950f66ddc27145d5c3080193be2ce7d609ad474f0eb63b7c1140810d42fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:48:47 compute-0 podman[169875]: 2025-10-11 03:48:47.545131626 +0000 UTC m=+0.163107999 container start 558950f66ddc27145d5c3080193be2ce7d609ad474f0eb63b7c1140810d42fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_easley, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:48:47 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 03:48:47 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 11 03:48:47 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 03:48:47 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 11 03:48:47 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 03:48:47 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 03:48:47 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 03:48:47 compute-0 podman[169875]: 2025-10-11 03:48:47.557392944 +0000 UTC m=+0.175369427 container attach 558950f66ddc27145d5c3080193be2ce7d609ad474f0eb63b7c1140810d42fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_easley, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:48:47 compute-0 tender_easley[169893]: 167 167
Oct 11 03:48:47 compute-0 systemd[1]: libpod-558950f66ddc27145d5c3080193be2ce7d609ad474f0eb63b7c1140810d42fea.scope: Deactivated successfully.
Oct 11 03:48:47 compute-0 podman[169875]: 2025-10-11 03:48:47.564362051 +0000 UTC m=+0.182338474 container died 558950f66ddc27145d5c3080193be2ce7d609ad474f0eb63b7c1140810d42fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:48:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-751f7251a42e801cdcbcab155ac65db6dbb3d9bb9e3d69c30090242e8e214645-merged.mount: Deactivated successfully.
Oct 11 03:48:47 compute-0 dbus-broker-launch[810]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Oct 11 03:48:47 compute-0 podman[169875]: 2025-10-11 03:48:47.622116206 +0000 UTC m=+0.240092619 container remove 558950f66ddc27145d5c3080193be2ce7d609ad474f0eb63b7c1140810d42fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_easley, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:48:47 compute-0 systemd[1]: libpod-conmon-558950f66ddc27145d5c3080193be2ce7d609ad474f0eb63b7c1140810d42fea.scope: Deactivated successfully.
Oct 11 03:48:47 compute-0 podman[169919]: 2025-10-11 03:48:47.818692353 +0000 UTC m=+0.059377323 container create 5f5e8df5fc82c8c3cff48cfdf3460126cfd6a1f624c4ea5e6f0d12cb11b40f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:48:47 compute-0 systemd[1]: Started libpod-conmon-5f5e8df5fc82c8c3cff48cfdf3460126cfd6a1f624c4ea5e6f0d12cb11b40f2a.scope.
Oct 11 03:48:47 compute-0 podman[169919]: 2025-10-11 03:48:47.790852074 +0000 UTC m=+0.031537084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:48:47 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:48:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c8cf8b626c3a75759853a08a6dbe51e152bb577170583cb7fe5f44a19ab4e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:48:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c8cf8b626c3a75759853a08a6dbe51e152bb577170583cb7fe5f44a19ab4e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:48:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c8cf8b626c3a75759853a08a6dbe51e152bb577170583cb7fe5f44a19ab4e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:48:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c8cf8b626c3a75759853a08a6dbe51e152bb577170583cb7fe5f44a19ab4e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:48:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c8cf8b626c3a75759853a08a6dbe51e152bb577170583cb7fe5f44a19ab4e7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:48:47 compute-0 podman[169919]: 2025-10-11 03:48:47.94995519 +0000 UTC m=+0.190640220 container init 5f5e8df5fc82c8c3cff48cfdf3460126cfd6a1f624c4ea5e6f0d12cb11b40f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:48:47 compute-0 podman[169919]: 2025-10-11 03:48:47.964285516 +0000 UTC m=+0.204970486 container start 5f5e8df5fc82c8c3cff48cfdf3460126cfd6a1f624c4ea5e6f0d12cb11b40f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_rhodes, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:48:47 compute-0 podman[169919]: 2025-10-11 03:48:47.968532616 +0000 UTC m=+0.209217586 container attach 5f5e8df5fc82c8c3cff48cfdf3460126cfd6a1f624c4ea5e6f0d12cb11b40f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_rhodes, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 03:48:48 compute-0 ceph-mon[74273]: pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:48 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:49 compute-0 wizardly_rhodes[169935]: --> passed data devices: 0 physical, 3 LVM
Oct 11 03:48:49 compute-0 wizardly_rhodes[169935]: --> relative data size: 1.0
Oct 11 03:48:49 compute-0 wizardly_rhodes[169935]: --> All data devices are unavailable
Oct 11 03:48:49 compute-0 systemd[1]: libpod-5f5e8df5fc82c8c3cff48cfdf3460126cfd6a1f624c4ea5e6f0d12cb11b40f2a.scope: Deactivated successfully.
Oct 11 03:48:49 compute-0 systemd[1]: libpod-5f5e8df5fc82c8c3cff48cfdf3460126cfd6a1f624c4ea5e6f0d12cb11b40f2a.scope: Consumed 1.029s CPU time.
Oct 11 03:48:49 compute-0 podman[169919]: 2025-10-11 03:48:49.094056787 +0000 UTC m=+1.334741797 container died 5f5e8df5fc82c8c3cff48cfdf3460126cfd6a1f624c4ea5e6f0d12cb11b40f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:48:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-19c8cf8b626c3a75759853a08a6dbe51e152bb577170583cb7fe5f44a19ab4e7-merged.mount: Deactivated successfully.
Oct 11 03:48:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:48:49 compute-0 podman[169919]: 2025-10-11 03:48:49.485314335 +0000 UTC m=+1.725999295 container remove 5f5e8df5fc82c8c3cff48cfdf3460126cfd6a1f624c4ea5e6f0d12cb11b40f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_rhodes, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 03:48:49 compute-0 systemd[1]: libpod-conmon-5f5e8df5fc82c8c3cff48cfdf3460126cfd6a1f624c4ea5e6f0d12cb11b40f2a.scope: Deactivated successfully.
Oct 11 03:48:49 compute-0 sudo[169808]: pam_unix(sudo:session): session closed for user root
Oct 11 03:48:49 compute-0 sudo[169978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:48:49 compute-0 sudo[169978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:48:49 compute-0 sudo[169978]: pam_unix(sudo:session): session closed for user root
Oct 11 03:48:49 compute-0 sudo[170003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:48:49 compute-0 sudo[170003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:48:49 compute-0 sudo[170003]: pam_unix(sudo:session): session closed for user root
Oct 11 03:48:49 compute-0 sudo[170028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:48:49 compute-0 sudo[170028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:48:49 compute-0 sudo[170028]: pam_unix(sudo:session): session closed for user root
Oct 11 03:48:49 compute-0 sudo[170053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 03:48:49 compute-0 sudo[170053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:48:50 compute-0 podman[170119]: 2025-10-11 03:48:50.334374958 +0000 UTC m=+0.068277854 container create f243904e38f22416afaad57a5d0ce8a05a631e8db5c4568c0e364ede46e0df9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dirac, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 03:48:50 compute-0 systemd[1]: Started libpod-conmon-f243904e38f22416afaad57a5d0ce8a05a631e8db5c4568c0e364ede46e0df9a.scope.
Oct 11 03:48:50 compute-0 podman[170119]: 2025-10-11 03:48:50.3040781 +0000 UTC m=+0.037981056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:48:50 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:48:50 compute-0 podman[170119]: 2025-10-11 03:48:50.442361296 +0000 UTC m=+0.176264252 container init f243904e38f22416afaad57a5d0ce8a05a631e8db5c4568c0e364ede46e0df9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:48:50 compute-0 podman[170119]: 2025-10-11 03:48:50.450820246 +0000 UTC m=+0.184723142 container start f243904e38f22416afaad57a5d0ce8a05a631e8db5c4568c0e364ede46e0df9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dirac, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:48:50 compute-0 podman[170119]: 2025-10-11 03:48:50.455640922 +0000 UTC m=+0.189543828 container attach f243904e38f22416afaad57a5d0ce8a05a631e8db5c4568c0e364ede46e0df9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:48:50 compute-0 condescending_dirac[170136]: 167 167
Oct 11 03:48:50 compute-0 systemd[1]: libpod-f243904e38f22416afaad57a5d0ce8a05a631e8db5c4568c0e364ede46e0df9a.scope: Deactivated successfully.
Oct 11 03:48:50 compute-0 podman[170119]: 2025-10-11 03:48:50.45839501 +0000 UTC m=+0.192297916 container died f243904e38f22416afaad57a5d0ce8a05a631e8db5c4568c0e364ede46e0df9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dirac, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 03:48:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdbbc621d8752a4471edcf2fb4e46532c297ba14e2524f5aad8673ca23967c07-merged.mount: Deactivated successfully.
Oct 11 03:48:50 compute-0 podman[170119]: 2025-10-11 03:48:50.518354118 +0000 UTC m=+0.252257024 container remove f243904e38f22416afaad57a5d0ce8a05a631e8db5c4568c0e364ede46e0df9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dirac, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 03:48:50 compute-0 systemd[1]: libpod-conmon-f243904e38f22416afaad57a5d0ce8a05a631e8db5c4568c0e364ede46e0df9a.scope: Deactivated successfully.
Oct 11 03:48:50 compute-0 ceph-mon[74273]: pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:48:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:48:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:48:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:48:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:48:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:48:50 compute-0 podman[170160]: 2025-10-11 03:48:50.770003374 +0000 UTC m=+0.066898595 container create 1bb3f1c578cf860383a006964627e2545467b35a42c57439e32ba310c01054e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_faraday, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 03:48:50 compute-0 systemd[1]: Started libpod-conmon-1bb3f1c578cf860383a006964627e2545467b35a42c57439e32ba310c01054e0.scope.
Oct 11 03:48:50 compute-0 podman[170160]: 2025-10-11 03:48:50.743435482 +0000 UTC m=+0.040330783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:48:50 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:48:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/317471df785e4e2a0c40461fd46e107aea11261e8779a65d9fa000f99f832590/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:48:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/317471df785e4e2a0c40461fd46e107aea11261e8779a65d9fa000f99f832590/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:48:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/317471df785e4e2a0c40461fd46e107aea11261e8779a65d9fa000f99f832590/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:48:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/317471df785e4e2a0c40461fd46e107aea11261e8779a65d9fa000f99f832590/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:48:50 compute-0 podman[170160]: 2025-10-11 03:48:50.88144275 +0000 UTC m=+0.178337991 container init 1bb3f1c578cf860383a006964627e2545467b35a42c57439e32ba310c01054e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_faraday, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 03:48:50 compute-0 podman[170160]: 2025-10-11 03:48:50.894720246 +0000 UTC m=+0.191615497 container start 1bb3f1c578cf860383a006964627e2545467b35a42c57439e32ba310c01054e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 03:48:50 compute-0 podman[170160]: 2025-10-11 03:48:50.899573303 +0000 UTC m=+0.196468564 container attach 1bb3f1c578cf860383a006964627e2545467b35a42c57439e32ba310c01054e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Oct 11 03:48:50 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:51 compute-0 cranky_faraday[170176]: {
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:     "0": [
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:         {
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "devices": [
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "/dev/loop3"
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             ],
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "lv_name": "ceph_lv0",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "lv_size": "21470642176",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "name": "ceph_lv0",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "tags": {
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.cluster_name": "ceph",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.crush_device_class": "",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.encrypted": "0",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.osd_id": "0",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.type": "block",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.vdo": "0"
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             },
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "type": "block",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "vg_name": "ceph_vg0"
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:         }
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:     ],
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:     "1": [
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:         {
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "devices": [
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "/dev/loop4"
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             ],
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "lv_name": "ceph_lv1",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "lv_size": "21470642176",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "name": "ceph_lv1",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "tags": {
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.cluster_name": "ceph",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.crush_device_class": "",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.encrypted": "0",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.osd_id": "1",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.type": "block",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.vdo": "0"
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             },
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "type": "block",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "vg_name": "ceph_vg1"
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:         }
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:     ],
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:     "2": [
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:         {
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "devices": [
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "/dev/loop5"
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             ],
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "lv_name": "ceph_lv2",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "lv_size": "21470642176",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "name": "ceph_lv2",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "tags": {
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.cluster_name": "ceph",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.crush_device_class": "",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.encrypted": "0",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.osd_id": "2",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.type": "block",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:                 "ceph.vdo": "0"
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             },
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "type": "block",
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:             "vg_name": "ceph_vg2"
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:         }
Oct 11 03:48:51 compute-0 cranky_faraday[170176]:     ]
Oct 11 03:48:51 compute-0 cranky_faraday[170176]: }
Oct 11 03:48:51 compute-0 systemd[1]: libpod-1bb3f1c578cf860383a006964627e2545467b35a42c57439e32ba310c01054e0.scope: Deactivated successfully.
Oct 11 03:48:51 compute-0 podman[170160]: 2025-10-11 03:48:51.692117345 +0000 UTC m=+0.989012636 container died 1bb3f1c578cf860383a006964627e2545467b35a42c57439e32ba310c01054e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:48:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-317471df785e4e2a0c40461fd46e107aea11261e8779a65d9fa000f99f832590-merged.mount: Deactivated successfully.
Oct 11 03:48:51 compute-0 podman[170160]: 2025-10-11 03:48:51.852418655 +0000 UTC m=+1.149313886 container remove 1bb3f1c578cf860383a006964627e2545467b35a42c57439e32ba310c01054e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_faraday, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Oct 11 03:48:51 compute-0 systemd[1]: libpod-conmon-1bb3f1c578cf860383a006964627e2545467b35a42c57439e32ba310c01054e0.scope: Deactivated successfully.
Oct 11 03:48:51 compute-0 sudo[170053]: pam_unix(sudo:session): session closed for user root
Oct 11 03:48:51 compute-0 sudo[170195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:48:51 compute-0 sudo[170195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:48:51 compute-0 sudo[170195]: pam_unix(sudo:session): session closed for user root
Oct 11 03:48:52 compute-0 sudo[170220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:48:52 compute-0 sudo[170220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:48:52 compute-0 sudo[170220]: pam_unix(sudo:session): session closed for user root
Oct 11 03:48:52 compute-0 sudo[170245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:48:52 compute-0 sudo[170245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:48:52 compute-0 sudo[170245]: pam_unix(sudo:session): session closed for user root
Oct 11 03:48:52 compute-0 sudo[170270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 03:48:52 compute-0 sudo[170270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:48:52 compute-0 podman[170334]: 2025-10-11 03:48:52.653705525 +0000 UTC m=+0.061414600 container create 67b1c5ba6b756ca5b6a00ad6103a4ed1601937076c958e4cd846875e7abca34d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 03:48:52 compute-0 systemd[1]: Started libpod-conmon-67b1c5ba6b756ca5b6a00ad6103a4ed1601937076c958e4cd846875e7abca34d.scope.
Oct 11 03:48:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:48:52 compute-0 podman[170334]: 2025-10-11 03:48:52.627839392 +0000 UTC m=+0.035548487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:48:52 compute-0 podman[170334]: 2025-10-11 03:48:52.746494932 +0000 UTC m=+0.154204057 container init 67b1c5ba6b756ca5b6a00ad6103a4ed1601937076c958e4cd846875e7abca34d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 03:48:52 compute-0 podman[170334]: 2025-10-11 03:48:52.753953113 +0000 UTC m=+0.161662208 container start 67b1c5ba6b756ca5b6a00ad6103a4ed1601937076c958e4cd846875e7abca34d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_khayyam, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:48:52 compute-0 great_khayyam[170353]: 167 167
Oct 11 03:48:52 compute-0 podman[170334]: 2025-10-11 03:48:52.759432259 +0000 UTC m=+0.167141374 container attach 67b1c5ba6b756ca5b6a00ad6103a4ed1601937076c958e4cd846875e7abca34d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_khayyam, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 03:48:52 compute-0 systemd[1]: libpod-67b1c5ba6b756ca5b6a00ad6103a4ed1601937076c958e4cd846875e7abca34d.scope: Deactivated successfully.
Oct 11 03:48:52 compute-0 podman[170334]: 2025-10-11 03:48:52.761124216 +0000 UTC m=+0.168833331 container died 67b1c5ba6b756ca5b6a00ad6103a4ed1601937076c958e4cd846875e7abca34d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:48:52 compute-0 ceph-mon[74273]: pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:52 compute-0 podman[170350]: 2025-10-11 03:48:52.799228445 +0000 UTC m=+0.099858728 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:48:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-2dff6894d4c52a4672742799eef7412ffea9c1f529e4da2b68e74cddeb1fc293-merged.mount: Deactivated successfully.
Oct 11 03:48:52 compute-0 podman[170334]: 2025-10-11 03:48:52.825116318 +0000 UTC m=+0.232825433 container remove 67b1c5ba6b756ca5b6a00ad6103a4ed1601937076c958e4cd846875e7abca34d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:48:52 compute-0 systemd[1]: libpod-conmon-67b1c5ba6b756ca5b6a00ad6103a4ed1601937076c958e4cd846875e7abca34d.scope: Deactivated successfully.
Oct 11 03:48:52 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:53 compute-0 podman[170394]: 2025-10-11 03:48:53.07135121 +0000 UTC m=+0.080979984 container create 46c6e287df2ede840d35598f6c8f6b52777d8f65f45a760c4c028224d4a3fb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 03:48:53 compute-0 podman[170394]: 2025-10-11 03:48:53.032324425 +0000 UTC m=+0.041953249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:48:53 compute-0 systemd[1]: Started libpod-conmon-46c6e287df2ede840d35598f6c8f6b52777d8f65f45a760c4c028224d4a3fb4f.scope.
Oct 11 03:48:53 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb8722b9b93287cd9018143eefba0fffedc5ba0fa919ce190591d33f6b55090f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb8722b9b93287cd9018143eefba0fffedc5ba0fa919ce190591d33f6b55090f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb8722b9b93287cd9018143eefba0fffedc5ba0fa919ce190591d33f6b55090f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb8722b9b93287cd9018143eefba0fffedc5ba0fa919ce190591d33f6b55090f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:48:53 compute-0 podman[170394]: 2025-10-11 03:48:53.232561365 +0000 UTC m=+0.242190179 container init 46c6e287df2ede840d35598f6c8f6b52777d8f65f45a760c4c028224d4a3fb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:48:53 compute-0 podman[170394]: 2025-10-11 03:48:53.243934857 +0000 UTC m=+0.253563601 container start 46c6e287df2ede840d35598f6c8f6b52777d8f65f45a760c4c028224d4a3fb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chatterjee, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:48:53 compute-0 podman[170394]: 2025-10-11 03:48:53.248383003 +0000 UTC m=+0.258011777 container attach 46c6e287df2ede840d35598f6c8f6b52777d8f65f45a760c4c028224d4a3fb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chatterjee, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 03:48:53 compute-0 ceph-mon[74273]: pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:54 compute-0 gracious_chatterjee[170411]: {
Oct 11 03:48:54 compute-0 gracious_chatterjee[170411]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 03:48:54 compute-0 gracious_chatterjee[170411]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:48:54 compute-0 gracious_chatterjee[170411]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 03:48:54 compute-0 gracious_chatterjee[170411]:         "osd_id": 1,
Oct 11 03:48:54 compute-0 gracious_chatterjee[170411]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:48:54 compute-0 gracious_chatterjee[170411]:         "type": "bluestore"
Oct 11 03:48:54 compute-0 gracious_chatterjee[170411]:     },
Oct 11 03:48:54 compute-0 gracious_chatterjee[170411]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 03:48:54 compute-0 gracious_chatterjee[170411]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:48:54 compute-0 gracious_chatterjee[170411]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 03:48:54 compute-0 gracious_chatterjee[170411]:         "osd_id": 2,
Oct 11 03:48:54 compute-0 gracious_chatterjee[170411]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:48:54 compute-0 gracious_chatterjee[170411]:         "type": "bluestore"
Oct 11 03:48:54 compute-0 gracious_chatterjee[170411]:     },
Oct 11 03:48:54 compute-0 gracious_chatterjee[170411]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 03:48:54 compute-0 gracious_chatterjee[170411]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:48:54 compute-0 gracious_chatterjee[170411]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 03:48:54 compute-0 gracious_chatterjee[170411]:         "osd_id": 0,
Oct 11 03:48:54 compute-0 gracious_chatterjee[170411]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:48:54 compute-0 gracious_chatterjee[170411]:         "type": "bluestore"
Oct 11 03:48:54 compute-0 gracious_chatterjee[170411]:     }
Oct 11 03:48:54 compute-0 gracious_chatterjee[170411]: }
Oct 11 03:48:54 compute-0 systemd[1]: libpod-46c6e287df2ede840d35598f6c8f6b52777d8f65f45a760c4c028224d4a3fb4f.scope: Deactivated successfully.
Oct 11 03:48:54 compute-0 podman[170445]: 2025-10-11 03:48:54.352727453 +0000 UTC m=+0.042961931 container died 46c6e287df2ede840d35598f6c8f6b52777d8f65f45a760c4c028224d4a3fb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chatterjee, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 03:48:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb8722b9b93287cd9018143eefba0fffedc5ba0fa919ce190591d33f6b55090f-merged.mount: Deactivated successfully.
Oct 11 03:48:54 compute-0 podman[170445]: 2025-10-11 03:48:54.424633448 +0000 UTC m=+0.114867856 container remove 46c6e287df2ede840d35598f6c8f6b52777d8f65f45a760c4c028224d4a3fb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chatterjee, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Oct 11 03:48:54 compute-0 systemd[1]: libpod-conmon-46c6e287df2ede840d35598f6c8f6b52777d8f65f45a760c4c028224d4a3fb4f.scope: Deactivated successfully.
Oct 11 03:48:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:48:54 compute-0 sudo[170270]: pam_unix(sudo:session): session closed for user root
Oct 11 03:48:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:48:54 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:48:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:48:54 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:48:54 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 631fbc0b-8aad-4473-bc58-c4b93d59e109 does not exist
Oct 11 03:48:54 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev a63878df-f7a7-445e-a4ee-3ea6d15acc6c does not exist
Oct 11 03:48:54 compute-0 sudo[170460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:48:54 compute-0 sudo[170460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:48:54 compute-0 sudo[170460]: pam_unix(sudo:session): session closed for user root
Oct 11 03:48:54 compute-0 sudo[170485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 03:48:54 compute-0 sudo[170485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:48:54 compute-0 sudo[170485]: pam_unix(sudo:session): session closed for user root
Oct 11 03:48:54 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:48:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:48:56 compute-0 ceph-mon[74273]: pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:56 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:58 compute-0 ceph-mon[74273]: pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:58 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:48:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:49:00 compute-0 ceph-mon[74273]: pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:00 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:02 compute-0 ceph-mon[74273]: pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:02 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:49:04 compute-0 ceph-mon[74273]: pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:04 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:06 compute-0 ceph-mon[74273]: pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:06 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:08 compute-0 ceph-mon[74273]: pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:49:10 compute-0 ceph-mon[74273]: pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:12 compute-0 ceph-mon[74273]: pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:49:14 compute-0 ceph-mon[74273]: pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:16 compute-0 ceph-mon[74273]: pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:17 compute-0 podman[178959]: 2025-10-11 03:49:17.413087057 +0000 UTC m=+0.115073742 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 03:49:18 compute-0 ceph-mon[74273]: pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:49:19 compute-0 ceph-mon[74273]: pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_03:49:20
Oct 11 03:49:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 03:49:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 03:49:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['volumes', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'images', '.mgr']
Oct 11 03:49:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 03:49:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:49:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:49:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:49:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:49:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:49:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:49:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 03:49:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:49:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 03:49:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:49:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:49:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:49:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:49:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:49:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:49:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:49:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:22 compute-0 ceph-mon[74273]: pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:49:22.931 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 03:49:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:49:22.932 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 03:49:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:49:22.933 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 03:49:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:23 compute-0 podman[181652]: 2025-10-11 03:49:23.366090079 +0000 UTC m=+0.073958382 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 11 03:49:24 compute-0 ceph-mon[74273]: pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:49:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:26 compute-0 ceph-mon[74273]: pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:28 compute-0 ceph-mon[74273]: pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:49:30 compute-0 ceph-mon[74273]: pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 03:49:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:49:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 03:49:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:49:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:49:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:49:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:49:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:49:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:49:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:49:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:49:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:49:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 03:49:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:49:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:49:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:49:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 03:49:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:49:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 03:49:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:49:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:49:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:49:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 03:49:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:32 compute-0 ceph-mon[74273]: pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:34 compute-0 ceph-mon[74273]: pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:49:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:36 compute-0 ceph-mon[74273]: pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:38 compute-0 ceph-mon[74273]: pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:49:40 compute-0 ceph-mon[74273]: pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:42 compute-0 ceph-mon[74273]: pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:44 compute-0 ceph-mon[74273]: pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:49:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:46 compute-0 ceph-mon[74273]: pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:47 compute-0 kernel: SELinux:  Converting 2767 SID table entries...
Oct 11 03:49:47 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 03:49:47 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 11 03:49:47 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 03:49:47 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 11 03:49:47 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 03:49:47 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 03:49:47 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 03:49:48 compute-0 dbus-broker-launch[810]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Oct 11 03:49:48 compute-0 podman[187306]: 2025-10-11 03:49:48.407001857 +0000 UTC m=+0.110882907 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 03:49:48 compute-0 ceph-mon[74273]: pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:49 compute-0 groupadd[187338]: group added to /etc/group: name=dnsmasq, GID=991
Oct 11 03:49:49 compute-0 groupadd[187338]: group added to /etc/gshadow: name=dnsmasq
Oct 11 03:49:49 compute-0 groupadd[187338]: new group: name=dnsmasq, GID=991
Oct 11 03:49:49 compute-0 useradd[187345]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Oct 11 03:49:49 compute-0 dbus-broker-launch[809]: Noticed file-system modification, trigger reload.
Oct 11 03:49:49 compute-0 dbus-broker-launch[809]: Noticed file-system modification, trigger reload.
Oct 11 03:49:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:49:50 compute-0 groupadd[187358]: group added to /etc/group: name=clevis, GID=990
Oct 11 03:49:50 compute-0 groupadd[187358]: group added to /etc/gshadow: name=clevis
Oct 11 03:49:50 compute-0 groupadd[187358]: new group: name=clevis, GID=990
Oct 11 03:49:50 compute-0 useradd[187365]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Oct 11 03:49:50 compute-0 usermod[187375]: add 'clevis' to group 'tss'
Oct 11 03:49:50 compute-0 usermod[187375]: add 'clevis' to shadow group 'tss'
Oct 11 03:49:50 compute-0 ceph-mon[74273]: pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:49:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:49:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:49:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:49:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:49:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:49:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:52 compute-0 polkitd[6259]: Reloading rules
Oct 11 03:49:52 compute-0 polkitd[6259]: Collecting garbage unconditionally...
Oct 11 03:49:52 compute-0 polkitd[6259]: Loading rules from directory /etc/polkit-1/rules.d
Oct 11 03:49:52 compute-0 polkitd[6259]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 11 03:49:52 compute-0 polkitd[6259]: Finished loading, compiling and executing 4 rules
Oct 11 03:49:52 compute-0 polkitd[6259]: Reloading rules
Oct 11 03:49:52 compute-0 polkitd[6259]: Collecting garbage unconditionally...
Oct 11 03:49:52 compute-0 polkitd[6259]: Loading rules from directory /etc/polkit-1/rules.d
Oct 11 03:49:52 compute-0 polkitd[6259]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 11 03:49:52 compute-0 polkitd[6259]: Finished loading, compiling and executing 4 rules
Oct 11 03:49:52 compute-0 ceph-mon[74273]: pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:54 compute-0 groupadd[187562]: group added to /etc/group: name=ceph, GID=167
Oct 11 03:49:54 compute-0 groupadd[187562]: group added to /etc/gshadow: name=ceph
Oct 11 03:49:54 compute-0 groupadd[187562]: new group: name=ceph, GID=167
Oct 11 03:49:54 compute-0 useradd[187575]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Oct 11 03:49:54 compute-0 podman[187563]: 2025-10-11 03:49:54.15288471 +0000 UTC m=+0.109615872 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 11 03:49:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:49:54 compute-0 ceph-mon[74273]: pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:54 compute-0 sudo[187593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:49:54 compute-0 sudo[187593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:49:54 compute-0 sudo[187593]: pam_unix(sudo:session): session closed for user root
Oct 11 03:49:54 compute-0 sudo[187618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:49:54 compute-0 sudo[187618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:49:54 compute-0 sudo[187618]: pam_unix(sudo:session): session closed for user root
Oct 11 03:49:54 compute-0 sudo[187643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:49:54 compute-0 sudo[187643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:49:54 compute-0 sudo[187643]: pam_unix(sudo:session): session closed for user root
Oct 11 03:49:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:55 compute-0 sudo[187668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 03:49:55 compute-0 sudo[187668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:49:55 compute-0 podman[187760]: 2025-10-11 03:49:55.586898271 +0000 UTC m=+0.071612268 container exec 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 03:49:55 compute-0 podman[187760]: 2025-10-11 03:49:55.737756685 +0000 UTC m=+0.222470672 container exec_died 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:49:56 compute-0 sudo[187668]: pam_unix(sudo:session): session closed for user root
Oct 11 03:49:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:49:56 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:49:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:49:56 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:49:56 compute-0 sudo[188229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:49:56 compute-0 sudo[188229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:49:56 compute-0 sudo[188229]: pam_unix(sudo:session): session closed for user root
Oct 11 03:49:56 compute-0 sudo[188294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:49:56 compute-0 sudo[188294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:49:56 compute-0 sudo[188294]: pam_unix(sudo:session): session closed for user root
Oct 11 03:49:56 compute-0 sudo[188357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:49:56 compute-0 sudo[188357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:49:56 compute-0 sudo[188357]: pam_unix(sudo:session): session closed for user root
Oct 11 03:49:56 compute-0 ceph-mon[74273]: pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:56 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:49:56 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:49:56 compute-0 sudo[188424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 03:49:56 compute-0 sudo[188424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:49:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:57 compute-0 sudo[188424]: pam_unix(sudo:session): session closed for user root
Oct 11 03:49:57 compute-0 sshd[1006]: Received signal 15; terminating.
Oct 11 03:49:57 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Oct 11 03:49:57 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Oct 11 03:49:57 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Oct 11 03:49:57 compute-0 systemd[1]: sshd.service: Consumed 3.006s CPU time, no IO.
Oct 11 03:49:57 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Oct 11 03:49:57 compute-0 systemd[1]: Stopping sshd-keygen.target...
Oct 11 03:49:57 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 11 03:49:57 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 11 03:49:57 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 11 03:49:57 compute-0 systemd[1]: Reached target sshd-keygen.target.
Oct 11 03:49:57 compute-0 systemd[1]: Starting OpenSSH server daemon...
Oct 11 03:49:57 compute-0 sshd[188666]: Server listening on 0.0.0.0 port 22.
Oct 11 03:49:57 compute-0 sshd[188666]: Server listening on :: port 22.
Oct 11 03:49:57 compute-0 systemd[1]: Started OpenSSH server daemon.
Oct 11 03:49:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:49:57 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:49:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 03:49:57 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:49:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 03:49:57 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:49:57 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev aa31159a-a006-484f-a007-c1e3c6e685a6 does not exist
Oct 11 03:49:57 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 9105ff1d-3964-4897-8097-916d9732da13 does not exist
Oct 11 03:49:57 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 350e30a8-8d2d-4481-8a9b-4d9091259ed5 does not exist
Oct 11 03:49:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 03:49:57 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:49:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 03:49:57 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:49:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:49:57 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:49:57 compute-0 sudo[188668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:49:57 compute-0 sudo[188668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:49:57 compute-0 sudo[188668]: pam_unix(sudo:session): session closed for user root
Oct 11 03:49:57 compute-0 sudo[188702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:49:57 compute-0 sudo[188702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:49:57 compute-0 sudo[188702]: pam_unix(sudo:session): session closed for user root
Oct 11 03:49:57 compute-0 sudo[188733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:49:57 compute-0 sudo[188733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:49:57 compute-0 sudo[188733]: pam_unix(sudo:session): session closed for user root
Oct 11 03:49:57 compute-0 sudo[188768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 03:49:57 compute-0 sudo[188768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:49:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:49:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:49:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:49:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:49:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:49:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:49:57 compute-0 podman[188876]: 2025-10-11 03:49:57.944525472 +0000 UTC m=+0.034937660 container create 9ecb9b56f8e1d280c3cd2c713906baff6f28e4bc757381a7a4ecc6d5283def9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_haslett, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:49:57 compute-0 systemd[1]: Started libpod-conmon-9ecb9b56f8e1d280c3cd2c713906baff6f28e4bc757381a7a4ecc6d5283def9f.scope.
Oct 11 03:49:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:49:58 compute-0 podman[188876]: 2025-10-11 03:49:58.024572691 +0000 UTC m=+0.114984879 container init 9ecb9b56f8e1d280c3cd2c713906baff6f28e4bc757381a7a4ecc6d5283def9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:49:58 compute-0 podman[188876]: 2025-10-11 03:49:57.92806173 +0000 UTC m=+0.018473938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:49:58 compute-0 podman[188876]: 2025-10-11 03:49:58.032431007 +0000 UTC m=+0.122843195 container start 9ecb9b56f8e1d280c3cd2c713906baff6f28e4bc757381a7a4ecc6d5283def9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_haslett, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:49:58 compute-0 podman[188876]: 2025-10-11 03:49:58.035977075 +0000 UTC m=+0.126389283 container attach 9ecb9b56f8e1d280c3cd2c713906baff6f28e4bc757381a7a4ecc6d5283def9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_haslett, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 03:49:58 compute-0 strange_haslett[188903]: 167 167
Oct 11 03:49:58 compute-0 systemd[1]: libpod-9ecb9b56f8e1d280c3cd2c713906baff6f28e4bc757381a7a4ecc6d5283def9f.scope: Deactivated successfully.
Oct 11 03:49:58 compute-0 podman[188876]: 2025-10-11 03:49:58.038057052 +0000 UTC m=+0.128469240 container died 9ecb9b56f8e1d280c3cd2c713906baff6f28e4bc757381a7a4ecc6d5283def9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_haslett, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 03:49:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-30c895c919938fb862d3442014172fee1c7413967eb26828fa141f42ffecf316-merged.mount: Deactivated successfully.
Oct 11 03:49:58 compute-0 podman[188876]: 2025-10-11 03:49:58.074063441 +0000 UTC m=+0.164475629 container remove 9ecb9b56f8e1d280c3cd2c713906baff6f28e4bc757381a7a4ecc6d5283def9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_haslett, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 03:49:58 compute-0 systemd[1]: libpod-conmon-9ecb9b56f8e1d280c3cd2c713906baff6f28e4bc757381a7a4ecc6d5283def9f.scope: Deactivated successfully.
Oct 11 03:49:58 compute-0 podman[188954]: 2025-10-11 03:49:58.265567551 +0000 UTC m=+0.047642639 container create 065b82d3caa632c616462eeedb24b557af8036c6eb4f0d96a314a485447d5993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cartwright, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:49:58 compute-0 systemd[1]: Started libpod-conmon-065b82d3caa632c616462eeedb24b557af8036c6eb4f0d96a314a485447d5993.scope.
Oct 11 03:49:58 compute-0 podman[188954]: 2025-10-11 03:49:58.243464494 +0000 UTC m=+0.025539602 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:49:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:49:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ecf110151f11ad90683a9e2c2ad8b8397fca1b14586177b91e4c56486cb609b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:49:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ecf110151f11ad90683a9e2c2ad8b8397fca1b14586177b91e4c56486cb609b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:49:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ecf110151f11ad90683a9e2c2ad8b8397fca1b14586177b91e4c56486cb609b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:49:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ecf110151f11ad90683a9e2c2ad8b8397fca1b14586177b91e4c56486cb609b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:49:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ecf110151f11ad90683a9e2c2ad8b8397fca1b14586177b91e4c56486cb609b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:49:58 compute-0 podman[188954]: 2025-10-11 03:49:58.37748071 +0000 UTC m=+0.159555848 container init 065b82d3caa632c616462eeedb24b557af8036c6eb4f0d96a314a485447d5993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:49:58 compute-0 podman[188954]: 2025-10-11 03:49:58.386635467 +0000 UTC m=+0.168710585 container start 065b82d3caa632c616462eeedb24b557af8036c6eb4f0d96a314a485447d5993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:49:58 compute-0 podman[188954]: 2025-10-11 03:49:58.391130274 +0000 UTC m=+0.173205452 container attach 065b82d3caa632c616462eeedb24b557af8036c6eb4f0d96a314a485447d5993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cartwright, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:49:58 compute-0 ceph-mon[74273]: pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:49:58.649910) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154598649963, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2040, "num_deletes": 251, "total_data_size": 3463076, "memory_usage": 3512088, "flush_reason": "Manual Compaction"}
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154598671289, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3387807, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9669, "largest_seqno": 11708, "table_properties": {"data_size": 3378594, "index_size": 5835, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17844, "raw_average_key_size": 19, "raw_value_size": 3360277, "raw_average_value_size": 3664, "num_data_blocks": 265, "num_entries": 917, "num_filter_entries": 917, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760154368, "oldest_key_time": 1760154368, "file_creation_time": 1760154598, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 21440 microseconds, and 11265 cpu microseconds.
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:49:58.671352) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3387807 bytes OK
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:49:58.671378) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:49:58.672991) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:49:58.673038) EVENT_LOG_v1 {"time_micros": 1760154598673028, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:49:58.673061) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3454569, prev total WAL file size 3454569, number of live WAL files 2.
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:49:58.674352) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3308KB)], [26(5908KB)]
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154598674460, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9438474, "oldest_snapshot_seqno": -1}
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3677 keys, 7815468 bytes, temperature: kUnknown
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154598726542, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 7815468, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7787303, "index_size": 17879, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9221, "raw_key_size": 88284, "raw_average_key_size": 24, "raw_value_size": 7717336, "raw_average_value_size": 2098, "num_data_blocks": 774, "num_entries": 3677, "num_filter_entries": 3677, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153731, "oldest_key_time": 0, "file_creation_time": 1760154598, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:49:58.726824) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 7815468 bytes
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:49:58.728047) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.0 rd, 149.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 5.8 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4191, records dropped: 514 output_compression: NoCompression
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:49:58.728077) EVENT_LOG_v1 {"time_micros": 1760154598728063, "job": 10, "event": "compaction_finished", "compaction_time_micros": 52160, "compaction_time_cpu_micros": 32433, "output_level": 6, "num_output_files": 1, "total_output_size": 7815468, "num_input_records": 4191, "num_output_records": 3677, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154598729124, "job": 10, "event": "table_file_deletion", "file_number": 28}
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154598731098, "job": 10, "event": "table_file_deletion", "file_number": 26}
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:49:58.674201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:49:58.731185) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:49:58.731193) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:49:58.731196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:49:58.731199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:49:58 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:49:58.731202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:49:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:49:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:49:59 compute-0 nice_cartwright[188978]: --> passed data devices: 0 physical, 3 LVM
Oct 11 03:49:59 compute-0 nice_cartwright[188978]: --> relative data size: 1.0
Oct 11 03:49:59 compute-0 nice_cartwright[188978]: --> All data devices are unavailable
Oct 11 03:49:59 compute-0 systemd[1]: libpod-065b82d3caa632c616462eeedb24b557af8036c6eb4f0d96a314a485447d5993.scope: Deactivated successfully.
Oct 11 03:49:59 compute-0 systemd[1]: libpod-065b82d3caa632c616462eeedb24b557af8036c6eb4f0d96a314a485447d5993.scope: Consumed 1.051s CPU time.
Oct 11 03:49:59 compute-0 podman[188954]: 2025-10-11 03:49:59.520068896 +0000 UTC m=+1.302143984 container died 065b82d3caa632c616462eeedb24b557af8036c6eb4f0d96a314a485447d5993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 03:49:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ecf110151f11ad90683a9e2c2ad8b8397fca1b14586177b91e4c56486cb609b-merged.mount: Deactivated successfully.
Oct 11 03:49:59 compute-0 podman[188954]: 2025-10-11 03:49:59.591622729 +0000 UTC m=+1.373697827 container remove 065b82d3caa632c616462eeedb24b557af8036c6eb4f0d96a314a485447d5993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cartwright, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:49:59 compute-0 systemd[1]: libpod-conmon-065b82d3caa632c616462eeedb24b557af8036c6eb4f0d96a314a485447d5993.scope: Deactivated successfully.
Oct 11 03:49:59 compute-0 sudo[188768]: pam_unix(sudo:session): session closed for user root
Oct 11 03:49:59 compute-0 sudo[189101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:49:59 compute-0 sudo[189101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:49:59 compute-0 sudo[189101]: pam_unix(sudo:session): session closed for user root
Oct 11 03:49:59 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 11 03:49:59 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 11 03:49:59 compute-0 sudo[189128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:49:59 compute-0 sudo[189128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:49:59 compute-0 sudo[189128]: pam_unix(sudo:session): session closed for user root
Oct 11 03:49:59 compute-0 sudo[189170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:49:59 compute-0 sudo[189170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:49:59 compute-0 sudo[189170]: pam_unix(sudo:session): session closed for user root
Oct 11 03:49:59 compute-0 systemd[1]: Reloading.
Oct 11 03:50:00 compute-0 systemd-rc-local-generator[189260]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:50:00 compute-0 systemd-sysv-generator[189263]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:50:00 compute-0 sudo[189209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 03:50:00 compute-0 sudo[189209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:50:00 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 11 03:50:00 compute-0 podman[189756]: 2025-10-11 03:50:00.626389052 +0000 UTC m=+0.018650555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:50:00 compute-0 ceph-mon[74273]: pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:01 compute-0 anacron[1070]: Job `cron.weekly' started
Oct 11 03:50:01 compute-0 podman[189756]: 2025-10-11 03:50:01.158179368 +0000 UTC m=+0.550440891 container create 5af6bccdf53b35e0fdeb08db3a93d404c3b3d9ec433de7769fdd35ee19822c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bohr, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:50:01 compute-0 anacron[1070]: Job `cron.weekly' terminated
Oct 11 03:50:01 compute-0 systemd[1]: Started libpod-conmon-5af6bccdf53b35e0fdeb08db3a93d404c3b3d9ec433de7769fdd35ee19822c52.scope.
Oct 11 03:50:01 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:50:01 compute-0 podman[189756]: 2025-10-11 03:50:01.811993237 +0000 UTC m=+1.204254740 container init 5af6bccdf53b35e0fdeb08db3a93d404c3b3d9ec433de7769fdd35ee19822c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bohr, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 03:50:01 compute-0 podman[189756]: 2025-10-11 03:50:01.820876497 +0000 UTC m=+1.213138000 container start 5af6bccdf53b35e0fdeb08db3a93d404c3b3d9ec433de7769fdd35ee19822c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bohr, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 03:50:01 compute-0 podman[189756]: 2025-10-11 03:50:01.824940811 +0000 UTC m=+1.217202314 container attach 5af6bccdf53b35e0fdeb08db3a93d404c3b3d9ec433de7769fdd35ee19822c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bohr, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 03:50:01 compute-0 gracious_bohr[190874]: 167 167
Oct 11 03:50:01 compute-0 systemd[1]: libpod-5af6bccdf53b35e0fdeb08db3a93d404c3b3d9ec433de7769fdd35ee19822c52.scope: Deactivated successfully.
Oct 11 03:50:01 compute-0 conmon[190874]: conmon 5af6bccdf53b35e0fdeb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5af6bccdf53b35e0fdeb08db3a93d404c3b3d9ec433de7769fdd35ee19822c52.scope/container/memory.events
Oct 11 03:50:01 compute-0 podman[189756]: 2025-10-11 03:50:01.830645182 +0000 UTC m=+1.222906665 container died 5af6bccdf53b35e0fdeb08db3a93d404c3b3d9ec433de7769fdd35ee19822c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bohr, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 03:50:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-a96f3babc436bcf024f5b6036a0fea2fc0584f6f783699f95eca763d943fa489-merged.mount: Deactivated successfully.
Oct 11 03:50:01 compute-0 podman[189756]: 2025-10-11 03:50:01.887541502 +0000 UTC m=+1.279802995 container remove 5af6bccdf53b35e0fdeb08db3a93d404c3b3d9ec433de7769fdd35ee19822c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 03:50:01 compute-0 systemd[1]: libpod-conmon-5af6bccdf53b35e0fdeb08db3a93d404c3b3d9ec433de7769fdd35ee19822c52.scope: Deactivated successfully.
Oct 11 03:50:02 compute-0 podman[191127]: 2025-10-11 03:50:02.100931394 +0000 UTC m=+0.048748342 container create aa610360825437306ce44742ddf7530a4dec1575a8f72537700db890fad4baa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mclean, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:50:02 compute-0 systemd[1]: Started libpod-conmon-aa610360825437306ce44742ddf7530a4dec1575a8f72537700db890fad4baa5.scope.
Oct 11 03:50:02 compute-0 podman[191127]: 2025-10-11 03:50:02.081119177 +0000 UTC m=+0.028936145 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:50:02 compute-0 systemd[1]: Starting PackageKit Daemon...
Oct 11 03:50:02 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:50:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d3c6d4da707bd29c962322ddcc8cebd2dbf69118359d8624bb310f82cfbb02c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:50:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d3c6d4da707bd29c962322ddcc8cebd2dbf69118359d8624bb310f82cfbb02c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:50:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d3c6d4da707bd29c962322ddcc8cebd2dbf69118359d8624bb310f82cfbb02c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:50:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d3c6d4da707bd29c962322ddcc8cebd2dbf69118359d8624bb310f82cfbb02c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:50:02 compute-0 PackageKit[191263]: daemon start
Oct 11 03:50:02 compute-0 podman[191127]: 2025-10-11 03:50:02.20960218 +0000 UTC m=+0.157419148 container init aa610360825437306ce44742ddf7530a4dec1575a8f72537700db890fad4baa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 03:50:02 compute-0 podman[191127]: 2025-10-11 03:50:02.219007205 +0000 UTC m=+0.166824183 container start aa610360825437306ce44742ddf7530a4dec1575a8f72537700db890fad4baa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mclean, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:50:02 compute-0 podman[191127]: 2025-10-11 03:50:02.223048038 +0000 UTC m=+0.170864986 container attach aa610360825437306ce44742ddf7530a4dec1575a8f72537700db890fad4baa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:50:02 compute-0 systemd[1]: Started PackageKit Daemon.
Oct 11 03:50:02 compute-0 sudo[169326]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:02 compute-0 ceph-mon[74273]: pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]: {
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:     "0": [
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:         {
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "devices": [
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "/dev/loop3"
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             ],
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "lv_name": "ceph_lv0",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "lv_size": "21470642176",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "name": "ceph_lv0",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "tags": {
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.cluster_name": "ceph",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.crush_device_class": "",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.encrypted": "0",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.osd_id": "0",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.type": "block",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.vdo": "0"
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             },
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "type": "block",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "vg_name": "ceph_vg0"
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:         }
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:     ],
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:     "1": [
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:         {
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "devices": [
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "/dev/loop4"
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             ],
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "lv_name": "ceph_lv1",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "lv_size": "21470642176",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "name": "ceph_lv1",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "tags": {
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.cluster_name": "ceph",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.crush_device_class": "",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.encrypted": "0",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.osd_id": "1",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.type": "block",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.vdo": "0"
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             },
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "type": "block",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "vg_name": "ceph_vg1"
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:         }
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:     ],
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:     "2": [
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:         {
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "devices": [
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "/dev/loop5"
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             ],
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "lv_name": "ceph_lv2",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "lv_size": "21470642176",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "name": "ceph_lv2",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "tags": {
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.cluster_name": "ceph",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.crush_device_class": "",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.encrypted": "0",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.osd_id": "2",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.type": "block",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:                 "ceph.vdo": "0"
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             },
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "type": "block",
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:             "vg_name": "ceph_vg2"
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:         }
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]:     ]
Oct 11 03:50:02 compute-0 heuristic_mclean[191254]: }
Oct 11 03:50:02 compute-0 systemd[1]: libpod-aa610360825437306ce44742ddf7530a4dec1575a8f72537700db890fad4baa5.scope: Deactivated successfully.
Oct 11 03:50:02 compute-0 podman[191127]: 2025-10-11 03:50:02.984938047 +0000 UTC m=+0.932755005 container died aa610360825437306ce44742ddf7530a4dec1575a8f72537700db890fad4baa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 03:50:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d3c6d4da707bd29c962322ddcc8cebd2dbf69118359d8624bb310f82cfbb02c-merged.mount: Deactivated successfully.
Oct 11 03:50:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:03 compute-0 podman[191127]: 2025-10-11 03:50:03.054940166 +0000 UTC m=+1.002757154 container remove aa610360825437306ce44742ddf7530a4dec1575a8f72537700db890fad4baa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mclean, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 03:50:03 compute-0 systemd[1]: libpod-conmon-aa610360825437306ce44742ddf7530a4dec1575a8f72537700db890fad4baa5.scope: Deactivated successfully.
Oct 11 03:50:03 compute-0 sudo[189209]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:03 compute-0 sudo[192236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:50:03 compute-0 sudo[192236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:50:03 compute-0 sudo[192236]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:03 compute-0 sudo[192335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:50:03 compute-0 sudo[192335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:50:03 compute-0 sudo[192335]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:03 compute-0 sudo[192465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:50:03 compute-0 sudo[192465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:50:03 compute-0 sudo[192465]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:03 compute-0 sudo[192580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 03:50:03 compute-0 sudo[192580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:50:03 compute-0 sudo[192795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpvgytugiqohtnqerkydswlknvnsnxyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154602.8196278-336-159449381746307/AnsiballZ_systemd.py'
Oct 11 03:50:03 compute-0 sudo[192795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:03 compute-0 podman[193019]: 2025-10-11 03:50:03.807108741 +0000 UTC m=+0.060673077 container create bcac670f79d5adc1b52897899fdf374e1654c940a4569cb461d535c86af013d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:50:03 compute-0 systemd[1]: Started libpod-conmon-bcac670f79d5adc1b52897899fdf374e1654c940a4569cb461d535c86af013d0.scope.
Oct 11 03:50:03 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:50:03 compute-0 podman[193019]: 2025-10-11 03:50:03.779800123 +0000 UTC m=+0.033364519 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:50:03 compute-0 python3.9[192811]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 03:50:03 compute-0 podman[193019]: 2025-10-11 03:50:03.883120419 +0000 UTC m=+0.136684755 container init bcac670f79d5adc1b52897899fdf374e1654c940a4569cb461d535c86af013d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 03:50:03 compute-0 podman[193019]: 2025-10-11 03:50:03.891808494 +0000 UTC m=+0.145372810 container start bcac670f79d5adc1b52897899fdf374e1654c940a4569cb461d535c86af013d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:50:03 compute-0 hopeful_cray[193149]: 167 167
Oct 11 03:50:03 compute-0 podman[193019]: 2025-10-11 03:50:03.895638661 +0000 UTC m=+0.149203007 container attach bcac670f79d5adc1b52897899fdf374e1654c940a4569cb461d535c86af013d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 03:50:03 compute-0 podman[193019]: 2025-10-11 03:50:03.896923278 +0000 UTC m=+0.150487584 container died bcac670f79d5adc1b52897899fdf374e1654c940a4569cb461d535c86af013d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 03:50:03 compute-0 systemd[1]: libpod-bcac670f79d5adc1b52897899fdf374e1654c940a4569cb461d535c86af013d0.scope: Deactivated successfully.
Oct 11 03:50:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-0be0c1c64d5b514eba475afc01b4ade15e10700d836aca954faa1db2ac0477cd-merged.mount: Deactivated successfully.
Oct 11 03:50:03 compute-0 systemd[1]: Reloading.
Oct 11 03:50:03 compute-0 podman[193019]: 2025-10-11 03:50:03.938762244 +0000 UTC m=+0.192326550 container remove bcac670f79d5adc1b52897899fdf374e1654c940a4569cb461d535c86af013d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:50:04 compute-0 systemd-sysv-generator[193375]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:50:04 compute-0 systemd-rc-local-generator[193367]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:50:04 compute-0 podman[193422]: 2025-10-11 03:50:04.103453727 +0000 UTC m=+0.036636022 container create dfdd7f467cad89666658668220c9c020af8d16325667c858d23531cc685ed4b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nobel, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:50:04 compute-0 podman[193422]: 2025-10-11 03:50:04.088369782 +0000 UTC m=+0.021552097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:50:04 compute-0 systemd[1]: libpod-conmon-bcac670f79d5adc1b52897899fdf374e1654c940a4569cb461d535c86af013d0.scope: Deactivated successfully.
Oct 11 03:50:04 compute-0 systemd[1]: Started libpod-conmon-dfdd7f467cad89666658668220c9c020af8d16325667c858d23531cc685ed4b9.scope.
Oct 11 03:50:04 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:50:04 compute-0 sudo[192795]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/898b9f97db87b21060302ea131881344a98d6082fdc1ad8207fdd0798f600d9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:50:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/898b9f97db87b21060302ea131881344a98d6082fdc1ad8207fdd0798f600d9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:50:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/898b9f97db87b21060302ea131881344a98d6082fdc1ad8207fdd0798f600d9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:50:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/898b9f97db87b21060302ea131881344a98d6082fdc1ad8207fdd0798f600d9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:50:04 compute-0 podman[193422]: 2025-10-11 03:50:04.329669999 +0000 UTC m=+0.262852334 container init dfdd7f467cad89666658668220c9c020af8d16325667c858d23531cc685ed4b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 03:50:04 compute-0 podman[193422]: 2025-10-11 03:50:04.339813464 +0000 UTC m=+0.272995769 container start dfdd7f467cad89666658668220c9c020af8d16325667c858d23531cc685ed4b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nobel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:50:04 compute-0 podman[193422]: 2025-10-11 03:50:04.344469655 +0000 UTC m=+0.277651990 container attach dfdd7f467cad89666658668220c9c020af8d16325667c858d23531cc685ed4b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nobel, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:50:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:50:04 compute-0 ceph-mon[74273]: pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:04 compute-0 sudo[194247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jquitkasuvghfgbelbrrnhaxegxstwta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154604.4947531-336-36970842693144/AnsiballZ_systemd.py'
Oct 11 03:50:04 compute-0 sudo[194247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:05 compute-0 python3.9[194272]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 03:50:05 compute-0 systemd[1]: Reloading.
Oct 11 03:50:05 compute-0 xenodochial_nobel[193631]: {
Oct 11 03:50:05 compute-0 xenodochial_nobel[193631]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 03:50:05 compute-0 xenodochial_nobel[193631]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:50:05 compute-0 xenodochial_nobel[193631]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 03:50:05 compute-0 xenodochial_nobel[193631]:         "osd_id": 1,
Oct 11 03:50:05 compute-0 xenodochial_nobel[193631]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:50:05 compute-0 xenodochial_nobel[193631]:         "type": "bluestore"
Oct 11 03:50:05 compute-0 xenodochial_nobel[193631]:     },
Oct 11 03:50:05 compute-0 xenodochial_nobel[193631]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 03:50:05 compute-0 xenodochial_nobel[193631]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:50:05 compute-0 xenodochial_nobel[193631]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 03:50:05 compute-0 xenodochial_nobel[193631]:         "osd_id": 2,
Oct 11 03:50:05 compute-0 xenodochial_nobel[193631]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:50:05 compute-0 xenodochial_nobel[193631]:         "type": "bluestore"
Oct 11 03:50:05 compute-0 xenodochial_nobel[193631]:     },
Oct 11 03:50:05 compute-0 xenodochial_nobel[193631]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 03:50:05 compute-0 xenodochial_nobel[193631]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:50:05 compute-0 xenodochial_nobel[193631]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 03:50:05 compute-0 xenodochial_nobel[193631]:         "osd_id": 0,
Oct 11 03:50:05 compute-0 xenodochial_nobel[193631]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:50:05 compute-0 xenodochial_nobel[193631]:         "type": "bluestore"
Oct 11 03:50:05 compute-0 xenodochial_nobel[193631]:     }
Oct 11 03:50:05 compute-0 xenodochial_nobel[193631]: }
Oct 11 03:50:05 compute-0 podman[193422]: 2025-10-11 03:50:05.32276194 +0000 UTC m=+1.255944235 container died dfdd7f467cad89666658668220c9c020af8d16325667c858d23531cc685ed4b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nobel, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:50:05 compute-0 systemd-sysv-generator[194756]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:50:05 compute-0 systemd-rc-local-generator[194753]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:50:05 compute-0 systemd[1]: libpod-dfdd7f467cad89666658668220c9c020af8d16325667c858d23531cc685ed4b9.scope: Deactivated successfully.
Oct 11 03:50:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-898b9f97db87b21060302ea131881344a98d6082fdc1ad8207fdd0798f600d9a-merged.mount: Deactivated successfully.
Oct 11 03:50:05 compute-0 podman[193422]: 2025-10-11 03:50:05.592664311 +0000 UTC m=+1.525846646 container remove dfdd7f467cad89666658668220c9c020af8d16325667c858d23531cc685ed4b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:50:05 compute-0 sudo[194247]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:05 compute-0 systemd[1]: libpod-conmon-dfdd7f467cad89666658668220c9c020af8d16325667c858d23531cc685ed4b9.scope: Deactivated successfully.
Oct 11 03:50:05 compute-0 sudo[192580]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:50:05 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:50:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:50:05 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:50:05 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev ee91874a-5a37-4469-b0bb-f6af808a0413 does not exist
Oct 11 03:50:05 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 7343b5d1-4edc-4eed-ab3a-9c8d089c0ead does not exist
Oct 11 03:50:05 compute-0 sudo[195152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:50:05 compute-0 sudo[195152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:50:05 compute-0 sudo[195152]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:05 compute-0 sudo[195258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 03:50:05 compute-0 sudo[195258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:50:05 compute-0 sudo[195258]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:05 compute-0 sudo[195665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iljkekcqhaxjxylcokdglzytbjyxeilo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154605.7273893-336-24383209825608/AnsiballZ_systemd.py'
Oct 11 03:50:05 compute-0 sudo[195665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:06 compute-0 python3.9[195682]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 03:50:06 compute-0 systemd[1]: Reloading.
Oct 11 03:50:06 compute-0 systemd-rc-local-generator[196093]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:50:06 compute-0 systemd-sysv-generator[196097]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:50:06 compute-0 ceph-mon[74273]: pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:50:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:50:06 compute-0 sudo[195665]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:07 compute-0 sudo[196884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqbqzyzddbfhbuwnkqfpemfpkywdcmbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154606.8849857-336-214741695914297/AnsiballZ_systemd.py'
Oct 11 03:50:07 compute-0 sudo[196884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:07 compute-0 python3.9[196913]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 03:50:07 compute-0 systemd[1]: Reloading.
Oct 11 03:50:07 compute-0 systemd-rc-local-generator[197334]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:50:07 compute-0 systemd-sysv-generator[197348]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:50:08 compute-0 sudo[196884]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:08 compute-0 auditd[700]: Audit daemon rotating log files
Oct 11 03:50:08 compute-0 sudo[198088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxmjrgdxlunvcygvvfsipwvjnwypaygy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154608.2401462-365-132217699574927/AnsiballZ_systemd.py'
Oct 11 03:50:08 compute-0 sudo[198088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:08 compute-0 ceph-mon[74273]: pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:08 compute-0 python3.9[198113]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 03:50:08 compute-0 systemd[1]: Reloading.
Oct 11 03:50:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:09 compute-0 systemd-rc-local-generator[198596]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:50:09 compute-0 systemd-sysv-generator[198602]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:50:09 compute-0 sudo[198088]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:09 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 11 03:50:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:50:09 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 11 03:50:09 compute-0 systemd[1]: man-db-cache-update.service: Consumed 11.976s CPU time.
Oct 11 03:50:09 compute-0 systemd[1]: run-r092e510e8aa64c7b961ee0f05437b08b.service: Deactivated successfully.
Oct 11 03:50:09 compute-0 sudo[199011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxjextbilstoazctyegkqrwqvfegitht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154609.4428484-365-113164691868966/AnsiballZ_systemd.py'
Oct 11 03:50:09 compute-0 sudo[199011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:10 compute-0 python3.9[199013]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 03:50:10 compute-0 systemd[1]: Reloading.
Oct 11 03:50:10 compute-0 systemd-sysv-generator[199047]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:50:10 compute-0 systemd-rc-local-generator[199044]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:50:10 compute-0 sudo[199011]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:10 compute-0 ceph-mon[74273]: pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:11 compute-0 sudo[199201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcaabbwsyobjabtdcgtnfnyqudwehyqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154610.646122-365-118349118713710/AnsiballZ_systemd.py'
Oct 11 03:50:11 compute-0 sudo[199201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:11 compute-0 python3.9[199203]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 03:50:11 compute-0 systemd[1]: Reloading.
Oct 11 03:50:11 compute-0 systemd-sysv-generator[199239]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:50:11 compute-0 systemd-rc-local-generator[199236]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:50:11 compute-0 sudo[199201]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:12 compute-0 sudo[199391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmqwrecnygbuacuqwkcnjhwtdfgrqlfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154612.043385-365-138617628725346/AnsiballZ_systemd.py'
Oct 11 03:50:12 compute-0 sudo[199391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:12 compute-0 ceph-mon[74273]: pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:12 compute-0 python3.9[199393]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 03:50:12 compute-0 sudo[199391]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:13 compute-0 sudo[199546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsehjrlsovqqfenbiupmpoiyepewelwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154613.0919278-365-158166485262291/AnsiballZ_systemd.py'
Oct 11 03:50:13 compute-0 sudo[199546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:13 compute-0 python3.9[199548]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 03:50:13 compute-0 systemd[1]: Reloading.
Oct 11 03:50:14 compute-0 systemd-rc-local-generator[199577]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:50:14 compute-0 systemd-sysv-generator[199581]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:50:14 compute-0 sudo[199546]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:50:14 compute-0 ceph-mon[74273]: pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:14 compute-0 sudo[199736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utecxcfwrzoogjcedqamivrfzhsolhbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154614.500083-401-94268128701272/AnsiballZ_systemd.py'
Oct 11 03:50:14 compute-0 sudo[199736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:15 compute-0 python3.9[199738]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 03:50:15 compute-0 systemd[1]: Reloading.
Oct 11 03:50:15 compute-0 systemd-sysv-generator[199772]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:50:15 compute-0 systemd-rc-local-generator[199769]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:50:15 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Oct 11 03:50:15 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct 11 03:50:15 compute-0 sudo[199736]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:16 compute-0 sudo[199929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orlporecopwojkidznjylxywlcglbiig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154615.934617-409-118620977446007/AnsiballZ_systemd.py'
Oct 11 03:50:16 compute-0 sudo[199929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:16 compute-0 python3.9[199931]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 03:50:16 compute-0 ceph-mon[74273]: pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:16 compute-0 sudo[199929]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:17 compute-0 sudo[200084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skbkmzpirvkpbffypdafdlhiulzyxans ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154616.872389-409-40821399219980/AnsiballZ_systemd.py'
Oct 11 03:50:17 compute-0 sudo[200084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:17 compute-0 python3.9[200086]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 03:50:17 compute-0 sudo[200084]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:18 compute-0 sudo[200239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gormxjxqbbdnczvunhgfwfkpqbjcrfhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154617.85307-409-273698325779600/AnsiballZ_systemd.py'
Oct 11 03:50:18 compute-0 sudo[200239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:18 compute-0 python3.9[200241]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 03:50:18 compute-0 ceph-mon[74273]: pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:18 compute-0 sudo[200239]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:18 compute-0 podman[200243]: 2025-10-11 03:50:18.767050218 +0000 UTC m=+0.121073346 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 03:50:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:19 compute-0 sudo[200418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhjryledzsfmwwrfuykxlbonrumcrpwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154618.8443613-409-13509088864757/AnsiballZ_systemd.py'
Oct 11 03:50:19 compute-0 sudo[200418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:50:19 compute-0 python3.9[200420]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 03:50:19 compute-0 sudo[200418]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:20 compute-0 sudo[200573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhbuwhvyxmzugwiydufrgtvqobbazxrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154619.8055406-409-173424271027656/AnsiballZ_systemd.py'
Oct 11 03:50:20 compute-0 sudo[200573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:20 compute-0 python3.9[200575]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 03:50:20 compute-0 sudo[200573]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:20 compute-0 ceph-mon[74273]: pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_03:50:20
Oct 11 03:50:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 03:50:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 03:50:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['.mgr', 'images', 'backups', 'cephfs.cephfs.meta', 'vms', 'volumes', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log']
Oct 11 03:50:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 03:50:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:50:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:50:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:50:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:50:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:50:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:50:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 03:50:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:50:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 03:50:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:50:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:50:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:50:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:50:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:50:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:50:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:50:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:21 compute-0 sudo[200728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntdamqgiufdramumfwajihdoxzxuedrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154620.7248232-409-61258513610791/AnsiballZ_systemd.py'
Oct 11 03:50:21 compute-0 sudo[200728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:21 compute-0 python3.9[200730]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 03:50:21 compute-0 sudo[200728]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:22 compute-0 sudo[200883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioblzpymzrlbfemrnrrdxofjfafaiqro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154621.7070084-409-175095769978610/AnsiballZ_systemd.py'
Oct 11 03:50:22 compute-0 sudo[200883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:22 compute-0 python3.9[200885]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 03:50:22 compute-0 sudo[200883]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:22 compute-0 ceph-mon[74273]: pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:50:22.933 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 03:50:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:50:22.934 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 03:50:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:50:22.934 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 03:50:22 compute-0 sudo[201038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoelozxppmlrmejvjezkweiznzezxshi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154622.632512-409-68962651781717/AnsiballZ_systemd.py'
Oct 11 03:50:22 compute-0 sudo[201038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:23 compute-0 python3.9[201040]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 03:50:23 compute-0 sudo[201038]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:23 compute-0 sudo[201193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wccawzmujgsunqmkvnxvkxfbixjklvjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154623.558798-409-251121780946522/AnsiballZ_systemd.py'
Oct 11 03:50:23 compute-0 sudo[201193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:24 compute-0 python3.9[201195]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 03:50:24 compute-0 sudo[201193]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:24 compute-0 podman[201197]: 2025-10-11 03:50:24.376015753 +0000 UTC m=+0.081942816 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 03:50:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:50:24 compute-0 ceph-mon[74273]: pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:24 compute-0 sudo[201367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhojxkokjvyfxxyamsdsgxiugsxmwskn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154624.524472-409-8107589845733/AnsiballZ_systemd.py'
Oct 11 03:50:24 compute-0 sudo[201367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:25 compute-0 python3.9[201369]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 03:50:25 compute-0 sudo[201367]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:25 compute-0 sudo[201522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxgqfdrnszlpqfgymvtfizmzrcujmgmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154625.4477253-409-102287563804047/AnsiballZ_systemd.py'
Oct 11 03:50:25 compute-0 sudo[201522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:26 compute-0 python3.9[201524]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 03:50:26 compute-0 sudo[201522]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:26 compute-0 sudo[201677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlfpxvimjzyxpwlmrsvyyhtlitmnbkij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154626.3846273-409-31721104114188/AnsiballZ_systemd.py'
Oct 11 03:50:26 compute-0 sudo[201677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:26 compute-0 ceph-mon[74273]: pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:27 compute-0 python3.9[201679]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 03:50:27 compute-0 sudo[201677]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:27 compute-0 sudo[201832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghewfnauvhwwbcjfytowtsfukgbnslpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154627.273501-409-148752364639091/AnsiballZ_systemd.py'
Oct 11 03:50:27 compute-0 sudo[201832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:27 compute-0 python3.9[201834]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 03:50:28 compute-0 ceph-mon[74273]: pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:29 compute-0 sudo[201832]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:50:29 compute-0 sudo[201987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czsrdjywdawlyrftrhbbkqsypjkobwoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154629.1982446-409-65048930218655/AnsiballZ_systemd.py'
Oct 11 03:50:29 compute-0 sudo[201987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:29 compute-0 python3.9[201989]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 03:50:29 compute-0 sudo[201987]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:30 compute-0 sudo[202142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raborddkkcjvmybsuzbgyfklitzvatct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154630.3380103-511-117493789975365/AnsiballZ_file.py'
Oct 11 03:50:30 compute-0 sudo[202142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:30 compute-0 ceph-mon[74273]: pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:30 compute-0 python3.9[202144]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:50:30 compute-0 sudo[202142]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 03:50:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:50:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 03:50:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:50:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:50:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:50:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:50:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:50:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:50:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:50:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:50:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:50:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 03:50:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:50:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:50:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:50:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 03:50:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:50:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 03:50:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:50:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:50:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:50:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 03:50:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:31 compute-0 sudo[202294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koctyfcstgynpjmcdpmguioqeuxawdrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154631.1210668-511-68428344583545/AnsiballZ_file.py'
Oct 11 03:50:31 compute-0 sudo[202294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:31 compute-0 python3.9[202296]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:50:31 compute-0 sudo[202294]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:32 compute-0 sudo[202446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqmzpviiaglpwgsstijwpmvuidwoylfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154631.8963819-511-97736640333243/AnsiballZ_file.py'
Oct 11 03:50:32 compute-0 sudo[202446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:32 compute-0 python3.9[202448]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:50:32 compute-0 sudo[202446]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:32 compute-0 ceph-mon[74273]: pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:33 compute-0 sudo[202598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjiodtoazoxihwwqgfuuwqheqfevharg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154632.825869-511-76262152927158/AnsiballZ_file.py'
Oct 11 03:50:33 compute-0 sudo[202598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:33 compute-0 python3.9[202600]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:50:33 compute-0 sudo[202598]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:34 compute-0 sudo[202750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbvrrpbplgpomoohbnzxuognuinilkwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154633.6480484-511-18312493740597/AnsiballZ_file.py'
Oct 11 03:50:34 compute-0 sudo[202750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:34 compute-0 python3.9[202752]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:50:34 compute-0 sudo[202750]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:50:34 compute-0 ceph-mon[74273]: pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:34 compute-0 sudo[202902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsgeascgafkrbobamkwmijkpyssdzgoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154634.4697797-511-201960438594291/AnsiballZ_file.py'
Oct 11 03:50:34 compute-0 sudo[202902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:35 compute-0 python3.9[202904]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:50:35 compute-0 sudo[202902]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:35 compute-0 sudo[203054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qipjfvrrdhnfojkoncvfdifzzxuvapti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154635.2924085-554-273837253481719/AnsiballZ_stat.py'
Oct 11 03:50:35 compute-0 sudo[203054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:36 compute-0 python3.9[203056]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:50:36 compute-0 sudo[203054]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:36 compute-0 ceph-mon[74273]: pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:36 compute-0 sudo[203179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzkvpryirehembwqfzkhslregmlfidwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154635.2924085-554-273837253481719/AnsiballZ_copy.py'
Oct 11 03:50:36 compute-0 sudo[203179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:37 compute-0 python3.9[203181]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760154635.2924085-554-273837253481719/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:50:37 compute-0 sudo[203179]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:37 compute-0 sudo[203331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lozcnowfarugnqooxnwapxmniuazbptb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154637.3335917-554-130422462292212/AnsiballZ_stat.py'
Oct 11 03:50:37 compute-0 sudo[203331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:37 compute-0 python3.9[203333]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:50:37 compute-0 sudo[203331]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:38 compute-0 sudo[203456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqownwpvphardespxjlpfwmscpcbktsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154637.3335917-554-130422462292212/AnsiballZ_copy.py'
Oct 11 03:50:38 compute-0 sudo[203456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:38 compute-0 python3.9[203458]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760154637.3335917-554-130422462292212/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:50:38 compute-0 sudo[203456]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:38 compute-0 ceph-mon[74273]: pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:39 compute-0 sudo[203608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugttkpyrlygyxcrjhtfobhbiagqawssr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154638.749306-554-41718688478470/AnsiballZ_stat.py'
Oct 11 03:50:39 compute-0 sudo[203608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:39 compute-0 python3.9[203610]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:50:39 compute-0 sudo[203608]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:50:39 compute-0 sudo[203733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzfveivpyvucbtfcdirpsgytqginzopu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154638.749306-554-41718688478470/AnsiballZ_copy.py'
Oct 11 03:50:39 compute-0 sudo[203733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:40 compute-0 python3.9[203735]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760154638.749306-554-41718688478470/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:50:40 compute-0 sudo[203733]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:40 compute-0 sudo[203885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdwrkqdjezicbjhxessxtxjtdspqwocb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154640.2790236-554-34234021104002/AnsiballZ_stat.py'
Oct 11 03:50:40 compute-0 sudo[203885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:40 compute-0 ceph-mon[74273]: pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:40 compute-0 python3.9[203887]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:50:40 compute-0 sudo[203885]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:41 compute-0 sudo[204010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muegrbuwrlohqqijrqsbkebjiqttvcwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154640.2790236-554-34234021104002/AnsiballZ_copy.py'
Oct 11 03:50:41 compute-0 sudo[204010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:41 compute-0 python3.9[204012]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760154640.2790236-554-34234021104002/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:50:41 compute-0 sudo[204010]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:42 compute-0 sudo[204162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myzpnlulbafwpnltzuiwmphivwjkmbtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154641.7736537-554-190404892376835/AnsiballZ_stat.py'
Oct 11 03:50:42 compute-0 sudo[204162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:42 compute-0 python3.9[204164]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:50:42 compute-0 sudo[204162]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:42 compute-0 ceph-mon[74273]: pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:42 compute-0 sudo[204287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyacabywxlpsowljybawfsjbxfafogkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154641.7736537-554-190404892376835/AnsiballZ_copy.py'
Oct 11 03:50:42 compute-0 sudo[204287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:43 compute-0 python3.9[204289]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760154641.7736537-554-190404892376835/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:50:43 compute-0 sudo[204287]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:43 compute-0 sudo[204439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyovnrwnyiwgrrnhvblyufvmjlvlzpir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154643.416015-554-46456459640479/AnsiballZ_stat.py'
Oct 11 03:50:43 compute-0 sudo[204439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:43 compute-0 python3.9[204441]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:50:44 compute-0 sudo[204439]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:44 compute-0 sudo[204564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmdhlkvqmhwcksbrouarzcyurjqjkmje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154643.416015-554-46456459640479/AnsiballZ_copy.py'
Oct 11 03:50:44 compute-0 sudo[204564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:50:44 compute-0 python3.9[204566]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760154643.416015-554-46456459640479/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:50:44 compute-0 sudo[204564]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:44 compute-0 ceph-mon[74273]: pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:45 compute-0 sudo[204716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxohwrovixysllealqgmegjkpgpieijl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154644.832699-554-177393866565851/AnsiballZ_stat.py'
Oct 11 03:50:45 compute-0 sudo[204716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:45 compute-0 python3.9[204718]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:50:45 compute-0 sudo[204716]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:45 compute-0 sudo[204839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsjrsojmymdfclmluvvtonkluisdggpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154644.832699-554-177393866565851/AnsiballZ_copy.py'
Oct 11 03:50:45 compute-0 sudo[204839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:45 compute-0 python3.9[204841]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760154644.832699-554-177393866565851/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:50:46 compute-0 sudo[204839]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:46 compute-0 sudo[204991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naggmmwofdloxsnjymwjvsemckjftiqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154646.1911714-554-187045559981540/AnsiballZ_stat.py'
Oct 11 03:50:46 compute-0 sudo[204991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:46 compute-0 python3.9[204993]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:50:46 compute-0 ceph-mon[74273]: pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:46 compute-0 sudo[204991]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:47 compute-0 sudo[205116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awkiajrezabxnbamuvtriphojaxdynhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154646.1911714-554-187045559981540/AnsiballZ_copy.py'
Oct 11 03:50:47 compute-0 sudo[205116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:47 compute-0 python3.9[205118]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760154646.1911714-554-187045559981540/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:50:47 compute-0 sudo[205116]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:48 compute-0 sudo[205268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbyhxujchhbaktwvdtbkjrxbecopacnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154647.7395806-667-165649341107055/AnsiballZ_command.py'
Oct 11 03:50:48 compute-0 sudo[205268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:48 compute-0 python3.9[205270]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct 11 03:50:48 compute-0 sudo[205268]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:48 compute-0 ceph-mon[74273]: pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:49 compute-0 sudo[205432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wisqkzjjyglbbfiwfptzdvdvzajiijlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154648.6564646-676-170264922863368/AnsiballZ_file.py'
Oct 11 03:50:49 compute-0 sudo[205432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:49 compute-0 podman[205395]: 2025-10-11 03:50:49.122639054 +0000 UTC m=+0.124323738 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 11 03:50:49 compute-0 python3.9[205441]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:50:49 compute-0 sudo[205432]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:50:49 compute-0 sudo[205597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnzznsvkepbkwhtkjsoqgkpygezgqkpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154649.470728-676-58304385604390/AnsiballZ_file.py'
Oct 11 03:50:49 compute-0 sudo[205597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:50 compute-0 python3.9[205599]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:50:50 compute-0 sudo[205597]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:50 compute-0 sudo[205749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuahdqeivvdvmiszsohwventolpqcmwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154650.2338843-676-61218134792034/AnsiballZ_file.py'
Oct 11 03:50:50 compute-0 sudo[205749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:50:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:50:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:50:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:50:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:50:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:50:50 compute-0 python3.9[205751]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:50:50 compute-0 ceph-mon[74273]: pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:50 compute-0 sudo[205749]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:51 compute-0 sudo[205901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rswdyaxgzbkdflmvycvlkoiiiyunjlqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154651.0405188-676-217074711110272/AnsiballZ_file.py'
Oct 11 03:50:51 compute-0 sudo[205901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:51 compute-0 python3.9[205903]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:50:51 compute-0 sudo[205901]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:51 compute-0 ceph-mon[74273]: pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:52 compute-0 sudo[206053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joiqgzsogkharhalpwsdyqofwzmrkloa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154651.79089-676-72546728279152/AnsiballZ_file.py'
Oct 11 03:50:52 compute-0 sudo[206053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:52 compute-0 python3.9[206055]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:50:52 compute-0 sudo[206053]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:52 compute-0 sudo[206205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqpznzqcuzqqchbzmosvylnomffvnnjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154652.4698777-676-18496574073951/AnsiballZ_file.py'
Oct 11 03:50:52 compute-0 sudo[206205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:53 compute-0 python3.9[206207]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:50:53 compute-0 sudo[206205]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:53 compute-0 sudo[206357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sosnsalbicfsianfqtuyzltxbfzpekdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154653.2507017-676-97651174905730/AnsiballZ_file.py'
Oct 11 03:50:53 compute-0 sudo[206357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:53 compute-0 python3.9[206359]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:50:53 compute-0 sudo[206357]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:54 compute-0 ceph-mon[74273]: pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:54 compute-0 sudo[206509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzxchuuizkidrhzcysdrpledhpavhgvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154654.0267658-676-207471249541652/AnsiballZ_file.py'
Oct 11 03:50:54 compute-0 sudo[206509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:50:54 compute-0 podman[206511]: 2025-10-11 03:50:54.505457809 +0000 UTC m=+0.080963798 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 03:50:54 compute-0 python3.9[206512]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:50:54 compute-0 sudo[206509]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:55 compute-0 sudo[206681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-equoqnkuxdrjjhqwxjygafxwqlslnaas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154654.8182044-676-99205490192078/AnsiballZ_file.py'
Oct 11 03:50:55 compute-0 sudo[206681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:55 compute-0 python3.9[206683]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:50:55 compute-0 sudo[206681]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:55 compute-0 sudo[206833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdyhxmynqndqawqnbfmdlutdmhhfvcqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154655.560301-676-127496173067036/AnsiballZ_file.py'
Oct 11 03:50:55 compute-0 sudo[206833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:56 compute-0 ceph-mon[74273]: pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:56 compute-0 python3.9[206835]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:50:56 compute-0 sudo[206833]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:56 compute-0 sudo[206985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqupjnzbspahosmukwdbyytbucwijgra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154656.3005023-676-134824728008018/AnsiballZ_file.py'
Oct 11 03:50:56 compute-0 sudo[206985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:56 compute-0 python3.9[206987]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:50:56 compute-0 sudo[206985]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:57 compute-0 sudo[207137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-badumygviepvnqdxchhewxiccsyhfgiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154656.9889207-676-233225127851854/AnsiballZ_file.py'
Oct 11 03:50:57 compute-0 sudo[207137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:57 compute-0 python3.9[207139]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:50:57 compute-0 sudo[207137]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:57 compute-0 sudo[207289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhudownvhcdlfysbentagoahyccdfgxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154657.6841881-676-274150432756972/AnsiballZ_file.py'
Oct 11 03:50:57 compute-0 sudo[207289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:58 compute-0 ceph-mon[74273]: pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:58 compute-0 python3.9[207291]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:50:58 compute-0 sudo[207289]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:58 compute-0 sudo[207441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apcrvdipziuztdzwgblhfldupgqqsdsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154658.3285754-676-219758522735924/AnsiballZ_file.py'
Oct 11 03:50:58 compute-0 sudo[207441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:58 compute-0 python3.9[207443]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:50:58 compute-0 sudo[207441]: pam_unix(sudo:session): session closed for user root
Oct 11 03:50:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:50:59 compute-0 sudo[207593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkduzgfpolthppqdcmzowthhdzxljknh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154659.0907495-775-219379050775401/AnsiballZ_stat.py'
Oct 11 03:50:59 compute-0 sudo[207593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:50:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:50:59 compute-0 python3.9[207595]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:50:59 compute-0 sudo[207593]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:00 compute-0 sudo[207716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oywydbctqeicjydxamlmzfhmjpobudfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154659.0907495-775-219379050775401/AnsiballZ_copy.py'
Oct 11 03:51:00 compute-0 sudo[207716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:00 compute-0 ceph-mon[74273]: pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:00 compute-0 python3.9[207718]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154659.0907495-775-219379050775401/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:00 compute-0 sudo[207716]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:00 compute-0 sudo[207868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khqfkmxwthgrsdiinveqlaqqijvgxaqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154660.412115-775-99949519122218/AnsiballZ_stat.py'
Oct 11 03:51:00 compute-0 sudo[207868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:00 compute-0 python3.9[207870]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:51:00 compute-0 sudo[207868]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:01 compute-0 sudo[207991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlkcaspsvgunqxiwqomanxtsnahiqokw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154660.412115-775-99949519122218/AnsiballZ_copy.py'
Oct 11 03:51:01 compute-0 sudo[207991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:01 compute-0 python3.9[207993]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154660.412115-775-99949519122218/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:01 compute-0 sudo[207991]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:02 compute-0 sudo[208143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsexjahntuwdzgjlvclvccdjfyfkekis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154661.7357044-775-75464812782894/AnsiballZ_stat.py'
Oct 11 03:51:02 compute-0 sudo[208143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:02 compute-0 ceph-mon[74273]: pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:02 compute-0 python3.9[208145]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:51:02 compute-0 sudo[208143]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:02 compute-0 sudo[208266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnjgqnjaboxsjphxuvsruoahhmucejrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154661.7357044-775-75464812782894/AnsiballZ_copy.py'
Oct 11 03:51:02 compute-0 sudo[208266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:02 compute-0 python3.9[208268]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154661.7357044-775-75464812782894/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:03 compute-0 sudo[208266]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:03 compute-0 sudo[208418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giluwoukgyflgshomngibsjwppmiqgpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154663.1932802-775-176377734048570/AnsiballZ_stat.py'
Oct 11 03:51:03 compute-0 sudo[208418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:03 compute-0 python3.9[208420]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:51:03 compute-0 sudo[208418]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:04 compute-0 ceph-mon[74273]: pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:04 compute-0 sudo[208541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmhcatvauaczxzgkonhgnkeghfergsym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154663.1932802-775-176377734048570/AnsiballZ_copy.py'
Oct 11 03:51:04 compute-0 sudo[208541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:04 compute-0 python3.9[208543]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154663.1932802-775-176377734048570/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:04 compute-0 sudo[208541]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:51:05 compute-0 sudo[208693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roksnhhcqhgymfpmgwkzdqxocyofhqmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154664.6331956-775-84188567922846/AnsiballZ_stat.py'
Oct 11 03:51:05 compute-0 sudo[208693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:05 compute-0 python3.9[208695]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:51:05 compute-0 sudo[208693]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:05 compute-0 sudo[208816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmenozazfeymyxibfiywlxurmiuwoabx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154664.6331956-775-84188567922846/AnsiballZ_copy.py'
Oct 11 03:51:05 compute-0 sudo[208816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:05 compute-0 sudo[208819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:51:05 compute-0 sudo[208819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:51:05 compute-0 sudo[208819]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:05 compute-0 python3.9[208818]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154664.6331956-775-84188567922846/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:05 compute-0 sudo[208816]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:05 compute-0 sudo[208844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:51:05 compute-0 sudo[208844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:51:05 compute-0 sudo[208844]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:06 compute-0 sudo[208869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:51:06 compute-0 sudo[208869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:51:06 compute-0 sudo[208869]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:06 compute-0 sudo[208915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 03:51:06 compute-0 sudo[208915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:51:06 compute-0 ceph-mon[74273]: pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:06 compute-0 sudo[209083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-camoibwwqktzsdhdnjabaqlwtwkkbvkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154666.1376789-775-44751803271307/AnsiballZ_stat.py'
Oct 11 03:51:06 compute-0 sudo[209083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:06 compute-0 python3.9[209087]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:51:06 compute-0 sudo[209083]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:06 compute-0 sudo[208915]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:51:06 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:51:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 03:51:06 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:51:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 03:51:06 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:51:06 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 1f106121-219c-4a9c-8d07-52153d5f23e0 does not exist
Oct 11 03:51:06 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 5bb9fae3-4e24-441f-89db-2dddd94a59c9 does not exist
Oct 11 03:51:06 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 798cf478-096a-46a1-ae95-2189e5345bcf does not exist
Oct 11 03:51:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 03:51:06 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:51:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 03:51:06 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:51:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:51:06 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:51:06 compute-0 sudo[209114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:51:06 compute-0 sudo[209114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:51:06 compute-0 sudo[209114]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:06 compute-0 sudo[209173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:51:06 compute-0 sudo[209173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:51:06 compute-0 sudo[209173]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:06 compute-0 sudo[209222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:51:06 compute-0 sudo[209222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:51:06 compute-0 sudo[209222]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:07 compute-0 sudo[209271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 03:51:07 compute-0 sudo[209271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:51:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:07 compute-0 sudo[209322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kljxgpdjoqbeeghqihambarjlpfhkmkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154666.1376789-775-44751803271307/AnsiballZ_copy.py'
Oct 11 03:51:07 compute-0 sudo[209322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:07 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:51:07 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:51:07 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:51:07 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:51:07 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:51:07 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:51:07 compute-0 python3.9[209324]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154666.1376789-775-44751803271307/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:07 compute-0 sudo[209322]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:07 compute-0 podman[209366]: 2025-10-11 03:51:07.430391233 +0000 UTC m=+0.060720848 container create e1786771cb2a47f4a3efeec7266363d62bcf117640b37c07eb391b7c769733e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:51:07 compute-0 systemd[1]: Started libpod-conmon-e1786771cb2a47f4a3efeec7266363d62bcf117640b37c07eb391b7c769733e7.scope.
Oct 11 03:51:07 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:51:07 compute-0 podman[209366]: 2025-10-11 03:51:07.409577695 +0000 UTC m=+0.039907400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:51:07 compute-0 podman[209366]: 2025-10-11 03:51:07.514096698 +0000 UTC m=+0.144426343 container init e1786771cb2a47f4a3efeec7266363d62bcf117640b37c07eb391b7c769733e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shockley, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 03:51:07 compute-0 podman[209366]: 2025-10-11 03:51:07.52531934 +0000 UTC m=+0.155648965 container start e1786771cb2a47f4a3efeec7266363d62bcf117640b37c07eb391b7c769733e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 03:51:07 compute-0 podman[209366]: 2025-10-11 03:51:07.53182515 +0000 UTC m=+0.162154815 container attach e1786771cb2a47f4a3efeec7266363d62bcf117640b37c07eb391b7c769733e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 03:51:07 compute-0 trusting_shockley[209430]: 167 167
Oct 11 03:51:07 compute-0 systemd[1]: libpod-e1786771cb2a47f4a3efeec7266363d62bcf117640b37c07eb391b7c769733e7.scope: Deactivated successfully.
Oct 11 03:51:07 compute-0 podman[209366]: 2025-10-11 03:51:07.533110366 +0000 UTC m=+0.163439981 container died e1786771cb2a47f4a3efeec7266363d62bcf117640b37c07eb391b7c769733e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shockley, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:51:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cbb393545d21c00dff0eed4edcf659b5de51e3d5e5f81a66cadb0bff3b368a5-merged.mount: Deactivated successfully.
Oct 11 03:51:07 compute-0 podman[209366]: 2025-10-11 03:51:07.593615007 +0000 UTC m=+0.223944632 container remove e1786771cb2a47f4a3efeec7266363d62bcf117640b37c07eb391b7c769733e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 03:51:07 compute-0 systemd[1]: libpod-conmon-e1786771cb2a47f4a3efeec7266363d62bcf117640b37c07eb391b7c769733e7.scope: Deactivated successfully.
Oct 11 03:51:07 compute-0 sudo[209570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yizxfjcqbzbomezsxodhjizdbyxpciqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154667.4715996-775-136706761911600/AnsiballZ_stat.py'
Oct 11 03:51:07 compute-0 sudo[209570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:07 compute-0 podman[209533]: 2025-10-11 03:51:07.808242409 +0000 UTC m=+0.066253662 container create ea8a69e30ba55380f0631bcc899a27a2a469952d3595ef313438f7c1759f680f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:51:07 compute-0 systemd[1]: Started libpod-conmon-ea8a69e30ba55380f0631bcc899a27a2a469952d3595ef313438f7c1759f680f.scope.
Oct 11 03:51:07 compute-0 podman[209533]: 2025-10-11 03:51:07.78634264 +0000 UTC m=+0.044353893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:51:07 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:51:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1edc9246dc973def5b5f27e713875829110637815574677ce9b8381dfd08709/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:51:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1edc9246dc973def5b5f27e713875829110637815574677ce9b8381dfd08709/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:51:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1edc9246dc973def5b5f27e713875829110637815574677ce9b8381dfd08709/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:51:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1edc9246dc973def5b5f27e713875829110637815574677ce9b8381dfd08709/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:51:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1edc9246dc973def5b5f27e713875829110637815574677ce9b8381dfd08709/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:51:07 compute-0 podman[209533]: 2025-10-11 03:51:07.914902781 +0000 UTC m=+0.172914034 container init ea8a69e30ba55380f0631bcc899a27a2a469952d3595ef313438f7c1759f680f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wiles, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:51:07 compute-0 podman[209533]: 2025-10-11 03:51:07.926433262 +0000 UTC m=+0.184444475 container start ea8a69e30ba55380f0631bcc899a27a2a469952d3595ef313438f7c1759f680f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wiles, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:51:07 compute-0 podman[209533]: 2025-10-11 03:51:07.934243128 +0000 UTC m=+0.192254391 container attach ea8a69e30ba55380f0631bcc899a27a2a469952d3595ef313438f7c1759f680f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:51:07 compute-0 python3.9[209574]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:51:08 compute-0 sudo[209570]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:08 compute-0 sudo[209702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhubaindnlplfqrxgmesnmxxjyyzywpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154667.4715996-775-136706761911600/AnsiballZ_copy.py'
Oct 11 03:51:08 compute-0 sudo[209702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:08 compute-0 python3.9[209704]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154667.4715996-775-136706761911600/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:08 compute-0 sudo[209702]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:08 compute-0 ceph-mon[74273]: pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:09 compute-0 vigorous_wiles[209577]: --> passed data devices: 0 physical, 3 LVM
Oct 11 03:51:09 compute-0 vigorous_wiles[209577]: --> relative data size: 1.0
Oct 11 03:51:09 compute-0 vigorous_wiles[209577]: --> All data devices are unavailable
Oct 11 03:51:09 compute-0 systemd[1]: libpod-ea8a69e30ba55380f0631bcc899a27a2a469952d3595ef313438f7c1759f680f.scope: Deactivated successfully.
Oct 11 03:51:09 compute-0 systemd[1]: libpod-ea8a69e30ba55380f0631bcc899a27a2a469952d3595ef313438f7c1759f680f.scope: Consumed 1.075s CPU time.
Oct 11 03:51:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:09 compute-0 conmon[209577]: conmon ea8a69e30ba55380f063 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ea8a69e30ba55380f0631bcc899a27a2a469952d3595ef313438f7c1759f680f.scope/container/memory.events
Oct 11 03:51:09 compute-0 podman[209533]: 2025-10-11 03:51:09.055570255 +0000 UTC m=+1.313581508 container died ea8a69e30ba55380f0631bcc899a27a2a469952d3595ef313438f7c1759f680f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wiles, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:51:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1edc9246dc973def5b5f27e713875829110637815574677ce9b8381dfd08709-merged.mount: Deactivated successfully.
Oct 11 03:51:09 compute-0 podman[209533]: 2025-10-11 03:51:09.107107676 +0000 UTC m=+1.365118889 container remove ea8a69e30ba55380f0631bcc899a27a2a469952d3595ef313438f7c1759f680f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wiles, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:51:09 compute-0 systemd[1]: libpod-conmon-ea8a69e30ba55380f0631bcc899a27a2a469952d3595ef313438f7c1759f680f.scope: Deactivated successfully.
Oct 11 03:51:09 compute-0 sudo[209271]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:09 compute-0 sudo[209861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:51:09 compute-0 sudo[209861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:51:09 compute-0 sudo[209861]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:09 compute-0 sudo[209915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlwxfvhmafvjdbzqdiptasaawidiullc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154668.8948753-775-167009366713075/AnsiballZ_stat.py'
Oct 11 03:51:09 compute-0 sudo[209915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:09 compute-0 sudo[209916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:51:09 compute-0 sudo[209916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:51:09 compute-0 sudo[209916]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:09 compute-0 sudo[209943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:51:09 compute-0 sudo[209943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:51:09 compute-0 sudo[209943]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:09 compute-0 sudo[209968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 03:51:09 compute-0 sudo[209968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:51:09 compute-0 python3.9[209929]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:51:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:51:09 compute-0 sudo[209915]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:09 compute-0 podman[210115]: 2025-10-11 03:51:09.845813976 +0000 UTC m=+0.045173346 container create 9a6453298e1e01e04dedf4c77b8820ca53e20f59228d72dc8e1ff93573223376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 03:51:09 compute-0 systemd[1]: Started libpod-conmon-9a6453298e1e01e04dedf4c77b8820ca53e20f59228d72dc8e1ff93573223376.scope.
Oct 11 03:51:09 compute-0 sudo[210169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bahzldperyoylgibslxcevdrgsxxoztd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154668.8948753-775-167009366713075/AnsiballZ_copy.py'
Oct 11 03:51:09 compute-0 sudo[210169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:09 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:51:09 compute-0 podman[210115]: 2025-10-11 03:51:09.826269743 +0000 UTC m=+0.025629143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:51:09 compute-0 podman[210115]: 2025-10-11 03:51:09.930261581 +0000 UTC m=+0.129620961 container init 9a6453298e1e01e04dedf4c77b8820ca53e20f59228d72dc8e1ff93573223376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 03:51:09 compute-0 podman[210115]: 2025-10-11 03:51:09.936291199 +0000 UTC m=+0.135650549 container start 9a6453298e1e01e04dedf4c77b8820ca53e20f59228d72dc8e1ff93573223376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mcclintock, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:51:09 compute-0 pensive_mcclintock[210170]: 167 167
Oct 11 03:51:09 compute-0 systemd[1]: libpod-9a6453298e1e01e04dedf4c77b8820ca53e20f59228d72dc8e1ff93573223376.scope: Deactivated successfully.
Oct 11 03:51:09 compute-0 podman[210115]: 2025-10-11 03:51:09.941054891 +0000 UTC m=+0.140414261 container attach 9a6453298e1e01e04dedf4c77b8820ca53e20f59228d72dc8e1ff93573223376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mcclintock, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:51:09 compute-0 podman[210115]: 2025-10-11 03:51:09.94174796 +0000 UTC m=+0.141107320 container died 9a6453298e1e01e04dedf4c77b8820ca53e20f59228d72dc8e1ff93573223376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mcclintock, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 03:51:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-34b1d4dae8e3e82dc7ec99e6098476c4ae8c2b80550609b35f2aa2e774db5829-merged.mount: Deactivated successfully.
Oct 11 03:51:09 compute-0 podman[210115]: 2025-10-11 03:51:09.991591025 +0000 UTC m=+0.190950375 container remove 9a6453298e1e01e04dedf4c77b8820ca53e20f59228d72dc8e1ff93573223376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mcclintock, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 03:51:10 compute-0 systemd[1]: libpod-conmon-9a6453298e1e01e04dedf4c77b8820ca53e20f59228d72dc8e1ff93573223376.scope: Deactivated successfully.
Oct 11 03:51:10 compute-0 python3.9[210174]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154668.8948753-775-167009366713075/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:10 compute-0 sudo[210169]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:10 compute-0 podman[210197]: 2025-10-11 03:51:10.179573346 +0000 UTC m=+0.037751329 container create 1a87fcb6fc67a738f5321137da4d402f6294463df7e41ea581fee1aa696cbda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_carson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Oct 11 03:51:10 compute-0 systemd[1]: Started libpod-conmon-1a87fcb6fc67a738f5321137da4d402f6294463df7e41ea581fee1aa696cbda3.scope.
Oct 11 03:51:10 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:51:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d500a49d02d9387050bfc38f8283f4fc2dde5fcc9f9a9ff898ac8df3a6b904f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:51:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d500a49d02d9387050bfc38f8283f4fc2dde5fcc9f9a9ff898ac8df3a6b904f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:51:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d500a49d02d9387050bfc38f8283f4fc2dde5fcc9f9a9ff898ac8df3a6b904f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:51:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d500a49d02d9387050bfc38f8283f4fc2dde5fcc9f9a9ff898ac8df3a6b904f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:51:10 compute-0 podman[210197]: 2025-10-11 03:51:10.161908266 +0000 UTC m=+0.020086259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:51:10 compute-0 podman[210197]: 2025-10-11 03:51:10.263004674 +0000 UTC m=+0.121182657 container init 1a87fcb6fc67a738f5321137da4d402f6294463df7e41ea581fee1aa696cbda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_carson, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:51:10 compute-0 podman[210197]: 2025-10-11 03:51:10.271572282 +0000 UTC m=+0.129750265 container start 1a87fcb6fc67a738f5321137da4d402f6294463df7e41ea581fee1aa696cbda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_carson, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:51:10 compute-0 podman[210197]: 2025-10-11 03:51:10.274807322 +0000 UTC m=+0.132985325 container attach 1a87fcb6fc67a738f5321137da4d402f6294463df7e41ea581fee1aa696cbda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_carson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:51:10 compute-0 sudo[210368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjsfaekiojmynrsnlcxlogikryrukrmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154670.2438433-775-128524521249324/AnsiballZ_stat.py'
Oct 11 03:51:10 compute-0 sudo[210368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:10 compute-0 ceph-mon[74273]: pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:10 compute-0 python3.9[210370]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:51:10 compute-0 sudo[210368]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:11 compute-0 gallant_carson[210251]: {
Oct 11 03:51:11 compute-0 gallant_carson[210251]:     "0": [
Oct 11 03:51:11 compute-0 gallant_carson[210251]:         {
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "devices": [
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "/dev/loop3"
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             ],
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "lv_name": "ceph_lv0",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "lv_size": "21470642176",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "name": "ceph_lv0",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "tags": {
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.cluster_name": "ceph",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.crush_device_class": "",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.encrypted": "0",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.osd_id": "0",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.type": "block",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.vdo": "0"
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             },
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "type": "block",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "vg_name": "ceph_vg0"
Oct 11 03:51:11 compute-0 gallant_carson[210251]:         }
Oct 11 03:51:11 compute-0 gallant_carson[210251]:     ],
Oct 11 03:51:11 compute-0 gallant_carson[210251]:     "1": [
Oct 11 03:51:11 compute-0 gallant_carson[210251]:         {
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "devices": [
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "/dev/loop4"
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             ],
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "lv_name": "ceph_lv1",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "lv_size": "21470642176",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "name": "ceph_lv1",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "tags": {
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.cluster_name": "ceph",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.crush_device_class": "",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.encrypted": "0",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.osd_id": "1",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.type": "block",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.vdo": "0"
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             },
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "type": "block",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "vg_name": "ceph_vg1"
Oct 11 03:51:11 compute-0 gallant_carson[210251]:         }
Oct 11 03:51:11 compute-0 gallant_carson[210251]:     ],
Oct 11 03:51:11 compute-0 gallant_carson[210251]:     "2": [
Oct 11 03:51:11 compute-0 gallant_carson[210251]:         {
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "devices": [
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "/dev/loop5"
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             ],
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "lv_name": "ceph_lv2",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "lv_size": "21470642176",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "name": "ceph_lv2",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "tags": {
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.cluster_name": "ceph",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.crush_device_class": "",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.encrypted": "0",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.osd_id": "2",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.type": "block",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:                 "ceph.vdo": "0"
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             },
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "type": "block",
Oct 11 03:51:11 compute-0 gallant_carson[210251]:             "vg_name": "ceph_vg2"
Oct 11 03:51:11 compute-0 gallant_carson[210251]:         }
Oct 11 03:51:11 compute-0 gallant_carson[210251]:     ]
Oct 11 03:51:11 compute-0 gallant_carson[210251]: }
Oct 11 03:51:11 compute-0 systemd[1]: libpod-1a87fcb6fc67a738f5321137da4d402f6294463df7e41ea581fee1aa696cbda3.scope: Deactivated successfully.
Oct 11 03:51:11 compute-0 podman[210197]: 2025-10-11 03:51:11.046213779 +0000 UTC m=+0.904391782 container died 1a87fcb6fc67a738f5321137da4d402f6294463df7e41ea581fee1aa696cbda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_carson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 03:51:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d500a49d02d9387050bfc38f8283f4fc2dde5fcc9f9a9ff898ac8df3a6b904f-merged.mount: Deactivated successfully.
Oct 11 03:51:11 compute-0 podman[210197]: 2025-10-11 03:51:11.12292306 +0000 UTC m=+0.981101043 container remove 1a87fcb6fc67a738f5321137da4d402f6294463df7e41ea581fee1aa696cbda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_carson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:51:11 compute-0 systemd[1]: libpod-conmon-1a87fcb6fc67a738f5321137da4d402f6294463df7e41ea581fee1aa696cbda3.scope: Deactivated successfully.
Oct 11 03:51:11 compute-0 sudo[209968]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:11 compute-0 sudo[210463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:51:11 compute-0 sudo[210463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:51:11 compute-0 sudo[210463]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:11 compute-0 sudo[210546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqjdlqrskudwopjdykyinotkamhbldgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154670.2438433-775-128524521249324/AnsiballZ_copy.py'
Oct 11 03:51:11 compute-0 sudo[210546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:11 compute-0 sudo[210524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:51:11 compute-0 sudo[210524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:51:11 compute-0 sudo[210524]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:11 compute-0 sudo[210562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:51:11 compute-0 sudo[210562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:51:11 compute-0 sudo[210562]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:11 compute-0 python3.9[210559]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154670.2438433-775-128524521249324/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:11 compute-0 sudo[210587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 03:51:11 compute-0 sudo[210587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:51:11 compute-0 sudo[210546]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:11 compute-0 podman[210748]: 2025-10-11 03:51:11.97434969 +0000 UTC m=+0.067126295 container create 42c24334e4a1dc76e9a294cd2099d07bc9475b54eaf2feb605fa8aa7c602829e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cerf, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:51:12 compute-0 sudo[210812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxjirddwcqisvcfjqgoptafkhzzwjtpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154671.6945038-775-195398117567271/AnsiballZ_stat.py'
Oct 11 03:51:12 compute-0 podman[210748]: 2025-10-11 03:51:11.952325749 +0000 UTC m=+0.045102364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:51:12 compute-0 sudo[210812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:12 compute-0 systemd[1]: Started libpod-conmon-42c24334e4a1dc76e9a294cd2099d07bc9475b54eaf2feb605fa8aa7c602829e.scope.
Oct 11 03:51:12 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:51:12 compute-0 podman[210748]: 2025-10-11 03:51:12.123903925 +0000 UTC m=+0.216680530 container init 42c24334e4a1dc76e9a294cd2099d07bc9475b54eaf2feb605fa8aa7c602829e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 03:51:12 compute-0 podman[210748]: 2025-10-11 03:51:12.135821466 +0000 UTC m=+0.228598041 container start 42c24334e4a1dc76e9a294cd2099d07bc9475b54eaf2feb605fa8aa7c602829e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:51:12 compute-0 podman[210748]: 2025-10-11 03:51:12.139730854 +0000 UTC m=+0.232507429 container attach 42c24334e4a1dc76e9a294cd2099d07bc9475b54eaf2feb605fa8aa7c602829e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:51:12 compute-0 optimistic_cerf[210818]: 167 167
Oct 11 03:51:12 compute-0 systemd[1]: libpod-42c24334e4a1dc76e9a294cd2099d07bc9475b54eaf2feb605fa8aa7c602829e.scope: Deactivated successfully.
Oct 11 03:51:12 compute-0 podman[210748]: 2025-10-11 03:51:12.145769992 +0000 UTC m=+0.238546587 container died 42c24334e4a1dc76e9a294cd2099d07bc9475b54eaf2feb605fa8aa7c602829e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cerf, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 03:51:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac92b5d7c24ad5a78bcda546815e12b258b538e5cc0ff4ab4620183d47363101-merged.mount: Deactivated successfully.
Oct 11 03:51:12 compute-0 podman[210748]: 2025-10-11 03:51:12.199087503 +0000 UTC m=+0.291864108 container remove 42c24334e4a1dc76e9a294cd2099d07bc9475b54eaf2feb605fa8aa7c602829e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cerf, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:51:12 compute-0 systemd[1]: libpod-conmon-42c24334e4a1dc76e9a294cd2099d07bc9475b54eaf2feb605fa8aa7c602829e.scope: Deactivated successfully.
Oct 11 03:51:12 compute-0 python3.9[210817]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:51:12 compute-0 sudo[210812]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:12 compute-0 podman[210867]: 2025-10-11 03:51:12.431254702 +0000 UTC m=+0.058690561 container create d6091170b0dbd9f5b12dfe443eb5f696d62b67d1ec38ced017f66ae8dde13c2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 03:51:12 compute-0 systemd[1]: Started libpod-conmon-d6091170b0dbd9f5b12dfe443eb5f696d62b67d1ec38ced017f66ae8dde13c2d.scope.
Oct 11 03:51:12 compute-0 podman[210867]: 2025-10-11 03:51:12.399992494 +0000 UTC m=+0.027428443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:51:12 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:51:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e45f10c18482730158fae11947cc1111411c3ea9999f6e49b22ca2d5e1e85992/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:51:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e45f10c18482730158fae11947cc1111411c3ea9999f6e49b22ca2d5e1e85992/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:51:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e45f10c18482730158fae11947cc1111411c3ea9999f6e49b22ca2d5e1e85992/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:51:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e45f10c18482730158fae11947cc1111411c3ea9999f6e49b22ca2d5e1e85992/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:51:12 compute-0 podman[210867]: 2025-10-11 03:51:12.551517381 +0000 UTC m=+0.178953280 container init d6091170b0dbd9f5b12dfe443eb5f696d62b67d1ec38ced017f66ae8dde13c2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_fermat, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:51:12 compute-0 podman[210867]: 2025-10-11 03:51:12.56296987 +0000 UTC m=+0.190405729 container start d6091170b0dbd9f5b12dfe443eb5f696d62b67d1ec38ced017f66ae8dde13c2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_fermat, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:51:12 compute-0 podman[210867]: 2025-10-11 03:51:12.573252655 +0000 UTC m=+0.200688514 container attach d6091170b0dbd9f5b12dfe443eb5f696d62b67d1ec38ced017f66ae8dde13c2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_fermat, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:51:12 compute-0 sudo[210983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjqiwilskwuhluhaaasbofglczmmhwyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154671.6945038-775-195398117567271/AnsiballZ_copy.py'
Oct 11 03:51:12 compute-0 sudo[210983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:12 compute-0 ceph-mon[74273]: pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:12 compute-0 python3.9[210985]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154671.6945038-775-195398117567271/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:12 compute-0 sudo[210983]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:13 compute-0 sudo[211151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvzwdnoeeagosmszsbaaoaqjpcnyztyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154673.0832427-775-5826178695764/AnsiballZ_stat.py'
Oct 11 03:51:13 compute-0 sudo[211151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:13 compute-0 inspiring_fermat[210928]: {
Oct 11 03:51:13 compute-0 inspiring_fermat[210928]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 03:51:13 compute-0 inspiring_fermat[210928]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:51:13 compute-0 inspiring_fermat[210928]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 03:51:13 compute-0 inspiring_fermat[210928]:         "osd_id": 1,
Oct 11 03:51:13 compute-0 inspiring_fermat[210928]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:51:13 compute-0 inspiring_fermat[210928]:         "type": "bluestore"
Oct 11 03:51:13 compute-0 inspiring_fermat[210928]:     },
Oct 11 03:51:13 compute-0 inspiring_fermat[210928]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 03:51:13 compute-0 inspiring_fermat[210928]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:51:13 compute-0 inspiring_fermat[210928]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 03:51:13 compute-0 inspiring_fermat[210928]:         "osd_id": 2,
Oct 11 03:51:13 compute-0 inspiring_fermat[210928]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:51:13 compute-0 inspiring_fermat[210928]:         "type": "bluestore"
Oct 11 03:51:13 compute-0 inspiring_fermat[210928]:     },
Oct 11 03:51:13 compute-0 inspiring_fermat[210928]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 03:51:13 compute-0 inspiring_fermat[210928]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:51:13 compute-0 inspiring_fermat[210928]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 03:51:13 compute-0 inspiring_fermat[210928]:         "osd_id": 0,
Oct 11 03:51:13 compute-0 inspiring_fermat[210928]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:51:13 compute-0 inspiring_fermat[210928]:         "type": "bluestore"
Oct 11 03:51:13 compute-0 inspiring_fermat[210928]:     }
Oct 11 03:51:13 compute-0 inspiring_fermat[210928]: }
Oct 11 03:51:13 compute-0 systemd[1]: libpod-d6091170b0dbd9f5b12dfe443eb5f696d62b67d1ec38ced017f66ae8dde13c2d.scope: Deactivated successfully.
Oct 11 03:51:13 compute-0 podman[210867]: 2025-10-11 03:51:13.64154676 +0000 UTC m=+1.268982619 container died d6091170b0dbd9f5b12dfe443eb5f696d62b67d1ec38ced017f66ae8dde13c2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_fermat, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:51:13 compute-0 systemd[1]: libpod-d6091170b0dbd9f5b12dfe443eb5f696d62b67d1ec38ced017f66ae8dde13c2d.scope: Consumed 1.086s CPU time.
Oct 11 03:51:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-e45f10c18482730158fae11947cc1111411c3ea9999f6e49b22ca2d5e1e85992-merged.mount: Deactivated successfully.
Oct 11 03:51:13 compute-0 podman[210867]: 2025-10-11 03:51:13.693372249 +0000 UTC m=+1.320808108 container remove d6091170b0dbd9f5b12dfe443eb5f696d62b67d1ec38ced017f66ae8dde13c2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_fermat, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:51:13 compute-0 systemd[1]: libpod-conmon-d6091170b0dbd9f5b12dfe443eb5f696d62b67d1ec38ced017f66ae8dde13c2d.scope: Deactivated successfully.
Oct 11 03:51:13 compute-0 python3.9[211156]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:51:13 compute-0 sudo[210587]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:51:13 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:51:13 compute-0 sudo[211151]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:51:13 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:51:13 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 3b6a145f-1cd7-42f2-9f59-37fd17d87ab5 does not exist
Oct 11 03:51:13 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev c8ead4bd-da87-4724-9c12-ec6ead608a69 does not exist
Oct 11 03:51:13 compute-0 sudo[211178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:51:13 compute-0 sudo[211178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:51:13 compute-0 sudo[211178]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:13 compute-0 sudo[211226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 03:51:13 compute-0 sudo[211226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:51:13 compute-0 sudo[211226]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:14 compute-0 sudo[211348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfqypqmhkuxkmcanyslvrqjmhzcbhoto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154673.0832427-775-5826178695764/AnsiballZ_copy.py'
Oct 11 03:51:14 compute-0 sudo[211348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:14 compute-0 python3.9[211350]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154673.0832427-775-5826178695764/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:14 compute-0 sudo[211348]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:51:14 compute-0 ceph-mon[74273]: pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:51:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:51:14 compute-0 sudo[211500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqabvuczmjjpamaaqcminaiplifxbwfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154674.584714-775-31953498574355/AnsiballZ_stat.py'
Oct 11 03:51:14 compute-0 sudo[211500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:15 compute-0 python3.9[211502]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:51:15 compute-0 sudo[211500]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:15 compute-0 sudo[211623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brfbrzmrieflnylctwhoazoykeqfteov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154674.584714-775-31953498574355/AnsiballZ_copy.py'
Oct 11 03:51:15 compute-0 sudo[211623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:15 compute-0 python3.9[211625]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154674.584714-775-31953498574355/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:15 compute-0 sudo[211623]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:16 compute-0 sudo[211775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcvljxdultknegmdkneyphuqvcyhxjho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154675.9559286-775-66903223420679/AnsiballZ_stat.py'
Oct 11 03:51:16 compute-0 sudo[211775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:16 compute-0 python3.9[211777]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:51:16 compute-0 sudo[211775]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:16 compute-0 ceph-mon[74273]: pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:16 compute-0 sudo[211898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wokmjmqissstcjrqwdizdfwbkdopjzwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154675.9559286-775-66903223420679/AnsiballZ_copy.py'
Oct 11 03:51:16 compute-0 sudo[211898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:17 compute-0 python3.9[211900]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154675.9559286-775-66903223420679/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:17 compute-0 sudo[211898]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:17 compute-0 sudo[212050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eomzaltgndsadrqedovrbrytvfhmlrsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154677.361957-775-238852729731300/AnsiballZ_stat.py'
Oct 11 03:51:17 compute-0 sudo[212050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:17 compute-0 python3.9[212052]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:51:17 compute-0 sudo[212050]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:18 compute-0 sudo[212173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agoimsuehwubcimdstjwebijrrvdljbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154677.361957-775-238852729731300/AnsiballZ_copy.py'
Oct 11 03:51:18 compute-0 sudo[212173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:18 compute-0 python3.9[212175]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154677.361957-775-238852729731300/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:18 compute-0 sudo[212173]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:18 compute-0 ceph-mon[74273]: pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:19 compute-0 python3.9[212325]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:51:19 compute-0 podman[212326]: 2025-10-11 03:51:19.449545789 +0000 UTC m=+0.136820952 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 03:51:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:51:20 compute-0 sudo[212503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxifjzaflaemjygoqrihflcdvtnxkrwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154679.6037886-981-130269666562066/AnsiballZ_seboolean.py'
Oct 11 03:51:20 compute-0 sudo[212503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:20 compute-0 python3.9[212505]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct 11 03:51:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_03:51:20
Oct 11 03:51:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 03:51:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 03:51:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'default.rgw.log', 'images', 'vms', '.rgw.root', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr']
Oct 11 03:51:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 03:51:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:51:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:51:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:51:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:51:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:51:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:51:20 compute-0 ceph-mon[74273]: pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 03:51:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 03:51:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:51:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:51:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:51:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:51:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:51:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:51:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:51:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:51:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:21 compute-0 sudo[212503]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:22 compute-0 sudo[212659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itumxadjdnblxhfvutyawdmlbnsyrrlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154681.846672-989-111118977387570/AnsiballZ_copy.py'
Oct 11 03:51:22 compute-0 dbus-broker-launch[810]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Oct 11 03:51:22 compute-0 sudo[212659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:22 compute-0 python3.9[212661]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:22 compute-0 sudo[212659]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:22 compute-0 ceph-mon[74273]: pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:51:22.935 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 03:51:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:51:22.937 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 03:51:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:51:22.937 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 03:51:22 compute-0 sudo[212811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoidlwefmczujdjqsbpbzpopqiuaokxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154682.6170692-989-47634775377757/AnsiballZ_copy.py'
Oct 11 03:51:22 compute-0 sudo[212811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:23 compute-0 python3.9[212813]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:23 compute-0 sudo[212811]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:23 compute-0 sudo[212963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llmmwzfpwhymxzsekyuhsuxojfgscevu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154683.4293306-989-212103867263963/AnsiballZ_copy.py'
Oct 11 03:51:23 compute-0 sudo[212963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:23 compute-0 python3.9[212965]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:23 compute-0 sudo[212963]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:24 compute-0 sudo[213115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxeivpwtuhwtwkcmsyedigmlhekufmcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154684.136584-989-106072341239816/AnsiballZ_copy.py'
Oct 11 03:51:24 compute-0 sudo[213115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:51:24 compute-0 python3.9[213117]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:24 compute-0 sudo[213115]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:24 compute-0 ceph-mon[74273]: pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:25 compute-0 podman[213241]: 2025-10-11 03:51:25.204647057 +0000 UTC m=+0.056144480 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 11 03:51:25 compute-0 sudo[213286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqsvuwzudzvhjfeqtvllssynkbrocaii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154684.8475854-989-58666551411294/AnsiballZ_copy.py'
Oct 11 03:51:25 compute-0 sudo[213286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:25 compute-0 python3.9[213288]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:25 compute-0 sudo[213286]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:25 compute-0 sudo[213438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nldhzukkaucrheiubgfirepxirvqzmnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154685.6076014-1025-91525025940462/AnsiballZ_copy.py'
Oct 11 03:51:25 compute-0 sudo[213438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:26 compute-0 python3.9[213440]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:26 compute-0 sudo[213438]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:26 compute-0 sudo[213590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wupznnmpklzoklfbhcqqlcwailzgfdqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154686.30316-1025-228647872965127/AnsiballZ_copy.py'
Oct 11 03:51:26 compute-0 sudo[213590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:26 compute-0 ceph-mon[74273]: pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:26 compute-0 python3.9[213592]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:26 compute-0 sudo[213590]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:27 compute-0 sudo[213742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azbwcckzfdhpgbdrhtmtnzkfmbfqnmoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154687.0408902-1025-172626155002312/AnsiballZ_copy.py'
Oct 11 03:51:27 compute-0 sudo[213742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:27 compute-0 python3.9[213744]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:27 compute-0 sudo[213742]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:28 compute-0 sudo[213894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tblgwnhdckzyxcnpafoxuyfveaczuhop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154687.8679078-1025-153638010023516/AnsiballZ_copy.py'
Oct 11 03:51:28 compute-0 sudo[213894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:28 compute-0 python3.9[213896]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:28 compute-0 sudo[213894]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:28 compute-0 ceph-mon[74273]: pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:28 compute-0 sudo[214046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rulgumquaqsdfahrazedtdjlsrgtyzsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154688.623044-1025-158493983230816/AnsiballZ_copy.py'
Oct 11 03:51:28 compute-0 sudo[214046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:29 compute-0 python3.9[214048]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:29 compute-0 sudo[214046]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:51:29 compute-0 sudo[214198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgyuhqefjbgyntvygdvyfvrgreiaekmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154689.4363174-1061-95265722714438/AnsiballZ_systemd.py'
Oct 11 03:51:29 compute-0 sudo[214198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:30 compute-0 python3.9[214200]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 03:51:30 compute-0 systemd[1]: Reloading.
Oct 11 03:51:30 compute-0 systemd-rc-local-generator[214228]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:51:30 compute-0 systemd-sysv-generator[214231]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:51:30 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Oct 11 03:51:30 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Oct 11 03:51:30 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Oct 11 03:51:30 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct 11 03:51:30 compute-0 systemd[1]: Starting libvirt logging daemon...
Oct 11 03:51:30 compute-0 systemd[1]: Started libvirt logging daemon.
Oct 11 03:51:30 compute-0 sudo[214198]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:30 compute-0 ceph-mon[74273]: pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 03:51:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:51:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 03:51:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:51:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:51:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:51:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:51:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:51:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:51:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:51:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:51:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:51:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 03:51:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:51:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:51:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:51:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 03:51:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:51:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 03:51:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:51:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:51:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:51:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 03:51:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:31 compute-0 sudo[214390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfkvvgsqlwuypxtrkcwsfjhgkvotfhpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154691.0171173-1061-203194986016737/AnsiballZ_systemd.py'
Oct 11 03:51:31 compute-0 sudo[214390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:31 compute-0 python3.9[214392]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 03:51:31 compute-0 systemd[1]: Reloading.
Oct 11 03:51:31 compute-0 systemd-rc-local-generator[214419]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:51:31 compute-0 systemd-sysv-generator[214424]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:51:32 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct 11 03:51:32 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Oct 11 03:51:32 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct 11 03:51:32 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct 11 03:51:32 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct 11 03:51:32 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct 11 03:51:32 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct 11 03:51:32 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Oct 11 03:51:32 compute-0 systemd[1]: Started libvirt nodedev daemon.
Oct 11 03:51:32 compute-0 sudo[214390]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:32 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct 11 03:51:32 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct 11 03:51:32 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct 11 03:51:32 compute-0 sudo[214614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxmwnxpcsyrobaqxnelgvtkzietjfnhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154692.3912413-1061-235908984500157/AnsiballZ_systemd.py'
Oct 11 03:51:32 compute-0 sudo[214614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:32 compute-0 ceph-mon[74273]: pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:33 compute-0 python3.9[214616]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 03:51:33 compute-0 systemd[1]: Reloading.
Oct 11 03:51:33 compute-0 systemd-rc-local-generator[214647]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:51:33 compute-0 systemd-sysv-generator[214650]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:51:33 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct 11 03:51:33 compute-0 setroubleshoot[214430]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l a8066a25-8522-472e-a990-28284e85005a
Oct 11 03:51:33 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct 11 03:51:33 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct 11 03:51:33 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct 11 03:51:33 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 11 03:51:33 compute-0 setroubleshoot[214430]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Oct 11 03:51:33 compute-0 setroubleshoot[214430]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l a8066a25-8522-472e-a990-28284e85005a
Oct 11 03:51:33 compute-0 setroubleshoot[214430]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Oct 11 03:51:33 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 11 03:51:33 compute-0 sudo[214614]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:34 compute-0 ceph-mon[74273]: pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:34 compute-0 sudo[214827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mthyusywuipxmvsphlfgreoanwhjkzmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154693.8218243-1061-212269442518216/AnsiballZ_systemd.py'
Oct 11 03:51:34 compute-0 sudo[214827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:51:34 compute-0 python3.9[214829]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 03:51:34 compute-0 systemd[1]: Reloading.
Oct 11 03:51:34 compute-0 systemd-sysv-generator[214859]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:51:34 compute-0 systemd-rc-local-generator[214850]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:51:34 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Oct 11 03:51:34 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Oct 11 03:51:34 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 11 03:51:34 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct 11 03:51:34 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct 11 03:51:34 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct 11 03:51:34 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct 11 03:51:34 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct 11 03:51:34 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct 11 03:51:34 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct 11 03:51:34 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Oct 11 03:51:35 compute-0 systemd[1]: Started libvirt QEMU daemon.
Oct 11 03:51:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:35 compute-0 sudo[214827]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:35 compute-0 sudo[215040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbkmjlstaekcpftikdvvapxvzhbnjnny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154695.2827132-1061-135760265042715/AnsiballZ_systemd.py'
Oct 11 03:51:35 compute-0 sudo[215040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:36 compute-0 python3.9[215042]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 03:51:36 compute-0 systemd[1]: Reloading.
Oct 11 03:51:36 compute-0 ceph-mon[74273]: pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:36 compute-0 systemd-rc-local-generator[215069]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:51:36 compute-0 systemd-sysv-generator[215072]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:51:36 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Oct 11 03:51:36 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Oct 11 03:51:36 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Oct 11 03:51:36 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct 11 03:51:36 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct 11 03:51:36 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct 11 03:51:36 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 11 03:51:36 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 11 03:51:36 compute-0 sudo[215040]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:37 compute-0 sudo[215249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvgibitmxkhychujjhbakcrrmkzqcwjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154696.848542-1098-80022117752628/AnsiballZ_file.py'
Oct 11 03:51:37 compute-0 sudo[215249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:37 compute-0 python3.9[215251]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:37 compute-0 sudo[215249]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:38 compute-0 sudo[215401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfivmcvbitybkvrpkvvfaizzeasazldu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154697.6873586-1106-81655338592533/AnsiballZ_find.py'
Oct 11 03:51:38 compute-0 sudo[215401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:38 compute-0 ceph-mon[74273]: pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:38 compute-0 python3.9[215403]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 11 03:51:38 compute-0 sudo[215401]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:38 compute-0 sudo[215553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xncruxpaeqdwkltvyttbezbggkrfijnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154698.5016108-1114-8340793822002/AnsiballZ_command.py'
Oct 11 03:51:38 compute-0 sudo[215553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:39 compute-0 python3.9[215555]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:51:39 compute-0 sudo[215553]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:51:40 compute-0 python3.9[215709]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 11 03:51:40 compute-0 ceph-mon[74273]: pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:41 compute-0 python3.9[215859]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:51:41 compute-0 python3.9[215980]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760154700.5258148-1133-121276386689504/.source.xml follow=False _original_basename=secret.xml.j2 checksum=a78a849bf5859eca3bd2f60efee941884553a149 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:42 compute-0 ceph-mon[74273]: pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:42 compute-0 sudo[216131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhlyquhllntrtnmiqjqetoncwoofuurv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154701.965788-1148-84205403093740/AnsiballZ_command.py'
Oct 11 03:51:42 compute-0 sudo[216131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:42 compute-0 python3.9[216133]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 23b68101-59a9-532f-ab6b-9acf78fb2162
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:51:42 compute-0 polkitd[6259]: Registered Authentication Agent for unix-process:216135:308763 (system bus name :1.2983 [/usr/bin/pkttyagent --process 216135 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 11 03:51:42 compute-0 polkitd[6259]: Unregistered Authentication Agent for unix-process:216135:308763 (system bus name :1.2983, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 11 03:51:42 compute-0 polkitd[6259]: Registered Authentication Agent for unix-process:216134:308763 (system bus name :1.2984 [/usr/bin/pkttyagent --process 216134 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 11 03:51:42 compute-0 polkitd[6259]: Unregistered Authentication Agent for unix-process:216134:308763 (system bus name :1.2984, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 11 03:51:42 compute-0 sudo[216131]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:43 compute-0 python3.9[216295]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:43 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct 11 03:51:43 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.004s CPU time.
Oct 11 03:51:43 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct 11 03:51:43 compute-0 sudo[216445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alqidgoyyjkjfswpsamaxbllksajikle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154703.67267-1164-60145807468127/AnsiballZ_command.py'
Oct 11 03:51:43 compute-0 sudo[216445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:44 compute-0 sudo[216445]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:44 compute-0 ceph-mon[74273]: pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:51:44 compute-0 sudo[216598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlhjfqxqrxbhuzsayjtsehqpmjwyperx ; FSID=23b68101-59a9-532f-ab6b-9acf78fb2162 KEY=AQBn0OloAAAAABAA5vR2TXb/EBj5CZlyN7iICQ== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154704.5958686-1172-277238902505324/AnsiballZ_command.py'
Oct 11 03:51:44 compute-0 sudo[216598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:45 compute-0 polkitd[6259]: Registered Authentication Agent for unix-process:216601:309025 (system bus name :1.2987 [/usr/bin/pkttyagent --process 216601 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 11 03:51:45 compute-0 polkitd[6259]: Unregistered Authentication Agent for unix-process:216601:309025 (system bus name :1.2987, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 11 03:51:45 compute-0 sudo[216598]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:45 compute-0 sudo[216756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtxopcsofaljxzxskqovsmepkfqnuhhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154705.567707-1180-42974809773361/AnsiballZ_copy.py'
Oct 11 03:51:45 compute-0 sudo[216756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:46 compute-0 python3.9[216758]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:46 compute-0 ceph-mon[74273]: pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:46 compute-0 sudo[216756]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:46 compute-0 sudo[216908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otssaqbkeeqkvecrjcfnkzisarohbbxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154706.411867-1188-151416499062924/AnsiballZ_stat.py'
Oct 11 03:51:46 compute-0 sudo[216908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:47 compute-0 python3.9[216910]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:51:47 compute-0 sudo[216908]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:47 compute-0 sudo[217031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwhyrptkxqtsbfhhnebvdnwfswexttee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154706.411867-1188-151416499062924/AnsiballZ_copy.py'
Oct 11 03:51:47 compute-0 sudo[217031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:47 compute-0 python3.9[217033]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760154706.411867-1188-151416499062924/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:47 compute-0 sudo[217031]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:48 compute-0 ceph-mon[74273]: pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:48 compute-0 sudo[217183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iybbouvoplcetobnnjzrdeumsjktyhzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154708.115583-1204-191561872816060/AnsiballZ_file.py'
Oct 11 03:51:48 compute-0 sudo[217183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:48 compute-0 python3.9[217185]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:48 compute-0 sudo[217183]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:49 compute-0 sudo[217335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boxkhkgwgygtfprmlmhejzpzyfxwvxlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154709.0302558-1212-220314462673754/AnsiballZ_stat.py'
Oct 11 03:51:49 compute-0 sudo[217335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:51:49 compute-0 python3.9[217337]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:51:49 compute-0 sudo[217335]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:50 compute-0 sudo[217430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smqmlowmicsrpiskrlpcimqaeanyroog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154709.0302558-1212-220314462673754/AnsiballZ_file.py'
Oct 11 03:51:50 compute-0 sudo[217430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:50 compute-0 podman[217387]: 2025-10-11 03:51:50.045814098 +0000 UTC m=+0.111387645 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 03:51:50 compute-0 python3.9[217435]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:50 compute-0 sudo[217430]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:50 compute-0 ceph-mon[74273]: pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:50 compute-0 sudo[217592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuulylizxjizdrbqvpbwsinwoaedifit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154710.3826332-1224-265754121321202/AnsiballZ_stat.py'
Oct 11 03:51:50 compute-0 sudo[217592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:51:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:51:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:51:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:51:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:51:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:51:50 compute-0 python3.9[217594]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:51:50 compute-0 sudo[217592]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:51 compute-0 sudo[217670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwsjrsvyexjnynsjeiztqoyirqghlpyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154710.3826332-1224-265754121321202/AnsiballZ_file.py'
Oct 11 03:51:51 compute-0 sudo[217670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:51 compute-0 python3.9[217672]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.836rktdk recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:51 compute-0 sudo[217670]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:51 compute-0 sudo[217822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmvgfcyziwtgsomqxcirqpogbpomhxeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154711.5462325-1236-80561590591935/AnsiballZ_stat.py'
Oct 11 03:51:51 compute-0 sudo[217822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:52 compute-0 python3.9[217824]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:51:52 compute-0 sudo[217822]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:52 compute-0 ceph-mon[74273]: pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:52 compute-0 sudo[217900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlcmlqbtjofwpzaenjykbndidhpmcpch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154711.5462325-1236-80561590591935/AnsiballZ_file.py'
Oct 11 03:51:52 compute-0 sudo[217900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:52 compute-0 python3.9[217902]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:52 compute-0 sudo[217900]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:53 compute-0 sudo[218052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyigrcjwdbmfhwcbozvdanyrynffjtya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154712.9044154-1249-30503417910282/AnsiballZ_command.py'
Oct 11 03:51:53 compute-0 sudo[218052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:53 compute-0 python3.9[218054]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:51:53 compute-0 sudo[218052]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:54 compute-0 sudo[218205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqeyjwjajmlriggreqinygfywqmwzulo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760154713.6744702-1257-200849642682627/AnsiballZ_edpm_nftables_from_files.py'
Oct 11 03:51:54 compute-0 sudo[218205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:54 compute-0 ceph-mon[74273]: pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:54 compute-0 python3[218207]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 11 03:51:54 compute-0 sudo[218205]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:51:54 compute-0 sudo[218357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrewzihavtvpaljlmcrdjsekuoaoaslf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154714.5424008-1265-194039682908801/AnsiballZ_stat.py'
Oct 11 03:51:54 compute-0 sudo[218357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:55 compute-0 python3.9[218359]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:51:55 compute-0 sudo[218357]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:55 compute-0 podman[218385]: 2025-10-11 03:51:55.372690353 +0000 UTC m=+0.079423987 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 11 03:51:55 compute-0 sudo[218453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfxockbiezorqmureeujwokonzqxhzfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154714.5424008-1265-194039682908801/AnsiballZ_file.py'
Oct 11 03:51:55 compute-0 sudo[218453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:55 compute-0 python3.9[218455]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:55 compute-0 sudo[218453]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:56 compute-0 ceph-mon[74273]: pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:56 compute-0 sudo[218605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iohkiaztjjdrqefyhsqrsyibwohmbalg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154716.0490167-1277-105564258372492/AnsiballZ_stat.py'
Oct 11 03:51:56 compute-0 sudo[218605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:56 compute-0 python3.9[218607]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:51:56 compute-0 sudo[218605]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:56 compute-0 sudo[218683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nipmepoumvtpegosscrztybtpgoltxxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154716.0490167-1277-105564258372492/AnsiballZ_file.py'
Oct 11 03:51:56 compute-0 sudo[218683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:57 compute-0 python3.9[218685]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:57 compute-0 sudo[218683]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:57 compute-0 sudo[218835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhgrczdcrxxmmibtkfkissiqabhgxdip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154717.4487755-1289-228355106198080/AnsiballZ_stat.py'
Oct 11 03:51:57 compute-0 sudo[218835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:58 compute-0 python3.9[218837]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:51:58 compute-0 sudo[218835]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:58 compute-0 ceph-mon[74273]: pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:58 compute-0 sudo[218913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srjavvtdrhidhdhkntftyzrdnglnaxeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154717.4487755-1289-228355106198080/AnsiballZ_file.py'
Oct 11 03:51:58 compute-0 sudo[218913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:58 compute-0 python3.9[218915]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:58 compute-0 sudo[218913]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:51:59 compute-0 sudo[219065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmeevaloexooogltfnaiiigwmcvvwqhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154718.7417552-1301-12440022513735/AnsiballZ_stat.py'
Oct 11 03:51:59 compute-0 sudo[219065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:59 compute-0 python3.9[219067]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:51:59 compute-0 sudo[219065]: pam_unix(sudo:session): session closed for user root
Oct 11 03:51:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:51:59 compute-0 sudo[219143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjmayhdwxblprwjvrmlaobzzplbwanro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154718.7417552-1301-12440022513735/AnsiballZ_file.py'
Oct 11 03:51:59 compute-0 sudo[219143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:51:59 compute-0 python3.9[219145]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:51:59 compute-0 sudo[219143]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:00 compute-0 ceph-mon[74273]: pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:00 compute-0 sudo[219295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpmhgklpdqvvbuphqiftkzhoukglqjbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154720.0508583-1313-273508287155111/AnsiballZ_stat.py'
Oct 11 03:52:00 compute-0 sudo[219295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:00 compute-0 python3.9[219297]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:52:00 compute-0 sudo[219295]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:01 compute-0 sudo[219420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opvfesbpngekkfqcapjvcywjjgepdtko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154720.0508583-1313-273508287155111/AnsiballZ_copy.py'
Oct 11 03:52:01 compute-0 sudo[219420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:01 compute-0 python3.9[219422]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760154720.0508583-1313-273508287155111/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:52:01 compute-0 sudo[219420]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:01 compute-0 sudo[219572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aquynnghxwvsenhirqksevaokqsltgyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154721.6292708-1328-109351326248759/AnsiballZ_file.py'
Oct 11 03:52:01 compute-0 sudo[219572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:02 compute-0 python3.9[219574]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:52:02 compute-0 sudo[219572]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:02 compute-0 ceph-mon[74273]: pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:02 compute-0 sudo[219724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvyrmuzbvqmpgpojhngqtclayplsvxtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154722.4285798-1336-258313093242453/AnsiballZ_command.py'
Oct 11 03:52:02 compute-0 sudo[219724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:02 compute-0 python3.9[219726]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:52:03 compute-0 sudo[219724]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:03 compute-0 sudo[219879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thnqclcntveryoykzzzymdkgaomlzjaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154723.2502527-1344-29679314567708/AnsiballZ_blockinfile.py'
Oct 11 03:52:03 compute-0 sudo[219879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:03 compute-0 python3.9[219881]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:52:03 compute-0 sudo[219879]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:04 compute-0 ceph-mon[74273]: pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:52:04 compute-0 sudo[220031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksvjasswrbmgkyadadapuqhdstkdupnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154724.2492673-1353-101023705827524/AnsiballZ_command.py'
Oct 11 03:52:04 compute-0 sudo[220031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:04 compute-0 python3.9[220033]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:52:04 compute-0 sudo[220031]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:05 compute-0 sudo[220184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcoorwzbpckprtvvfngpjjnrnmybprqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154725.0925298-1361-121716098674239/AnsiballZ_stat.py'
Oct 11 03:52:05 compute-0 sudo[220184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:05 compute-0 python3.9[220186]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:52:05 compute-0 sudo[220184]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:06 compute-0 sudo[220338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydbgbwgxdyrpesednbycvehnffpxmwsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154725.879099-1369-275946528260244/AnsiballZ_command.py'
Oct 11 03:52:06 compute-0 sudo[220338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:06 compute-0 ceph-mon[74273]: pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:06 compute-0 python3.9[220340]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:52:06 compute-0 sudo[220338]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:07 compute-0 sudo[220493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsqesxyhxjviyfauzsjbsegcfqualsni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154726.6743608-1377-40130985015924/AnsiballZ_file.py'
Oct 11 03:52:07 compute-0 sudo[220493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:07 compute-0 python3.9[220495]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:52:07 compute-0 sudo[220493]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:07 compute-0 sudo[220645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lerqcfdclorgbptuqaqniklbfxvroatn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154727.454415-1385-250715211312367/AnsiballZ_stat.py'
Oct 11 03:52:07 compute-0 sudo[220645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:08 compute-0 python3.9[220647]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:52:08 compute-0 sudo[220645]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:08 compute-0 ceph-mon[74273]: pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:08 compute-0 sudo[220768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhyttbbcgejdcmblklmjcujyuxtklkxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154727.454415-1385-250715211312367/AnsiballZ_copy.py'
Oct 11 03:52:08 compute-0 sudo[220768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:08 compute-0 python3.9[220770]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760154727.454415-1385-250715211312367/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:52:08 compute-0 sudo[220768]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:09 compute-0 sudo[220920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmhumdwacpvjcjatqmopepjkaixdeqba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154729.0223453-1400-139778124756839/AnsiballZ_stat.py'
Oct 11 03:52:09 compute-0 sudo[220920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:52:09 compute-0 python3.9[220922]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:52:09 compute-0 sudo[220920]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:10 compute-0 sudo[221043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcjimhaahzhzhjocymyiyrogijkzmuob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154729.0223453-1400-139778124756839/AnsiballZ_copy.py'
Oct 11 03:52:10 compute-0 sudo[221043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:10 compute-0 python3.9[221045]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760154729.0223453-1400-139778124756839/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:52:10 compute-0 ceph-mon[74273]: pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:10 compute-0 sudo[221043]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:10 compute-0 sudo[221195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haeqsbfnyezvduecniuxbxhwiziefbul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154730.512753-1415-98427126772428/AnsiballZ_stat.py'
Oct 11 03:52:10 compute-0 sudo[221195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:11 compute-0 python3.9[221197]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:52:11 compute-0 sudo[221195]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:11 compute-0 sudo[221318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwlwprnqnwbmttqvbqdorcjrpsdcgybm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154730.512753-1415-98427126772428/AnsiballZ_copy.py'
Oct 11 03:52:11 compute-0 sudo[221318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:11 compute-0 python3.9[221320]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760154730.512753-1415-98427126772428/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:52:11 compute-0 sudo[221318]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:12 compute-0 sudo[221470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmvkhgxungtycaxyvxoefdketrzmxmsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154731.8019292-1430-89182038082117/AnsiballZ_systemd.py'
Oct 11 03:52:12 compute-0 sudo[221470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:12 compute-0 ceph-mon[74273]: pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:12 compute-0 python3.9[221472]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:52:12 compute-0 systemd[1]: Reloading.
Oct 11 03:52:12 compute-0 systemd-rc-local-generator[221495]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:52:12 compute-0 systemd-sysv-generator[221498]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:52:12 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Oct 11 03:52:12 compute-0 sudo[221470]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:13 compute-0 sudo[221660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etfwktgpcwvxokuwechdmgilssfrmrnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154733.007436-1438-216198636765264/AnsiballZ_systemd.py'
Oct 11 03:52:13 compute-0 sudo[221660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:13 compute-0 python3.9[221662]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 11 03:52:13 compute-0 systemd[1]: Reloading.
Oct 11 03:52:13 compute-0 systemd-rc-local-generator[221689]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:52:13 compute-0 systemd-sysv-generator[221694]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:52:14 compute-0 sudo[221700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:52:14 compute-0 sudo[221700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:52:14 compute-0 sudo[221700]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:14 compute-0 systemd[1]: Reloading.
Oct 11 03:52:14 compute-0 systemd-sysv-generator[221781]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:52:14 compute-0 systemd-rc-local-generator[221776]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:52:14 compute-0 ceph-mon[74273]: pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:14 compute-0 sudo[221727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:52:14 compute-0 sudo[221727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:52:14 compute-0 sudo[221727]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:14 compute-0 sudo[221660]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:14 compute-0 sudo[221787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:52:14 compute-0 sudo[221787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:52:14 compute-0 sudo[221787]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:52:14 compute-0 sudo[221836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 03:52:14 compute-0 sudo[221836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:52:14 compute-0 sshd-session[162024]: Connection closed by 192.168.122.30 port 43250
Oct 11 03:52:14 compute-0 sshd-session[162021]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:52:14 compute-0 systemd-logind[820]: Session 49 logged out. Waiting for processes to exit.
Oct 11 03:52:14 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Oct 11 03:52:14 compute-0 systemd[1]: session-49.scope: Consumed 3min 59.859s CPU time.
Oct 11 03:52:14 compute-0 systemd-logind[820]: Removed session 49.
Oct 11 03:52:14 compute-0 sudo[221836]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:52:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:52:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 03:52:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:52:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 03:52:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:52:14 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 2c00bacc-a1b8-4e1f-be17-a6b1d7c27dba does not exist
Oct 11 03:52:14 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 2db1cc51-55bf-41c7-b923-d50823010015 does not exist
Oct 11 03:52:14 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev efe4bf75-3537-4b4e-bbdf-b0062cb6c5e4 does not exist
Oct 11 03:52:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 03:52:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:52:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 03:52:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:52:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:52:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:52:15 compute-0 sudo[221892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:52:15 compute-0 sudo[221892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:52:15 compute-0 sudo[221892]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:15 compute-0 sudo[221917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:52:15 compute-0 sudo[221917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:52:15 compute-0 sudo[221917]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:15 compute-0 sudo[221942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:52:15 compute-0 sudo[221942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:52:15 compute-0 sudo[221942]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:15 compute-0 sudo[221967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 03:52:15 compute-0 sudo[221967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:52:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:52:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:52:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:52:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:52:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:52:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:52:15 compute-0 podman[222032]: 2025-10-11 03:52:15.663926077 +0000 UTC m=+0.061017287 container create d107aa871d7b5460b74c9769c9439b3525b879778fce36c872375290e494cab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elgamal, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 03:52:15 compute-0 systemd[1]: Started libpod-conmon-d107aa871d7b5460b74c9769c9439b3525b879778fce36c872375290e494cab5.scope.
Oct 11 03:52:15 compute-0 podman[222032]: 2025-10-11 03:52:15.631703167 +0000 UTC m=+0.028794457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:52:15 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:52:15 compute-0 podman[222032]: 2025-10-11 03:52:15.78203446 +0000 UTC m=+0.179125730 container init d107aa871d7b5460b74c9769c9439b3525b879778fce36c872375290e494cab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:52:15 compute-0 podman[222032]: 2025-10-11 03:52:15.793620461 +0000 UTC m=+0.190711671 container start d107aa871d7b5460b74c9769c9439b3525b879778fce36c872375290e494cab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elgamal, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 03:52:15 compute-0 podman[222032]: 2025-10-11 03:52:15.798383772 +0000 UTC m=+0.195475042 container attach d107aa871d7b5460b74c9769c9439b3525b879778fce36c872375290e494cab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 03:52:15 compute-0 tender_elgamal[222048]: 167 167
Oct 11 03:52:15 compute-0 systemd[1]: libpod-d107aa871d7b5460b74c9769c9439b3525b879778fce36c872375290e494cab5.scope: Deactivated successfully.
Oct 11 03:52:15 compute-0 podman[222032]: 2025-10-11 03:52:15.803222626 +0000 UTC m=+0.200313836 container died d107aa871d7b5460b74c9769c9439b3525b879778fce36c872375290e494cab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 03:52:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-23d6bfb23d232b48c8edd13656336c7c7693729cf49b3802e81a3e3e3180c62c-merged.mount: Deactivated successfully.
Oct 11 03:52:15 compute-0 podman[222032]: 2025-10-11 03:52:15.846318377 +0000 UTC m=+0.243409567 container remove d107aa871d7b5460b74c9769c9439b3525b879778fce36c872375290e494cab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:52:15 compute-0 systemd[1]: libpod-conmon-d107aa871d7b5460b74c9769c9439b3525b879778fce36c872375290e494cab5.scope: Deactivated successfully.
Oct 11 03:52:16 compute-0 podman[222073]: 2025-10-11 03:52:16.071067896 +0000 UTC m=+0.069131201 container create 248d6ddf64545c6aeb8462050728b3240225aaacee7e1f4c679f5bf9aa1760a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brahmagupta, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 03:52:16 compute-0 systemd[1]: Started libpod-conmon-248d6ddf64545c6aeb8462050728b3240225aaacee7e1f4c679f5bf9aa1760a2.scope.
Oct 11 03:52:16 compute-0 podman[222073]: 2025-10-11 03:52:16.043216637 +0000 UTC m=+0.041280012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:52:16 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:52:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25f2d321b7c97b44f6ba789ac24f13fbd4ac84fac4153c7d5a77462007e630f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:52:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25f2d321b7c97b44f6ba789ac24f13fbd4ac84fac4153c7d5a77462007e630f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:52:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25f2d321b7c97b44f6ba789ac24f13fbd4ac84fac4153c7d5a77462007e630f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:52:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25f2d321b7c97b44f6ba789ac24f13fbd4ac84fac4153c7d5a77462007e630f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:52:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25f2d321b7c97b44f6ba789ac24f13fbd4ac84fac4153c7d5a77462007e630f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:52:16 compute-0 podman[222073]: 2025-10-11 03:52:16.179961415 +0000 UTC m=+0.178024730 container init 248d6ddf64545c6aeb8462050728b3240225aaacee7e1f4c679f5bf9aa1760a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brahmagupta, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 03:52:16 compute-0 podman[222073]: 2025-10-11 03:52:16.19496741 +0000 UTC m=+0.193030725 container start 248d6ddf64545c6aeb8462050728b3240225aaacee7e1f4c679f5bf9aa1760a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 03:52:16 compute-0 podman[222073]: 2025-10-11 03:52:16.199386712 +0000 UTC m=+0.197450027 container attach 248d6ddf64545c6aeb8462050728b3240225aaacee7e1f4c679f5bf9aa1760a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 03:52:16 compute-0 ceph-mon[74273]: pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:17 compute-0 affectionate_brahmagupta[222089]: --> passed data devices: 0 physical, 3 LVM
Oct 11 03:52:17 compute-0 affectionate_brahmagupta[222089]: --> relative data size: 1.0
Oct 11 03:52:17 compute-0 affectionate_brahmagupta[222089]: --> All data devices are unavailable
Oct 11 03:52:17 compute-0 systemd[1]: libpod-248d6ddf64545c6aeb8462050728b3240225aaacee7e1f4c679f5bf9aa1760a2.scope: Deactivated successfully.
Oct 11 03:52:17 compute-0 systemd[1]: libpod-248d6ddf64545c6aeb8462050728b3240225aaacee7e1f4c679f5bf9aa1760a2.scope: Consumed 1.044s CPU time.
Oct 11 03:52:17 compute-0 podman[222073]: 2025-10-11 03:52:17.292851083 +0000 UTC m=+1.290914378 container died 248d6ddf64545c6aeb8462050728b3240225aaacee7e1f4c679f5bf9aa1760a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brahmagupta, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 03:52:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-c25f2d321b7c97b44f6ba789ac24f13fbd4ac84fac4153c7d5a77462007e630f-merged.mount: Deactivated successfully.
Oct 11 03:52:17 compute-0 podman[222073]: 2025-10-11 03:52:17.357290964 +0000 UTC m=+1.355354239 container remove 248d6ddf64545c6aeb8462050728b3240225aaacee7e1f4c679f5bf9aa1760a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 03:52:17 compute-0 systemd[1]: libpod-conmon-248d6ddf64545c6aeb8462050728b3240225aaacee7e1f4c679f5bf9aa1760a2.scope: Deactivated successfully.
Oct 11 03:52:17 compute-0 sudo[221967]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:17 compute-0 sudo[222131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:52:17 compute-0 sudo[222131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:52:17 compute-0 sudo[222131]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:17 compute-0 sudo[222156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:52:17 compute-0 sudo[222156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:52:17 compute-0 sudo[222156]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:17 compute-0 sudo[222181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:52:17 compute-0 sudo[222181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:52:17 compute-0 sudo[222181]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:17 compute-0 sudo[222206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 03:52:17 compute-0 sudo[222206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:52:18 compute-0 podman[222273]: 2025-10-11 03:52:18.245255728 +0000 UTC m=+0.058607220 container create f3d41ba6fcea39484bb291cf5d648795db00b26c751c02688e35dde1ef9490a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:52:18 compute-0 systemd[1]: Started libpod-conmon-f3d41ba6fcea39484bb291cf5d648795db00b26c751c02688e35dde1ef9490a1.scope.
Oct 11 03:52:18 compute-0 podman[222273]: 2025-10-11 03:52:18.213090189 +0000 UTC m=+0.026441741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:52:18 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:52:18 compute-0 podman[222273]: 2025-10-11 03:52:18.332141998 +0000 UTC m=+0.145493560 container init f3d41ba6fcea39484bb291cf5d648795db00b26c751c02688e35dde1ef9490a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:52:18 compute-0 podman[222273]: 2025-10-11 03:52:18.340247072 +0000 UTC m=+0.153598534 container start f3d41ba6fcea39484bb291cf5d648795db00b26c751c02688e35dde1ef9490a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hofstadter, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:52:18 compute-0 podman[222273]: 2025-10-11 03:52:18.343707178 +0000 UTC m=+0.157058720 container attach f3d41ba6fcea39484bb291cf5d648795db00b26c751c02688e35dde1ef9490a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hofstadter, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:52:18 compute-0 festive_hofstadter[222291]: 167 167
Oct 11 03:52:18 compute-0 systemd[1]: libpod-f3d41ba6fcea39484bb291cf5d648795db00b26c751c02688e35dde1ef9490a1.scope: Deactivated successfully.
Oct 11 03:52:18 compute-0 podman[222273]: 2025-10-11 03:52:18.347245766 +0000 UTC m=+0.160597298 container died f3d41ba6fcea39484bb291cf5d648795db00b26c751c02688e35dde1ef9490a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Oct 11 03:52:18 compute-0 ceph-mon[74273]: pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-eafe8a266fc80ec838039a3f7e26a95d6079dbd01708ccb4e5c7e9d060e463ae-merged.mount: Deactivated successfully.
Oct 11 03:52:18 compute-0 podman[222273]: 2025-10-11 03:52:18.412376545 +0000 UTC m=+0.225728037 container remove f3d41ba6fcea39484bb291cf5d648795db00b26c751c02688e35dde1ef9490a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 03:52:18 compute-0 systemd[1]: libpod-conmon-f3d41ba6fcea39484bb291cf5d648795db00b26c751c02688e35dde1ef9490a1.scope: Deactivated successfully.
Oct 11 03:52:18 compute-0 podman[222315]: 2025-10-11 03:52:18.634426391 +0000 UTC m=+0.039385240 container create 5bd16df596000afa4a8d282a1b981048efd073727f9aa7e367d1b9011fff2ec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wu, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 03:52:18 compute-0 systemd[1]: Started libpod-conmon-5bd16df596000afa4a8d282a1b981048efd073727f9aa7e367d1b9011fff2ec0.scope.
Oct 11 03:52:18 compute-0 podman[222315]: 2025-10-11 03:52:18.617505473 +0000 UTC m=+0.022464292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:52:18 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:52:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2bf0cdba001cfef2895e5b7777c78dc8bec8dc06a6dcf02a82f9bcb9db38b3a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:52:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2bf0cdba001cfef2895e5b7777c78dc8bec8dc06a6dcf02a82f9bcb9db38b3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:52:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2bf0cdba001cfef2895e5b7777c78dc8bec8dc06a6dcf02a82f9bcb9db38b3a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:52:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2bf0cdba001cfef2895e5b7777c78dc8bec8dc06a6dcf02a82f9bcb9db38b3a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:52:18 compute-0 podman[222315]: 2025-10-11 03:52:18.753077319 +0000 UTC m=+0.158036198 container init 5bd16df596000afa4a8d282a1b981048efd073727f9aa7e367d1b9011fff2ec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wu, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 03:52:18 compute-0 podman[222315]: 2025-10-11 03:52:18.767187409 +0000 UTC m=+0.172146268 container start 5bd16df596000afa4a8d282a1b981048efd073727f9aa7e367d1b9011fff2ec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wu, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 03:52:18 compute-0 podman[222315]: 2025-10-11 03:52:18.772204477 +0000 UTC m=+0.177163336 container attach 5bd16df596000afa4a8d282a1b981048efd073727f9aa7e367d1b9011fff2ec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:52:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:19 compute-0 lucid_wu[222331]: {
Oct 11 03:52:19 compute-0 lucid_wu[222331]:     "0": [
Oct 11 03:52:19 compute-0 lucid_wu[222331]:         {
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "devices": [
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "/dev/loop3"
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             ],
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "lv_name": "ceph_lv0",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "lv_size": "21470642176",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "name": "ceph_lv0",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "tags": {
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.cluster_name": "ceph",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.crush_device_class": "",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.encrypted": "0",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.osd_id": "0",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.type": "block",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.vdo": "0"
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             },
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "type": "block",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "vg_name": "ceph_vg0"
Oct 11 03:52:19 compute-0 lucid_wu[222331]:         }
Oct 11 03:52:19 compute-0 lucid_wu[222331]:     ],
Oct 11 03:52:19 compute-0 lucid_wu[222331]:     "1": [
Oct 11 03:52:19 compute-0 lucid_wu[222331]:         {
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "devices": [
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "/dev/loop4"
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             ],
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "lv_name": "ceph_lv1",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "lv_size": "21470642176",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "name": "ceph_lv1",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "tags": {
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.cluster_name": "ceph",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.crush_device_class": "",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.encrypted": "0",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.osd_id": "1",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.type": "block",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.vdo": "0"
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             },
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "type": "block",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "vg_name": "ceph_vg1"
Oct 11 03:52:19 compute-0 lucid_wu[222331]:         }
Oct 11 03:52:19 compute-0 lucid_wu[222331]:     ],
Oct 11 03:52:19 compute-0 lucid_wu[222331]:     "2": [
Oct 11 03:52:19 compute-0 lucid_wu[222331]:         {
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "devices": [
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "/dev/loop5"
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             ],
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "lv_name": "ceph_lv2",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "lv_size": "21470642176",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "name": "ceph_lv2",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "tags": {
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.cluster_name": "ceph",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.crush_device_class": "",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.encrypted": "0",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.osd_id": "2",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.type": "block",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:                 "ceph.vdo": "0"
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             },
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "type": "block",
Oct 11 03:52:19 compute-0 lucid_wu[222331]:             "vg_name": "ceph_vg2"
Oct 11 03:52:19 compute-0 lucid_wu[222331]:         }
Oct 11 03:52:19 compute-0 lucid_wu[222331]:     ]
Oct 11 03:52:19 compute-0 lucid_wu[222331]: }
Oct 11 03:52:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:52:19 compute-0 systemd[1]: libpod-5bd16df596000afa4a8d282a1b981048efd073727f9aa7e367d1b9011fff2ec0.scope: Deactivated successfully.
Oct 11 03:52:19 compute-0 podman[222315]: 2025-10-11 03:52:19.538271694 +0000 UTC m=+0.943230563 container died 5bd16df596000afa4a8d282a1b981048efd073727f9aa7e367d1b9011fff2ec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:52:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2bf0cdba001cfef2895e5b7777c78dc8bec8dc06a6dcf02a82f9bcb9db38b3a-merged.mount: Deactivated successfully.
Oct 11 03:52:19 compute-0 podman[222315]: 2025-10-11 03:52:19.624378553 +0000 UTC m=+1.029337402 container remove 5bd16df596000afa4a8d282a1b981048efd073727f9aa7e367d1b9011fff2ec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wu, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 03:52:19 compute-0 systemd[1]: libpod-conmon-5bd16df596000afa4a8d282a1b981048efd073727f9aa7e367d1b9011fff2ec0.scope: Deactivated successfully.
Oct 11 03:52:19 compute-0 sudo[222206]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:19 compute-0 sudo[222352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:52:19 compute-0 sudo[222352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:52:19 compute-0 sudo[222352]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:19 compute-0 sudo[222377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:52:19 compute-0 sudo[222377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:52:19 compute-0 sudo[222377]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:19 compute-0 sudo[222402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:52:19 compute-0 sudo[222402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:52:19 compute-0 sudo[222402]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:19 compute-0 sudo[222427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 03:52:19 compute-0 sudo[222427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:52:20 compute-0 ceph-mon[74273]: pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:20 compute-0 podman[222508]: 2025-10-11 03:52:20.440134012 +0000 UTC m=+0.055561306 container create a66ad27a695fb6fff653911bff2de2d4e78b6a9ffa048932e15508b531262e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goldberg, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Oct 11 03:52:20 compute-0 systemd[1]: Started libpod-conmon-a66ad27a695fb6fff653911bff2de2d4e78b6a9ffa048932e15508b531262e8c.scope.
Oct 11 03:52:20 compute-0 podman[222478]: 2025-10-11 03:52:20.498039052 +0000 UTC m=+0.193426576 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller)
Oct 11 03:52:20 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:52:20 compute-0 podman[222508]: 2025-10-11 03:52:20.515111073 +0000 UTC m=+0.130538377 container init a66ad27a695fb6fff653911bff2de2d4e78b6a9ffa048932e15508b531262e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goldberg, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:52:20 compute-0 podman[222508]: 2025-10-11 03:52:20.423280106 +0000 UTC m=+0.038707430 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:52:20 compute-0 podman[222508]: 2025-10-11 03:52:20.52293807 +0000 UTC m=+0.138365404 container start a66ad27a695fb6fff653911bff2de2d4e78b6a9ffa048932e15508b531262e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:52:20 compute-0 clever_goldberg[222536]: 167 167
Oct 11 03:52:20 compute-0 systemd[1]: libpod-a66ad27a695fb6fff653911bff2de2d4e78b6a9ffa048932e15508b531262e8c.scope: Deactivated successfully.
Oct 11 03:52:20 compute-0 conmon[222536]: conmon a66ad27a695fb6fff653 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a66ad27a695fb6fff653911bff2de2d4e78b6a9ffa048932e15508b531262e8c.scope/container/memory.events
Oct 11 03:52:20 compute-0 podman[222508]: 2025-10-11 03:52:20.527979739 +0000 UTC m=+0.143407043 container attach a66ad27a695fb6fff653911bff2de2d4e78b6a9ffa048932e15508b531262e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:52:20 compute-0 podman[222508]: 2025-10-11 03:52:20.529343637 +0000 UTC m=+0.144770941 container died a66ad27a695fb6fff653911bff2de2d4e78b6a9ffa048932e15508b531262e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 03:52:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-94886e9088fc7d875bd35a6b9beeac4a913bf408e551a65f0ea9debb7a60465f-merged.mount: Deactivated successfully.
Oct 11 03:52:20 compute-0 podman[222508]: 2025-10-11 03:52:20.563941682 +0000 UTC m=+0.179368976 container remove a66ad27a695fb6fff653911bff2de2d4e78b6a9ffa048932e15508b531262e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:52:20 compute-0 systemd[1]: libpod-conmon-a66ad27a695fb6fff653911bff2de2d4e78b6a9ffa048932e15508b531262e8c.scope: Deactivated successfully.
Oct 11 03:52:20 compute-0 sshd-session[222548]: Accepted publickey for zuul from 192.168.122.30 port 45270 ssh2: ECDSA SHA256:qo9+RMabHfLAOt2q/80W97JXaZUdeUCREBuTRaqgxBY
Oct 11 03:52:20 compute-0 systemd-logind[820]: New session 50 of user zuul.
Oct 11 03:52:20 compute-0 systemd[1]: Started Session 50 of User zuul.
Oct 11 03:52:20 compute-0 sshd-session[222548]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:52:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_03:52:20
Oct 11 03:52:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 03:52:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 03:52:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'images', 'backups']
Oct 11 03:52:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 03:52:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:52:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:52:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:52:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:52:20 compute-0 podman[222565]: 2025-10-11 03:52:20.760531473 +0000 UTC m=+0.061212862 container create 1cea1b615ce1f6508898d73b323e996b37a5cd2d9f8675f1ed8cfdc9d2da2a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_aryabhata, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:52:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:52:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:52:20 compute-0 systemd[1]: Started libpod-conmon-1cea1b615ce1f6508898d73b323e996b37a5cd2d9f8675f1ed8cfdc9d2da2a6f.scope.
Oct 11 03:52:20 compute-0 podman[222565]: 2025-10-11 03:52:20.735016608 +0000 UTC m=+0.035698077 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:52:20 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:52:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8372ec0b22d8b123f615464895ada5e4a0fba7458a12103175689e82853720fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:52:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8372ec0b22d8b123f615464895ada5e4a0fba7458a12103175689e82853720fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:52:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8372ec0b22d8b123f615464895ada5e4a0fba7458a12103175689e82853720fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:52:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8372ec0b22d8b123f615464895ada5e4a0fba7458a12103175689e82853720fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:52:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 03:52:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:52:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 03:52:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:52:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:52:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:52:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:52:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:52:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:52:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:52:20 compute-0 podman[222565]: 2025-10-11 03:52:20.88671751 +0000 UTC m=+0.187398939 container init 1cea1b615ce1f6508898d73b323e996b37a5cd2d9f8675f1ed8cfdc9d2da2a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 03:52:20 compute-0 podman[222565]: 2025-10-11 03:52:20.899966416 +0000 UTC m=+0.200647815 container start 1cea1b615ce1f6508898d73b323e996b37a5cd2d9f8675f1ed8cfdc9d2da2a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_aryabhata, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 03:52:20 compute-0 podman[222565]: 2025-10-11 03:52:20.904256234 +0000 UTC m=+0.204937653 container attach 1cea1b615ce1f6508898d73b323e996b37a5cd2d9f8675f1ed8cfdc9d2da2a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:52:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:21 compute-0 python3.9[222736]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:52:22 compute-0 musing_aryabhata[222605]: {
Oct 11 03:52:22 compute-0 musing_aryabhata[222605]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 03:52:22 compute-0 musing_aryabhata[222605]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:52:22 compute-0 musing_aryabhata[222605]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 03:52:22 compute-0 musing_aryabhata[222605]:         "osd_id": 1,
Oct 11 03:52:22 compute-0 musing_aryabhata[222605]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:52:22 compute-0 musing_aryabhata[222605]:         "type": "bluestore"
Oct 11 03:52:22 compute-0 musing_aryabhata[222605]:     },
Oct 11 03:52:22 compute-0 musing_aryabhata[222605]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 03:52:22 compute-0 musing_aryabhata[222605]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:52:22 compute-0 musing_aryabhata[222605]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 03:52:22 compute-0 musing_aryabhata[222605]:         "osd_id": 2,
Oct 11 03:52:22 compute-0 musing_aryabhata[222605]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:52:22 compute-0 musing_aryabhata[222605]:         "type": "bluestore"
Oct 11 03:52:22 compute-0 musing_aryabhata[222605]:     },
Oct 11 03:52:22 compute-0 musing_aryabhata[222605]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 03:52:22 compute-0 musing_aryabhata[222605]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:52:22 compute-0 musing_aryabhata[222605]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 03:52:22 compute-0 musing_aryabhata[222605]:         "osd_id": 0,
Oct 11 03:52:22 compute-0 musing_aryabhata[222605]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:52:22 compute-0 musing_aryabhata[222605]:         "type": "bluestore"
Oct 11 03:52:22 compute-0 musing_aryabhata[222605]:     }
Oct 11 03:52:22 compute-0 musing_aryabhata[222605]: }
Oct 11 03:52:22 compute-0 systemd[1]: libpod-1cea1b615ce1f6508898d73b323e996b37a5cd2d9f8675f1ed8cfdc9d2da2a6f.scope: Deactivated successfully.
Oct 11 03:52:22 compute-0 systemd[1]: libpod-1cea1b615ce1f6508898d73b323e996b37a5cd2d9f8675f1ed8cfdc9d2da2a6f.scope: Consumed 1.130s CPU time.
Oct 11 03:52:22 compute-0 podman[222565]: 2025-10-11 03:52:22.028514147 +0000 UTC m=+1.329195526 container died 1cea1b615ce1f6508898d73b323e996b37a5cd2d9f8675f1ed8cfdc9d2da2a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_aryabhata, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:52:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8372ec0b22d8b123f615464895ada5e4a0fba7458a12103175689e82853720fe-merged.mount: Deactivated successfully.
Oct 11 03:52:22 compute-0 podman[222565]: 2025-10-11 03:52:22.080777071 +0000 UTC m=+1.381458450 container remove 1cea1b615ce1f6508898d73b323e996b37a5cd2d9f8675f1ed8cfdc9d2da2a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_aryabhata, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Oct 11 03:52:22 compute-0 systemd[1]: libpod-conmon-1cea1b615ce1f6508898d73b323e996b37a5cd2d9f8675f1ed8cfdc9d2da2a6f.scope: Deactivated successfully.
Oct 11 03:52:22 compute-0 sudo[222427]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:52:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:52:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:52:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:52:22 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 9cb32035-64d1-489e-b4b4-a425a8da280d does not exist
Oct 11 03:52:22 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 5f2d1f32-694e-4118-938b-2542c46938f0 does not exist
Oct 11 03:52:22 compute-0 sudo[222783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:52:22 compute-0 sudo[222783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:52:22 compute-0 sudo[222783]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:22 compute-0 sudo[222828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 03:52:22 compute-0 sudo[222828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:52:22 compute-0 sudo[222828]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:22 compute-0 ceph-mon[74273]: pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:52:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:52:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:52:22.937 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 03:52:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:52:22.939 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 03:52:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:52:22.939 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 03:52:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:23 compute-0 sudo[222982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gunfvdwkzxttdlqseefopkydgtrmronz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154742.4954677-34-183997724330884/AnsiballZ_file.py'
Oct 11 03:52:23 compute-0 sudo[222982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:23 compute-0 python3.9[222984]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:52:23 compute-0 sudo[222982]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:23 compute-0 sudo[223134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvwaqabnyufcmiiuawdmutibrmmjbcwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154743.5230525-34-151952743746943/AnsiballZ_file.py'
Oct 11 03:52:23 compute-0 sudo[223134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:24 compute-0 python3.9[223136]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:52:24 compute-0 sudo[223134]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:24 compute-0 ceph-mon[74273]: pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:52:24 compute-0 sudo[223286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asxoioxfkvrmsvyuubtnuqeqquovlrgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154744.3341343-34-139843998458802/AnsiballZ_file.py'
Oct 11 03:52:24 compute-0 sudo[223286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:24 compute-0 python3.9[223288]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:52:24 compute-0 sudo[223286]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:25 compute-0 sudo[223438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zemaipsufvpxifpssfrgfywkgmeqdfup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154745.0790203-34-125483676095587/AnsiballZ_file.py'
Oct 11 03:52:25 compute-0 sudo[223438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:25 compute-0 python3.9[223440]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 11 03:52:25 compute-0 sudo[223438]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:25 compute-0 podman[223441]: 2025-10-11 03:52:25.712383921 +0000 UTC m=+0.076397821 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 11 03:52:26 compute-0 sudo[223607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hugjrxcckutrnepvsspddgvjafxbqadh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154745.8289008-34-252950148893547/AnsiballZ_file.py'
Oct 11 03:52:26 compute-0 sudo[223607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:26 compute-0 python3.9[223609]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:52:26 compute-0 sudo[223607]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:26 compute-0 ceph-mon[74273]: pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:27 compute-0 sudo[223759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjoyojhsptofymewmqexovseenvkoapy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154746.5582635-70-162349049502672/AnsiballZ_stat.py'
Oct 11 03:52:27 compute-0 sudo[223759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:27 compute-0 python3.9[223761]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:52:27 compute-0 sudo[223759]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:28 compute-0 sudo[223913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlzhuhmpogdpjcwpipuvdundhppjdvmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154747.5466163-78-279066228136265/AnsiballZ_systemd.py'
Oct 11 03:52:28 compute-0 sudo[223913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:28 compute-0 ceph-mon[74273]: pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:28 compute-0 python3.9[223915]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:52:28 compute-0 systemd[1]: Reloading.
Oct 11 03:52:28 compute-0 systemd-rc-local-generator[223936]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:52:28 compute-0 systemd-sysv-generator[223944]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:52:29 compute-0 sudo[223913]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:52:29 compute-0 sudo[224102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyuxjtgjfkjxngvwagwdandqjoydadba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154749.170006-86-59141480485225/AnsiballZ_service_facts.py'
Oct 11 03:52:29 compute-0 sudo[224102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:29 compute-0 python3.9[224104]: ansible-ansible.builtin.service_facts Invoked
Oct 11 03:52:29 compute-0 network[224121]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 03:52:29 compute-0 network[224122]: 'network-scripts' will be removed from distribution in near future.
Oct 11 03:52:29 compute-0 network[224123]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 03:52:30 compute-0 ceph-mon[74273]: pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:30 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 03:52:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:52:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 03:52:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:52:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:52:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:52:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:52:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:52:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:52:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:52:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:52:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:52:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 03:52:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:52:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:52:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:52:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 03:52:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:52:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 03:52:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:52:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:52:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:52:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 03:52:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:32 compute-0 ceph-mon[74273]: pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:33 compute-0 sudo[224102]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:34 compute-0 ceph-mon[74273]: pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:34 compute-0 sudo[224395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmezhseqrpkajjpxtboeofglaqwpjhvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154754.12944-94-85512377434009/AnsiballZ_systemd.py'
Oct 11 03:52:34 compute-0 sudo[224395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:52:34 compute-0 python3.9[224397]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:52:34 compute-0 systemd[1]: Reloading.
Oct 11 03:52:34 compute-0 systemd-rc-local-generator[224429]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:52:34 compute-0 systemd-sysv-generator[224432]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:52:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:35 compute-0 sudo[224395]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:35 compute-0 python3.9[224583]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:52:36 compute-0 ceph-mon[74273]: pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:36 compute-0 sudo[224733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hufiuoodpdrjfhzysovusrkdjgiszmbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154756.171016-111-62295679558403/AnsiballZ_podman_container.py'
Oct 11 03:52:36 compute-0 sudo[224733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:36 compute-0 python3.9[224735]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None 
pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 11 03:52:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:37 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 03:52:37 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 03:52:38 compute-0 podman[224746]: 2025-10-11 03:52:38.204974424 +0000 UTC m=+1.288730978 image pull 5773abc4300b61c01f3353a0b9239f9a404bb272790b280574e4c56f72edaa72 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 11 03:52:38 compute-0 podman[224805]: 2025-10-11 03:52:38.404427475 +0000 UTC m=+0.072298409 container create 31ff5b910e53d8edcb4f6c23b2cfc4b90366dc0384ea43075faf615381804a4a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:52:38 compute-0 NetworkManager[44920]: <info>  [1760154758.4385] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/21)
Oct 11 03:52:38 compute-0 ceph-mon[74273]: pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:38 compute-0 podman[224805]: 2025-10-11 03:52:38.370297322 +0000 UTC m=+0.038168326 image pull 5773abc4300b61c01f3353a0b9239f9a404bb272790b280574e4c56f72edaa72 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 11 03:52:38 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct 11 03:52:38 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 11 03:52:38 compute-0 kernel: veth0: entered allmulticast mode
Oct 11 03:52:38 compute-0 kernel: veth0: entered promiscuous mode
Oct 11 03:52:38 compute-0 NetworkManager[44920]: <info>  [1760154758.4744] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/22)
Oct 11 03:52:38 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct 11 03:52:38 compute-0 kernel: podman0: port 1(veth0) entered forwarding state
Oct 11 03:52:38 compute-0 NetworkManager[44920]: <info>  [1760154758.4777] device (veth0): carrier: link connected
Oct 11 03:52:38 compute-0 NetworkManager[44920]: <info>  [1760154758.4805] device (podman0): carrier: link connected
Oct 11 03:52:38 compute-0 systemd-udevd[224832]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 03:52:38 compute-0 systemd-udevd[224834]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 03:52:38 compute-0 NetworkManager[44920]: <info>  [1760154758.5195] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 03:52:38 compute-0 NetworkManager[44920]: <info>  [1760154758.5210] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 11 03:52:38 compute-0 NetworkManager[44920]: <info>  [1760154758.5227] device (podman0): Activation: starting connection 'podman0' (f71604c4-6dc8-41c3-b06e-72a7c225bdc3)
Oct 11 03:52:38 compute-0 NetworkManager[44920]: <info>  [1760154758.5235] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 11 03:52:38 compute-0 NetworkManager[44920]: <info>  [1760154758.5239] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 11 03:52:38 compute-0 NetworkManager[44920]: <info>  [1760154758.5242] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 11 03:52:38 compute-0 NetworkManager[44920]: <info>  [1760154758.5249] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 11 03:52:38 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 11 03:52:38 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 11 03:52:38 compute-0 NetworkManager[44920]: <info>  [1760154758.5653] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 11 03:52:38 compute-0 NetworkManager[44920]: <info>  [1760154758.5658] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 11 03:52:38 compute-0 NetworkManager[44920]: <info>  [1760154758.5675] device (podman0): Activation: successful, device activated.
Oct 11 03:52:38 compute-0 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct 11 03:52:38 compute-0 systemd[1]: Started libpod-conmon-31ff5b910e53d8edcb4f6c23b2cfc4b90366dc0384ea43075faf615381804a4a.scope.
Oct 11 03:52:38 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:52:38 compute-0 podman[224805]: 2025-10-11 03:52:38.967062019 +0000 UTC m=+0.634933003 container init 31ff5b910e53d8edcb4f6c23b2cfc4b90366dc0384ea43075faf615381804a4a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 11 03:52:38 compute-0 podman[224805]: 2025-10-11 03:52:38.97937371 +0000 UTC m=+0.647244654 container start 31ff5b910e53d8edcb4f6c23b2cfc4b90366dc0384ea43075faf615381804a4a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 11 03:52:38 compute-0 podman[224805]: 2025-10-11 03:52:38.984390608 +0000 UTC m=+0.652261612 container attach 31ff5b910e53d8edcb4f6c23b2cfc4b90366dc0384ea43075faf615381804a4a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 11 03:52:38 compute-0 iscsid_config[224963]: iqn.1994-05.com.redhat:e727c2bd432c
Oct 11 03:52:38 compute-0 systemd[1]: libpod-31ff5b910e53d8edcb4f6c23b2cfc4b90366dc0384ea43075faf615381804a4a.scope: Deactivated successfully.
Oct 11 03:52:38 compute-0 conmon[224963]: conmon 31ff5b910e53d8edcb4f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-31ff5b910e53d8edcb4f6c23b2cfc4b90366dc0384ea43075faf615381804a4a.scope/container/memory.events
Oct 11 03:52:38 compute-0 podman[224805]: 2025-10-11 03:52:38.989838279 +0000 UTC m=+0.657709253 container died 31ff5b910e53d8edcb4f6c23b2cfc4b90366dc0384ea43075faf615381804a4a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 03:52:39 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 11 03:52:39 compute-0 kernel: veth0 (unregistering): left allmulticast mode
Oct 11 03:52:39 compute-0 kernel: veth0 (unregistering): left promiscuous mode
Oct 11 03:52:39 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 11 03:52:39 compute-0 NetworkManager[44920]: <info>  [1760154759.0626] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 03:52:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:39 compute-0 systemd[1]: run-netns-netns\x2d7ea511f8\x2df1a8\x2d06da\x2d6176\x2d862222a77174.mount: Deactivated successfully.
Oct 11 03:52:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-31ff5b910e53d8edcb4f6c23b2cfc4b90366dc0384ea43075faf615381804a4a-userdata-shm.mount: Deactivated successfully.
Oct 11 03:52:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d99dca546f726c00ad8a29c8a65f03145f09f2116b7eaae35a2c8338580909f-merged.mount: Deactivated successfully.
Oct 11 03:52:39 compute-0 podman[224805]: 2025-10-11 03:52:39.465698537 +0000 UTC m=+1.133569441 container remove 31ff5b910e53d8edcb4f6c23b2cfc4b90366dc0384ea43075faf615381804a4a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 11 03:52:39 compute-0 python3.9[224735]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid:current-podified /usr/sbin/iscsi-iname
Oct 11 03:52:39 compute-0 systemd[1]: libpod-conmon-31ff5b910e53d8edcb4f6c23b2cfc4b90366dc0384ea43075faf615381804a4a.scope: Deactivated successfully.
Oct 11 03:52:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:52:39 compute-0 python3.9[224735]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: 
                                             DEPRECATED command:
                                             It is recommended to use Quadlets for running containers and pods under systemd.
                                             
                                             Please refer to podman-systemd.unit(5) for details.
                                             Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct 11 03:52:39 compute-0 sudo[224733]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:40 compute-0 sudo[225201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjjqeoiyjdxiacxgdrwaoyjixjusqbhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154759.7685053-119-143480369916083/AnsiballZ_stat.py'
Oct 11 03:52:40 compute-0 sudo[225201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:40 compute-0 python3.9[225203]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:52:40 compute-0 sudo[225201]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:40 compute-0 ceph-mon[74273]: pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:40 compute-0 sudo[225324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vadwpeiqbexxzcqijgeprujxwbzsrtsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154759.7685053-119-143480369916083/AnsiballZ_copy.py'
Oct 11 03:52:40 compute-0 sudo[225324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:41 compute-0 python3.9[225326]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760154759.7685053-119-143480369916083/.source.iscsi _original_basename=.vr9k3odf follow=False checksum=013a273cabe0f710df7b24ab0f582938c5b8dc76 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:52:41 compute-0 sudo[225324]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:41 compute-0 sudo[225476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynnzvbgvabuohvufgexvgadnlwqovnlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154761.288532-134-80443831947292/AnsiballZ_file.py'
Oct 11 03:52:41 compute-0 sudo[225476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:41 compute-0 python3.9[225478]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:52:41 compute-0 sudo[225476]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:42 compute-0 ceph-mon[74273]: pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:42 compute-0 python3.9[225628]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:52:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:43 compute-0 sudo[225780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqsotxubnpsiasdfyxhedafkgyqwsbso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154762.9299889-151-242920632661072/AnsiballZ_lineinfile.py'
Oct 11 03:52:43 compute-0 sudo[225780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:43 compute-0 python3.9[225782]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:52:43 compute-0 sudo[225780]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:44 compute-0 sudo[225932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwzibljghbtfpzmgyrmdarpaxvexmhuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154763.9809418-160-229509075564050/AnsiballZ_file.py'
Oct 11 03:52:44 compute-0 sudo[225932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:44 compute-0 ceph-mon[74273]: pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:44 compute-0 python3.9[225934]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:52:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:52:44 compute-0 sudo[225932]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:45 compute-0 sudo[226084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfrbehwaawbnqpddyjiilinkplgbwkjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154764.7148552-168-202847789582192/AnsiballZ_stat.py'
Oct 11 03:52:45 compute-0 sudo[226084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:45 compute-0 python3.9[226086]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:52:45 compute-0 sudo[226084]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:45 compute-0 sudo[226162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtswcjgifizuuhazffixpnonszvozvsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154764.7148552-168-202847789582192/AnsiballZ_file.py'
Oct 11 03:52:45 compute-0 sudo[226162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:45 compute-0 python3.9[226164]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:52:45 compute-0 sudo[226162]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:46 compute-0 sudo[226314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idyrhrpzywcbmlhbyjdkhlvezioaufoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154765.952087-168-22985027199118/AnsiballZ_stat.py'
Oct 11 03:52:46 compute-0 sudo[226314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:46 compute-0 ceph-mon[74273]: pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:46 compute-0 python3.9[226316]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:52:46 compute-0 sudo[226314]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:46 compute-0 sudo[226392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-litgcheglvsaaiyntbqtblynmdhfyult ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154765.952087-168-22985027199118/AnsiballZ_file.py'
Oct 11 03:52:46 compute-0 sudo[226392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:47 compute-0 python3.9[226394]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:52:47 compute-0 sudo[226392]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:47 compute-0 sudo[226544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmwlabuyaafhyrlievrvgzfmfortslux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154767.2627158-191-275378837149381/AnsiballZ_file.py'
Oct 11 03:52:47 compute-0 sudo[226544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:47 compute-0 python3.9[226546]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:52:47 compute-0 sudo[226544]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:48 compute-0 sudo[226696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spcqeewvvexsylczejqevxlhucsrkoep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154768.08702-199-170048089302190/AnsiballZ_stat.py'
Oct 11 03:52:48 compute-0 sudo[226696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:48 compute-0 ceph-mon[74273]: pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:48 compute-0 python3.9[226698]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:52:48 compute-0 sudo[226696]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:49 compute-0 sudo[226774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsazgjvyzjlnhwscmuqnmrhadtzynbti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154768.08702-199-170048089302190/AnsiballZ_file.py'
Oct 11 03:52:49 compute-0 sudo[226774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:49 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 11 03:52:49 compute-0 python3.9[226776]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:52:49 compute-0 sudo[226774]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:52:49 compute-0 sudo[226926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heckixrdwksozxurrwrpuyyrcugvqblc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154769.423911-211-138391198416820/AnsiballZ_stat.py'
Oct 11 03:52:49 compute-0 sudo[226926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:49 compute-0 python3.9[226928]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:52:49 compute-0 sudo[226926]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:50 compute-0 sudo[227004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bypqcipaydamweahnbhbepngucvfauoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154769.423911-211-138391198416820/AnsiballZ_file.py'
Oct 11 03:52:50 compute-0 sudo[227004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:50 compute-0 python3.9[227006]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:52:50 compute-0 sudo[227004]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:50 compute-0 ceph-mon[74273]: pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:50 compute-0 podman[227031]: 2025-10-11 03:52:50.686235705 +0000 UTC m=+0.145311566 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct 11 03:52:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:52:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:52:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:52:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:52:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:52:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:52:50 compute-0 sudo[227181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odiuroitwkudloulrhhjoafebpxukykn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154770.5560362-223-117491582782978/AnsiballZ_systemd.py'
Oct 11 03:52:50 compute-0 sudo[227181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:51 compute-0 python3.9[227183]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:52:51 compute-0 systemd[1]: Reloading.
Oct 11 03:52:51 compute-0 systemd-rc-local-generator[227213]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:52:51 compute-0 systemd-sysv-generator[227216]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:52:51 compute-0 sudo[227181]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:52 compute-0 sudo[227372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgyfhfctspstxygdeygsludajxvtmhvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154771.9260027-231-235699064203732/AnsiballZ_stat.py'
Oct 11 03:52:52 compute-0 sudo[227372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:52 compute-0 python3.9[227374]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:52:52 compute-0 sudo[227372]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:52 compute-0 ceph-mon[74273]: pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:52 compute-0 sudo[227450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzkpvxqxyhexnxentfskzjhyzzxkhjol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154771.9260027-231-235699064203732/AnsiballZ_file.py'
Oct 11 03:52:52 compute-0 sudo[227450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:52 compute-0 python3.9[227452]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:52:52 compute-0 sudo[227450]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:53 compute-0 sudo[227602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjvurstdmypjkdweqvgiuycwtftqohfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154773.1722322-243-235913822553453/AnsiballZ_stat.py'
Oct 11 03:52:53 compute-0 sudo[227602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:53 compute-0 python3.9[227604]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:52:53 compute-0 sudo[227602]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:53 compute-0 sudo[227680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvhwipdlbhskuizwqjzairpscqcvvjay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154773.1722322-243-235913822553453/AnsiballZ_file.py'
Oct 11 03:52:53 compute-0 sudo[227680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:54 compute-0 python3.9[227682]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:52:54 compute-0 sudo[227680]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:54 compute-0 ceph-mon[74273]: pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:52:54 compute-0 sudo[227832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbbmpztchqngayhthjhfhazlnpmusjkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154774.4104698-255-92962947752687/AnsiballZ_systemd.py'
Oct 11 03:52:54 compute-0 sudo[227832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:55 compute-0 python3.9[227834]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:52:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:55 compute-0 systemd[1]: Reloading.
Oct 11 03:52:55 compute-0 systemd-rc-local-generator[227859]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:52:55 compute-0 systemd-sysv-generator[227862]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:52:55 compute-0 systemd[1]: Starting Create netns directory...
Oct 11 03:52:55 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 11 03:52:55 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 11 03:52:55 compute-0 systemd[1]: Finished Create netns directory.
Oct 11 03:52:55 compute-0 sudo[227832]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:56 compute-0 podman[227998]: 2025-10-11 03:52:56.159786397 +0000 UTC m=+0.051128144 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Oct 11 03:52:56 compute-0 sudo[228041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffrnyzvdhogbbcudajvlprajkomdtlyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154775.8314993-265-148010216913294/AnsiballZ_file.py'
Oct 11 03:52:56 compute-0 sudo[228041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:56 compute-0 python3.9[228045]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:52:56 compute-0 sudo[228041]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:56 compute-0 ceph-mon[74273]: pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:56 compute-0 sudo[228195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyomickhhlrlckoaiuwuouezbxxxwsxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154776.563683-273-60165922931599/AnsiballZ_stat.py'
Oct 11 03:52:56 compute-0 sudo[228195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:57 compute-0 python3.9[228197]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:52:57 compute-0 sudo[228195]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:57 compute-0 sudo[228318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndfowlpjjrmqjqplrmdhaffrkfvpfpbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154776.563683-273-60165922931599/AnsiballZ_copy.py'
Oct 11 03:52:57 compute-0 sudo[228318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:57 compute-0 python3.9[228320]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760154776.563683-273-60165922931599/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:52:57 compute-0 sudo[228318]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:58 compute-0 sudo[228470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utsieazrqcjromcyevnmmxmqzhvamgut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154778.11793-290-220908048914285/AnsiballZ_file.py'
Oct 11 03:52:58 compute-0 sudo[228470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:58 compute-0 python3.9[228472]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:52:58 compute-0 sudo[228470]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:58 compute-0 ceph-mon[74273]: pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:52:59 compute-0 sudo[228622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvwjdmvmwxvzoyxsfdgerhkzqxlhuaox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154778.7793543-298-107701511426856/AnsiballZ_stat.py'
Oct 11 03:52:59 compute-0 sudo[228622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:59 compute-0 python3.9[228624]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:52:59 compute-0 sudo[228622]: pam_unix(sudo:session): session closed for user root
Oct 11 03:52:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:52:59 compute-0 sudo[228745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugpkvzxguxqpcjvupdbfjimycrccjwaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154778.7793543-298-107701511426856/AnsiballZ_copy.py'
Oct 11 03:52:59 compute-0 sudo[228745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:52:59 compute-0 python3.9[228747]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760154778.7793543-298-107701511426856/.source.json _original_basename=.4uqaj5oq follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:52:59 compute-0 sudo[228745]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:00 compute-0 sudo[228897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdtwbkikbcjvgocwtkachfwopnctdtmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154780.177096-313-181195464703048/AnsiballZ_file.py'
Oct 11 03:53:00 compute-0 sudo[228897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:00 compute-0 ceph-mon[74273]: pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:00 compute-0 python3.9[228899]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:53:00 compute-0 sudo[228897]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:01 compute-0 sudo[229049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-immhgzsmwghesyiehjclmmlhrchwsrrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154780.9189389-321-197305622589788/AnsiballZ_stat.py'
Oct 11 03:53:01 compute-0 sudo[229049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:01 compute-0 sudo[229049]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:01 compute-0 sudo[229172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjheuxcmvjushhzgfuypkunqpetmhncx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154780.9189389-321-197305622589788/AnsiballZ_copy.py'
Oct 11 03:53:01 compute-0 sudo[229172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:02 compute-0 sudo[229172]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:02 compute-0 ceph-mon[74273]: pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:03 compute-0 sudo[229324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrcyvcqjnwixrededxzinznhqtjkoabf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154782.5134926-338-132528401003208/AnsiballZ_container_config_data.py'
Oct 11 03:53:03 compute-0 sudo[229324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:03 compute-0 python3.9[229326]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct 11 03:53:03 compute-0 sudo[229324]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:04 compute-0 sudo[229476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjftopwqjpiqiuczfgukhdmnxkswrksx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154783.5751128-347-97405182207933/AnsiballZ_container_config_hash.py'
Oct 11 03:53:04 compute-0 sudo[229476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:04 compute-0 python3.9[229478]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 11 03:53:04 compute-0 sudo[229476]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:53:04 compute-0 ceph-mon[74273]: pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:05 compute-0 sudo[229628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgpllggqqbovfijbwuirmhtqzjogzndz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154784.6093748-356-34252582827480/AnsiballZ_podman_container_info.py'
Oct 11 03:53:05 compute-0 sudo[229628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:05 compute-0 python3.9[229630]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 11 03:53:05 compute-0 sudo[229628]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:06 compute-0 ceph-mon[74273]: pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:06 compute-0 sudo[229807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzennsakdvblfjowrjsffwziljgkmheq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760154786.2072048-369-214258045516762/AnsiballZ_edpm_container_manage.py'
Oct 11 03:53:06 compute-0 sudo[229807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:06 compute-0 python3[229809]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 11 03:53:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:07 compute-0 podman[229840]: 2025-10-11 03:53:07.216692894 +0000 UTC m=+0.065432959 container create 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, container_name=iscsid, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:53:07 compute-0 podman[229840]: 2025-10-11 03:53:07.177088329 +0000 UTC m=+0.025828394 image pull 5773abc4300b61c01f3353a0b9239f9a404bb272790b280574e4c56f72edaa72 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 11 03:53:07 compute-0 python3[229809]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 11 03:53:07 compute-0 sudo[229807]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:07 compute-0 sudo[230028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daozinbvvpwnvxftqtrdtfplszkiwuep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154787.6173725-377-174274434728245/AnsiballZ_stat.py'
Oct 11 03:53:07 compute-0 sudo[230028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:08 compute-0 python3.9[230030]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:53:08 compute-0 sudo[230028]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:08 compute-0 ceph-mon[74273]: pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:08 compute-0 sudo[230182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxbkkwrydjcxmbijsrucjqbqkfwpwphh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154788.4371228-386-89025784165200/AnsiballZ_file.py'
Oct 11 03:53:08 compute-0 sudo[230182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:08 compute-0 python3.9[230184]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:53:09 compute-0 sudo[230182]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:09 compute-0 sudo[230258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orsbpenrtjncrvnrohvhfavvpkhocmpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154788.4371228-386-89025784165200/AnsiballZ_stat.py'
Oct 11 03:53:09 compute-0 sudo[230258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:09 compute-0 python3.9[230260]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:53:09 compute-0 sudo[230258]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:53:10 compute-0 sudo[230409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abvwubrbsviyaitfknjuaunhcmhxishv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154789.588573-386-246335118351774/AnsiballZ_copy.py'
Oct 11 03:53:10 compute-0 sudo[230409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:10 compute-0 python3.9[230411]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760154789.588573-386-246335118351774/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:53:10 compute-0 sudo[230409]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:10 compute-0 sudo[230485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysniadhqkbmvrkeneqxmbjplrmmwnczk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154789.588573-386-246335118351774/AnsiballZ_systemd.py'
Oct 11 03:53:10 compute-0 sudo[230485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:10 compute-0 ceph-mon[74273]: pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:10 compute-0 python3.9[230487]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 03:53:10 compute-0 systemd[1]: Reloading.
Oct 11 03:53:11 compute-0 systemd-sysv-generator[230513]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:53:11 compute-0 systemd-rc-local-generator[230510]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:53:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:11 compute-0 sudo[230485]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:11 compute-0 sudo[230596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozjpeeeaniopwbaeunybfisobqurchmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154789.588573-386-246335118351774/AnsiballZ_systemd.py'
Oct 11 03:53:11 compute-0 sudo[230596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:11 compute-0 python3.9[230598]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:53:11 compute-0 systemd[1]: Reloading.
Oct 11 03:53:12 compute-0 systemd-rc-local-generator[230630]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:53:12 compute-0 systemd-sysv-generator[230634]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:53:12 compute-0 systemd[1]: Starting iscsid container...
Oct 11 03:53:12 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afae34c112c2561dc7d543abb74379659411f7812b8b91760a9f3114f454fd7b/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct 11 03:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afae34c112c2561dc7d543abb74379659411f7812b8b91760a9f3114f454fd7b/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 11 03:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afae34c112c2561dc7d543abb74379659411f7812b8b91760a9f3114f454fd7b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 11 03:53:12 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07.
Oct 11 03:53:12 compute-0 podman[230639]: 2025-10-11 03:53:12.56696322 +0000 UTC m=+0.181356901 container init 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid)
Oct 11 03:53:12 compute-0 iscsid[230654]: + sudo -E kolla_set_configs
Oct 11 03:53:12 compute-0 sudo[230660]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 11 03:53:12 compute-0 podman[230639]: 2025-10-11 03:53:12.605619352 +0000 UTC m=+0.220012963 container start 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 11 03:53:12 compute-0 podman[230639]: iscsid
Oct 11 03:53:12 compute-0 systemd[1]: Created slice User Slice of UID 0.
Oct 11 03:53:12 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 11 03:53:12 compute-0 systemd[1]: Started iscsid container.
Oct 11 03:53:12 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 11 03:53:12 compute-0 systemd[1]: Starting User Manager for UID 0...
Oct 11 03:53:12 compute-0 ceph-mon[74273]: pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:12 compute-0 sudo[230596]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:12 compute-0 systemd[230669]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 11 03:53:12 compute-0 podman[230661]: 2025-10-11 03:53:12.705847842 +0000 UTC m=+0.079033003 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:53:12 compute-0 systemd[1]: 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07-24abc812eddf9952.service: Main process exited, code=exited, status=1/FAILURE
Oct 11 03:53:12 compute-0 systemd[1]: 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07-24abc812eddf9952.service: Failed with result 'exit-code'.
Oct 11 03:53:12 compute-0 systemd[230669]: Queued start job for default target Main User Target.
Oct 11 03:53:12 compute-0 systemd[230669]: Created slice User Application Slice.
Oct 11 03:53:12 compute-0 systemd[230669]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 11 03:53:12 compute-0 systemd[230669]: Started Daily Cleanup of User's Temporary Directories.
Oct 11 03:53:12 compute-0 systemd[230669]: Reached target Paths.
Oct 11 03:53:12 compute-0 systemd[230669]: Reached target Timers.
Oct 11 03:53:12 compute-0 systemd[230669]: Starting D-Bus User Message Bus Socket...
Oct 11 03:53:12 compute-0 systemd[230669]: Starting Create User's Volatile Files and Directories...
Oct 11 03:53:12 compute-0 systemd[230669]: Finished Create User's Volatile Files and Directories.
Oct 11 03:53:12 compute-0 systemd[230669]: Listening on D-Bus User Message Bus Socket.
Oct 11 03:53:12 compute-0 systemd[230669]: Reached target Sockets.
Oct 11 03:53:12 compute-0 systemd[230669]: Reached target Basic System.
Oct 11 03:53:12 compute-0 systemd[230669]: Reached target Main User Target.
Oct 11 03:53:12 compute-0 systemd[230669]: Startup finished in 145ms.
Oct 11 03:53:12 compute-0 systemd[1]: Started User Manager for UID 0.
Oct 11 03:53:12 compute-0 systemd[1]: Started Session c3 of User root.
Oct 11 03:53:12 compute-0 sudo[230660]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 11 03:53:12 compute-0 iscsid[230654]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 11 03:53:12 compute-0 iscsid[230654]: INFO:__main__:Validating config file
Oct 11 03:53:12 compute-0 iscsid[230654]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 11 03:53:12 compute-0 iscsid[230654]: INFO:__main__:Writing out command to execute
Oct 11 03:53:12 compute-0 sudo[230660]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:12 compute-0 systemd[1]: session-c3.scope: Deactivated successfully.
Oct 11 03:53:12 compute-0 iscsid[230654]: ++ cat /run_command
Oct 11 03:53:12 compute-0 iscsid[230654]: + CMD='/usr/sbin/iscsid -f'
Oct 11 03:53:12 compute-0 iscsid[230654]: + ARGS=
Oct 11 03:53:12 compute-0 iscsid[230654]: + sudo kolla_copy_cacerts
Oct 11 03:53:12 compute-0 sudo[230780]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 11 03:53:12 compute-0 systemd[1]: Started Session c4 of User root.
Oct 11 03:53:12 compute-0 sudo[230780]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 11 03:53:12 compute-0 sudo[230780]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:12 compute-0 systemd[1]: session-c4.scope: Deactivated successfully.
Oct 11 03:53:12 compute-0 iscsid[230654]: + [[ ! -n '' ]]
Oct 11 03:53:12 compute-0 iscsid[230654]: + . kolla_extend_start
Oct 11 03:53:12 compute-0 iscsid[230654]: Running command: '/usr/sbin/iscsid -f'
Oct 11 03:53:12 compute-0 iscsid[230654]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct 11 03:53:12 compute-0 iscsid[230654]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct 11 03:53:12 compute-0 iscsid[230654]: + umask 0022
Oct 11 03:53:12 compute-0 iscsid[230654]: + exec /usr/sbin/iscsid -f
Oct 11 03:53:12 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Oct 11 03:53:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:13 compute-0 python3.9[230857]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:53:13 compute-0 sudo[231007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eghkthicsdiykbpulsbpyxbogdldmbsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154793.5468137-423-232848788001319/AnsiballZ_file.py'
Oct 11 03:53:13 compute-0 sudo[231007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:14 compute-0 python3.9[231009]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:53:14 compute-0 sudo[231007]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:53:14 compute-0 ceph-mon[74273]: pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:14 compute-0 sudo[231159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exybftevrkhrijnxmkposrqhgxzfhbdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154794.680279-434-1386934093263/AnsiballZ_service_facts.py'
Oct 11 03:53:14 compute-0 sudo[231159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:15 compute-0 python3.9[231161]: ansible-ansible.builtin.service_facts Invoked
Oct 11 03:53:15 compute-0 network[231178]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 03:53:15 compute-0 network[231179]: 'network-scripts' will be removed from distribution in near future.
Oct 11 03:53:15 compute-0 network[231180]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 03:53:16 compute-0 ceph-mon[74273]: pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:17 compute-0 unix_chkpwd[231216]: password check failed for user (root)
Oct 11 03:53:17 compute-0 sshd-session[231189]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Oct 11 03:53:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:18 compute-0 ceph-mon[74273]: pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:19 compute-0 sudo[231159]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:19 compute-0 sshd-session[231189]: Failed password for root from 80.94.93.233 port 62708 ssh2
Oct 11 03:53:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:53:19 compute-0 sudo[231456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uezfhginyrxudujmxvfghvnahrfpjxdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154799.4516137-444-257987848340511/AnsiballZ_file.py'
Oct 11 03:53:19 compute-0 sudo[231456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:20 compute-0 python3.9[231458]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 11 03:53:20 compute-0 sudo[231456]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:20 compute-0 ceph-mon[74273]: pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_03:53:20
Oct 11 03:53:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 03:53:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 03:53:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'vms', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'volumes']
Oct 11 03:53:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 03:53:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:53:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:53:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:53:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:53:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:53:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:53:20 compute-0 sudo[231608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlavfzioiwnmgjkccldxbswzcsvnwfyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154800.28388-452-67904768985025/AnsiballZ_modprobe.py'
Oct 11 03:53:20 compute-0 sudo[231608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:20 compute-0 unix_chkpwd[231622]: password check failed for user (root)
Oct 11 03:53:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 03:53:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:53:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 03:53:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:53:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:53:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:53:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:53:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:53:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:53:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:53:20 compute-0 podman[231610]: 2025-10-11 03:53:20.931962507 +0000 UTC m=+0.146665129 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 11 03:53:21 compute-0 python3.9[231611]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct 11 03:53:21 compute-0 sudo[231608]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:21 compute-0 sudo[231789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvnhapmwvxanqolpeaaukgjnicqdvumr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154801.2402298-460-11697047840055/AnsiballZ_stat.py'
Oct 11 03:53:21 compute-0 sudo[231789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:21 compute-0 python3.9[231791]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:53:21 compute-0 sudo[231789]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:22 compute-0 sshd-session[231189]: Failed password for root from 80.94.93.233 port 62708 ssh2
Oct 11 03:53:22 compute-0 sudo[231912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjmbkjbbwldaojmelardedpowaqlddap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154801.2402298-460-11697047840055/AnsiballZ_copy.py'
Oct 11 03:53:22 compute-0 sudo[231912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:22 compute-0 sudo[231913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:53:22 compute-0 sudo[231913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:53:22 compute-0 sudo[231913]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:22 compute-0 sudo[231940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:53:22 compute-0 sudo[231940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:53:22 compute-0 sudo[231940]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:22 compute-0 python3.9[231921]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760154801.2402298-460-11697047840055/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:53:22 compute-0 sudo[231912]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:22 compute-0 sudo[231965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:53:22 compute-0 sudo[231965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:53:22 compute-0 sudo[231965]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:22 compute-0 sudo[231994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 03:53:22 compute-0 sudo[231994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:53:22 compute-0 ceph-mon[74273]: pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:22 compute-0 unix_chkpwd[232080]: password check failed for user (root)
Oct 11 03:53:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:53:22.939 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 03:53:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:53:22.940 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 03:53:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:53:22.941 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 03:53:23 compute-0 systemd[1]: Stopping User Manager for UID 0...
Oct 11 03:53:23 compute-0 systemd[230669]: Activating special unit Exit the Session...
Oct 11 03:53:23 compute-0 systemd[230669]: Stopped target Main User Target.
Oct 11 03:53:23 compute-0 systemd[230669]: Stopped target Basic System.
Oct 11 03:53:23 compute-0 systemd[230669]: Stopped target Paths.
Oct 11 03:53:23 compute-0 systemd[230669]: Stopped target Sockets.
Oct 11 03:53:23 compute-0 systemd[230669]: Stopped target Timers.
Oct 11 03:53:23 compute-0 systemd[230669]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 11 03:53:23 compute-0 systemd[230669]: Closed D-Bus User Message Bus Socket.
Oct 11 03:53:23 compute-0 systemd[230669]: Stopped Create User's Volatile Files and Directories.
Oct 11 03:53:23 compute-0 systemd[230669]: Removed slice User Application Slice.
Oct 11 03:53:23 compute-0 systemd[230669]: Reached target Shutdown.
Oct 11 03:53:23 compute-0 systemd[230669]: Finished Exit the Session.
Oct 11 03:53:23 compute-0 systemd[230669]: Reached target Exit the Session.
Oct 11 03:53:23 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Oct 11 03:53:23 compute-0 systemd[1]: Stopped User Manager for UID 0.
Oct 11 03:53:23 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 11 03:53:23 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 11 03:53:23 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 11 03:53:23 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 11 03:53:23 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Oct 11 03:53:23 compute-0 sudo[232185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aojmucltrzyamdtplaqvbtbghmxdpnxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154802.7405214-476-206659958961013/AnsiballZ_lineinfile.py'
Oct 11 03:53:23 compute-0 sudo[232185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:23 compute-0 sudo[231994]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:53:23 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:53:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 03:53:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:53:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 03:53:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:53:23 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 5e257cc3-696e-44b2-b211-bca9accc80f8 does not exist
Oct 11 03:53:23 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 077b62ca-ad08-451d-9307-11dfe6e25bb6 does not exist
Oct 11 03:53:23 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev d986a913-6718-4540-b338-1d40a13c50b7 does not exist
Oct 11 03:53:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 03:53:23 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:53:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 03:53:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:53:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:53:23 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:53:23 compute-0 python3.9[232188]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:53:23 compute-0 sudo[232185]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:23 compute-0 sudo[232201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:53:23 compute-0 sudo[232201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:53:23 compute-0 sudo[232201]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:23 compute-0 sudo[232226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:53:23 compute-0 sudo[232226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:53:23 compute-0 sudo[232226]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:23 compute-0 sudo[232275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:53:23 compute-0 sudo[232275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:53:23 compute-0 sudo[232275]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:23 compute-0 sudo[232323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 03:53:23 compute-0 sudo[232323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:53:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:53:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:53:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:53:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:53:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:53:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:53:23 compute-0 sudo[232476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-korinjuelzfcpznhmomrvmkvwvdfncgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154803.5160866-484-54428437485629/AnsiballZ_systemd.py'
Oct 11 03:53:23 compute-0 sudo[232476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:24 compute-0 podman[232493]: 2025-10-11 03:53:24.072459286 +0000 UTC m=+0.074780655 container create 87a43d4d03f836334a7c55729f17a4d0938371d16f1514d9a390c127657bd0ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_satoshi, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:53:24 compute-0 systemd[1]: Started libpod-conmon-87a43d4d03f836334a7c55729f17a4d0938371d16f1514d9a390c127657bd0ba.scope.
Oct 11 03:53:24 compute-0 podman[232493]: 2025-10-11 03:53:24.034560125 +0000 UTC m=+0.036881544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:53:24 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:53:24 compute-0 podman[232493]: 2025-10-11 03:53:24.184259986 +0000 UTC m=+0.186581375 container init 87a43d4d03f836334a7c55729f17a4d0938371d16f1514d9a390c127657bd0ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_satoshi, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 03:53:24 compute-0 podman[232493]: 2025-10-11 03:53:24.196356442 +0000 UTC m=+0.198677761 container start 87a43d4d03f836334a7c55729f17a4d0938371d16f1514d9a390c127657bd0ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_satoshi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:53:24 compute-0 podman[232493]: 2025-10-11 03:53:24.199865439 +0000 UTC m=+0.202186848 container attach 87a43d4d03f836334a7c55729f17a4d0938371d16f1514d9a390c127657bd0ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_satoshi, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:53:24 compute-0 silly_satoshi[232509]: 167 167
Oct 11 03:53:24 compute-0 systemd[1]: libpod-87a43d4d03f836334a7c55729f17a4d0938371d16f1514d9a390c127657bd0ba.scope: Deactivated successfully.
Oct 11 03:53:24 compute-0 podman[232493]: 2025-10-11 03:53:24.205069004 +0000 UTC m=+0.207390363 container died 87a43d4d03f836334a7c55729f17a4d0938371d16f1514d9a390c127657bd0ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_satoshi, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 03:53:24 compute-0 python3.9[232488]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 03:53:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-e44946ff513389b3929660b0186defe65c04be3660634a32b1135b1565a1d4f8-merged.mount: Deactivated successfully.
Oct 11 03:53:24 compute-0 podman[232493]: 2025-10-11 03:53:24.269413808 +0000 UTC m=+0.271735137 container remove 87a43d4d03f836334a7c55729f17a4d0938371d16f1514d9a390c127657bd0ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 03:53:24 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 11 03:53:24 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct 11 03:53:24 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct 11 03:53:24 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 11 03:53:24 compute-0 systemd[1]: libpod-conmon-87a43d4d03f836334a7c55729f17a4d0938371d16f1514d9a390c127657bd0ba.scope: Deactivated successfully.
Oct 11 03:53:24 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 11 03:53:24 compute-0 sudo[232476]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:24 compute-0 podman[232561]: 2025-10-11 03:53:24.473388405 +0000 UTC m=+0.038915210 container create b9e2ee9e0be5e8f8c9f02d3e41296e31ca76c43ee16e266fc508cd38e0256ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_blackwell, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 03:53:24 compute-0 systemd[1]: Started libpod-conmon-b9e2ee9e0be5e8f8c9f02d3e41296e31ca76c43ee16e266fc508cd38e0256ee3.scope.
Oct 11 03:53:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:53:24 compute-0 podman[232561]: 2025-10-11 03:53:24.458092071 +0000 UTC m=+0.023618906 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:53:24 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:53:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44044af6fedb75fb9c8145c41e62178973714dbd81a1973ea4679781471cf46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:53:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44044af6fedb75fb9c8145c41e62178973714dbd81a1973ea4679781471cf46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:53:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44044af6fedb75fb9c8145c41e62178973714dbd81a1973ea4679781471cf46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:53:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44044af6fedb75fb9c8145c41e62178973714dbd81a1973ea4679781471cf46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:53:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44044af6fedb75fb9c8145c41e62178973714dbd81a1973ea4679781471cf46/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:53:24 compute-0 podman[232561]: 2025-10-11 03:53:24.577706668 +0000 UTC m=+0.143233563 container init b9e2ee9e0be5e8f8c9f02d3e41296e31ca76c43ee16e266fc508cd38e0256ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_blackwell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 03:53:24 compute-0 podman[232561]: 2025-10-11 03:53:24.589237318 +0000 UTC m=+0.154764123 container start b9e2ee9e0be5e8f8c9f02d3e41296e31ca76c43ee16e266fc508cd38e0256ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Oct 11 03:53:24 compute-0 podman[232561]: 2025-10-11 03:53:24.592483658 +0000 UTC m=+0.158010553 container attach b9e2ee9e0be5e8f8c9f02d3e41296e31ca76c43ee16e266fc508cd38e0256ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_blackwell, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:53:24 compute-0 sshd-session[231189]: Failed password for root from 80.94.93.233 port 62708 ssh2
Oct 11 03:53:24 compute-0 ceph-mon[74273]: pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:24 compute-0 sudo[232708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjuasncstoyafxxsdwvqzhoudzmimdur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154804.541056-492-181190920855959/AnsiballZ_file.py'
Oct 11 03:53:24 compute-0 sudo[232708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:25 compute-0 python3.9[232710]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:53:25 compute-0 sudo[232708]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:25 compute-0 sudo[232880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkarojkwgimydpimzjtldgvdhipefxht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154805.3624067-501-157823248820918/AnsiballZ_stat.py'
Oct 11 03:53:25 compute-0 sudo[232880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:25 compute-0 amazing_blackwell[232596]: --> passed data devices: 0 physical, 3 LVM
Oct 11 03:53:25 compute-0 amazing_blackwell[232596]: --> relative data size: 1.0
Oct 11 03:53:25 compute-0 amazing_blackwell[232596]: --> All data devices are unavailable
Oct 11 03:53:25 compute-0 systemd[1]: libpod-b9e2ee9e0be5e8f8c9f02d3e41296e31ca76c43ee16e266fc508cd38e0256ee3.scope: Deactivated successfully.
Oct 11 03:53:25 compute-0 systemd[1]: libpod-b9e2ee9e0be5e8f8c9f02d3e41296e31ca76c43ee16e266fc508cd38e0256ee3.scope: Consumed 1.108s CPU time.
Oct 11 03:53:25 compute-0 podman[232561]: 2025-10-11 03:53:25.789477945 +0000 UTC m=+1.355004760 container died b9e2ee9e0be5e8f8c9f02d3e41296e31ca76c43ee16e266fc508cd38e0256ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 03:53:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c44044af6fedb75fb9c8145c41e62178973714dbd81a1973ea4679781471cf46-merged.mount: Deactivated successfully.
Oct 11 03:53:25 compute-0 podman[232561]: 2025-10-11 03:53:25.875072029 +0000 UTC m=+1.440598864 container remove b9e2ee9e0be5e8f8c9f02d3e41296e31ca76c43ee16e266fc508cd38e0256ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_blackwell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:53:25 compute-0 python3.9[232883]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:53:25 compute-0 systemd[1]: libpod-conmon-b9e2ee9e0be5e8f8c9f02d3e41296e31ca76c43ee16e266fc508cd38e0256ee3.scope: Deactivated successfully.
Oct 11 03:53:25 compute-0 sudo[232880]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:25 compute-0 sudo[232323]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:25 compute-0 sudo[232903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:53:25 compute-0 sudo[232903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:53:25 compute-0 sudo[232903]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:26 compute-0 sudo[232949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:53:26 compute-0 sudo[232949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:53:26 compute-0 sudo[232949]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:26 compute-0 sudo[232987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:53:26 compute-0 sudo[232987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:53:26 compute-0 sudo[232987]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:26 compute-0 sudo[233043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 03:53:26 compute-0 sudo[233043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:53:26 compute-0 podman[233076]: 2025-10-11 03:53:26.296872188 +0000 UTC m=+0.093696240 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 03:53:26 compute-0 sudo[233179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sohlwkjokhvudculwsazysspjzdgfhbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154806.0813448-510-67517906296557/AnsiballZ_stat.py'
Oct 11 03:53:26 compute-0 sudo[233179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:26 compute-0 sshd-session[231189]: Received disconnect from 80.94.93.233 port 62708:11:  [preauth]
Oct 11 03:53:26 compute-0 sshd-session[231189]: Disconnected from authenticating user root 80.94.93.233 port 62708 [preauth]
Oct 11 03:53:26 compute-0 sshd-session[231189]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Oct 11 03:53:26 compute-0 python3.9[233186]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:53:26 compute-0 sudo[233179]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:26 compute-0 podman[233208]: 2025-10-11 03:53:26.664082662 +0000 UTC m=+0.070328802 container create 5f197c285d1b83462ea6378542a71736c4616fcad8e7834f6c17daaeafd28dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 03:53:26 compute-0 systemd[1]: Started libpod-conmon-5f197c285d1b83462ea6378542a71736c4616fcad8e7834f6c17daaeafd28dc8.scope.
Oct 11 03:53:26 compute-0 podman[233208]: 2025-10-11 03:53:26.633629137 +0000 UTC m=+0.039875347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:53:26 compute-0 ceph-mon[74273]: pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:26 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:53:26 compute-0 podman[233208]: 2025-10-11 03:53:26.767636004 +0000 UTC m=+0.173882194 container init 5f197c285d1b83462ea6378542a71736c4616fcad8e7834f6c17daaeafd28dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_maxwell, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 03:53:26 compute-0 podman[233208]: 2025-10-11 03:53:26.775766069 +0000 UTC m=+0.182012219 container start 5f197c285d1b83462ea6378542a71736c4616fcad8e7834f6c17daaeafd28dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_maxwell, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 03:53:26 compute-0 podman[233208]: 2025-10-11 03:53:26.780959994 +0000 UTC m=+0.187206154 container attach 5f197c285d1b83462ea6378542a71736c4616fcad8e7834f6c17daaeafd28dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_maxwell, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:53:26 compute-0 reverent_maxwell[233250]: 167 167
Oct 11 03:53:26 compute-0 systemd[1]: libpod-5f197c285d1b83462ea6378542a71736c4616fcad8e7834f6c17daaeafd28dc8.scope: Deactivated successfully.
Oct 11 03:53:26 compute-0 podman[233208]: 2025-10-11 03:53:26.785362876 +0000 UTC m=+0.191609026 container died 5f197c285d1b83462ea6378542a71736c4616fcad8e7834f6c17daaeafd28dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:53:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-287ca377fbc2758f0939daf73cd2390cc1f244aceb64370fead3cf5136b55872-merged.mount: Deactivated successfully.
Oct 11 03:53:26 compute-0 podman[233208]: 2025-10-11 03:53:26.842043598 +0000 UTC m=+0.248289718 container remove 5f197c285d1b83462ea6378542a71736c4616fcad8e7834f6c17daaeafd28dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:53:26 compute-0 systemd[1]: libpod-conmon-5f197c285d1b83462ea6378542a71736c4616fcad8e7834f6c17daaeafd28dc8.scope: Deactivated successfully.
Oct 11 03:53:27 compute-0 podman[233349]: 2025-10-11 03:53:27.059839248 +0000 UTC m=+0.051456638 container create cdc972a99418ddb78516364dfcc5d3d9ed2b27c890bca4df58eee39830b67343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:53:27 compute-0 systemd[1]: Started libpod-conmon-cdc972a99418ddb78516364dfcc5d3d9ed2b27c890bca4df58eee39830b67343.scope.
Oct 11 03:53:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:27 compute-0 podman[233349]: 2025-10-11 03:53:27.036576193 +0000 UTC m=+0.028193653 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:53:27 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:53:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d4a95b78dd3b16906adeeaa2483b9a3fa724963a2abb75123c2dfd6ea6aa223/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:53:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d4a95b78dd3b16906adeeaa2483b9a3fa724963a2abb75123c2dfd6ea6aa223/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:53:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d4a95b78dd3b16906adeeaa2483b9a3fa724963a2abb75123c2dfd6ea6aa223/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:53:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d4a95b78dd3b16906adeeaa2483b9a3fa724963a2abb75123c2dfd6ea6aa223/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:53:27 compute-0 podman[233349]: 2025-10-11 03:53:27.164073479 +0000 UTC m=+0.155690879 container init cdc972a99418ddb78516364dfcc5d3d9ed2b27c890bca4df58eee39830b67343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 03:53:27 compute-0 podman[233349]: 2025-10-11 03:53:27.176168474 +0000 UTC m=+0.167785854 container start cdc972a99418ddb78516364dfcc5d3d9ed2b27c890bca4df58eee39830b67343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 03:53:27 compute-0 podman[233349]: 2025-10-11 03:53:27.179354793 +0000 UTC m=+0.170972173 container attach cdc972a99418ddb78516364dfcc5d3d9ed2b27c890bca4df58eee39830b67343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 03:53:27 compute-0 sudo[233422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iamcsypkynhjxvctepxyrpyqvkecejqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154806.8562353-518-187490821188960/AnsiballZ_stat.py'
Oct 11 03:53:27 compute-0 sudo[233422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:27 compute-0 python3.9[233425]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:53:27 compute-0 sudo[233422]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:27 compute-0 unix_chkpwd[233426]: password check failed for user (root)
Oct 11 03:53:27 compute-0 sshd-session[233238]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Oct 11 03:53:27 compute-0 sudo[233551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxziovjcmjmttrtmxguipgragyuyjofg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154806.8562353-518-187490821188960/AnsiballZ_copy.py'
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]: {
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:     "0": [
Oct 11 03:53:27 compute-0 sudo[233551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:         {
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "devices": [
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "/dev/loop3"
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             ],
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "lv_name": "ceph_lv0",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "lv_size": "21470642176",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "name": "ceph_lv0",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "tags": {
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.cluster_name": "ceph",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.crush_device_class": "",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.encrypted": "0",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.osd_id": "0",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.type": "block",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.vdo": "0"
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             },
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "type": "block",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "vg_name": "ceph_vg0"
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:         }
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:     ],
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:     "1": [
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:         {
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "devices": [
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "/dev/loop4"
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             ],
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "lv_name": "ceph_lv1",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "lv_size": "21470642176",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "name": "ceph_lv1",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "tags": {
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.cluster_name": "ceph",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.crush_device_class": "",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.encrypted": "0",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.osd_id": "1",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.type": "block",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.vdo": "0"
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             },
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "type": "block",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "vg_name": "ceph_vg1"
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:         }
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:     ],
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:     "2": [
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:         {
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "devices": [
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "/dev/loop5"
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             ],
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "lv_name": "ceph_lv2",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "lv_size": "21470642176",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "name": "ceph_lv2",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "tags": {
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.cluster_name": "ceph",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.crush_device_class": "",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.encrypted": "0",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.osd_id": "2",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.type": "block",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:                 "ceph.vdo": "0"
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             },
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "type": "block",
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:             "vg_name": "ceph_vg2"
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:         }
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]:     ]
Oct 11 03:53:27 compute-0 sweet_dhawan[233392]: }
Oct 11 03:53:27 compute-0 systemd[1]: libpod-cdc972a99418ddb78516364dfcc5d3d9ed2b27c890bca4df58eee39830b67343.scope: Deactivated successfully.
Oct 11 03:53:27 compute-0 podman[233349]: 2025-10-11 03:53:27.957262218 +0000 UTC m=+0.948879638 container died cdc972a99418ddb78516364dfcc5d3d9ed2b27c890bca4df58eee39830b67343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 03:53:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d4a95b78dd3b16906adeeaa2483b9a3fa724963a2abb75123c2dfd6ea6aa223-merged.mount: Deactivated successfully.
Oct 11 03:53:28 compute-0 podman[233349]: 2025-10-11 03:53:28.011136672 +0000 UTC m=+1.002754052 container remove cdc972a99418ddb78516364dfcc5d3d9ed2b27c890bca4df58eee39830b67343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 03:53:28 compute-0 systemd[1]: libpod-conmon-cdc972a99418ddb78516364dfcc5d3d9ed2b27c890bca4df58eee39830b67343.scope: Deactivated successfully.
Oct 11 03:53:28 compute-0 sudo[233043]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:28 compute-0 sudo[233568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:53:28 compute-0 sudo[233568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:53:28 compute-0 sudo[233568]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:28 compute-0 python3.9[233553]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760154806.8562353-518-187490821188960/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:53:28 compute-0 sudo[233551]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:28 compute-0 sudo[233593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:53:28 compute-0 sudo[233593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:53:28 compute-0 sudo[233593]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:28 compute-0 sudo[233636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:53:28 compute-0 sudo[233636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:53:28 compute-0 sudo[233636]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:28 compute-0 sudo[233668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 03:53:28 compute-0 sudo[233668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:53:28 compute-0 podman[233805]: 2025-10-11 03:53:28.661496328 +0000 UTC m=+0.036113712 container create 161f16e9b82d44507ed4cd5640530b32fad5d8bf22cc33f16135634dbb8948cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_galileo, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 03:53:28 compute-0 systemd[1]: Started libpod-conmon-161f16e9b82d44507ed4cd5640530b32fad5d8bf22cc33f16135634dbb8948cc.scope.
Oct 11 03:53:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:53:28 compute-0 podman[233805]: 2025-10-11 03:53:28.646993366 +0000 UTC m=+0.021610770 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:53:28 compute-0 ceph-mon[74273]: pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:28 compute-0 podman[233805]: 2025-10-11 03:53:28.754881868 +0000 UTC m=+0.129499272 container init 161f16e9b82d44507ed4cd5640530b32fad5d8bf22cc33f16135634dbb8948cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_galileo, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:53:28 compute-0 podman[233805]: 2025-10-11 03:53:28.800476873 +0000 UTC m=+0.175094297 container start 161f16e9b82d44507ed4cd5640530b32fad5d8bf22cc33f16135634dbb8948cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 03:53:28 compute-0 clever_galileo[233849]: 167 167
Oct 11 03:53:28 compute-0 podman[233805]: 2025-10-11 03:53:28.806181461 +0000 UTC m=+0.180798865 container attach 161f16e9b82d44507ed4cd5640530b32fad5d8bf22cc33f16135634dbb8948cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_galileo, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 03:53:28 compute-0 systemd[1]: libpod-161f16e9b82d44507ed4cd5640530b32fad5d8bf22cc33f16135634dbb8948cc.scope: Deactivated successfully.
Oct 11 03:53:28 compute-0 podman[233805]: 2025-10-11 03:53:28.808327061 +0000 UTC m=+0.182944515 container died 161f16e9b82d44507ed4cd5640530b32fad5d8bf22cc33f16135634dbb8948cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_galileo, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 03:53:28 compute-0 sudo[233878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsjrbhuhsdkbjufocdmuywqycsujpyzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154808.318724-533-24055795301395/AnsiballZ_command.py'
Oct 11 03:53:28 compute-0 sudo[233878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-e97503e71c1e60ceaa2fab27e21f33a268de3878a2f1fc1ee934bd34ed9e56a6-merged.mount: Deactivated successfully.
Oct 11 03:53:28 compute-0 podman[233805]: 2025-10-11 03:53:28.863216293 +0000 UTC m=+0.237833717 container remove 161f16e9b82d44507ed4cd5640530b32fad5d8bf22cc33f16135634dbb8948cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 03:53:28 compute-0 systemd[1]: libpod-conmon-161f16e9b82d44507ed4cd5640530b32fad5d8bf22cc33f16135634dbb8948cc.scope: Deactivated successfully.
Oct 11 03:53:29 compute-0 podman[233901]: 2025-10-11 03:53:29.06322062 +0000 UTC m=+0.045891674 container create c64c011c3b30a0d05e70ac6f0bbfaf63c83ae8788e183df3f6a5eaa06ba2716e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jepsen, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 03:53:29 compute-0 python3.9[233888]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:53:29 compute-0 sshd-session[233238]: Failed password for root from 80.94.93.233 port 58660 ssh2
Oct 11 03:53:29 compute-0 sudo[233878]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:29 compute-0 systemd[1]: Started libpod-conmon-c64c011c3b30a0d05e70ac6f0bbfaf63c83ae8788e183df3f6a5eaa06ba2716e.scope.
Oct 11 03:53:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:29 compute-0 podman[233901]: 2025-10-11 03:53:29.04013641 +0000 UTC m=+0.022807554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:53:29 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:53:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d84e649e5feee19cccc89a3cd0b002568b305fb27301dfef911b08e24459c4db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:53:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d84e649e5feee19cccc89a3cd0b002568b305fb27301dfef911b08e24459c4db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:53:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d84e649e5feee19cccc89a3cd0b002568b305fb27301dfef911b08e24459c4db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:53:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d84e649e5feee19cccc89a3cd0b002568b305fb27301dfef911b08e24459c4db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:53:29 compute-0 podman[233901]: 2025-10-11 03:53:29.162709949 +0000 UTC m=+0.145381033 container init c64c011c3b30a0d05e70ac6f0bbfaf63c83ae8788e183df3f6a5eaa06ba2716e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jepsen, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:53:29 compute-0 podman[233901]: 2025-10-11 03:53:29.171537374 +0000 UTC m=+0.154208428 container start c64c011c3b30a0d05e70ac6f0bbfaf63c83ae8788e183df3f6a5eaa06ba2716e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jepsen, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 03:53:29 compute-0 podman[233901]: 2025-10-11 03:53:29.174716022 +0000 UTC m=+0.157387116 container attach c64c011c3b30a0d05e70ac6f0bbfaf63c83ae8788e183df3f6a5eaa06ba2716e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jepsen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 03:53:29 compute-0 unix_chkpwd[234023]: password check failed for user (root)
Oct 11 03:53:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:53:29 compute-0 sudo[234073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cymdrocrdrzgjjjzeiemrexympxqnjij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154809.2594728-541-112427196777832/AnsiballZ_lineinfile.py'
Oct 11 03:53:29 compute-0 sudo[234073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:29 compute-0 python3.9[234075]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:53:29 compute-0 sudo[234073]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:30 compute-0 exciting_jepsen[233918]: {
Oct 11 03:53:30 compute-0 exciting_jepsen[233918]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 03:53:30 compute-0 exciting_jepsen[233918]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:53:30 compute-0 exciting_jepsen[233918]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 03:53:30 compute-0 exciting_jepsen[233918]:         "osd_id": 1,
Oct 11 03:53:30 compute-0 exciting_jepsen[233918]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:53:30 compute-0 exciting_jepsen[233918]:         "type": "bluestore"
Oct 11 03:53:30 compute-0 exciting_jepsen[233918]:     },
Oct 11 03:53:30 compute-0 exciting_jepsen[233918]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 03:53:30 compute-0 exciting_jepsen[233918]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:53:30 compute-0 exciting_jepsen[233918]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 03:53:30 compute-0 exciting_jepsen[233918]:         "osd_id": 2,
Oct 11 03:53:30 compute-0 exciting_jepsen[233918]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:53:30 compute-0 exciting_jepsen[233918]:         "type": "bluestore"
Oct 11 03:53:30 compute-0 exciting_jepsen[233918]:     },
Oct 11 03:53:30 compute-0 exciting_jepsen[233918]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 03:53:30 compute-0 exciting_jepsen[233918]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:53:30 compute-0 exciting_jepsen[233918]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 03:53:30 compute-0 exciting_jepsen[233918]:         "osd_id": 0,
Oct 11 03:53:30 compute-0 exciting_jepsen[233918]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:53:30 compute-0 exciting_jepsen[233918]:         "type": "bluestore"
Oct 11 03:53:30 compute-0 exciting_jepsen[233918]:     }
Oct 11 03:53:30 compute-0 exciting_jepsen[233918]: }
Oct 11 03:53:30 compute-0 systemd[1]: libpod-c64c011c3b30a0d05e70ac6f0bbfaf63c83ae8788e183df3f6a5eaa06ba2716e.scope: Deactivated successfully.
Oct 11 03:53:30 compute-0 podman[233901]: 2025-10-11 03:53:30.333417038 +0000 UTC m=+1.316088102 container died c64c011c3b30a0d05e70ac6f0bbfaf63c83ae8788e183df3f6a5eaa06ba2716e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jepsen, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 03:53:30 compute-0 systemd[1]: libpod-c64c011c3b30a0d05e70ac6f0bbfaf63c83ae8788e183df3f6a5eaa06ba2716e.scope: Consumed 1.167s CPU time.
Oct 11 03:53:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-d84e649e5feee19cccc89a3cd0b002568b305fb27301dfef911b08e24459c4db-merged.mount: Deactivated successfully.
Oct 11 03:53:30 compute-0 podman[233901]: 2025-10-11 03:53:30.42180971 +0000 UTC m=+1.404480804 container remove c64c011c3b30a0d05e70ac6f0bbfaf63c83ae8788e183df3f6a5eaa06ba2716e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:53:30 compute-0 systemd[1]: libpod-conmon-c64c011c3b30a0d05e70ac6f0bbfaf63c83ae8788e183df3f6a5eaa06ba2716e.scope: Deactivated successfully.
Oct 11 03:53:30 compute-0 sudo[233668]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:53:30 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:53:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:53:30 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:53:30 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev dc90cd5b-e4ed-4113-b8d4-802510c7b076 does not exist
Oct 11 03:53:30 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 1c8c77c5-979a-459d-b12e-de804c3933c8 does not exist
Oct 11 03:53:30 compute-0 sudo[234270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viogpqlpclqywhdpyqwwegunubzzigjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154810.0123425-549-220506253442134/AnsiballZ_replace.py'
Oct 11 03:53:30 compute-0 sudo[234270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:30 compute-0 sudo[234266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:53:30 compute-0 sudo[234266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:53:30 compute-0 sudo[234266]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:30 compute-0 sudo[234295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 03:53:30 compute-0 sudo[234295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:53:30 compute-0 sudo[234295]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:30 compute-0 python3.9[234292]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:53:30 compute-0 ceph-mon[74273]: pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:53:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:53:30 compute-0 sudo[234270]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 03:53:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:53:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 03:53:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:53:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:53:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:53:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:53:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:53:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:53:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:53:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:53:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:53:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 03:53:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:53:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:53:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:53:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 03:53:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:53:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 03:53:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:53:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:53:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:53:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 03:53:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:31 compute-0 sudo[234469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvthclbwmeswizmfygcwdjnlrksepcvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154810.9177525-557-256416320526875/AnsiballZ_replace.py'
Oct 11 03:53:31 compute-0 sudo[234469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:31 compute-0 sshd-session[233238]: Failed password for root from 80.94.93.233 port 58660 ssh2
Oct 11 03:53:31 compute-0 python3.9[234471]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:53:31 compute-0 sudo[234469]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:32 compute-0 sudo[234621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iretpaostouqbyjdpmcxekrrhawcxaar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154811.7297096-566-135751636577747/AnsiballZ_lineinfile.py'
Oct 11 03:53:32 compute-0 sudo[234621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:32 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct 11 03:53:32 compute-0 python3.9[234623]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:53:32 compute-0 sudo[234621]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:32 compute-0 ceph-mon[74273]: pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:32 compute-0 sudo[234774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvxfluuhflcwmejifgkskkiicgicnkmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154812.50758-566-215025114565680/AnsiballZ_lineinfile.py'
Oct 11 03:53:32 compute-0 sudo[234774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:33 compute-0 python3.9[234776]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:53:33 compute-0 sudo[234774]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:33 compute-0 unix_chkpwd[234822]: password check failed for user (root)
Oct 11 03:53:33 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 11 03:53:33 compute-0 sudo[234927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqnmatiunwskvctcgzxbcslkypcrwnlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154813.2417755-566-74799805143083/AnsiballZ_lineinfile.py'
Oct 11 03:53:33 compute-0 sudo[234927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:33 compute-0 python3.9[234930]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:53:33 compute-0 sudo[234927]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:34 compute-0 sudo[235080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkfidvygydfrwdsigezpvmrrohhvffso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154814.0542095-566-93975640941468/AnsiballZ_lineinfile.py'
Oct 11 03:53:34 compute-0 sudo[235080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:53:34.546008) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154814546049, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1885, "num_deletes": 250, "total_data_size": 3213893, "memory_usage": 3263640, "flush_reason": "Manual Compaction"}
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154814558671, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1812833, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11709, "largest_seqno": 13593, "table_properties": {"data_size": 1806724, "index_size": 3120, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15158, "raw_average_key_size": 20, "raw_value_size": 1793254, "raw_average_value_size": 2378, "num_data_blocks": 144, "num_entries": 754, "num_filter_entries": 754, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760154599, "oldest_key_time": 1760154599, "file_creation_time": 1760154814, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 12729 microseconds, and 7993 cpu microseconds.
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:53:34.558731) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1812833 bytes OK
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:53:34.558764) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:53:34.560630) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:53:34.560658) EVENT_LOG_v1 {"time_micros": 1760154814560649, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:53:34.560684) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3205968, prev total WAL file size 3205968, number of live WAL files 2.
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:53:34.562575) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323532' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1770KB)], [29(7632KB)]
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154814562626, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9628301, "oldest_snapshot_seqno": -1}
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4021 keys, 7669397 bytes, temperature: kUnknown
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154814621177, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7669397, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7640594, "index_size": 17621, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 95487, "raw_average_key_size": 23, "raw_value_size": 7566188, "raw_average_value_size": 1881, "num_data_blocks": 768, "num_entries": 4021, "num_filter_entries": 4021, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153731, "oldest_key_time": 0, "file_creation_time": 1760154814, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:53:34.621485) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7669397 bytes
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:53:34.622935) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.2 rd, 130.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.5 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(9.5) write-amplify(4.2) OK, records in: 4431, records dropped: 410 output_compression: NoCompression
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:53:34.622967) EVENT_LOG_v1 {"time_micros": 1760154814622951, "job": 12, "event": "compaction_finished", "compaction_time_micros": 58643, "compaction_time_cpu_micros": 33877, "output_level": 6, "num_output_files": 1, "total_output_size": 7669397, "num_input_records": 4431, "num_output_records": 4021, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154814623649, "job": 12, "event": "table_file_deletion", "file_number": 31}
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154814626236, "job": 12, "event": "table_file_deletion", "file_number": 29}
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:53:34.562440) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:53:34.626357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:53:34.626365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:53:34.626368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:53:34.626371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:53:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:53:34.626374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:53:34 compute-0 python3.9[235082]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:53:34 compute-0 sudo[235080]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:34 compute-0 ceph-mon[74273]: pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:35 compute-0 sudo[235232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbdopekdgasbtwlczmdrdlvxvtbrbvdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154814.8360581-595-169573619010123/AnsiballZ_stat.py'
Oct 11 03:53:35 compute-0 sudo[235232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:35 compute-0 sshd-session[233238]: Failed password for root from 80.94.93.233 port 58660 ssh2
Oct 11 03:53:35 compute-0 python3.9[235234]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:53:35 compute-0 sudo[235232]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:36 compute-0 sudo[235386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyarsmyughayqlqapnabkxeiibmxlvuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154815.6618836-603-65733066607764/AnsiballZ_file.py'
Oct 11 03:53:36 compute-0 sudo[235386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:36 compute-0 python3.9[235388]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:53:36 compute-0 sudo[235386]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:36 compute-0 ceph-mon[74273]: pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:36 compute-0 sudo[235538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxsdfqzmpnvipgkwknusolwpuqcwzpwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154816.5773883-612-93352355473885/AnsiballZ_file.py'
Oct 11 03:53:36 compute-0 sudo[235538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:37 compute-0 sshd-session[233238]: Received disconnect from 80.94.93.233 port 58660:11:  [preauth]
Oct 11 03:53:37 compute-0 sshd-session[233238]: Disconnected from authenticating user root 80.94.93.233 port 58660 [preauth]
Oct 11 03:53:37 compute-0 sshd-session[233238]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Oct 11 03:53:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:37 compute-0 python3.9[235540]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:53:37 compute-0 sudo[235538]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:37 compute-0 sudo[235692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azteiwpaumnbeazouiqozyhozqdejpxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154817.449788-620-88437815256706/AnsiballZ_stat.py'
Oct 11 03:53:37 compute-0 sudo[235692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:37 compute-0 unix_chkpwd[235695]: password check failed for user (root)
Oct 11 03:53:37 compute-0 sshd-session[235541]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Oct 11 03:53:38 compute-0 python3.9[235694]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:53:38 compute-0 sudo[235692]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:38 compute-0 sudo[235771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kprrfzimwpbconlkhfztnhavdsddjyrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154817.449788-620-88437815256706/AnsiballZ_file.py'
Oct 11 03:53:38 compute-0 sudo[235771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:38 compute-0 python3.9[235773]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:53:38 compute-0 sudo[235771]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:38 compute-0 ceph-mon[74273]: pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:39 compute-0 sudo[235923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulnfolqrllldtqmqazoynwtlkpuyhqhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154818.8433707-620-257221753249490/AnsiballZ_stat.py'
Oct 11 03:53:39 compute-0 sudo[235923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:39 compute-0 python3.9[235925]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:53:39 compute-0 sudo[235923]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:53:39 compute-0 sudo[236001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcrcxcgflctkodeaidgxgckitevglxuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154818.8433707-620-257221753249490/AnsiballZ_file.py'
Oct 11 03:53:39 compute-0 sudo[236001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:40 compute-0 python3.9[236003]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:53:40 compute-0 sudo[236001]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:40 compute-0 sshd-session[235541]: Failed password for root from 80.94.93.233 port 48950 ssh2
Oct 11 03:53:40 compute-0 sudo[236153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gadhkutdrliibewortbhsqxuwqliuhzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154820.348843-643-246651018369555/AnsiballZ_file.py'
Oct 11 03:53:40 compute-0 sudo[236153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:40 compute-0 ceph-mon[74273]: pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:40 compute-0 python3.9[236155]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:53:40 compute-0 sudo[236153]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:41 compute-0 sudo[236305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abxvavviawkcrnwafelhslwvvhglcaix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154821.1674423-651-224874466005655/AnsiballZ_stat.py'
Oct 11 03:53:41 compute-0 sudo[236305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:41 compute-0 python3.9[236307]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:53:41 compute-0 unix_chkpwd[236310]: password check failed for user (root)
Oct 11 03:53:41 compute-0 sudo[236305]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:42 compute-0 sudo[236384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwumoquyrmteyukzhnfnnvadwxbweijx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154821.1674423-651-224874466005655/AnsiballZ_file.py'
Oct 11 03:53:42 compute-0 sudo[236384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:42 compute-0 python3.9[236386]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:53:42 compute-0 sudo[236384]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:42 compute-0 ceph-mon[74273]: pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:43 compute-0 sudo[236549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdpbxpzmzjrcvyjjhkouferybnaatpuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154822.5522006-663-213465284657309/AnsiballZ_stat.py'
Oct 11 03:53:43 compute-0 sudo[236549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:43 compute-0 podman[236510]: 2025-10-11 03:53:43.044018266 +0000 UTC m=+0.108012397 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 11 03:53:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:43 compute-0 python3.9[236555]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:53:43 compute-0 sudo[236549]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:43 compute-0 sudo[236635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgleebfvzfyrhbbugwsfvvcpmvlgimvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154822.5522006-663-213465284657309/AnsiballZ_file.py'
Oct 11 03:53:43 compute-0 sudo[236635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:43 compute-0 sshd-session[235541]: Failed password for root from 80.94.93.233 port 48950 ssh2
Oct 11 03:53:43 compute-0 python3.9[236637]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:53:43 compute-0 sudo[236635]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:44 compute-0 sudo[236787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekxftndfjfvepccyrfdsbzqjpwavhxsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154824.0076246-675-274999937697447/AnsiballZ_systemd.py'
Oct 11 03:53:44 compute-0 sudo[236787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:53:44 compute-0 python3.9[236789]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:53:44 compute-0 systemd[1]: Reloading.
Oct 11 03:53:44 compute-0 systemd-rc-local-generator[236816]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:53:44 compute-0 systemd-sysv-generator[236820]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:53:44 compute-0 ceph-mon[74273]: pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:45 compute-0 sudo[236787]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:45 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Oct 11 03:53:45 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 11 03:53:45 compute-0 unix_chkpwd[236945]: password check failed for user (root)
Oct 11 03:53:45 compute-0 sudo[236978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsxyldvtoosesrcezusensvdlcoemkzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154825.2506187-683-223878620441973/AnsiballZ_stat.py'
Oct 11 03:53:45 compute-0 sudo[236978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:45 compute-0 python3.9[236980]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:53:45 compute-0 sudo[236978]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:46 compute-0 sudo[237056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knivafnfahhoxoevhozautxhjsvmszej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154825.2506187-683-223878620441973/AnsiballZ_file.py'
Oct 11 03:53:46 compute-0 sudo[237056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:46 compute-0 python3.9[237058]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:53:46 compute-0 sudo[237056]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:46 compute-0 ceph-mon[74273]: pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:47 compute-0 sudo[237208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-intcemjfoyseitjofjfutzuuykzhcxjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154826.6029932-695-30461236346176/AnsiballZ_stat.py'
Oct 11 03:53:47 compute-0 sudo[237208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:47 compute-0 python3.9[237210]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:53:47 compute-0 sudo[237208]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:47 compute-0 sshd-session[235541]: Failed password for root from 80.94.93.233 port 48950 ssh2
Oct 11 03:53:47 compute-0 sshd-session[235541]: Received disconnect from 80.94.93.233 port 48950:11:  [preauth]
Oct 11 03:53:47 compute-0 sshd-session[235541]: Disconnected from authenticating user root 80.94.93.233 port 48950 [preauth]
Oct 11 03:53:47 compute-0 sshd-session[235541]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Oct 11 03:53:47 compute-0 sudo[237286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpzkcdskucwgtrtjoizswqvtyjzgowxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154826.6029932-695-30461236346176/AnsiballZ_file.py'
Oct 11 03:53:47 compute-0 sudo[237286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:47 compute-0 python3.9[237288]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:53:47 compute-0 sudo[237286]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:48 compute-0 sudo[237438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cinebbqslmadvxaedgrtlhtclpxuukue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154828.0245008-707-261735075095275/AnsiballZ_systemd.py'
Oct 11 03:53:48 compute-0 sudo[237438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:48 compute-0 python3.9[237440]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:53:48 compute-0 systemd[1]: Reloading.
Oct 11 03:53:48 compute-0 ceph-mon[74273]: pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:48 compute-0 systemd-sysv-generator[237469]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:53:48 compute-0 systemd-rc-local-generator[237466]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:53:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:49 compute-0 systemd[1]: Starting Create netns directory...
Oct 11 03:53:49 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 11 03:53:49 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 11 03:53:49 compute-0 systemd[1]: Finished Create netns directory.
Oct 11 03:53:49 compute-0 sudo[237438]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:53:49 compute-0 sudo[237631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyykusijodbqxkpgmxyvsotvnwfwyctj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154829.5298715-717-5955536835184/AnsiballZ_file.py'
Oct 11 03:53:49 compute-0 sudo[237631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:50 compute-0 python3.9[237633]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:53:50 compute-0 sudo[237631]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:50 compute-0 sudo[237783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvsitzqdmwrnhnwnwoazezzocemgaejt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154830.3359091-725-196799864272813/AnsiballZ_stat.py'
Oct 11 03:53:50 compute-0 sudo[237783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:53:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:53:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:53:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:53:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:53:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:53:50 compute-0 ceph-mon[74273]: pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:50 compute-0 python3.9[237785]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:53:50 compute-0 sudo[237783]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:51 compute-0 sudo[237917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxonkzszsbzuuesycbygadhmnbksmwwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154830.3359091-725-196799864272813/AnsiballZ_copy.py'
Oct 11 03:53:51 compute-0 sudo[237917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:51 compute-0 podman[237880]: 2025-10-11 03:53:51.445855393 +0000 UTC m=+0.145078645 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 03:53:51 compute-0 python3.9[237926]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760154830.3359091-725-196799864272813/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:53:51 compute-0 sudo[237917]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:52 compute-0 sudo[238084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxwyksfpjyfpundxumaftwsmkcvnyuff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154832.0058215-742-265930524452106/AnsiballZ_file.py'
Oct 11 03:53:52 compute-0 sudo[238084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:52 compute-0 python3.9[238086]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:53:52 compute-0 sudo[238084]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:52 compute-0 ceph-mon[74273]: pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:53 compute-0 sudo[238236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaawqzmgwqqlimkmhexbexkxcmmumabx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154832.809922-750-194996965860984/AnsiballZ_stat.py'
Oct 11 03:53:53 compute-0 sudo[238236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:53 compute-0 python3.9[238238]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:53:53 compute-0 sudo[238236]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:53 compute-0 sudo[238359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moqmusepgsvgapieeokmofudgdmlaypd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154832.809922-750-194996965860984/AnsiballZ_copy.py'
Oct 11 03:53:53 compute-0 sudo[238359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:54 compute-0 python3.9[238361]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760154832.809922-750-194996965860984/.source.json _original_basename=.gxtdsh5b follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:53:54 compute-0 sudo[238359]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:53:54 compute-0 sudo[238511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeytmnsduxmapmkerzxvhljjsfbvwhln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154834.3406632-765-58182352559023/AnsiballZ_file.py'
Oct 11 03:53:54 compute-0 sudo[238511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:54 compute-0 python3.9[238513]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:53:54 compute-0 sudo[238511]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:54 compute-0 ceph-mon[74273]: pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:55 compute-0 sudo[238663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcmevwlzcxfpfowymmgcykxymjechmkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154835.0638235-773-50928391032704/AnsiballZ_stat.py'
Oct 11 03:53:55 compute-0 sudo[238663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:55 compute-0 sudo[238663]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:56 compute-0 sudo[238786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzlfxtmgrwyrubqgcuiwnghkvtevjrnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154835.0638235-773-50928391032704/AnsiballZ_copy.py'
Oct 11 03:53:56 compute-0 sudo[238786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:56 compute-0 sudo[238786]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:56 compute-0 ceph-mon[74273]: pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:57 compute-0 sudo[238952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkrquomsfwjmuyzltxzereihmypecqog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154836.7658823-790-134557077403797/AnsiballZ_container_config_data.py'
Oct 11 03:53:57 compute-0 sudo[238952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:57 compute-0 podman[238912]: 2025-10-11 03:53:57.087369766 +0000 UTC m=+0.056858338 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:53:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:57 compute-0 python3.9[238958]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct 11 03:53:57 compute-0 sudo[238952]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:57 compute-0 sudo[239108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwwyuzlixhcvjoneswnpxeqdpofzecnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154837.53937-799-162038824329551/AnsiballZ_container_config_hash.py'
Oct 11 03:53:57 compute-0 sudo[239108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:57 compute-0 ceph-mon[74273]: pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:58 compute-0 python3.9[239110]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 11 03:53:58 compute-0 sudo[239108]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:58 compute-0 sudo[239260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifboiowetjsjuxsobyoajgpcbwoazhtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154838.3888695-808-96994986751728/AnsiballZ_podman_container_info.py'
Oct 11 03:53:58 compute-0 sudo[239260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:53:58 compute-0 python3.9[239262]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 11 03:53:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:53:59 compute-0 sudo[239260]: pam_unix(sudo:session): session closed for user root
Oct 11 03:53:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:54:00 compute-0 ceph-mon[74273]: pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:00 compute-0 sudo[239438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvekwrmqreotfkjalawudkfcbhwgmfvo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760154839.9908595-821-50789600559230/AnsiballZ_edpm_container_manage.py'
Oct 11 03:54:00 compute-0 sudo[239438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:00 compute-0 python3[239440]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 11 03:54:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:01 compute-0 podman[239453]: 2025-10-11 03:54:01.850793905 +0000 UTC m=+1.198841559 image pull afce23cfe475a7c4b16d233ab936a7b07069ccb13842b1c95ba43e4b3f92adfb quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 11 03:54:02 compute-0 podman[239512]: 2025-10-11 03:54:02.062793784 +0000 UTC m=+0.069838858 container create 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 11 03:54:02 compute-0 podman[239512]: 2025-10-11 03:54:02.030352254 +0000 UTC m=+0.037397388 image pull afce23cfe475a7c4b16d233ab936a7b07069ccb13842b1c95ba43e4b3f92adfb quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 11 03:54:02 compute-0 python3[239440]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 11 03:54:02 compute-0 ceph-mon[74273]: pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:02 compute-0 sudo[239438]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:02 compute-0 sudo[239700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkcbrgvtmjdhslzdldojekcsajifmrut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154842.4077442-829-122636318847402/AnsiballZ_stat.py'
Oct 11 03:54:02 compute-0 sudo[239700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:02 compute-0 python3.9[239702]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:54:02 compute-0 sudo[239700]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:03 compute-0 sudo[239854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqgigkvgqupptgkcpnffbogawprrqzyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154843.2506363-838-185547415554681/AnsiballZ_file.py'
Oct 11 03:54:03 compute-0 sudo[239854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:03 compute-0 python3.9[239856]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:54:03 compute-0 sudo[239854]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:04 compute-0 sudo[239930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvnwdjstjhchjdjfeikvtfkpbjwkgdcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154843.2506363-838-185547415554681/AnsiballZ_stat.py'
Oct 11 03:54:04 compute-0 sudo[239930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:04 compute-0 ceph-mon[74273]: pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:04 compute-0 python3.9[239932]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:54:04 compute-0 sudo[239930]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:54:04 compute-0 sudo[240081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyuwdbgynsiktmoakowrczqddzurrcjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154844.4304597-838-234224391627790/AnsiballZ_copy.py'
Oct 11 03:54:04 compute-0 sudo[240081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:05 compute-0 python3.9[240083]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760154844.4304597-838-234224391627790/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:54:05 compute-0 sudo[240081]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:05 compute-0 sudo[240157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxrrortzzlvpdebaapidakuqjpwsdbeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154844.4304597-838-234224391627790/AnsiballZ_systemd.py'
Oct 11 03:54:05 compute-0 sudo[240157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:05 compute-0 python3.9[240159]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 03:54:05 compute-0 systemd[1]: Reloading.
Oct 11 03:54:05 compute-0 systemd-sysv-generator[240192]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:54:06 compute-0 systemd-rc-local-generator[240188]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:54:06 compute-0 ceph-mon[74273]: pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:06 compute-0 sudo[240157]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:06 compute-0 sudo[240269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oauiasiuuhqxfsmdxnlmetswgskemcij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154844.4304597-838-234224391627790/AnsiballZ_systemd.py'
Oct 11 03:54:06 compute-0 sudo[240269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:06 compute-0 python3.9[240271]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:54:07 compute-0 systemd[1]: Reloading.
Oct 11 03:54:07 compute-0 systemd-rc-local-generator[240301]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:54:07 compute-0 systemd-sysv-generator[240306]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:54:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:07 compute-0 systemd[1]: Starting multipathd container...
Oct 11 03:54:07 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:54:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240d479ca77bf762f32076e1f200025fbbe8d535e162e427c88395345fed9cca/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 11 03:54:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240d479ca77bf762f32076e1f200025fbbe8d535e162e427c88395345fed9cca/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 11 03:54:07 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c.
Oct 11 03:54:07 compute-0 podman[240312]: 2025-10-11 03:54:07.516410716 +0000 UTC m=+0.128038752 container init 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 03:54:07 compute-0 multipathd[240326]: + sudo -E kolla_set_configs
Oct 11 03:54:07 compute-0 podman[240312]: 2025-10-11 03:54:07.549501454 +0000 UTC m=+0.161129440 container start 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 03:54:07 compute-0 podman[240312]: multipathd
Oct 11 03:54:07 compute-0 sudo[240332]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 11 03:54:07 compute-0 sudo[240332]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 11 03:54:07 compute-0 sudo[240332]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 11 03:54:07 compute-0 systemd[1]: Started multipathd container.
Oct 11 03:54:07 compute-0 sudo[240269]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:07 compute-0 multipathd[240326]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 11 03:54:07 compute-0 multipathd[240326]: INFO:__main__:Validating config file
Oct 11 03:54:07 compute-0 multipathd[240326]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 11 03:54:07 compute-0 multipathd[240326]: INFO:__main__:Writing out command to execute
Oct 11 03:54:07 compute-0 sudo[240332]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:07 compute-0 multipathd[240326]: ++ cat /run_command
Oct 11 03:54:07 compute-0 podman[240333]: 2025-10-11 03:54:07.625573064 +0000 UTC m=+0.067749680 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 03:54:07 compute-0 multipathd[240326]: + CMD='/usr/sbin/multipathd -d'
Oct 11 03:54:07 compute-0 multipathd[240326]: + ARGS=
Oct 11 03:54:07 compute-0 multipathd[240326]: + sudo kolla_copy_cacerts
Oct 11 03:54:07 compute-0 systemd[1]: 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c-38705142c171d1e8.service: Main process exited, code=exited, status=1/FAILURE
Oct 11 03:54:07 compute-0 systemd[1]: 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c-38705142c171d1e8.service: Failed with result 'exit-code'.
Oct 11 03:54:07 compute-0 sudo[240355]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 11 03:54:07 compute-0 sudo[240355]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 11 03:54:07 compute-0 sudo[240355]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 11 03:54:07 compute-0 sudo[240355]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:07 compute-0 multipathd[240326]: + [[ ! -n '' ]]
Oct 11 03:54:07 compute-0 multipathd[240326]: + . kolla_extend_start
Oct 11 03:54:07 compute-0 multipathd[240326]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 11 03:54:07 compute-0 multipathd[240326]: Running command: '/usr/sbin/multipathd -d'
Oct 11 03:54:07 compute-0 multipathd[240326]: + umask 0022
Oct 11 03:54:07 compute-0 multipathd[240326]: + exec /usr/sbin/multipathd -d
Oct 11 03:54:07 compute-0 multipathd[240326]: 3232.757419 | --------start up--------
Oct 11 03:54:07 compute-0 multipathd[240326]: 3232.757437 | read /etc/multipath.conf
Oct 11 03:54:07 compute-0 multipathd[240326]: 3232.763291 | path checkers start up
Oct 11 03:54:08 compute-0 ceph-mon[74273]: pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:08 compute-0 python3.9[240513]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:54:08 compute-0 sudo[240665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsaxvuhtvtmdiadyrjpqejoknotedgni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154848.5312014-874-256791313287099/AnsiballZ_command.py'
Oct 11 03:54:08 compute-0 sudo[240665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:09 compute-0 python3.9[240667]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:54:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:09 compute-0 sudo[240665]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:54:09 compute-0 sudo[240830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjhnrcrmpxefvmyhohpaxsgjbcqlhmxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154849.4338007-882-116237665719725/AnsiballZ_systemd.py'
Oct 11 03:54:09 compute-0 sudo[240830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:10 compute-0 python3.9[240832]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 03:54:10 compute-0 systemd[1]: Stopping multipathd container...
Oct 11 03:54:10 compute-0 ceph-mon[74273]: pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:10 compute-0 multipathd[240326]: 3235.388573 | exit (signal)
Oct 11 03:54:10 compute-0 multipathd[240326]: 3235.389616 | --------shut down-------
Oct 11 03:54:10 compute-0 systemd[1]: libpod-83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c.scope: Deactivated successfully.
Oct 11 03:54:10 compute-0 conmon[240326]: conmon 83ff5079e26f0f00bbfd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c.scope/container/memory.events
Oct 11 03:54:10 compute-0 podman[240836]: 2025-10-11 03:54:10.34160332 +0000 UTC m=+0.103312276 container died 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 03:54:10 compute-0 systemd[1]: 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c-38705142c171d1e8.timer: Deactivated successfully.
Oct 11 03:54:10 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c.
Oct 11 03:54:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c-userdata-shm.mount: Deactivated successfully.
Oct 11 03:54:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-240d479ca77bf762f32076e1f200025fbbe8d535e162e427c88395345fed9cca-merged.mount: Deactivated successfully.
Oct 11 03:54:10 compute-0 podman[240836]: 2025-10-11 03:54:10.538326976 +0000 UTC m=+0.300035902 container cleanup 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 11 03:54:10 compute-0 podman[240836]: multipathd
Oct 11 03:54:10 compute-0 podman[240863]: multipathd
Oct 11 03:54:10 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct 11 03:54:10 compute-0 systemd[1]: Stopped multipathd container.
Oct 11 03:54:10 compute-0 systemd[1]: Starting multipathd container...
Oct 11 03:54:10 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:54:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240d479ca77bf762f32076e1f200025fbbe8d535e162e427c88395345fed9cca/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 11 03:54:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240d479ca77bf762f32076e1f200025fbbe8d535e162e427c88395345fed9cca/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 11 03:54:10 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c.
Oct 11 03:54:10 compute-0 podman[240876]: 2025-10-11 03:54:10.817977452 +0000 UTC m=+0.149669272 container init 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:54:10 compute-0 multipathd[240891]: + sudo -E kolla_set_configs
Oct 11 03:54:10 compute-0 sudo[240897]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 11 03:54:10 compute-0 sudo[240897]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 11 03:54:10 compute-0 sudo[240897]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 11 03:54:10 compute-0 podman[240876]: 2025-10-11 03:54:10.864581615 +0000 UTC m=+0.196273375 container start 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 03:54:10 compute-0 podman[240876]: multipathd
Oct 11 03:54:10 compute-0 systemd[1]: Started multipathd container.
Oct 11 03:54:10 compute-0 sudo[240830]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:10 compute-0 multipathd[240891]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 11 03:54:10 compute-0 multipathd[240891]: INFO:__main__:Validating config file
Oct 11 03:54:10 compute-0 multipathd[240891]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 11 03:54:10 compute-0 multipathd[240891]: INFO:__main__:Writing out command to execute
Oct 11 03:54:10 compute-0 sudo[240897]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:10 compute-0 multipathd[240891]: ++ cat /run_command
Oct 11 03:54:10 compute-0 multipathd[240891]: + CMD='/usr/sbin/multipathd -d'
Oct 11 03:54:10 compute-0 multipathd[240891]: + ARGS=
Oct 11 03:54:10 compute-0 multipathd[240891]: + sudo kolla_copy_cacerts
Oct 11 03:54:10 compute-0 podman[240898]: 2025-10-11 03:54:10.972053585 +0000 UTC m=+0.090552092 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 03:54:10 compute-0 sudo[240916]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 11 03:54:10 compute-0 sudo[240916]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 11 03:54:10 compute-0 sudo[240916]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 11 03:54:10 compute-0 systemd[1]: 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c-71635fde9449c6d5.service: Main process exited, code=exited, status=1/FAILURE
Oct 11 03:54:10 compute-0 systemd[1]: 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c-71635fde9449c6d5.service: Failed with result 'exit-code'.
Oct 11 03:54:10 compute-0 sudo[240916]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:10 compute-0 multipathd[240891]: + [[ ! -n '' ]]
Oct 11 03:54:10 compute-0 multipathd[240891]: + . kolla_extend_start
Oct 11 03:54:10 compute-0 multipathd[240891]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 11 03:54:10 compute-0 multipathd[240891]: Running command: '/usr/sbin/multipathd -d'
Oct 11 03:54:10 compute-0 multipathd[240891]: + umask 0022
Oct 11 03:54:10 compute-0 multipathd[240891]: + exec /usr/sbin/multipathd -d
Oct 11 03:54:11 compute-0 multipathd[240891]: 3236.100424 | --------start up--------
Oct 11 03:54:11 compute-0 multipathd[240891]: 3236.100455 | read /etc/multipath.conf
Oct 11 03:54:11 compute-0 multipathd[240891]: 3236.108146 | path checkers start up
Oct 11 03:54:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:11 compute-0 sudo[241080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drciqpmlsfgdbgntvnojgqnxsxkhjtgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154851.1300647-890-197808083350052/AnsiballZ_file.py'
Oct 11 03:54:11 compute-0 sudo[241080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:11 compute-0 python3.9[241082]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:54:11 compute-0 sudo[241080]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:12 compute-0 ceph-mon[74273]: pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:12 compute-0 sudo[241232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlqzomterehvwjkbcdmwyjwlywpjhgzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154852.097604-902-205310227641518/AnsiballZ_file.py'
Oct 11 03:54:12 compute-0 sudo[241232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:12 compute-0 python3.9[241234]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 11 03:54:12 compute-0 sudo[241232]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:13 compute-0 sudo[241393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoilzytvfqbvcjdhkoenswfzemxkbsnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154852.8963642-910-185825381758575/AnsiballZ_modprobe.py'
Oct 11 03:54:13 compute-0 sudo[241393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:13 compute-0 podman[241358]: 2025-10-11 03:54:13.277132515 +0000 UTC m=+0.088723552 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible)
Oct 11 03:54:13 compute-0 python3.9[241405]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct 11 03:54:13 compute-0 kernel: Key type psk registered
Oct 11 03:54:13 compute-0 sudo[241393]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:14 compute-0 sudo[241565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otnmuulqxrjnsaxyybmaolgaewgkglxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154853.7661364-918-88353160012433/AnsiballZ_stat.py'
Oct 11 03:54:14 compute-0 sudo[241565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:14 compute-0 ceph-mon[74273]: pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:14 compute-0 python3.9[241567]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:54:14 compute-0 sudo[241565]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:54:14 compute-0 sudo[241688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alhzrpzsdtihasormogfwlgmygvlutmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154853.7661364-918-88353160012433/AnsiballZ_copy.py'
Oct 11 03:54:14 compute-0 sudo[241688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:15 compute-0 python3.9[241690]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760154853.7661364-918-88353160012433/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:54:15 compute-0 sudo[241688]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:15 compute-0 sudo[241840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pylfactfasrvomvqtdkihrwkgstfjmqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154855.3165004-934-269080917105582/AnsiballZ_lineinfile.py'
Oct 11 03:54:15 compute-0 sudo[241840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:15 compute-0 python3.9[241842]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:54:15 compute-0 sudo[241840]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:16 compute-0 ceph-mon[74273]: pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:16 compute-0 sudo[241992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciibwsfobikzvjhdxhqxemutpjvvclud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154856.1885357-942-82391254904228/AnsiballZ_systemd.py'
Oct 11 03:54:16 compute-0 sudo[241992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:16 compute-0 python3.9[241994]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 03:54:16 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 11 03:54:16 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct 11 03:54:16 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct 11 03:54:16 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 11 03:54:17 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 11 03:54:17 compute-0 sudo[241992]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:17 compute-0 sudo[242148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzuqumsszjmprqivbpgltypygvwbqxzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154857.3012426-950-33031043672292/AnsiballZ_setup.py'
Oct 11 03:54:17 compute-0 sudo[242148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:18 compute-0 python3.9[242150]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 03:54:18 compute-0 ceph-mon[74273]: pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:18 compute-0 sudo[242148]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:18 compute-0 sudo[242232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqgaohgzacdookzvnaledqopevfbpnsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154857.3012426-950-33031043672292/AnsiballZ_dnf.py'
Oct 11 03:54:18 compute-0 sudo[242232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:19 compute-0 python3.9[242234]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 03:54:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:54:20 compute-0 ceph-mon[74273]: pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_03:54:20
Oct 11 03:54:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 03:54:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 03:54:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'backups', 'vms', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'images', 'volumes', 'default.rgw.meta']
Oct 11 03:54:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 03:54:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:54:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:54:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:54:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:54:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:54:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:54:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 03:54:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 03:54:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:54:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:54:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:54:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:54:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:54:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:54:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:54:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:54:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:22 compute-0 ceph-mon[74273]: pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:22 compute-0 podman[242236]: 2025-10-11 03:54:22.398952221 +0000 UTC m=+0.111183575 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 11 03:54:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:54:22.940 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 03:54:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:54:22.941 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 03:54:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:54:22.941 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 03:54:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:24 compute-0 ceph-mon[74273]: pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:54:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:25 compute-0 systemd[1]: Reloading.
Oct 11 03:54:25 compute-0 systemd-rc-local-generator[242290]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:54:25 compute-0 systemd-sysv-generator[242296]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:54:25 compute-0 systemd[1]: Reloading.
Oct 11 03:54:25 compute-0 systemd-rc-local-generator[242334]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:54:25 compute-0 systemd-sysv-generator[242337]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:54:26 compute-0 systemd-logind[820]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 11 03:54:26 compute-0 systemd-logind[820]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 11 03:54:26 compute-0 ceph-mon[74273]: pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:26 compute-0 lvm[242376]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 11 03:54:26 compute-0 lvm[242376]: VG ceph_vg0 finished
Oct 11 03:54:26 compute-0 lvm[242374]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 11 03:54:26 compute-0 lvm[242374]: VG ceph_vg2 finished
Oct 11 03:54:26 compute-0 lvm[242375]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 11 03:54:26 compute-0 lvm[242375]: VG ceph_vg1 finished
Oct 11 03:54:26 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 11 03:54:26 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 11 03:54:26 compute-0 systemd[1]: Reloading.
Oct 11 03:54:26 compute-0 systemd-rc-local-generator[242437]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:54:26 compute-0 systemd-sysv-generator[242440]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:54:26 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 11 03:54:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:27 compute-0 podman[243062]: 2025-10-11 03:54:27.357718888 +0000 UTC m=+0.067917924 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent)
Oct 11 03:54:27 compute-0 sudo[242232]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:27 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 11 03:54:27 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 11 03:54:27 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.608s CPU time.
Oct 11 03:54:27 compute-0 systemd[1]: run-r48aa28ebca8e4ad0b154c44595dd1e53.service: Deactivated successfully.
Oct 11 03:54:28 compute-0 sudo[243740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuzdyzajolntdqvbuvlxcowltitstzza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154867.7283838-962-29834344275447/AnsiballZ_file.py'
Oct 11 03:54:28 compute-0 sudo[243740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:28 compute-0 python3.9[243742]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:54:28 compute-0 sudo[243740]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:28 compute-0 ceph-mon[74273]: pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:29 compute-0 python3.9[243892]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:54:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:54:29 compute-0 sudo[244046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spzcankbzxzbwtvrbguhvemqeodukbbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154869.5969-980-261840184865333/AnsiballZ_file.py'
Oct 11 03:54:29 compute-0 sudo[244046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:30 compute-0 python3.9[244048]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:54:30 compute-0 sudo[244046]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:30 compute-0 ceph-mon[74273]: pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:30 compute-0 sudo[244125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:54:30 compute-0 sudo[244125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:54:30 compute-0 sudo[244125]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:30 compute-0 sudo[244150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:54:30 compute-0 sudo[244150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:54:30 compute-0 sudo[244150]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:30 compute-0 sudo[244175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:54:30 compute-0 sudo[244175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:54:30 compute-0 sudo[244175]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:30 compute-0 sudo[244200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 03:54:30 compute-0 sudo[244200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:54:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 03:54:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:54:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 03:54:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:54:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:54:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:54:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:54:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:54:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:54:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:54:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:54:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:54:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 03:54:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:54:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:54:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:54:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 03:54:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:54:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 03:54:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:54:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:54:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:54:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 03:54:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:31 compute-0 sudo[244317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnunrsqsdrumykyeaodklfpttneknfkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154870.5715792-991-238014239339502/AnsiballZ_systemd_service.py'
Oct 11 03:54:31 compute-0 sudo[244317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:31 compute-0 sudo[244200]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:54:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:54:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 03:54:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:54:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 03:54:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:54:31 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 1bca116b-2c44-4edd-b271-5042a3c9bd52 does not exist
Oct 11 03:54:31 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev ca8d54e2-18e4-4d42-8ea1-c90c213c8150 does not exist
Oct 11 03:54:31 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 45a694d1-a8bb-442d-be90-0cfa49c69533 does not exist
Oct 11 03:54:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 03:54:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:54:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 03:54:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:54:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:54:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:54:31 compute-0 sudo[244332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:54:31 compute-0 sudo[244332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:54:31 compute-0 sudo[244332]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:31 compute-0 sudo[244357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:54:31 compute-0 sudo[244357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:54:31 compute-0 sudo[244357]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:31 compute-0 python3.9[244319]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 03:54:31 compute-0 systemd[1]: Reloading.
Oct 11 03:54:31 compute-0 systemd-sysv-generator[244433]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:54:31 compute-0 systemd-rc-local-generator[244429]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:54:31 compute-0 sudo[244382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:54:31 compute-0 sudo[244382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:54:31 compute-0 sudo[244317]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:31 compute-0 sudo[244382]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:32 compute-0 sudo[244442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 03:54:32 compute-0 sudo[244442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:54:32 compute-0 sshd-session[244509]: error: kex_exchange_identification: read: Connection reset by peer
Oct 11 03:54:32 compute-0 sshd-session[244509]: Connection reset by 45.140.17.97 port 48899
Oct 11 03:54:32 compute-0 ceph-mon[74273]: pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:54:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:54:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:54:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:54:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:54:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:54:32 compute-0 podman[244605]: 2025-10-11 03:54:32.510932929 +0000 UTC m=+0.076458662 container create 6bbdb86965cfc21e3f502b3ee995abb35416c57c02b6c22d7d7c62c3924fb026 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 03:54:32 compute-0 systemd[1]: Started libpod-conmon-6bbdb86965cfc21e3f502b3ee995abb35416c57c02b6c22d7d7c62c3924fb026.scope.
Oct 11 03:54:32 compute-0 podman[244605]: 2025-10-11 03:54:32.4785448 +0000 UTC m=+0.044070563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:54:32 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:54:32 compute-0 podman[244605]: 2025-10-11 03:54:32.611706854 +0000 UTC m=+0.177232647 container init 6bbdb86965cfc21e3f502b3ee995abb35416c57c02b6c22d7d7c62c3924fb026 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 03:54:32 compute-0 podman[244605]: 2025-10-11 03:54:32.62094977 +0000 UTC m=+0.186475503 container start 6bbdb86965cfc21e3f502b3ee995abb35416c57c02b6c22d7d7c62c3924fb026 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:54:32 compute-0 podman[244605]: 2025-10-11 03:54:32.626986877 +0000 UTC m=+0.192512600 container attach 6bbdb86965cfc21e3f502b3ee995abb35416c57c02b6c22d7d7c62c3924fb026 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 03:54:32 compute-0 jovial_hamilton[244660]: 167 167
Oct 11 03:54:32 compute-0 systemd[1]: libpod-6bbdb86965cfc21e3f502b3ee995abb35416c57c02b6c22d7d7c62c3924fb026.scope: Deactivated successfully.
Oct 11 03:54:32 compute-0 podman[244605]: 2025-10-11 03:54:32.629946759 +0000 UTC m=+0.195472462 container died 6bbdb86965cfc21e3f502b3ee995abb35416c57c02b6c22d7d7c62c3924fb026 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 03:54:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8fc5e6da0231cfbd86e34428a85d8f8cb6c7e1805aae4789fa2962ab8bdba71-merged.mount: Deactivated successfully.
Oct 11 03:54:32 compute-0 podman[244605]: 2025-10-11 03:54:32.678767643 +0000 UTC m=+0.244293346 container remove 6bbdb86965cfc21e3f502b3ee995abb35416c57c02b6c22d7d7c62c3924fb026 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hamilton, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:54:32 compute-0 systemd[1]: libpod-conmon-6bbdb86965cfc21e3f502b3ee995abb35416c57c02b6c22d7d7c62c3924fb026.scope: Deactivated successfully.
Oct 11 03:54:32 compute-0 python3.9[244674]: ansible-ansible.builtin.service_facts Invoked
Oct 11 03:54:32 compute-0 network[244720]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 03:54:32 compute-0 network[244721]: 'network-scripts' will be removed from distribution in near future.
Oct 11 03:54:32 compute-0 network[244722]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 03:54:32 compute-0 podman[244702]: 2025-10-11 03:54:32.884477729 +0000 UTC m=+0.056651683 container create 717f3be3269418f65f324c2309f80c4be16de9e60c398d4f7c13bf1be8ebece8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hamilton, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:54:32 compute-0 podman[244702]: 2025-10-11 03:54:32.864217087 +0000 UTC m=+0.036391031 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:54:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:33 compute-0 systemd[1]: Started libpod-conmon-717f3be3269418f65f324c2309f80c4be16de9e60c398d4f7c13bf1be8ebece8.scope.
Oct 11 03:54:33 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:54:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a774753ba8f3f120effc81890e17a3ddede8e0f471586a929d72dd07c042e56e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:54:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a774753ba8f3f120effc81890e17a3ddede8e0f471586a929d72dd07c042e56e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:54:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a774753ba8f3f120effc81890e17a3ddede8e0f471586a929d72dd07c042e56e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:54:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a774753ba8f3f120effc81890e17a3ddede8e0f471586a929d72dd07c042e56e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:54:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a774753ba8f3f120effc81890e17a3ddede8e0f471586a929d72dd07c042e56e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:54:33 compute-0 podman[244702]: 2025-10-11 03:54:33.825352502 +0000 UTC m=+0.997526436 container init 717f3be3269418f65f324c2309f80c4be16de9e60c398d4f7c13bf1be8ebece8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hamilton, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 03:54:33 compute-0 podman[244702]: 2025-10-11 03:54:33.841354696 +0000 UTC m=+1.013528650 container start 717f3be3269418f65f324c2309f80c4be16de9e60c398d4f7c13bf1be8ebece8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hamilton, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:54:33 compute-0 podman[244702]: 2025-10-11 03:54:33.847045884 +0000 UTC m=+1.019219798 container attach 717f3be3269418f65f324c2309f80c4be16de9e60c398d4f7c13bf1be8ebece8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 03:54:34 compute-0 ceph-mon[74273]: pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:54:34 compute-0 cool_hamilton[244738]: --> passed data devices: 0 physical, 3 LVM
Oct 11 03:54:34 compute-0 cool_hamilton[244738]: --> relative data size: 1.0
Oct 11 03:54:34 compute-0 cool_hamilton[244738]: --> All data devices are unavailable
Oct 11 03:54:35 compute-0 systemd[1]: libpod-717f3be3269418f65f324c2309f80c4be16de9e60c398d4f7c13bf1be8ebece8.scope: Deactivated successfully.
Oct 11 03:54:35 compute-0 systemd[1]: libpod-717f3be3269418f65f324c2309f80c4be16de9e60c398d4f7c13bf1be8ebece8.scope: Consumed 1.081s CPU time.
Oct 11 03:54:35 compute-0 podman[244702]: 2025-10-11 03:54:35.001108011 +0000 UTC m=+2.173281935 container died 717f3be3269418f65f324c2309f80c4be16de9e60c398d4f7c13bf1be8ebece8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 03:54:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-a774753ba8f3f120effc81890e17a3ddede8e0f471586a929d72dd07c042e56e-merged.mount: Deactivated successfully.
Oct 11 03:54:35 compute-0 podman[244702]: 2025-10-11 03:54:35.077817199 +0000 UTC m=+2.249991123 container remove 717f3be3269418f65f324c2309f80c4be16de9e60c398d4f7c13bf1be8ebece8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 03:54:35 compute-0 systemd[1]: libpod-conmon-717f3be3269418f65f324c2309f80c4be16de9e60c398d4f7c13bf1be8ebece8.scope: Deactivated successfully.
Oct 11 03:54:35 compute-0 sudo[244442]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:35 compute-0 sudo[244830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:54:35 compute-0 sudo[244830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:54:35 compute-0 sudo[244830]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:35 compute-0 sudo[244859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:54:35 compute-0 sudo[244859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:54:35 compute-0 sudo[244859]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:35 compute-0 sudo[244886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:54:35 compute-0 sudo[244886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:54:35 compute-0 sudo[244886]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:35 compute-0 sudo[244915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 03:54:35 compute-0 sudo[244915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:54:35 compute-0 podman[244998]: 2025-10-11 03:54:35.749559079 +0000 UTC m=+0.069448047 container create 3b1b1d15acdfdcd30297f8d6c542cdeb8354619b7e6d092eab4d057dcb35b245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wright, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:54:35 compute-0 systemd[1]: Started libpod-conmon-3b1b1d15acdfdcd30297f8d6c542cdeb8354619b7e6d092eab4d057dcb35b245.scope.
Oct 11 03:54:35 compute-0 podman[244998]: 2025-10-11 03:54:35.723652151 +0000 UTC m=+0.043541209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:54:35 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:54:35 compute-0 podman[244998]: 2025-10-11 03:54:35.853953755 +0000 UTC m=+0.173842813 container init 3b1b1d15acdfdcd30297f8d6c542cdeb8354619b7e6d092eab4d057dcb35b245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wright, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:54:35 compute-0 podman[244998]: 2025-10-11 03:54:35.861494214 +0000 UTC m=+0.181383212 container start 3b1b1d15acdfdcd30297f8d6c542cdeb8354619b7e6d092eab4d057dcb35b245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wright, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct 11 03:54:35 compute-0 infallible_wright[245020]: 167 167
Oct 11 03:54:35 compute-0 podman[244998]: 2025-10-11 03:54:35.865348071 +0000 UTC m=+0.185237069 container attach 3b1b1d15acdfdcd30297f8d6c542cdeb8354619b7e6d092eab4d057dcb35b245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wright, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:54:35 compute-0 systemd[1]: libpod-3b1b1d15acdfdcd30297f8d6c542cdeb8354619b7e6d092eab4d057dcb35b245.scope: Deactivated successfully.
Oct 11 03:54:35 compute-0 conmon[245020]: conmon 3b1b1d15acdfdcd30297 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3b1b1d15acdfdcd30297f8d6c542cdeb8354619b7e6d092eab4d057dcb35b245.scope/container/memory.events
Oct 11 03:54:35 compute-0 podman[245028]: 2025-10-11 03:54:35.919646927 +0000 UTC m=+0.034515949 container died 3b1b1d15acdfdcd30297f8d6c542cdeb8354619b7e6d092eab4d057dcb35b245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wright, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 03:54:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-f78697d0aef3398ac430218a0e319e44140012782c2a49539ebe41db0783d791-merged.mount: Deactivated successfully.
Oct 11 03:54:35 compute-0 podman[245028]: 2025-10-11 03:54:35.986877951 +0000 UTC m=+0.101746913 container remove 3b1b1d15acdfdcd30297f8d6c542cdeb8354619b7e6d092eab4d057dcb35b245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:54:35 compute-0 systemd[1]: libpod-conmon-3b1b1d15acdfdcd30297f8d6c542cdeb8354619b7e6d092eab4d057dcb35b245.scope: Deactivated successfully.
Oct 11 03:54:36 compute-0 podman[245058]: 2025-10-11 03:54:36.216427888 +0000 UTC m=+0.052757605 container create fd8e0622e7ae67ef9580bb8d6e51ed02364c68be2c32fa524998bb96c76f5360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shockley, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:54:36 compute-0 systemd[1]: Started libpod-conmon-fd8e0622e7ae67ef9580bb8d6e51ed02364c68be2c32fa524998bb96c76f5360.scope.
Oct 11 03:54:36 compute-0 podman[245058]: 2025-10-11 03:54:36.191025533 +0000 UTC m=+0.027355330 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:54:36 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:54:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e170b51a866550045c37a60a20114c66f2507ede8b8335777a71a9a5fddf104f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:54:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e170b51a866550045c37a60a20114c66f2507ede8b8335777a71a9a5fddf104f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:54:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e170b51a866550045c37a60a20114c66f2507ede8b8335777a71a9a5fddf104f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:54:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e170b51a866550045c37a60a20114c66f2507ede8b8335777a71a9a5fddf104f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:54:36 compute-0 podman[245058]: 2025-10-11 03:54:36.346603578 +0000 UTC m=+0.182933375 container init fd8e0622e7ae67ef9580bb8d6e51ed02364c68be2c32fa524998bb96c76f5360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shockley, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:54:36 compute-0 podman[245058]: 2025-10-11 03:54:36.353993003 +0000 UTC m=+0.190322720 container start fd8e0622e7ae67ef9580bb8d6e51ed02364c68be2c32fa524998bb96c76f5360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Oct 11 03:54:36 compute-0 podman[245058]: 2025-10-11 03:54:36.35750664 +0000 UTC m=+0.193836407 container attach fd8e0622e7ae67ef9580bb8d6e51ed02364c68be2c32fa524998bb96c76f5360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shockley, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 03:54:36 compute-0 ceph-mon[74273]: pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:37 compute-0 hungry_shockley[245079]: {
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:     "0": [
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:         {
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "devices": [
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "/dev/loop3"
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             ],
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "lv_name": "ceph_lv0",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "lv_size": "21470642176",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "name": "ceph_lv0",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "tags": {
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.cluster_name": "ceph",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.crush_device_class": "",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.encrypted": "0",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.osd_id": "0",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.type": "block",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.vdo": "0"
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             },
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "type": "block",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "vg_name": "ceph_vg0"
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:         }
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:     ],
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:     "1": [
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:         {
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "devices": [
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "/dev/loop4"
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             ],
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "lv_name": "ceph_lv1",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "lv_size": "21470642176",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "name": "ceph_lv1",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "tags": {
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.cluster_name": "ceph",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.crush_device_class": "",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.encrypted": "0",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.osd_id": "1",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.type": "block",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.vdo": "0"
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             },
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "type": "block",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "vg_name": "ceph_vg1"
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:         }
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:     ],
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:     "2": [
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:         {
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "devices": [
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "/dev/loop5"
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             ],
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "lv_name": "ceph_lv2",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "lv_size": "21470642176",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "name": "ceph_lv2",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "tags": {
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.cluster_name": "ceph",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.crush_device_class": "",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.encrypted": "0",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.osd_id": "2",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.type": "block",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:                 "ceph.vdo": "0"
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             },
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "type": "block",
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:             "vg_name": "ceph_vg2"
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:         }
Oct 11 03:54:37 compute-0 hungry_shockley[245079]:     ]
Oct 11 03:54:37 compute-0 hungry_shockley[245079]: }
Oct 11 03:54:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:37 compute-0 systemd[1]: libpod-fd8e0622e7ae67ef9580bb8d6e51ed02364c68be2c32fa524998bb96c76f5360.scope: Deactivated successfully.
Oct 11 03:54:37 compute-0 podman[245058]: 2025-10-11 03:54:37.194500463 +0000 UTC m=+1.030830200 container died fd8e0622e7ae67ef9580bb8d6e51ed02364c68be2c32fa524998bb96c76f5360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shockley, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Oct 11 03:54:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-e170b51a866550045c37a60a20114c66f2507ede8b8335777a71a9a5fddf104f-merged.mount: Deactivated successfully.
Oct 11 03:54:37 compute-0 podman[245058]: 2025-10-11 03:54:37.257515861 +0000 UTC m=+1.093845588 container remove fd8e0622e7ae67ef9580bb8d6e51ed02364c68be2c32fa524998bb96c76f5360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 03:54:37 compute-0 systemd[1]: libpod-conmon-fd8e0622e7ae67ef9580bb8d6e51ed02364c68be2c32fa524998bb96c76f5360.scope: Deactivated successfully.
Oct 11 03:54:37 compute-0 sudo[244915]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:37 compute-0 sudo[245221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:54:37 compute-0 sudo[245221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:54:37 compute-0 sudo[245221]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:37 compute-0 sudo[245316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxxcbbqqewaamlkqcyywuokfyjvtsqen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154877.0919573-1010-103647979235191/AnsiballZ_systemd_service.py'
Oct 11 03:54:37 compute-0 sudo[245316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:37 compute-0 sudo[245276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:54:37 compute-0 sudo[245276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:54:37 compute-0 sudo[245276]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:37 compute-0 sudo[245323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:54:37 compute-0 sudo[245323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:54:37 compute-0 sudo[245323]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:37 compute-0 sudo[245348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 03:54:37 compute-0 sudo[245348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:54:37 compute-0 python3.9[245320]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:54:37 compute-0 sudo[245316]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:37 compute-0 podman[245457]: 2025-10-11 03:54:37.97862766 +0000 UTC m=+0.049643057 container create cda58e0d52f888fa713b747469b5421dd728c86fd1316f98af1d1091b9440c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ganguly, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:54:38 compute-0 systemd[1]: Started libpod-conmon-cda58e0d52f888fa713b747469b5421dd728c86fd1316f98af1d1091b9440c53.scope.
Oct 11 03:54:38 compute-0 podman[245457]: 2025-10-11 03:54:37.952364602 +0000 UTC m=+0.023380089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:54:38 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:54:38 compute-0 podman[245457]: 2025-10-11 03:54:38.071813145 +0000 UTC m=+0.142828632 container init cda58e0d52f888fa713b747469b5421dd728c86fd1316f98af1d1091b9440c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:54:38 compute-0 podman[245457]: 2025-10-11 03:54:38.080107995 +0000 UTC m=+0.151123432 container start cda58e0d52f888fa713b747469b5421dd728c86fd1316f98af1d1091b9440c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:54:38 compute-0 sad_ganguly[245507]: 167 167
Oct 11 03:54:38 compute-0 podman[245457]: 2025-10-11 03:54:38.084228349 +0000 UTC m=+0.155243796 container attach cda58e0d52f888fa713b747469b5421dd728c86fd1316f98af1d1091b9440c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 03:54:38 compute-0 systemd[1]: libpod-cda58e0d52f888fa713b747469b5421dd728c86fd1316f98af1d1091b9440c53.scope: Deactivated successfully.
Oct 11 03:54:38 compute-0 podman[245457]: 2025-10-11 03:54:38.086504972 +0000 UTC m=+0.157520389 container died cda58e0d52f888fa713b747469b5421dd728c86fd1316f98af1d1091b9440c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 03:54:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8a40bb968a8dbacd09ad50f3cbed944854e56b20e27334a1f960eab78e68e6c-merged.mount: Deactivated successfully.
Oct 11 03:54:38 compute-0 podman[245457]: 2025-10-11 03:54:38.129930277 +0000 UTC m=+0.200945694 container remove cda58e0d52f888fa713b747469b5421dd728c86fd1316f98af1d1091b9440c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ganguly, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:54:38 compute-0 systemd[1]: libpod-conmon-cda58e0d52f888fa713b747469b5421dd728c86fd1316f98af1d1091b9440c53.scope: Deactivated successfully.
Oct 11 03:54:38 compute-0 sudo[245599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjcemgfyzgjehqkwnatxsgmnzhwozmhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154877.9462547-1010-205755855201635/AnsiballZ_systemd_service.py'
Oct 11 03:54:38 compute-0 sudo[245599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:38 compute-0 podman[245607]: 2025-10-11 03:54:38.373270216 +0000 UTC m=+0.077632154 container create 2f088c63f7b48ba2f60263da450acf072bf11ed9d4bf661d7459b16917389e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:54:38 compute-0 ceph-mon[74273]: pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:38 compute-0 systemd[1]: Started libpod-conmon-2f088c63f7b48ba2f60263da450acf072bf11ed9d4bf661d7459b16917389e32.scope.
Oct 11 03:54:38 compute-0 podman[245607]: 2025-10-11 03:54:38.341770332 +0000 UTC m=+0.046132310 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:54:38 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:54:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db3fcc6a11f77fd50b5027219e432c00a603514a1501e17fd504ea6baf12880/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:54:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db3fcc6a11f77fd50b5027219e432c00a603514a1501e17fd504ea6baf12880/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:54:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db3fcc6a11f77fd50b5027219e432c00a603514a1501e17fd504ea6baf12880/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:54:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db3fcc6a11f77fd50b5027219e432c00a603514a1501e17fd504ea6baf12880/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:54:38 compute-0 podman[245607]: 2025-10-11 03:54:38.504580247 +0000 UTC m=+0.208942205 container init 2f088c63f7b48ba2f60263da450acf072bf11ed9d4bf661d7459b16917389e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:54:38 compute-0 podman[245607]: 2025-10-11 03:54:38.512909568 +0000 UTC m=+0.217271496 container start 2f088c63f7b48ba2f60263da450acf072bf11ed9d4bf661d7459b16917389e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_margulis, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 03:54:38 compute-0 podman[245607]: 2025-10-11 03:54:38.51656699 +0000 UTC m=+0.220928988 container attach 2f088c63f7b48ba2f60263da450acf072bf11ed9d4bf661d7459b16917389e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_margulis, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 03:54:38 compute-0 python3.9[245602]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:54:38 compute-0 sudo[245599]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:39 compute-0 sudo[245779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sypwcruevrasbhukcmlykdsvlxekxkwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154878.7595034-1010-122655345005567/AnsiballZ_systemd_service.py'
Oct 11 03:54:39 compute-0 sudo[245779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:39 compute-0 python3.9[245781]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:54:39 compute-0 sudo[245779]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:39 compute-0 elastic_margulis[245624]: {
Oct 11 03:54:39 compute-0 elastic_margulis[245624]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 03:54:39 compute-0 elastic_margulis[245624]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:54:39 compute-0 elastic_margulis[245624]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 03:54:39 compute-0 elastic_margulis[245624]:         "osd_id": 1,
Oct 11 03:54:39 compute-0 elastic_margulis[245624]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:54:39 compute-0 elastic_margulis[245624]:         "type": "bluestore"
Oct 11 03:54:39 compute-0 elastic_margulis[245624]:     },
Oct 11 03:54:39 compute-0 elastic_margulis[245624]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 03:54:39 compute-0 elastic_margulis[245624]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:54:39 compute-0 elastic_margulis[245624]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 03:54:39 compute-0 elastic_margulis[245624]:         "osd_id": 2,
Oct 11 03:54:39 compute-0 elastic_margulis[245624]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:54:39 compute-0 elastic_margulis[245624]:         "type": "bluestore"
Oct 11 03:54:39 compute-0 elastic_margulis[245624]:     },
Oct 11 03:54:39 compute-0 elastic_margulis[245624]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 03:54:39 compute-0 elastic_margulis[245624]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:54:39 compute-0 elastic_margulis[245624]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 03:54:39 compute-0 elastic_margulis[245624]:         "osd_id": 0,
Oct 11 03:54:39 compute-0 elastic_margulis[245624]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:54:39 compute-0 elastic_margulis[245624]:         "type": "bluestore"
Oct 11 03:54:39 compute-0 elastic_margulis[245624]:     }
Oct 11 03:54:39 compute-0 elastic_margulis[245624]: }
Oct 11 03:54:39 compute-0 systemd[1]: libpod-2f088c63f7b48ba2f60263da450acf072bf11ed9d4bf661d7459b16917389e32.scope: Deactivated successfully.
Oct 11 03:54:39 compute-0 podman[245607]: 2025-10-11 03:54:39.557132829 +0000 UTC m=+1.261494767 container died 2f088c63f7b48ba2f60263da450acf072bf11ed9d4bf661d7459b16917389e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:54:39 compute-0 systemd[1]: libpod-2f088c63f7b48ba2f60263da450acf072bf11ed9d4bf661d7459b16917389e32.scope: Consumed 1.052s CPU time.
Oct 11 03:54:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:54:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-3db3fcc6a11f77fd50b5027219e432c00a603514a1501e17fd504ea6baf12880-merged.mount: Deactivated successfully.
Oct 11 03:54:39 compute-0 podman[245607]: 2025-10-11 03:54:39.622861772 +0000 UTC m=+1.327223670 container remove 2f088c63f7b48ba2f60263da450acf072bf11ed9d4bf661d7459b16917389e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:54:39 compute-0 systemd[1]: libpod-conmon-2f088c63f7b48ba2f60263da450acf072bf11ed9d4bf661d7459b16917389e32.scope: Deactivated successfully.
Oct 11 03:54:39 compute-0 sudo[245348]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:54:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:54:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:54:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:54:39 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev bf90e336-1f5b-4caa-bb47-6f3b89333071 does not exist
Oct 11 03:54:39 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 589a4cac-eb6b-4b6c-87c3-28e168390e1f does not exist
Oct 11 03:54:39 compute-0 sudo[245927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:54:39 compute-0 sudo[245927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:54:39 compute-0 sudo[245927]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:39 compute-0 sudo[246014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erinadsdulvyqxdglxujokcwslnootle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154879.5079243-1010-101948283129615/AnsiballZ_systemd_service.py'
Oct 11 03:54:39 compute-0 sudo[246014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:39 compute-0 sudo[245981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 03:54:39 compute-0 sudo[245981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:54:39 compute-0 sudo[245981]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:40 compute-0 python3.9[246021]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:54:40 compute-0 sudo[246014]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:40 compute-0 sudo[246174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoufkmkphxhyxrwqbvgejnrlzmivqgld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154880.2678933-1010-107055757503980/AnsiballZ_systemd_service.py'
Oct 11 03:54:40 compute-0 sudo[246174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:40 compute-0 ceph-mon[74273]: pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:54:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:54:40 compute-0 python3.9[246176]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:54:40 compute-0 sudo[246174]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:41 compute-0 podman[246282]: 2025-10-11 03:54:41.368910007 +0000 UTC m=+0.078591641 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 03:54:41 compute-0 sudo[246347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmckathrwdxprqdcippdqouithccqgtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154881.09027-1010-26234187926866/AnsiballZ_systemd_service.py'
Oct 11 03:54:41 compute-0 sudo[246347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:41 compute-0 python3.9[246349]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:54:41 compute-0 sudo[246347]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:42 compute-0 sudo[246500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpdjjyvgtwbktatruztcvpsklcncltpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154881.897739-1010-99642759958893/AnsiballZ_systemd_service.py'
Oct 11 03:54:42 compute-0 sudo[246500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:42 compute-0 python3.9[246502]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:54:42 compute-0 sudo[246500]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:42 compute-0 ceph-mon[74273]: pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:43 compute-0 sudo[246653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgqkcljsfncdfimrgfylmhtrlrzbfbfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154882.7514515-1010-261223094171362/AnsiballZ_systemd_service.py'
Oct 11 03:54:43 compute-0 sudo[246653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:43 compute-0 python3.9[246655]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:54:43 compute-0 sudo[246653]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:43 compute-0 podman[246657]: 2025-10-11 03:54:43.600372106 +0000 UTC m=+0.090390768 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible)
Oct 11 03:54:44 compute-0 sudo[246824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cphljqmhqwyggfmptdjgylvbetcgbbkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154883.883224-1069-266581576785553/AnsiballZ_file.py'
Oct 11 03:54:44 compute-0 sudo[246824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:44 compute-0 python3.9[246826]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:54:44 compute-0 sudo[246824]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:54:44 compute-0 ceph-mon[74273]: pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:44 compute-0 sudo[246976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtwshnxqtkjptiwvguylfhughursyfrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154884.5587835-1069-147238756814309/AnsiballZ_file.py'
Oct 11 03:54:44 compute-0 sudo[246976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:45 compute-0 python3.9[246978]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:54:45 compute-0 sudo[246976]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:45 compute-0 sudo[247128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flhcsctgidcbrawntxoionaxyuuqdznk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154885.201435-1069-16234546519508/AnsiballZ_file.py'
Oct 11 03:54:45 compute-0 sudo[247128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:45 compute-0 python3.9[247130]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:54:45 compute-0 sudo[247128]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:46 compute-0 sudo[247280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atjwwwwxgxwzbouqqaakmfetltavkdxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154885.820095-1069-64777025312733/AnsiballZ_file.py'
Oct 11 03:54:46 compute-0 sudo[247280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:46 compute-0 python3.9[247282]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:54:46 compute-0 sudo[247280]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:46 compute-0 ceph-mon[74273]: pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:46 compute-0 sudo[247432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrvdbomnhprkapklczfztgojvjrihxkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154886.5130138-1069-214736959967896/AnsiballZ_file.py'
Oct 11 03:54:46 compute-0 sudo[247432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:47 compute-0 python3.9[247434]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:54:47 compute-0 sudo[247432]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:47 compute-0 sudo[247584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzuytiplfhaheblhbozeywmkuqlpmxxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154887.2582464-1069-15485788467245/AnsiballZ_file.py'
Oct 11 03:54:47 compute-0 sudo[247584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:47 compute-0 python3.9[247586]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:54:47 compute-0 sudo[247584]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:48 compute-0 sudo[247736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmgnvmpgjqyizvnduysidnuipxrmmblt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154887.960762-1069-87404922831583/AnsiballZ_file.py'
Oct 11 03:54:48 compute-0 sudo[247736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:48 compute-0 python3.9[247738]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:54:48 compute-0 sudo[247736]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:48 compute-0 ceph-mon[74273]: pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:49 compute-0 sudo[247888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxkswmxoikvkntyifkcsrglolpkkclfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154888.6470933-1069-112302859759994/AnsiballZ_file.py'
Oct 11 03:54:49 compute-0 sudo[247888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:49 compute-0 python3.9[247890]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:54:49 compute-0 sudo[247888]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:54:49 compute-0 sudo[248040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhdnnpwecbimqyzywtbxoeunuhnmwlxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154889.3935926-1126-278979751522295/AnsiballZ_file.py'
Oct 11 03:54:49 compute-0 sudo[248040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:49 compute-0 python3.9[248042]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:54:49 compute-0 sudo[248040]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:50 compute-0 sudo[248192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olurhjafkyfkredzvhqdrpfxrfnbstsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154890.058065-1126-273504330988285/AnsiballZ_file.py'
Oct 11 03:54:50 compute-0 sudo[248192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:50 compute-0 python3.9[248194]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:54:50 compute-0 sudo[248192]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:50 compute-0 ceph-mon[74273]: pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:54:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:54:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:54:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:54:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:54:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:54:51 compute-0 sudo[248344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nauxltfmtocnnmlgcaupdtrlorxscpci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154890.7368052-1126-191623561940645/AnsiballZ_file.py'
Oct 11 03:54:51 compute-0 sudo[248344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:51 compute-0 python3.9[248346]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:54:51 compute-0 sudo[248344]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:51 compute-0 sudo[248496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvlevgjfnbhtfuykpctlrzxntkrjxtkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154891.508886-1126-150903490863965/AnsiballZ_file.py'
Oct 11 03:54:51 compute-0 sudo[248496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:52 compute-0 python3.9[248498]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:54:52 compute-0 sudo[248496]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:52 compute-0 sudo[248663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqkncutkrkhdhywkwisvpcoqzhhpkfpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154892.2181075-1126-79304571724049/AnsiballZ_file.py'
Oct 11 03:54:52 compute-0 sudo[248663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:52 compute-0 podman[248622]: 2025-10-11 03:54:52.595466357 +0000 UTC m=+0.113596972 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 11 03:54:52 compute-0 ceph-mon[74273]: pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:52 compute-0 python3.9[248671]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:54:52 compute-0 sudo[248663]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:53 compute-0 sudo[248825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryuakkmnwnpidcsequgiemexipyuasoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154892.9493365-1126-78654666145105/AnsiballZ_file.py'
Oct 11 03:54:53 compute-0 sudo[248825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:53 compute-0 python3.9[248827]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:54:53 compute-0 sudo[248825]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:53 compute-0 sudo[248977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpdjyscrhguvebwqunpqzlooctjcjlei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154893.6763482-1126-50397900748666/AnsiballZ_file.py'
Oct 11 03:54:53 compute-0 sudo[248977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:54 compute-0 python3.9[248979]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:54:54 compute-0 sudo[248977]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:54:54 compute-0 sudo[249129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hticbailuvolumrmiizacsxdbfczposm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154894.335587-1126-66999436710021/AnsiballZ_file.py'
Oct 11 03:54:54 compute-0 sudo[249129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:54 compute-0 ceph-mon[74273]: pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:54 compute-0 python3.9[249131]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:54:54 compute-0 sudo[249129]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:55 compute-0 sudo[249281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmuqueaegafhnarhofmycxzzwzigndvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154895.2527273-1184-151219206265743/AnsiballZ_command.py'
Oct 11 03:54:55 compute-0 sudo[249281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:55 compute-0 python3.9[249283]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:54:55 compute-0 sudo[249281]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:56 compute-0 ceph-mon[74273]: pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:56 compute-0 python3.9[249435]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 11 03:54:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:57 compute-0 podman[249559]: 2025-10-11 03:54:57.498717073 +0000 UTC m=+0.075349290 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 03:54:57 compute-0 sudo[249604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nryaosyjpnmaagagxcmmlsugyosmmwjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154897.0879982-1202-234032945253690/AnsiballZ_systemd_service.py'
Oct 11 03:54:57 compute-0 sudo[249604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:57 compute-0 python3.9[249606]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 03:54:57 compute-0 systemd[1]: Reloading.
Oct 11 03:54:57 compute-0 systemd-sysv-generator[249632]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:54:57 compute-0 systemd-rc-local-generator[249629]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:54:58 compute-0 sudo[249604]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:58 compute-0 sudo[249791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbnsmxjqqsotypjttuqcstmvsakxkcar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154898.4119892-1210-33050520408956/AnsiballZ_command.py'
Oct 11 03:54:58 compute-0 sudo[249791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:58 compute-0 ceph-mon[74273]: pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:58 compute-0 python3.9[249793]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:54:58 compute-0 sudo[249791]: pam_unix(sudo:session): session closed for user root
Oct 11 03:54:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:54:59 compute-0 sudo[249944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulhoeppkrxumtnpsovybwcrsxakugjiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154899.1284015-1210-153835462228261/AnsiballZ_command.py'
Oct 11 03:54:59 compute-0 sudo[249944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:54:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:54:59.573934) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154899573967, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1141, "num_deletes": 505, "total_data_size": 1249845, "memory_usage": 1277728, "flush_reason": "Manual Compaction"}
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154899583951, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1238091, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13594, "largest_seqno": 14734, "table_properties": {"data_size": 1232957, "index_size": 2146, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 13298, "raw_average_key_size": 17, "raw_value_size": 1220816, "raw_average_value_size": 1632, "num_data_blocks": 98, "num_entries": 748, "num_filter_entries": 748, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760154814, "oldest_key_time": 1760154814, "file_creation_time": 1760154899, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 10057 microseconds, and 4272 cpu microseconds.
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:54:59.583987) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1238091 bytes OK
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:54:59.584010) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:54:59.585556) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:54:59.585571) EVENT_LOG_v1 {"time_micros": 1760154899585566, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:54:59.585586) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1243513, prev total WAL file size 1243513, number of live WAL files 2.
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:54:59.586377) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1209KB)], [32(7489KB)]
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154899586427, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 8907488, "oldest_snapshot_seqno": -1}
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3746 keys, 6964201 bytes, temperature: kUnknown
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154899641212, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 6964201, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6937562, "index_size": 16162, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9413, "raw_key_size": 91872, "raw_average_key_size": 24, "raw_value_size": 6868127, "raw_average_value_size": 1833, "num_data_blocks": 684, "num_entries": 3746, "num_filter_entries": 3746, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153731, "oldest_key_time": 0, "file_creation_time": 1760154899, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:54:59.641487) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 6964201 bytes
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:54:59.643711) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.4 rd, 126.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 7.3 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(12.8) write-amplify(5.6) OK, records in: 4769, records dropped: 1023 output_compression: NoCompression
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:54:59.643738) EVENT_LOG_v1 {"time_micros": 1760154899643726, "job": 14, "event": "compaction_finished", "compaction_time_micros": 54865, "compaction_time_cpu_micros": 31665, "output_level": 6, "num_output_files": 1, "total_output_size": 6964201, "num_input_records": 4769, "num_output_records": 3746, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154899644290, "job": 14, "event": "table_file_deletion", "file_number": 34}
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154899646540, "job": 14, "event": "table_file_deletion", "file_number": 32}
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:54:59.586241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:54:59.646613) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:54:59.646619) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:54:59.646622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:54:59.646623) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:54:59 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:54:59.646625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:54:59 compute-0 python3.9[249946]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:54:59 compute-0 sudo[249944]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:00 compute-0 sudo[250097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngdadsucksfusqbkidjepiczepjdpgau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154899.9393287-1210-186700550432997/AnsiballZ_command.py'
Oct 11 03:55:00 compute-0 sudo[250097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:00 compute-0 python3.9[250099]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:55:00 compute-0 sudo[250097]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:00 compute-0 ceph-mon[74273]: pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:01 compute-0 sudo[250250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jitofqyfdfizidgcjvyzxvnilorhcxfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154900.8047569-1210-44101999808603/AnsiballZ_command.py'
Oct 11 03:55:01 compute-0 sudo[250250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:01 compute-0 python3.9[250252]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:55:01 compute-0 sudo[250250]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:01 compute-0 sudo[250403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgqocguusbevcmmumijeaeocljzohcip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154901.6308274-1210-150967731634286/AnsiballZ_command.py'
Oct 11 03:55:01 compute-0 sudo[250403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:02 compute-0 python3.9[250405]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:55:02 compute-0 sudo[250403]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:02 compute-0 sudo[250556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsqpvfrjtwpgwufkzwdfokneeozxufrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154902.3613052-1210-167541722334333/AnsiballZ_command.py'
Oct 11 03:55:02 compute-0 sudo[250556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:02 compute-0 ceph-mon[74273]: pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:02 compute-0 python3.9[250558]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:55:02 compute-0 sudo[250556]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:03 compute-0 sudo[250709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stydsxsvmdtkeipecnapqgwxythicrwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154903.1867254-1210-248861297356832/AnsiballZ_command.py'
Oct 11 03:55:03 compute-0 sudo[250709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:03 compute-0 python3.9[250711]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:55:03 compute-0 sudo[250709]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:04 compute-0 sudo[250862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmgbiecaxbbzwnuwbthiatupdpzjmxhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154903.980366-1210-268528553243805/AnsiballZ_command.py'
Oct 11 03:55:04 compute-0 sudo[250862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:04 compute-0 python3.9[250864]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:55:04 compute-0 sudo[250862]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:55:04 compute-0 ceph-mon[74273]: pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:05 compute-0 sudo[251015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlauhppcdmuxqqxfkdrsbcwwbrxqhuym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154905.5424187-1289-6848656154199/AnsiballZ_file.py'
Oct 11 03:55:05 compute-0 sudo[251015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:06 compute-0 python3.9[251017]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:55:06 compute-0 sudo[251015]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:06 compute-0 sudo[251167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpvlitlahchjgmdgusdggepozqoafpyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154906.3442419-1289-261523529759935/AnsiballZ_file.py'
Oct 11 03:55:06 compute-0 sudo[251167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:06 compute-0 ceph-mon[74273]: pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:06 compute-0 python3.9[251169]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:55:06 compute-0 sudo[251167]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:07 compute-0 sudo[251319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gydcxrqfvpfyslpzklmznuizcnwdajez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154907.167035-1289-212526933272702/AnsiballZ_file.py'
Oct 11 03:55:07 compute-0 sudo[251319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:07 compute-0 python3.9[251321]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:55:07 compute-0 sudo[251319]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:08 compute-0 sudo[251471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbfrsbqvmrgdwqtynxkjvenpxxncflrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154907.9794426-1311-204296388440777/AnsiballZ_file.py'
Oct 11 03:55:08 compute-0 sudo[251471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:08 compute-0 python3.9[251473]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:55:08 compute-0 sudo[251471]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:08 compute-0 ceph-mon[74273]: pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:09 compute-0 sudo[251623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykypvdtahuwywitfqxrceyrgmmquylbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154908.7005618-1311-138933383280143/AnsiballZ_file.py'
Oct 11 03:55:09 compute-0 sudo[251623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:09 compute-0 python3.9[251625]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:55:09 compute-0 sudo[251623]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:55:10 compute-0 sudo[251775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzvfcszoeeltfntibwssqptrjknmhqnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154909.4659116-1311-280546633004024/AnsiballZ_file.py'
Oct 11 03:55:10 compute-0 sudo[251775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:10 compute-0 python3.9[251777]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:55:10 compute-0 sudo[251775]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:10 compute-0 sudo[251927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncpxjunlherrfscdrqbfgjqndpbvgujo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154910.4352489-1311-194139414478545/AnsiballZ_file.py'
Oct 11 03:55:10 compute-0 sudo[251927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:10 compute-0 ceph-mon[74273]: pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:10 compute-0 python3.9[251929]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:55:11 compute-0 sudo[251927]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:11 compute-0 sudo[252090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faazibrnqrbrbskdwzcqkqtxuzeasjyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154911.1808214-1311-86090835397489/AnsiballZ_file.py'
Oct 11 03:55:11 compute-0 sudo[252090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:11 compute-0 podman[252053]: 2025-10-11 03:55:11.558220811 +0000 UTC m=+0.098766560 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 03:55:11 compute-0 python3.9[252097]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:55:11 compute-0 sudo[252090]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:12 compute-0 sudo[252249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eofevzddpipthxiwanwlwymfskvoidqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154911.9032614-1311-102935349727392/AnsiballZ_file.py'
Oct 11 03:55:12 compute-0 sudo[252249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:12 compute-0 python3.9[252251]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:55:12 compute-0 sudo[252249]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:12 compute-0 ceph-mon[74273]: pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:13 compute-0 sudo[252401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nizuqoxeekcjlhfwpxxyydzndtbxunak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154912.6714246-1311-168383296066572/AnsiballZ_file.py'
Oct 11 03:55:13 compute-0 sudo[252401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:13 compute-0 python3.9[252403]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:55:13 compute-0 sudo[252401]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:13 compute-0 sudo[252566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxfkenpwjahexrcshljihdbvcxmcginj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154913.4284968-1311-155903054814844/AnsiballZ_file.py'
Oct 11 03:55:13 compute-0 sudo[252566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:13 compute-0 podman[252527]: 2025-10-11 03:55:13.871628162 +0000 UTC m=+0.095820649 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 11 03:55:14 compute-0 python3.9[252574]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:55:14 compute-0 sudo[252566]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:55:14 compute-0 sudo[252725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njmgbeqysdqwzmmyahfomltsegfxvpsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154914.304205-1311-220649959623193/AnsiballZ_file.py'
Oct 11 03:55:14 compute-0 sudo[252725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:14 compute-0 ceph-mon[74273]: pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:14 compute-0 python3.9[252727]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:55:14 compute-0 sudo[252725]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:16 compute-0 ceph-mon[74273]: pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:18 compute-0 ceph-mon[74273]: pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:55:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_03:55:20
Oct 11 03:55:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 03:55:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 03:55:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'images', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'backups', 'default.rgw.control', 'volumes']
Oct 11 03:55:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 03:55:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:55:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:55:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:55:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:55:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:55:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:55:20 compute-0 ceph-mon[74273]: pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 03:55:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:55:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 03:55:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:55:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:55:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:55:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:55:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:55:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:55:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:55:21 compute-0 sudo[252877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgdddjlkfvsguwhbiezsylslotfnnlev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154920.4467869-1514-201240797957075/AnsiballZ_getent.py'
Oct 11 03:55:21 compute-0 sudo[252877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:21 compute-0 python3.9[252879]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct 11 03:55:21 compute-0 sudo[252877]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:22 compute-0 sudo[253030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdzlbaqxhrukkmyzlktpotliytxckdbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154921.514304-1522-64270409173366/AnsiballZ_group.py'
Oct 11 03:55:22 compute-0 sudo[253030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:22 compute-0 python3.9[253032]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 11 03:55:22 compute-0 groupadd[253033]: group added to /etc/group: name=nova, GID=42436
Oct 11 03:55:22 compute-0 groupadd[253033]: group added to /etc/gshadow: name=nova
Oct 11 03:55:22 compute-0 groupadd[253033]: new group: name=nova, GID=42436
Oct 11 03:55:22 compute-0 sudo[253030]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:22 compute-0 ceph-mon[74273]: pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:55:22.942 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 03:55:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:55:22.943 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 03:55:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:55:22.943 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 03:55:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:23 compute-0 sudo[253203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjfkozuftylqmdrnxffnznfmptwmjvas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154922.633732-1530-274506166326986/AnsiballZ_user.py'
Oct 11 03:55:23 compute-0 sudo[253203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:23 compute-0 podman[253162]: 2025-10-11 03:55:23.283614728 +0000 UTC m=+0.116664627 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 03:55:23 compute-0 python3.9[253207]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 11 03:55:23 compute-0 useradd[253218]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Oct 11 03:55:23 compute-0 useradd[253218]: add 'nova' to group 'libvirt'
Oct 11 03:55:23 compute-0 useradd[253218]: add 'nova' to shadow group 'libvirt'
Oct 11 03:55:23 compute-0 sudo[253203]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:24 compute-0 sshd-session[253249]: Accepted publickey for zuul from 192.168.122.30 port 53504 ssh2: ECDSA SHA256:qo9+RMabHfLAOt2q/80W97JXaZUdeUCREBuTRaqgxBY
Oct 11 03:55:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:55:24 compute-0 systemd-logind[820]: New session 52 of user zuul.
Oct 11 03:55:24 compute-0 systemd[1]: Started Session 52 of User zuul.
Oct 11 03:55:24 compute-0 sshd-session[253249]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 03:55:24 compute-0 sshd-session[253252]: Received disconnect from 192.168.122.30 port 53504:11: disconnected by user
Oct 11 03:55:24 compute-0 sshd-session[253252]: Disconnected from user zuul 192.168.122.30 port 53504
Oct 11 03:55:24 compute-0 sshd-session[253249]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:55:24 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Oct 11 03:55:24 compute-0 systemd-logind[820]: Session 52 logged out. Waiting for processes to exit.
Oct 11 03:55:24 compute-0 systemd-logind[820]: Removed session 52.
Oct 11 03:55:24 compute-0 ceph-mon[74273]: pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:25 compute-0 python3.9[253402]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:55:26 compute-0 python3.9[253523]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760154924.9341238-1555-28045057752241/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:55:26 compute-0 ceph-mon[74273]: pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:26 compute-0 python3.9[253673]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:55:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:27 compute-0 python3.9[253749]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:55:28 compute-0 podman[253873]: 2025-10-11 03:55:28.025824923 +0000 UTC m=+0.085721199 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 03:55:28 compute-0 python3.9[253912]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:55:28 compute-0 python3.9[254039]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760154927.6616046-1555-259692942508200/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:55:28 compute-0 ceph-mon[74273]: pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:29 compute-0 python3.9[254189]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:55:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:55:29 compute-0 ceph-mon[74273]: pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:30 compute-0 python3.9[254310]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760154928.9077792-1555-105575839637675/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:55:30 compute-0 python3.9[254460]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:55:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 03:55:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:55:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 03:55:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:55:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:55:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:55:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:55:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:55:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:55:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:55:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:55:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:55:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 03:55:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:55:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:55:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:55:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 03:55:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:55:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 03:55:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:55:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:55:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:55:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 03:55:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:31 compute-0 python3.9[254581]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760154930.2593327-1555-54067507928237/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:55:31 compute-0 sudo[254731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlujzyaxlekfvgozgpvcrgqgodbclagd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154931.4981978-1624-174194237265434/AnsiballZ_file.py'
Oct 11 03:55:31 compute-0 sudo[254731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:31 compute-0 python3.9[254733]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:55:32 compute-0 sudo[254731]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:32 compute-0 ceph-mon[74273]: pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:32 compute-0 sudo[254883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wymxaleevvqfhgincvsjqzodsnuclhqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154932.2282224-1632-249424557611043/AnsiballZ_copy.py'
Oct 11 03:55:32 compute-0 sudo[254883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:32 compute-0 python3.9[254885]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:55:32 compute-0 sudo[254883]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:33 compute-0 sudo[255035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahunebgqciqtlshrlhvunwjebovzbvib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154933.0313659-1640-25869243692826/AnsiballZ_stat.py'
Oct 11 03:55:33 compute-0 sudo[255035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:33 compute-0 python3.9[255037]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:55:33 compute-0 sudo[255035]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:34 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 03:55:34 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3323 writes, 14K keys, 3323 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3323 writes, 3323 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1282 writes, 5807 keys, 1282 commit groups, 1.0 writes per commit group, ingest: 8.47 MB, 0.01 MB/s
                                           Interval WAL: 1282 writes, 1282 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    109.9      0.14              0.06         7    0.020       0      0       0.0       0.0
                                             L6      1/0    6.64 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7    145.0    119.5      0.34              0.17         6    0.057     24K   3190       0.0       0.0
                                            Sum      1/0    6.64 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.7    102.9    116.7      0.48              0.23        13    0.037     24K   3190       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.9    126.9    127.0      0.27              0.15         8    0.034     17K   2461       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    145.0    119.5      0.34              0.17         6    0.057     24K   3190       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    112.7      0.14              0.06         6    0.023       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.2      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.015, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.05 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.5 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558495a5d1f0#2 capacity: 308.00 MB usage: 1.64 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(100,1.43 MB,0.464556%) FilterBlock(14,74.42 KB,0.0235966%) IndexBlock(14,144.67 KB,0.0458705%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 11 03:55:34 compute-0 sudo[255188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odxephipymbicqxfwvpysxdsliyjnryw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154933.832044-1648-94907841610642/AnsiballZ_stat.py'
Oct 11 03:55:34 compute-0 sudo[255188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:34 compute-0 ceph-mon[74273]: pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:34 compute-0 python3.9[255190]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:55:34 compute-0 sudo[255188]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:55:34 compute-0 sudo[255311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dghwbeibhstkfakrsulnqzkdljcildep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154933.832044-1648-94907841610642/AnsiballZ_copy.py'
Oct 11 03:55:34 compute-0 sudo[255311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:35 compute-0 python3.9[255313]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1760154933.832044-1648-94907841610642/.source _original_basename=.l8uc8976 follow=False checksum=dd45cd6f1646ea2252ce6657d561a2b88b2370a2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct 11 03:55:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:35 compute-0 sudo[255311]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:36 compute-0 python3.9[255465]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:55:36 compute-0 ceph-mon[74273]: pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:36 compute-0 python3.9[255617]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:55:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:37 compute-0 python3.9[255738]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760154936.3185213-1674-128296335181143/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=f022386746472553146d29f689b545df70fa8a60 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:55:38 compute-0 ceph-mon[74273]: pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:38 compute-0 python3.9[255888]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 03:55:38 compute-0 python3.9[256009]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760154937.7271426-1689-213422881977192/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 03:55:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:55:39 compute-0 sudo[256159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzzxldahdrgyubwwvnsvhikjakrhgphx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154939.345512-1706-276886274179978/AnsiballZ_container_config_data.py'
Oct 11 03:55:39 compute-0 sudo[256159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:39 compute-0 sudo[256162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:55:39 compute-0 sudo[256162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:55:39 compute-0 sudo[256162]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:39 compute-0 python3.9[256161]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct 11 03:55:39 compute-0 sudo[256187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:55:39 compute-0 sudo[256187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:55:39 compute-0 sudo[256159]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:39 compute-0 sudo[256187]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:40 compute-0 sudo[256212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:55:40 compute-0 sudo[256212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:55:40 compute-0 sudo[256212]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:40 compute-0 sudo[256261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 03:55:40 compute-0 sudo[256261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:55:40 compute-0 ceph-mon[74273]: pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:40 compute-0 sudo[256430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaqlpebavgovyjehdrpxvtbxfquyxozm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154940.235161-1715-159017176075909/AnsiballZ_container_config_hash.py'
Oct 11 03:55:40 compute-0 sudo[256430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:40 compute-0 sudo[256261]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:55:40 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:55:40 compute-0 python3.9[256432]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 11 03:55:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 03:55:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:55:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 03:55:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:55:40 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev d490e6c8-bec0-499f-bea0-da5d9191927e does not exist
Oct 11 03:55:40 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev f3bba7d3-60f6-4e2d-8e4e-2c4236afebdc does not exist
Oct 11 03:55:40 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev af38cd50-032f-4c23-83c6-978820a57698 does not exist
Oct 11 03:55:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 03:55:40 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:55:40 compute-0 sudo[256430]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 03:55:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:55:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:55:40 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:55:40 compute-0 sudo[256445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:55:40 compute-0 sudo[256445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:55:40 compute-0 sudo[256445]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:40 compute-0 sudo[256494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:55:40 compute-0 sudo[256494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:55:40 compute-0 sudo[256494]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:41 compute-0 sudo[256519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:55:41 compute-0 sudo[256519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:55:41 compute-0 sudo[256519]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:41 compute-0 sudo[256544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 03:55:41 compute-0 sudo[256544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:55:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:55:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:55:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:55:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:55:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:55:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:55:41 compute-0 podman[256706]: 2025-10-11 03:55:41.505755766 +0000 UTC m=+0.054815780 container create 1b839d46e68678b0900c382a79b62a677260cd333c7f41f6a2981625c2e15065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:55:41 compute-0 sudo[256745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnhwyyqihyfkunymreseilmufbrdczhc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760154941.1600347-1725-252099501936058/AnsiballZ_edpm_container_manage.py'
Oct 11 03:55:41 compute-0 sudo[256745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:41 compute-0 systemd[1]: Started libpod-conmon-1b839d46e68678b0900c382a79b62a677260cd333c7f41f6a2981625c2e15065.scope.
Oct 11 03:55:41 compute-0 podman[256706]: 2025-10-11 03:55:41.477461952 +0000 UTC m=+0.026522006 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:55:41 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:55:41 compute-0 podman[256706]: 2025-10-11 03:55:41.613493072 +0000 UTC m=+0.162553086 container init 1b839d46e68678b0900c382a79b62a677260cd333c7f41f6a2981625c2e15065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 03:55:41 compute-0 podman[256706]: 2025-10-11 03:55:41.626364994 +0000 UTC m=+0.175425008 container start 1b839d46e68678b0900c382a79b62a677260cd333c7f41f6a2981625c2e15065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cray, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 03:55:41 compute-0 podman[256706]: 2025-10-11 03:55:41.630553731 +0000 UTC m=+0.179613745 container attach 1b839d46e68678b0900c382a79b62a677260cd333c7f41f6a2981625c2e15065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:55:41 compute-0 awesome_cray[256750]: 167 167
Oct 11 03:55:41 compute-0 systemd[1]: libpod-1b839d46e68678b0900c382a79b62a677260cd333c7f41f6a2981625c2e15065.scope: Deactivated successfully.
Oct 11 03:55:41 compute-0 podman[256706]: 2025-10-11 03:55:41.635958483 +0000 UTC m=+0.185018527 container died 1b839d46e68678b0900c382a79b62a677260cd333c7f41f6a2981625c2e15065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cray, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:55:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-53a94ab3b4810f6b5f2630628b334bf0dbdeb0e70a9fecc8c669eee2098cddc8-merged.mount: Deactivated successfully.
Oct 11 03:55:41 compute-0 podman[256706]: 2025-10-11 03:55:41.683743175 +0000 UTC m=+0.232803189 container remove 1b839d46e68678b0900c382a79b62a677260cd333c7f41f6a2981625c2e15065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cray, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 03:55:41 compute-0 systemd[1]: libpod-conmon-1b839d46e68678b0900c382a79b62a677260cd333c7f41f6a2981625c2e15065.scope: Deactivated successfully.
Oct 11 03:55:41 compute-0 podman[256753]: 2025-10-11 03:55:41.706446782 +0000 UTC m=+0.096450249 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Oct 11 03:55:41 compute-0 python3[256749]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct 11 03:55:41 compute-0 podman[256796]: 2025-10-11 03:55:41.890324536 +0000 UTC m=+0.048559284 container create 8d7c9de2dad36dbc02db8c206c79e7fcbf55aa3d00722465810a144e611cbd83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 03:55:41 compute-0 systemd[1]: Started libpod-conmon-8d7c9de2dad36dbc02db8c206c79e7fcbf55aa3d00722465810a144e611cbd83.scope.
Oct 11 03:55:41 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:55:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402415317f1efdd4e229897e8df13582ce2881ec18e84a0aaf60c15de4414491/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:55:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402415317f1efdd4e229897e8df13582ce2881ec18e84a0aaf60c15de4414491/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:55:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402415317f1efdd4e229897e8df13582ce2881ec18e84a0aaf60c15de4414491/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:55:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402415317f1efdd4e229897e8df13582ce2881ec18e84a0aaf60c15de4414491/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:55:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402415317f1efdd4e229897e8df13582ce2881ec18e84a0aaf60c15de4414491/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:55:41 compute-0 podman[256796]: 2025-10-11 03:55:41.874103061 +0000 UTC m=+0.032337829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:55:41 compute-0 podman[256796]: 2025-10-11 03:55:41.981416194 +0000 UTC m=+0.139651042 container init 8d7c9de2dad36dbc02db8c206c79e7fcbf55aa3d00722465810a144e611cbd83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_gauss, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:55:41 compute-0 podman[256796]: 2025-10-11 03:55:41.996304883 +0000 UTC m=+0.154539671 container start 8d7c9de2dad36dbc02db8c206c79e7fcbf55aa3d00722465810a144e611cbd83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:55:42 compute-0 podman[256796]: 2025-10-11 03:55:42.003130834 +0000 UTC m=+0.161365682 container attach 8d7c9de2dad36dbc02db8c206c79e7fcbf55aa3d00722465810a144e611cbd83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 03:55:42 compute-0 ceph-mon[74273]: pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:43 compute-0 recursing_gauss[256833]: --> passed data devices: 0 physical, 3 LVM
Oct 11 03:55:43 compute-0 recursing_gauss[256833]: --> relative data size: 1.0
Oct 11 03:55:43 compute-0 recursing_gauss[256833]: --> All data devices are unavailable
Oct 11 03:55:43 compute-0 systemd[1]: libpod-8d7c9de2dad36dbc02db8c206c79e7fcbf55aa3d00722465810a144e611cbd83.scope: Deactivated successfully.
Oct 11 03:55:43 compute-0 systemd[1]: libpod-8d7c9de2dad36dbc02db8c206c79e7fcbf55aa3d00722465810a144e611cbd83.scope: Consumed 1.024s CPU time.
Oct 11 03:55:43 compute-0 podman[256796]: 2025-10-11 03:55:43.109293549 +0000 UTC m=+1.267528307 container died 8d7c9de2dad36dbc02db8c206c79e7fcbf55aa3d00722465810a144e611cbd83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_gauss, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:55:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-402415317f1efdd4e229897e8df13582ce2881ec18e84a0aaf60c15de4414491-merged.mount: Deactivated successfully.
Oct 11 03:55:43 compute-0 podman[256796]: 2025-10-11 03:55:43.1698555 +0000 UTC m=+1.328090248 container remove 8d7c9de2dad36dbc02db8c206c79e7fcbf55aa3d00722465810a144e611cbd83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_gauss, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:55:43 compute-0 systemd[1]: libpod-conmon-8d7c9de2dad36dbc02db8c206c79e7fcbf55aa3d00722465810a144e611cbd83.scope: Deactivated successfully.
Oct 11 03:55:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:43 compute-0 sudo[256544]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:43 compute-0 sudo[256888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:55:43 compute-0 sudo[256888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:55:43 compute-0 sudo[256888]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:43 compute-0 sudo[256913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:55:43 compute-0 sudo[256913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:55:43 compute-0 sudo[256913]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:43 compute-0 sudo[256938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:55:43 compute-0 sudo[256938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:55:43 compute-0 sudo[256938]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:43 compute-0 sudo[256963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 03:55:43 compute-0 sudo[256963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:55:43 compute-0 podman[257026]: 2025-10-11 03:55:43.941944352 +0000 UTC m=+0.057172646 container create faafde8abd2de55ead41938ae28d014c85fcd3db222cb58e5edcbbc4716d4af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_pare, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 03:55:43 compute-0 systemd[1]: Started libpod-conmon-faafde8abd2de55ead41938ae28d014c85fcd3db222cb58e5edcbbc4716d4af2.scope.
Oct 11 03:55:44 compute-0 podman[257026]: 2025-10-11 03:55:43.920912162 +0000 UTC m=+0.036140476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:55:44 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:55:44 compute-0 podman[257026]: 2025-10-11 03:55:44.032084304 +0000 UTC m=+0.147312618 container init faafde8abd2de55ead41938ae28d014c85fcd3db222cb58e5edcbbc4716d4af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_pare, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:55:44 compute-0 podman[257026]: 2025-10-11 03:55:44.040261893 +0000 UTC m=+0.155490187 container start faafde8abd2de55ead41938ae28d014c85fcd3db222cb58e5edcbbc4716d4af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_pare, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:55:44 compute-0 podman[257026]: 2025-10-11 03:55:44.044196214 +0000 UTC m=+0.159424528 container attach faafde8abd2de55ead41938ae28d014c85fcd3db222cb58e5edcbbc4716d4af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_pare, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 03:55:44 compute-0 frosty_pare[257047]: 167 167
Oct 11 03:55:44 compute-0 systemd[1]: libpod-faafde8abd2de55ead41938ae28d014c85fcd3db222cb58e5edcbbc4716d4af2.scope: Deactivated successfully.
Oct 11 03:55:44 compute-0 podman[257026]: 2025-10-11 03:55:44.04654014 +0000 UTC m=+0.161768434 container died faafde8abd2de55ead41938ae28d014c85fcd3db222cb58e5edcbbc4716d4af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_pare, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:55:44 compute-0 podman[257040]: 2025-10-11 03:55:44.062314853 +0000 UTC m=+0.075650476 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 11 03:55:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-8efb5ee395607765aa08e9ad14b836a50d0c7c541bcb14d641e70b79b9802b08-merged.mount: Deactivated successfully.
Oct 11 03:55:44 compute-0 podman[257026]: 2025-10-11 03:55:44.091735489 +0000 UTC m=+0.206963783 container remove faafde8abd2de55ead41938ae28d014c85fcd3db222cb58e5edcbbc4716d4af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 03:55:44 compute-0 systemd[1]: libpod-conmon-faafde8abd2de55ead41938ae28d014c85fcd3db222cb58e5edcbbc4716d4af2.scope: Deactivated successfully.
Oct 11 03:55:44 compute-0 podman[257086]: 2025-10-11 03:55:44.305001348 +0000 UTC m=+0.065339266 container create bd4739d6f251cf0f344188906309056304c0fef8e6eed817a9cfd0bc264c2e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bhabha, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:55:44 compute-0 systemd[1]: Started libpod-conmon-bd4739d6f251cf0f344188906309056304c0fef8e6eed817a9cfd0bc264c2e00.scope.
Oct 11 03:55:44 compute-0 podman[257086]: 2025-10-11 03:55:44.281042675 +0000 UTC m=+0.041380613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:55:44 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:55:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cf4c25569f8cdd3c3dbe7902975a7c9c21ac8c2f1d3dcedee755d90c197cd89/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:55:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cf4c25569f8cdd3c3dbe7902975a7c9c21ac8c2f1d3dcedee755d90c197cd89/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:55:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cf4c25569f8cdd3c3dbe7902975a7c9c21ac8c2f1d3dcedee755d90c197cd89/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:55:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cf4c25569f8cdd3c3dbe7902975a7c9c21ac8c2f1d3dcedee755d90c197cd89/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:55:44 compute-0 podman[257086]: 2025-10-11 03:55:44.406868839 +0000 UTC m=+0.167206767 container init bd4739d6f251cf0f344188906309056304c0fef8e6eed817a9cfd0bc264c2e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bhabha, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 03:55:44 compute-0 podman[257086]: 2025-10-11 03:55:44.413783313 +0000 UTC m=+0.174121231 container start bd4739d6f251cf0f344188906309056304c0fef8e6eed817a9cfd0bc264c2e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:55:44 compute-0 podman[257086]: 2025-10-11 03:55:44.416830788 +0000 UTC m=+0.177168726 container attach bd4739d6f251cf0f344188906309056304c0fef8e6eed817a9cfd0bc264c2e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bhabha, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Oct 11 03:55:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:55:44 compute-0 ceph-mon[74273]: pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:45 compute-0 confident_bhabha[257102]: {
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:     "0": [
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:         {
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "devices": [
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "/dev/loop3"
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             ],
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "lv_name": "ceph_lv0",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "lv_size": "21470642176",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "name": "ceph_lv0",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "tags": {
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.cluster_name": "ceph",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.crush_device_class": "",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.encrypted": "0",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.osd_id": "0",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.type": "block",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.vdo": "0"
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             },
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "type": "block",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "vg_name": "ceph_vg0"
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:         }
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:     ],
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:     "1": [
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:         {
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "devices": [
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "/dev/loop4"
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             ],
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "lv_name": "ceph_lv1",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "lv_size": "21470642176",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "name": "ceph_lv1",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "tags": {
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.cluster_name": "ceph",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.crush_device_class": "",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.encrypted": "0",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.osd_id": "1",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.type": "block",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.vdo": "0"
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             },
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "type": "block",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "vg_name": "ceph_vg1"
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:         }
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:     ],
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:     "2": [
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:         {
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "devices": [
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "/dev/loop5"
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             ],
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "lv_name": "ceph_lv2",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "lv_size": "21470642176",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "name": "ceph_lv2",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "tags": {
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.cluster_name": "ceph",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.crush_device_class": "",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.encrypted": "0",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.osd_id": "2",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.type": "block",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:                 "ceph.vdo": "0"
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             },
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "type": "block",
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:             "vg_name": "ceph_vg2"
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:         }
Oct 11 03:55:45 compute-0 confident_bhabha[257102]:     ]
Oct 11 03:55:45 compute-0 confident_bhabha[257102]: }
Oct 11 03:55:45 compute-0 systemd[1]: libpod-bd4739d6f251cf0f344188906309056304c0fef8e6eed817a9cfd0bc264c2e00.scope: Deactivated successfully.
Oct 11 03:55:45 compute-0 podman[257086]: 2025-10-11 03:55:45.165977536 +0000 UTC m=+0.926315454 container died bd4739d6f251cf0f344188906309056304c0fef8e6eed817a9cfd0bc264c2e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:55:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:46 compute-0 ceph-mon[74273]: pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:49 compute-0 ceph-mon[74273]: pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:55:50 compute-0 ceph-mon[74273]: pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:55:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:55:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:55:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:55:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:55:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:55:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cf4c25569f8cdd3c3dbe7902975a7c9c21ac8c2f1d3dcedee755d90c197cd89-merged.mount: Deactivated successfully.
Oct 11 03:55:51 compute-0 podman[257086]: 2025-10-11 03:55:51.981872765 +0000 UTC m=+7.742210683 container remove bd4739d6f251cf0f344188906309056304c0fef8e6eed817a9cfd0bc264c2e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bhabha, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 03:55:51 compute-0 systemd[1]: libpod-conmon-bd4739d6f251cf0f344188906309056304c0fef8e6eed817a9cfd0bc264c2e00.scope: Deactivated successfully.
Oct 11 03:55:52 compute-0 sudo[256963]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:52 compute-0 podman[256820]: 2025-10-11 03:55:52.051687306 +0000 UTC m=+10.144754744 image pull 95311272d2962a6b8537a6d19b94bc44c5c3621a6e21a2e983fd64d147646bc9 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 11 03:55:52 compute-0 sudo[257157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:55:52 compute-0 sudo[257157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:55:52 compute-0 sudo[257157]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:52 compute-0 sudo[257195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:55:52 compute-0 sudo[257195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:55:52 compute-0 sudo[257195]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:52 compute-0 podman[257222]: 2025-10-11 03:55:52.232458342 +0000 UTC m=+0.070748788 container create 6ae445183d92726697bc128e91a8124afaea4b755f6f4fee1d4ac9b55c5e4f79 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_id=edpm, container_name=nova_compute_init, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Oct 11 03:55:52 compute-0 podman[257222]: 2025-10-11 03:55:52.195038991 +0000 UTC m=+0.033329447 image pull 95311272d2962a6b8537a6d19b94bc44c5c3621a6e21a2e983fd64d147646bc9 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 11 03:55:52 compute-0 python3[256749]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct 11 03:55:52 compute-0 sudo[257241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:55:52 compute-0 sudo[257241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:55:52 compute-0 sudo[257241]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:52 compute-0 sudo[257279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 03:55:52 compute-0 sudo[257279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:55:52 compute-0 sudo[256745]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:52 compute-0 podman[257445]: 2025-10-11 03:55:52.73234061 +0000 UTC m=+0.056544628 container create 47c21f9928a2e9e732cfbb50e62dbabc7e04eb1e80909d068e41afbe549347ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_thompson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:55:52 compute-0 systemd[1]: Started libpod-conmon-47c21f9928a2e9e732cfbb50e62dbabc7e04eb1e80909d068e41afbe549347ac.scope.
Oct 11 03:55:52 compute-0 podman[257445]: 2025-10-11 03:55:52.700431144 +0000 UTC m=+0.024635212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:55:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:55:52 compute-0 podman[257445]: 2025-10-11 03:55:52.821514325 +0000 UTC m=+0.145718393 container init 47c21f9928a2e9e732cfbb50e62dbabc7e04eb1e80909d068e41afbe549347ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 03:55:52 compute-0 podman[257445]: 2025-10-11 03:55:52.82952482 +0000 UTC m=+0.153728838 container start 47c21f9928a2e9e732cfbb50e62dbabc7e04eb1e80909d068e41afbe549347ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_thompson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:55:52 compute-0 affectionate_thompson[257496]: 167 167
Oct 11 03:55:52 compute-0 podman[257445]: 2025-10-11 03:55:52.833876692 +0000 UTC m=+0.158080730 container attach 47c21f9928a2e9e732cfbb50e62dbabc7e04eb1e80909d068e41afbe549347ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 03:55:52 compute-0 systemd[1]: libpod-47c21f9928a2e9e732cfbb50e62dbabc7e04eb1e80909d068e41afbe549347ac.scope: Deactivated successfully.
Oct 11 03:55:52 compute-0 podman[257445]: 2025-10-11 03:55:52.835421705 +0000 UTC m=+0.159625753 container died 47c21f9928a2e9e732cfbb50e62dbabc7e04eb1e80909d068e41afbe549347ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:55:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-e69df43bf0c2c03e8ddc7de4500fea3655ad8b79c1d17fd0d3268eeaac99cb4a-merged.mount: Deactivated successfully.
Oct 11 03:55:52 compute-0 podman[257445]: 2025-10-11 03:55:52.878241308 +0000 UTC m=+0.202445336 container remove 47c21f9928a2e9e732cfbb50e62dbabc7e04eb1e80909d068e41afbe549347ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Oct 11 03:55:52 compute-0 sudo[257534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyzfkodladjggyogdhrxwzpdlquxberz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154952.5497906-1733-129155665396471/AnsiballZ_stat.py'
Oct 11 03:55:52 compute-0 sudo[257534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:52 compute-0 systemd[1]: libpod-conmon-47c21f9928a2e9e732cfbb50e62dbabc7e04eb1e80909d068e41afbe549347ac.scope: Deactivated successfully.
Oct 11 03:55:52 compute-0 ceph-mon[74273]: pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:53 compute-0 python3.9[257539]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:55:53 compute-0 podman[257547]: 2025-10-11 03:55:53.085306743 +0000 UTC m=+0.063906686 container create 1f8eecc319af873551bb0ddac0ab5dcaf5406695ab4608c1e5bb2c711771ad2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cohen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Oct 11 03:55:53 compute-0 sudo[257534]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:53 compute-0 systemd[1]: Started libpod-conmon-1f8eecc319af873551bb0ddac0ab5dcaf5406695ab4608c1e5bb2c711771ad2d.scope.
Oct 11 03:55:53 compute-0 podman[257547]: 2025-10-11 03:55:53.052933154 +0000 UTC m=+0.031533127 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:55:53 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be3ae0ff7e1cfffb0a291b6d47bf526f37800aea42e75d56d07fd3e38bfcc693/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be3ae0ff7e1cfffb0a291b6d47bf526f37800aea42e75d56d07fd3e38bfcc693/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be3ae0ff7e1cfffb0a291b6d47bf526f37800aea42e75d56d07fd3e38bfcc693/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be3ae0ff7e1cfffb0a291b6d47bf526f37800aea42e75d56d07fd3e38bfcc693/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:55:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:53 compute-0 podman[257547]: 2025-10-11 03:55:53.19595052 +0000 UTC m=+0.174550583 container init 1f8eecc319af873551bb0ddac0ab5dcaf5406695ab4608c1e5bb2c711771ad2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cohen, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:55:53 compute-0 podman[257547]: 2025-10-11 03:55:53.219176952 +0000 UTC m=+0.197776915 container start 1f8eecc319af873551bb0ddac0ab5dcaf5406695ab4608c1e5bb2c711771ad2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 03:55:53 compute-0 podman[257547]: 2025-10-11 03:55:53.223568506 +0000 UTC m=+0.202168459 container attach 1f8eecc319af873551bb0ddac0ab5dcaf5406695ab4608c1e5bb2c711771ad2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 03:55:53 compute-0 sudo[257733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlbqiovhapvixaoznupsxrnulchkyexv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154953.4968765-1745-142774809127761/AnsiballZ_container_config_data.py'
Oct 11 03:55:53 compute-0 sudo[257733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:53 compute-0 podman[257694]: 2025-10-11 03:55:53.99608577 +0000 UTC m=+0.219659669 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 03:55:54 compute-0 ceph-mon[74273]: pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:54 compute-0 python3.9[257743]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct 11 03:55:54 compute-0 sudo[257733]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:54 compute-0 competent_cohen[257566]: {
Oct 11 03:55:54 compute-0 competent_cohen[257566]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 03:55:54 compute-0 competent_cohen[257566]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:55:54 compute-0 competent_cohen[257566]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 03:55:54 compute-0 competent_cohen[257566]:         "osd_id": 1,
Oct 11 03:55:54 compute-0 competent_cohen[257566]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:55:54 compute-0 competent_cohen[257566]:         "type": "bluestore"
Oct 11 03:55:54 compute-0 competent_cohen[257566]:     },
Oct 11 03:55:54 compute-0 competent_cohen[257566]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 03:55:54 compute-0 competent_cohen[257566]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:55:54 compute-0 competent_cohen[257566]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 03:55:54 compute-0 competent_cohen[257566]:         "osd_id": 2,
Oct 11 03:55:54 compute-0 competent_cohen[257566]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:55:54 compute-0 competent_cohen[257566]:         "type": "bluestore"
Oct 11 03:55:54 compute-0 competent_cohen[257566]:     },
Oct 11 03:55:54 compute-0 competent_cohen[257566]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 03:55:54 compute-0 competent_cohen[257566]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:55:54 compute-0 competent_cohen[257566]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 03:55:54 compute-0 competent_cohen[257566]:         "osd_id": 0,
Oct 11 03:55:54 compute-0 competent_cohen[257566]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:55:54 compute-0 competent_cohen[257566]:         "type": "bluestore"
Oct 11 03:55:54 compute-0 competent_cohen[257566]:     }
Oct 11 03:55:54 compute-0 competent_cohen[257566]: }
Oct 11 03:55:54 compute-0 systemd[1]: libpod-1f8eecc319af873551bb0ddac0ab5dcaf5406695ab4608c1e5bb2c711771ad2d.scope: Deactivated successfully.
Oct 11 03:55:54 compute-0 systemd[1]: libpod-1f8eecc319af873551bb0ddac0ab5dcaf5406695ab4608c1e5bb2c711771ad2d.scope: Consumed 1.057s CPU time.
Oct 11 03:55:54 compute-0 podman[257547]: 2025-10-11 03:55:54.315709336 +0000 UTC m=+1.294309319 container died 1f8eecc319af873551bb0ddac0ab5dcaf5406695ab4608c1e5bb2c711771ad2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cohen, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:55:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-be3ae0ff7e1cfffb0a291b6d47bf526f37800aea42e75d56d07fd3e38bfcc693-merged.mount: Deactivated successfully.
Oct 11 03:55:54 compute-0 podman[257547]: 2025-10-11 03:55:54.395465766 +0000 UTC m=+1.374065739 container remove 1f8eecc319af873551bb0ddac0ab5dcaf5406695ab4608c1e5bb2c711771ad2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:55:54 compute-0 systemd[1]: libpod-conmon-1f8eecc319af873551bb0ddac0ab5dcaf5406695ab4608c1e5bb2c711771ad2d.scope: Deactivated successfully.
Oct 11 03:55:54 compute-0 sudo[257279]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:55:54 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:55:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:55:54 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:55:54 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 6a371680-8cfc-4807-890b-c4d352bb18dc does not exist
Oct 11 03:55:54 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 3948df99-1999-4f3e-8033-52932418a2af does not exist
Oct 11 03:55:54 compute-0 sudo[257867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:55:54 compute-0 sudo[257867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:55:54 compute-0 sudo[257867]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:55:54 compute-0 sudo[257915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 03:55:54 compute-0 sudo[257915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:55:54 compute-0 sudo[257915]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:54 compute-0 sudo[257990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udgrcweebxqxjzejadaybpwqfrruhony ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154954.3835642-1754-75151535528223/AnsiballZ_container_config_hash.py'
Oct 11 03:55:54 compute-0 sudo[257990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:54 compute-0 python3.9[257992]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 11 03:55:54 compute-0 sudo[257990]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:55:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:55:55 compute-0 sudo[258142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzlmvhcewnlebjcaxohgwqebjbpnhmso ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760154955.1515994-1764-80604461034767/AnsiballZ_edpm_container_manage.py'
Oct 11 03:55:55 compute-0 sudo[258142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:55 compute-0 python3[258144]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 11 03:55:56 compute-0 podman[258181]: 2025-10-11 03:55:56.106327232 +0000 UTC m=+0.079387591 container create 45e5bb239caa99956e81146eb8387d2de5a8ee4469f4bd6b61b3455d5ed0a021 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Oct 11 03:55:56 compute-0 podman[258181]: 2025-10-11 03:55:56.066363549 +0000 UTC m=+0.039423968 image pull 95311272d2962a6b8537a6d19b94bc44c5c3621a6e21a2e983fd64d147646bc9 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 11 03:55:56 compute-0 python3[258144]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Oct 11 03:55:56 compute-0 sudo[258142]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:56 compute-0 ceph-mon[74273]: pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:56 compute-0 sudo[258369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etameuxxydjffxynwkrztmrabmjdhgof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154956.5038073-1772-37494508986281/AnsiballZ_stat.py'
Oct 11 03:55:56 compute-0 sudo[258369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:56 compute-0 python3.9[258371]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:55:57 compute-0 sudo[258369]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:57 compute-0 sudo[258523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzqobeonzhvuyeigbmtoopbrwownpsct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154957.2437413-1781-60192150149139/AnsiballZ_file.py'
Oct 11 03:55:57 compute-0 sudo[258523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:57 compute-0 python3.9[258525]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:55:57 compute-0 sudo[258523]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:58 compute-0 podman[258624]: 2025-10-11 03:55:58.370207448 +0000 UTC m=+0.076273963 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:55:58 compute-0 sudo[258693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogibyyyloctcdegqjxizdrpnkeixleim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154957.94837-1781-267747015097578/AnsiballZ_copy.py'
Oct 11 03:55:58 compute-0 sudo[258693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:58 compute-0 ceph-mon[74273]: pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:58 compute-0 python3.9[258695]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760154957.94837-1781-267747015097578/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:55:58 compute-0 sudo[258693]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:58 compute-0 sudo[258769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxcuapfeghucvttdquzemefmdbsyyjof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154957.94837-1781-267747015097578/AnsiballZ_systemd.py'
Oct 11 03:55:58 compute-0 sudo[258769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:55:59 compute-0 python3.9[258771]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 03:55:59 compute-0 systemd[1]: Reloading.
Oct 11 03:55:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:55:59 compute-0 systemd-rc-local-generator[258797]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:55:59 compute-0 systemd-sysv-generator[258802]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:55:59 compute-0 sudo[258769]: pam_unix(sudo:session): session closed for user root
Oct 11 03:55:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:55:59 compute-0 sudo[258880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftiomqgettyxxjvwvzqiwaytgqtwzmeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154957.94837-1781-267747015097578/AnsiballZ_systemd.py'
Oct 11 03:55:59 compute-0 sudo[258880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:56:00 compute-0 python3.9[258882]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 03:56:00 compute-0 systemd[1]: Reloading.
Oct 11 03:56:00 compute-0 systemd-rc-local-generator[258908]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:56:00 compute-0 systemd-sysv-generator[258911]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 03:56:00 compute-0 ceph-mon[74273]: pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:00 compute-0 systemd[1]: Starting nova_compute container...
Oct 11 03:56:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:56:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7aa3f94ee24d4eeb96339ac7fac5cc42a5c3d4de796d8047d80bdce10ec7441/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 11 03:56:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7aa3f94ee24d4eeb96339ac7fac5cc42a5c3d4de796d8047d80bdce10ec7441/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 11 03:56:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7aa3f94ee24d4eeb96339ac7fac5cc42a5c3d4de796d8047d80bdce10ec7441/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 11 03:56:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7aa3f94ee24d4eeb96339ac7fac5cc42a5c3d4de796d8047d80bdce10ec7441/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 11 03:56:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7aa3f94ee24d4eeb96339ac7fac5cc42a5c3d4de796d8047d80bdce10ec7441/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 11 03:56:00 compute-0 podman[258922]: 2025-10-11 03:56:00.83512808 +0000 UTC m=+0.130702882 container init 45e5bb239caa99956e81146eb8387d2de5a8ee4469f4bd6b61b3455d5ed0a021 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:56:00 compute-0 podman[258922]: 2025-10-11 03:56:00.854669119 +0000 UTC m=+0.150243861 container start 45e5bb239caa99956e81146eb8387d2de5a8ee4469f4bd6b61b3455d5ed0a021 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 03:56:00 compute-0 podman[258922]: nova_compute
Oct 11 03:56:00 compute-0 nova_compute[258937]: + sudo -E kolla_set_configs
Oct 11 03:56:00 compute-0 systemd[1]: Started nova_compute container.
Oct 11 03:56:00 compute-0 sudo[258880]: pam_unix(sudo:session): session closed for user root
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Validating config file
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Copying service configuration files
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Deleting /etc/ceph
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Creating directory /etc/ceph
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Setting permission for /etc/ceph
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Writing out command to execute
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 11 03:56:00 compute-0 nova_compute[258937]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 11 03:56:00 compute-0 nova_compute[258937]: ++ cat /run_command
Oct 11 03:56:00 compute-0 nova_compute[258937]: + CMD=nova-compute
Oct 11 03:56:00 compute-0 nova_compute[258937]: + ARGS=
Oct 11 03:56:00 compute-0 nova_compute[258937]: + sudo kolla_copy_cacerts
Oct 11 03:56:01 compute-0 nova_compute[258937]: + [[ ! -n '' ]]
Oct 11 03:56:01 compute-0 nova_compute[258937]: + . kolla_extend_start
Oct 11 03:56:01 compute-0 nova_compute[258937]: Running command: 'nova-compute'
Oct 11 03:56:01 compute-0 nova_compute[258937]: + echo 'Running command: '\''nova-compute'\'''
Oct 11 03:56:01 compute-0 nova_compute[258937]: + umask 0022
Oct 11 03:56:01 compute-0 nova_compute[258937]: + exec nova-compute
Oct 11 03:56:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:01 compute-0 python3.9[259098]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:56:02 compute-0 ceph-mon[74273]: pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:02 compute-0 python3.9[259249]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:56:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:03 compute-0 nova_compute[258937]: 2025-10-11 03:56:03.278 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 11 03:56:03 compute-0 nova_compute[258937]: 2025-10-11 03:56:03.278 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 11 03:56:03 compute-0 nova_compute[258937]: 2025-10-11 03:56:03.278 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 11 03:56:03 compute-0 nova_compute[258937]: 2025-10-11 03:56:03.278 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 11 03:56:03 compute-0 python3.9[259400]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:56:03 compute-0 nova_compute[258937]: 2025-10-11 03:56:03.419 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 03:56:03 compute-0 nova_compute[258937]: 2025-10-11 03:56:03.455 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 03:56:04 compute-0 sudo[259553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxixjihbxzyhmzeaklsmkwdxahcopepy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154963.6122696-1841-56098114142119/AnsiballZ_podman_container.py'
Oct 11 03:56:04 compute-0 sudo[259553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.045 2 INFO nova.virt.driver [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.235 2 INFO nova.compute.provider_config [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.248 2 DEBUG oslo_concurrency.lockutils [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.248 2 DEBUG oslo_concurrency.lockutils [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.248 2 DEBUG oslo_concurrency.lockutils [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.249 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.249 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.249 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.249 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.249 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.250 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.250 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.250 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.250 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.250 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.250 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.251 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.251 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.251 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.251 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.251 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.251 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.251 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.252 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.252 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.252 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.252 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.252 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.252 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.253 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.253 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.253 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.253 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.253 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.254 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.254 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.254 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.254 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.254 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.254 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.254 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.255 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.255 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.255 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.255 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.255 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.255 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.256 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.256 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.256 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.256 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.256 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.256 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.257 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.257 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.257 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.257 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.257 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.257 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.257 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.258 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.258 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.258 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.258 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.258 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.258 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.259 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.259 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.259 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.259 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.259 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.259 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.259 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.260 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.260 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.260 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.260 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.260 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.260 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.260 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.261 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.261 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.261 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.261 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.261 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.261 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.261 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.262 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.262 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.262 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.262 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.262 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.262 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.262 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.263 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.263 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.263 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.263 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.263 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.263 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.263 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.264 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.264 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.264 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.264 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.264 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.264 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.264 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.265 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.265 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.265 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.265 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.265 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.265 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.265 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.266 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.266 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.266 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.266 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.266 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.266 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.267 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.267 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.267 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.267 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.267 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.267 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.268 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.268 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.268 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.268 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.268 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.268 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.268 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.269 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.269 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.269 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.269 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.269 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.269 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.269 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.270 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.270 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.270 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.270 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.270 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.270 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.270 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.270 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.271 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.271 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.271 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.271 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.271 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.271 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.272 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.272 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.272 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.272 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.272 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.272 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.273 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.273 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.273 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.273 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.273 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 python3.9[259555]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.274 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.274 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.274 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.274 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.274 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.274 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.274 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.275 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.275 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.275 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.275 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.275 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.275 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.275 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.275 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.276 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.276 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.276 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.276 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.276 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.276 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.276 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.277 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.277 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.277 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.277 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.277 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.277 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.277 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.278 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.278 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.278 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.278 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.278 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.278 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.278 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.279 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.279 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.279 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.279 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.279 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.279 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.279 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.279 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.280 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.280 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.280 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.280 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.280 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.280 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.281 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.281 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.281 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.281 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.281 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.282 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.282 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.282 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.282 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.282 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.282 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.282 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.283 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.283 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.283 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.283 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.283 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.283 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.283 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.283 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.284 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.284 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.284 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.284 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.284 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.284 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.284 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.285 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.285 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.285 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.285 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.285 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.285 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.285 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.286 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.286 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.286 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.286 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.286 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.286 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.286 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.287 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.287 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.287 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.287 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.287 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.287 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.287 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.288 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.288 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.288 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.288 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.288 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.289 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.289 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.289 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.289 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.289 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.289 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.289 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.290 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.290 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.290 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.290 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.290 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.290 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.290 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.291 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.291 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.291 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.291 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.291 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.291 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.291 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.291 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.292 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.292 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.292 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.292 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.292 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.293 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.293 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.293 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.293 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.293 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.293 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.293 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.294 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.294 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.294 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.294 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.294 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.294 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.294 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.295 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.295 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.295 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.295 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.295 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.295 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.296 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.296 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.296 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.296 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.296 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.297 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.297 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.297 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.297 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.297 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.297 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.297 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.298 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.298 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.298 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.298 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.298 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.299 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.299 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.299 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.299 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.299 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.299 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.299 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.300 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.300 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.300 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.300 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.300 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.300 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.300 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.301 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.301 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.301 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.301 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.301 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.301 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.302 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.302 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.302 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.302 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.302 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.303 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.303 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.303 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.303 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.303 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.303 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.304 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.304 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.304 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.304 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.304 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.304 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.305 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.305 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.305 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.305 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.305 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.305 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.305 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.306 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.306 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.306 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.306 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.306 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.306 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.306 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.306 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.307 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.307 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.307 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.307 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.307 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.307 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.307 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.308 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.308 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.308 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.308 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.308 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.308 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.309 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.309 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.309 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.309 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.309 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.309 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.310 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.310 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.310 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.310 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.310 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.310 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.310 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.311 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.311 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.311 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.311 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.311 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.311 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.311 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.312 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.312 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.312 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.312 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.312 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.312 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.312 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.312 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.313 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.313 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.313 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.313 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.313 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.313 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.313 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.314 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.314 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.314 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.314 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.314 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.314 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.315 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.315 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.315 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.315 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.315 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.315 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.315 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.316 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.316 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.316 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.316 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.316 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.316 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.316 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.317 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.317 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.317 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.317 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.317 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.317 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.317 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.318 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.318 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.318 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.318 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.318 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.318 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.319 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.319 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.319 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.319 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.319 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.319 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.320 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.320 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.320 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.320 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.320 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.320 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.320 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.321 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.321 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.321 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.321 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.321 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.321 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.321 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.322 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.322 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.322 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.322 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.322 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.322 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.323 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.323 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.323 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.323 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.324 2 WARNING oslo_config.cfg [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 11 03:56:04 compute-0 nova_compute[258937]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 11 03:56:04 compute-0 nova_compute[258937]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 11 03:56:04 compute-0 nova_compute[258937]: and ``live_migration_inbound_addr`` respectively.
Oct 11 03:56:04 compute-0 nova_compute[258937]: ).  Its value may be silently ignored in the future.
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.324 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
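[editor's note] The warning above reports that `live_migration_uri` is deprecated in favor of `live_migration_scheme` and `live_migration_inbound_addr`. A minimal nova.conf sketch of the equivalent replacement, assuming the logged `qemu+tls://%s/system` URI should be preserved (the inbound address value is a placeholder, not taken from this log):

```ini
[libvirt]
# Replaces the deprecated: live_migration_uri = qemu+tls://%s/system
# "tls" here yields the qemu+tls:// transport in the generated URI.
live_migration_scheme = tls
# Placeholder: set to this host's migration-network address per compute node.
live_migration_inbound_addr = 192.0.2.10
```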
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.324 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.324 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.324 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.325 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.325 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.325 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.325 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.325 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.325 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.326 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.326 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.326 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.326 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.326 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.326 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.327 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.327 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.327 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.rbd_secret_uuid        = 23b68101-59a9-532f-ab6b-9acf78fb2162 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.327 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.327 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.327 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.327 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.328 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.328 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.328 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.328 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.328 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.328 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.329 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.329 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.329 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.329 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.329 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.329 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.330 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.330 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.330 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.330 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.330 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.330 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.330 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.331 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.331 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.331 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.331 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.331 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.331 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.331 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.332 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.332 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.332 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.332 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.332 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.332 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.333 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.333 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.333 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.333 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.333 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.333 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.334 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.334 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.334 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.334 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.334 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.334 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.334 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.335 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.335 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.335 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.335 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.335 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.335 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.335 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.336 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.336 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.336 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.336 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.336 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.336 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.336 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.337 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.337 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.337 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.337 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.337 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.337 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.338 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.338 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.338 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.338 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.338 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.338 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.339 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.339 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.339 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.339 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.339 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.339 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.339 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.339 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.340 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.340 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.340 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.340 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.340 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.340 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.340 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.341 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.341 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.341 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.341 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.341 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.341 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.341 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.342 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.342 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.342 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.342 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.342 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.342 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.342 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.342 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.343 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.343 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.343 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.343 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.343 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.343 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.343 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.344 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.344 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.344 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.344 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.344 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.344 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.344 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.345 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.345 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.345 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.345 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.345 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.345 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.346 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.346 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.346 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.346 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.346 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.346 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.346 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.347 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.347 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.347 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.347 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.347 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.347 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.347 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.348 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.348 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.348 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.348 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.348 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.348 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.348 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.349 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.349 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.349 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.349 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.349 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.349 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.350 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.350 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.350 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.350 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.350 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.350 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.351 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.351 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.351 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.351 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.351 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.351 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.351 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.352 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.352 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.352 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.352 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.352 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.352 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.352 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.353 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.353 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.353 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.353 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.353 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.353 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.354 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.354 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.354 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.354 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.354 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.354 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.355 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.355 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.355 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.355 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.355 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.355 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.356 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.356 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.356 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.356 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.356 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.356 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.356 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.357 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.357 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.357 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.357 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.357 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.357 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.357 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.358 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.358 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.358 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.358 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.358 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.358 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.358 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.358 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.359 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.359 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.359 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.359 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.359 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.359 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.360 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.360 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.360 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.360 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.360 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.360 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.360 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.361 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.361 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.361 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.361 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.361 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.361 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.362 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.362 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.362 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.362 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.362 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.362 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.363 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.363 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.363 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.363 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.363 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.363 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.363 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.364 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.364 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.364 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.364 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.364 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.364 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.364 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.365 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.365 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.365 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.365 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.365 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.365 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.365 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.366 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.366 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.366 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.366 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.366 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.366 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.367 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.367 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.367 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.367 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.367 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.367 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.368 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.368 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.368 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.368 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.368 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.368 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.369 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.369 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.369 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.369 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.369 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.369 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.369 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.370 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.370 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.370 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.370 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.370 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.370 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.370 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.371 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.371 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.371 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.371 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.371 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.371 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.371 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.372 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.372 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.372 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.372 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.372 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.372 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.372 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.373 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.373 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.373 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.373 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.373 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.373 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.373 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.374 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.374 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.374 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.374 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.374 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.374 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.374 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.375 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.375 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.375 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.375 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.375 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.375 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.376 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.376 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.376 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.376 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.376 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.376 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.376 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.377 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.377 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.377 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.377 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.377 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.377 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.377 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.378 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.378 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.378 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.378 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.378 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.378 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.378 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.378 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.379 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.379 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.379 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.379 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.379 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.379 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.379 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.380 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.380 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.380 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.380 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.380 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.380 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.380 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.381 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.381 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.381 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.381 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.381 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.381 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.381 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.382 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.382 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.382 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.382 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.382 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.382 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.382 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.382 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.383 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.383 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.383 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.383 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.383 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.383 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.383 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.384 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.384 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.384 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.384 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.384 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.384 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.384 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.385 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.385 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.385 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.385 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.385 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.385 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.385 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.386 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.386 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.386 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.386 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.386 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.386 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.386 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.386 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.387 2 DEBUG oslo_service.service [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.388 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 11 03:56:04 compute-0 sudo[259553]: pam_unix(sudo:session): session closed for user root
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.402 2 DEBUG nova.virt.libvirt.host [None req-8dcd7773-5ff9-4647-b10c-502d3dc226c9 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.402 2 DEBUG nova.virt.libvirt.host [None req-8dcd7773-5ff9-4647-b10c-502d3dc226c9 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.403 2 DEBUG nova.virt.libvirt.host [None req-8dcd7773-5ff9-4647-b10c-502d3dc226c9 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.403 2 DEBUG nova.virt.libvirt.host [None req-8dcd7773-5ff9-4647-b10c-502d3dc226c9 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 11 03:56:04 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Oct 11 03:56:04 compute-0 systemd[1]: Started libvirt QEMU daemon.
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.489 2 DEBUG nova.virt.libvirt.host [None req-8dcd7773-5ff9-4647-b10c-502d3dc226c9 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fd0add12730> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.491 2 DEBUG nova.virt.libvirt.host [None req-8dcd7773-5ff9-4647-b10c-502d3dc226c9 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fd0add12730> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.492 2 INFO nova.virt.libvirt.driver [None req-8dcd7773-5ff9-4647-b10c-502d3dc226c9 - - - - - -] Connection event '1' reason 'None'
Oct 11 03:56:04 compute-0 ceph-mon[74273]: pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.504 2 WARNING nova.virt.libvirt.driver [None req-8dcd7773-5ff9-4647-b10c-502d3dc226c9 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 11 03:56:04 compute-0 nova_compute[258937]: 2025-10-11 03:56:04.505 2 DEBUG nova.virt.libvirt.volume.mount [None req-8dcd7773-5ff9-4647-b10c-502d3dc226c9 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 11 03:56:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:56:04 compute-0 sudo[259777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkivyipogeipzaibcebrybfjtmprpagt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154964.5780625-1849-141131299088861/AnsiballZ_systemd.py'
Oct 11 03:56:04 compute-0 sudo[259777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:56:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:05 compute-0 python3.9[259779]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 03:56:05 compute-0 systemd[1]: Stopping nova_compute container...
Oct 11 03:56:05 compute-0 nova_compute[258937]: 2025-10-11 03:56:05.367 2 DEBUG oslo_concurrency.lockutils [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 03:56:05 compute-0 nova_compute[258937]: 2025-10-11 03:56:05.368 2 DEBUG oslo_concurrency.lockutils [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 03:56:05 compute-0 nova_compute[258937]: 2025-10-11 03:56:05.368 2 DEBUG oslo_concurrency.lockutils [None req-279685f3-4cb2-400f-b19b-5ded64b4d6d7 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 03:56:05 compute-0 virtqemud[259597]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 11 03:56:05 compute-0 virtqemud[259597]: hostname: compute-0
Oct 11 03:56:05 compute-0 virtqemud[259597]: End of file while reading data: Input/output error
Oct 11 03:56:05 compute-0 systemd[1]: libpod-45e5bb239caa99956e81146eb8387d2de5a8ee4469f4bd6b61b3455d5ed0a021.scope: Deactivated successfully.
Oct 11 03:56:05 compute-0 systemd[1]: libpod-45e5bb239caa99956e81146eb8387d2de5a8ee4469f4bd6b61b3455d5ed0a021.scope: Consumed 2.979s CPU time.
Oct 11 03:56:05 compute-0 conmon[258937]: conmon 45e5bb239caa99956e81 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-45e5bb239caa99956e81146eb8387d2de5a8ee4469f4bd6b61b3455d5ed0a021.scope/container/memory.events
Oct 11 03:56:05 compute-0 podman[259791]: 2025-10-11 03:56:05.905780889 +0000 UTC m=+0.586313037 container died 45e5bb239caa99956e81146eb8387d2de5a8ee4469f4bd6b61b3455d5ed0a021 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 03:56:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-45e5bb239caa99956e81146eb8387d2de5a8ee4469f4bd6b61b3455d5ed0a021-userdata-shm.mount: Deactivated successfully.
Oct 11 03:56:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7aa3f94ee24d4eeb96339ac7fac5cc42a5c3d4de796d8047d80bdce10ec7441-merged.mount: Deactivated successfully.
Oct 11 03:56:06 compute-0 ceph-mon[74273]: pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:06 compute-0 podman[259791]: 2025-10-11 03:56:06.657335634 +0000 UTC m=+1.337867772 container cleanup 45e5bb239caa99956e81146eb8387d2de5a8ee4469f4bd6b61b3455d5ed0a021 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, config_id=edpm, container_name=nova_compute)
Oct 11 03:56:06 compute-0 podman[259791]: nova_compute
Oct 11 03:56:06 compute-0 podman[259822]: nova_compute
Oct 11 03:56:06 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct 11 03:56:06 compute-0 systemd[1]: Stopped nova_compute container.
Oct 11 03:56:06 compute-0 systemd[1]: Starting nova_compute container...
Oct 11 03:56:06 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:56:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7aa3f94ee24d4eeb96339ac7fac5cc42a5c3d4de796d8047d80bdce10ec7441/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 11 03:56:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7aa3f94ee24d4eeb96339ac7fac5cc42a5c3d4de796d8047d80bdce10ec7441/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 11 03:56:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7aa3f94ee24d4eeb96339ac7fac5cc42a5c3d4de796d8047d80bdce10ec7441/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 11 03:56:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7aa3f94ee24d4eeb96339ac7fac5cc42a5c3d4de796d8047d80bdce10ec7441/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 11 03:56:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7aa3f94ee24d4eeb96339ac7fac5cc42a5c3d4de796d8047d80bdce10ec7441/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 11 03:56:06 compute-0 podman[259835]: 2025-10-11 03:56:06.856685072 +0000 UTC m=+0.106087480 container init 45e5bb239caa99956e81146eb8387d2de5a8ee4469f4bd6b61b3455d5ed0a021 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=nova_compute)
Oct 11 03:56:06 compute-0 podman[259835]: 2025-10-11 03:56:06.868509184 +0000 UTC m=+0.117911572 container start 45e5bb239caa99956e81146eb8387d2de5a8ee4469f4bd6b61b3455d5ed0a021 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm)
Oct 11 03:56:06 compute-0 podman[259835]: nova_compute
Oct 11 03:56:06 compute-0 nova_compute[259850]: + sudo -E kolla_set_configs
Oct 11 03:56:06 compute-0 systemd[1]: Started nova_compute container.
Oct 11 03:56:06 compute-0 sudo[259777]: pam_unix(sudo:session): session closed for user root
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Validating config file
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Copying service configuration files
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Deleting /etc/ceph
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Creating directory /etc/ceph
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Setting permission for /etc/ceph
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Writing out command to execute
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 11 03:56:06 compute-0 nova_compute[259850]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 11 03:56:07 compute-0 nova_compute[259850]: ++ cat /run_command
Oct 11 03:56:07 compute-0 nova_compute[259850]: + CMD=nova-compute
Oct 11 03:56:07 compute-0 nova_compute[259850]: + ARGS=
Oct 11 03:56:07 compute-0 nova_compute[259850]: + sudo kolla_copy_cacerts
Oct 11 03:56:07 compute-0 nova_compute[259850]: + [[ ! -n '' ]]
Oct 11 03:56:07 compute-0 nova_compute[259850]: + . kolla_extend_start
Oct 11 03:56:07 compute-0 nova_compute[259850]: Running command: 'nova-compute'
Oct 11 03:56:07 compute-0 nova_compute[259850]: + echo 'Running command: '\''nova-compute'\'''
Oct 11 03:56:07 compute-0 nova_compute[259850]: + umask 0022
Oct 11 03:56:07 compute-0 nova_compute[259850]: + exec nova-compute
Oct 11 03:56:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:07 compute-0 sudo[260011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wulpbqmorpcrbpvlijezgyuozmdsduwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760154967.171693-1858-172069576708918/AnsiballZ_podman_container.py'
Oct 11 03:56:07 compute-0 sudo[260011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 03:56:07 compute-0 python3.9[260013]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 11 03:56:08 compute-0 systemd[1]: Started libpod-conmon-6ae445183d92726697bc128e91a8124afaea4b755f6f4fee1d4ac9b55c5e4f79.scope.
Oct 11 03:56:08 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:56:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/450b59e3217cd22f42741d194fe064e2c3bbb5f4eff983abde9cb78c7d005522/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct 11 03:56:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/450b59e3217cd22f42741d194fe064e2c3bbb5f4eff983abde9cb78c7d005522/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 11 03:56:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/450b59e3217cd22f42741d194fe064e2c3bbb5f4eff983abde9cb78c7d005522/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct 11 03:56:08 compute-0 podman[260038]: 2025-10-11 03:56:08.108746474 +0000 UTC m=+0.178204286 container init 6ae445183d92726697bc128e91a8124afaea4b755f6f4fee1d4ac9b55c5e4f79 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 11 03:56:08 compute-0 podman[260038]: 2025-10-11 03:56:08.120787092 +0000 UTC m=+0.190244844 container start 6ae445183d92726697bc128e91a8124afaea4b755f6f4fee1d4ac9b55c5e4f79 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 03:56:08 compute-0 python3.9[260013]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct 11 03:56:08 compute-0 nova_compute_init[260059]: INFO:nova_statedir:Applying nova statedir ownership
Oct 11 03:56:08 compute-0 nova_compute_init[260059]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct 11 03:56:08 compute-0 nova_compute_init[260059]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct 11 03:56:08 compute-0 nova_compute_init[260059]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct 11 03:56:08 compute-0 nova_compute_init[260059]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct 11 03:56:08 compute-0 nova_compute_init[260059]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct 11 03:56:08 compute-0 nova_compute_init[260059]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct 11 03:56:08 compute-0 nova_compute_init[260059]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct 11 03:56:08 compute-0 nova_compute_init[260059]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct 11 03:56:08 compute-0 nova_compute_init[260059]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct 11 03:56:08 compute-0 nova_compute_init[260059]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct 11 03:56:08 compute-0 nova_compute_init[260059]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct 11 03:56:08 compute-0 nova_compute_init[260059]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct 11 03:56:08 compute-0 nova_compute_init[260059]: INFO:nova_statedir:Nova statedir ownership complete
Oct 11 03:56:08 compute-0 systemd[1]: libpod-6ae445183d92726697bc128e91a8124afaea4b755f6f4fee1d4ac9b55c5e4f79.scope: Deactivated successfully.
Oct 11 03:56:08 compute-0 podman[260060]: 2025-10-11 03:56:08.20157014 +0000 UTC m=+0.039465209 container died 6ae445183d92726697bc128e91a8124afaea4b755f6f4fee1d4ac9b55c5e4f79 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=nova_compute_init, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2)
Oct 11 03:56:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6ae445183d92726697bc128e91a8124afaea4b755f6f4fee1d4ac9b55c5e4f79-userdata-shm.mount: Deactivated successfully.
Oct 11 03:56:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-450b59e3217cd22f42741d194fe064e2c3bbb5f4eff983abde9cb78c7d005522-merged.mount: Deactivated successfully.
Oct 11 03:56:08 compute-0 podman[260071]: 2025-10-11 03:56:08.247916742 +0000 UTC m=+0.043258266 container cleanup 6ae445183d92726697bc128e91a8124afaea4b755f6f4fee1d4ac9b55c5e4f79 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 11 03:56:08 compute-0 systemd[1]: libpod-conmon-6ae445183d92726697bc128e91a8124afaea4b755f6f4fee1d4ac9b55c5e4f79.scope: Deactivated successfully.
Oct 11 03:56:08 compute-0 sudo[260011]: pam_unix(sudo:session): session closed for user root
Oct 11 03:56:08 compute-0 ceph-mon[74273]: pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:08 compute-0 sshd-session[222564]: Connection closed by 192.168.122.30 port 45270
Oct 11 03:56:08 compute-0 sshd-session[222548]: pam_unix(sshd:session): session closed for user zuul
Oct 11 03:56:08 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Oct 11 03:56:08 compute-0 systemd[1]: session-50.scope: Consumed 3min 9.108s CPU time.
Oct 11 03:56:08 compute-0 systemd-logind[820]: Session 50 logged out. Waiting for processes to exit.
Oct 11 03:56:08 compute-0 systemd-logind[820]: Removed session 50.
Oct 11 03:56:08 compute-0 nova_compute[259850]: 2025-10-11 03:56:08.913 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 11 03:56:08 compute-0 nova_compute[259850]: 2025-10-11 03:56:08.913 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 11 03:56:08 compute-0 nova_compute[259850]: 2025-10-11 03:56:08.914 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 11 03:56:08 compute-0 nova_compute[259850]: 2025-10-11 03:56:08.914 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.037 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.049 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 03:56:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.612 2 INFO nova.virt.driver [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.738 2 INFO nova.compute.provider_config [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.754 2 DEBUG oslo_concurrency.lockutils [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.754 2 DEBUG oslo_concurrency.lockutils [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.754 2 DEBUG oslo_concurrency.lockutils [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.755 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.755 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.755 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.755 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.756 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.756 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.756 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.756 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.756 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.757 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.757 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.757 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.757 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.757 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.758 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.758 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.758 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.758 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.758 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.759 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.759 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.759 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.759 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.759 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.760 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.760 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.760 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.760 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.760 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.761 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.761 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.761 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.761 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.761 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.762 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.762 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.762 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.762 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.763 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.763 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.763 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.763 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.763 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.764 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.764 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.764 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.764 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.764 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.764 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.765 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.765 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.765 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.765 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.765 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.766 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.766 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.766 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.766 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.766 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.767 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.767 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.767 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.767 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.767 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.767 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.768 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.768 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.768 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.768 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.768 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.769 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.769 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.769 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.769 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.769 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.770 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.770 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.770 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.770 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.770 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.771 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.771 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.771 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.771 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.771 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.772 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.772 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.772 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.772 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.772 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.773 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.773 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.773 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.773 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.773 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.774 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.774 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.774 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.774 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.774 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.775 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.775 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.775 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.775 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.775 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.775 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.776 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.776 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.776 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.776 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.776 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.777 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.777 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.777 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.777 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.777 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.778 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.778 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.778 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.778 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.778 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.778 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.779 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.779 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.779 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.779 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.779 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.779 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.779 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.780 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.780 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.780 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.780 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.780 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.780 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.781 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.781 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.781 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.781 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.781 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.781 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.781 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.782 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.782 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.782 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.782 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.782 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.782 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.783 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.783 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.783 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.783 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.783 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.784 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.784 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.784 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.784 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.785 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.785 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.785 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.785 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.785 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.786 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.786 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.786 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.786 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.786 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.787 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.787 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.787 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.787 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.787 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.787 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.787 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.787 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.788 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.788 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.788 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.788 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.788 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.788 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.789 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.789 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.789 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.789 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.789 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.789 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.789 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.790 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.790 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.790 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.790 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.790 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.790 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.790 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.790 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.791 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.791 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.791 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.791 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.791 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.791 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.792 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.792 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.792 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.792 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.792 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.792 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.792 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.793 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.793 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.793 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.793 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.793 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.793 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.794 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.794 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.794 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.794 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.794 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.794 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.794 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.794 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.795 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.795 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.795 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.795 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.795 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.795 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.795 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.796 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.796 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.796 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.796 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.796 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.796 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.796 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.796 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.797 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.797 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.797 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.797 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.797 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.797 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.798 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.798 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.798 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.798 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.798 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.798 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.798 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.798 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.799 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.799 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.799 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.799 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.799 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.799 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.799 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.800 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.800 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.800 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.800 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.800 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.800 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.800 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.801 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.801 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.801 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.801 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.801 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.801 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.801 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.801 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.802 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.802 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.802 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.802 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.802 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.802 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.802 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.803 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.803 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.803 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.803 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.803 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.803 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.803 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.804 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.804 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.804 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.804 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.804 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.804 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.804 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.804 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.805 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.805 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.805 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.805 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.805 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.805 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.805 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.806 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.806 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.806 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.806 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.806 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.806 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.806 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.807 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.807 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.807 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.807 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.807 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.807 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.807 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.807 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.808 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.808 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.808 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.808 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.808 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.808 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.808 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.809 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.809 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.809 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.809 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.809 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.809 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.809 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.809 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.810 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.810 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.810 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.810 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.810 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.810 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.810 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.811 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.811 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.811 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.811 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.811 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.811 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.811 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.812 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.812 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.812 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.812 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.812 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.812 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.812 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.813 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.813 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.813 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.813 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.813 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.813 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.814 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.814 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.814 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.814 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.814 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.814 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.814 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.815 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.815 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.815 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.815 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.815 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.815 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.815 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.815 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.816 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.816 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.816 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.816 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.816 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.816 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.816 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.817 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.817 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.817 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.817 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.817 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.817 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.817 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.817 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.818 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.818 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.818 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.818 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.818 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.818 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.818 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.819 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.819 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.819 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.819 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.819 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.819 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.819 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.820 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.820 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.820 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.820 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.820 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.820 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.820 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.821 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.821 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.821 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.821 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.821 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.821 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.821 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.821 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.822 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.822 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.822 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.822 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.822 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.822 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.822 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.823 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.823 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.823 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.823 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.823 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.823 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.823 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.823 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.824 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.824 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.824 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.824 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.824 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.824 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.824 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.825 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.825 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.825 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.825 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.825 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.825 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.825 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.825 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.826 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.826 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.826 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.826 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.826 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.826 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.826 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.827 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.827 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.827 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.827 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.827 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.827 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.827 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.828 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.828 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.828 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.828 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.828 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.828 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.828 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.829 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.829 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.829 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.829 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.829 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.829 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.829 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.830 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.830 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.830 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.830 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.830 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.830 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.830 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.830 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.831 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.831 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.831 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.831 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.831 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.831 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.832 2 WARNING oslo_config.cfg [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 11 03:56:09 compute-0 nova_compute[259850]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 11 03:56:09 compute-0 nova_compute[259850]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 11 03:56:09 compute-0 nova_compute[259850]: and ``live_migration_inbound_addr`` respectively.
Oct 11 03:56:09 compute-0 nova_compute[259850]: ).  Its value may be silently ignored in the future.
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.832 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.832 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.832 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.832 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.832 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.832 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.833 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.833 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.833 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.833 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.833 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.834 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.834 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.834 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.834 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.834 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.834 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.834 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.835 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.rbd_secret_uuid        = 23b68101-59a9-532f-ab6b-9acf78fb2162 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.835 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.835 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.835 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.835 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.835 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.835 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.835 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.836 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.836 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.836 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.836 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.836 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.836 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.837 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.837 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.837 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.837 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.837 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.837 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.837 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.838 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.838 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.839 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.839 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.840 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.840 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.840 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.841 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.841 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.841 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.842 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.842 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.842 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.842 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.843 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.843 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.843 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.843 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.843 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.843 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.843 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.843 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.844 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.844 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.844 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.844 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.844 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.844 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.844 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.844 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.845 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.845 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.845 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.845 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.845 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.845 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.845 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.846 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.846 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.846 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.846 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.846 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.846 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.846 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.846 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.847 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.847 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.847 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.847 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.847 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.847 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.847 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.848 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.848 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.848 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.848 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.848 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.848 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.848 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.849 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.849 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.849 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.849 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.849 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.849 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.849 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.850 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.850 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.850 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.850 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.850 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.850 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.850 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.851 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.851 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.851 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.851 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.851 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.851 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.851 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.851 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.852 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.852 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.852 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.852 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.852 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.852 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.853 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.853 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.853 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.853 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.853 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.853 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.853 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.854 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.854 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.854 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.854 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.854 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.854 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.855 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.855 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.855 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.855 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.855 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.855 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.856 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.856 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.856 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.856 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.856 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.856 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.857 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.857 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.857 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.857 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.857 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.857 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.858 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.858 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.858 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.858 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.858 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.859 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.859 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.859 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.859 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.859 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.859 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.860 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.860 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.860 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.860 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.860 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.860 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.861 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.861 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.861 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.861 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.861 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.861 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.861 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.862 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.862 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.862 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.862 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.862 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.862 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.863 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.863 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.863 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.863 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.863 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.863 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.863 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.864 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.864 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.864 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.864 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.864 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.864 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.865 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.865 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.865 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.865 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.865 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.865 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.866 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.866 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.866 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.866 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.866 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.867 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.867 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.867 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.867 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.867 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.868 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.868 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.868 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.868 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.868 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.868 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.869 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.869 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.869 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.869 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.869 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.870 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.870 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.870 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.870 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.870 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.870 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.871 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.871 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.871 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.871 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.871 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.871 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.872 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.872 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.872 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.872 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.872 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.872 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.872 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.873 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.873 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.873 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.873 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.873 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.874 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.874 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.874 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.874 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.874 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.875 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.875 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.875 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.875 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.875 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.875 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.875 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.876 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.876 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.876 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.876 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.876 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.876 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.876 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.877 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.877 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.877 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.877 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.877 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.877 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.877 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.877 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.878 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.878 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.878 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.878 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.878 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.878 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.878 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.879 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.879 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.879 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.879 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.879 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.879 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.879 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.880 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.880 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.880 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.880 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.880 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.880 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.881 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.881 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.881 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.881 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.881 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.881 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.881 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.881 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.882 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.882 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.882 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.882 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.882 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.882 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.882 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.883 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.883 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.883 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.883 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.883 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.883 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.883 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.884 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.884 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.884 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.884 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.884 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.884 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.885 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.885 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.885 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.885 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.885 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.885 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.885 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.886 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.886 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.886 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.886 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.886 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.886 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.887 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.887 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.887 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.887 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.887 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.887 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.887 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.888 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.888 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.888 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.888 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.888 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.888 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.888 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.889 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.889 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.889 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.889 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.889 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.889 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.889 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.890 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.890 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.890 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.890 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.890 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.890 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.891 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.891 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.891 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.891 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.891 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.891 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.891 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.891 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.892 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.892 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.892 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.892 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.892 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.892 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.893 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.893 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.893 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.893 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.893 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.893 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.893 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.894 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.894 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.894 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.894 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.894 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.894 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.894 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.895 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.895 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.895 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.895 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.895 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.895 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.895 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.896 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.896 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.896 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.896 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.896 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.896 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.896 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.897 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.897 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.897 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.897 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.897 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.897 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.897 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.897 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.898 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.898 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.898 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.898 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.898 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.898 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.899 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.899 2 DEBUG oslo_service.service [None req-682edebb-6e1e-4c40-910e-896b6023b4ae - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.900 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.911 2 DEBUG nova.virt.libvirt.host [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.911 2 DEBUG nova.virt.libvirt.host [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.912 2 DEBUG nova.virt.libvirt.host [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.912 2 DEBUG nova.virt.libvirt.host [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.927 2 DEBUG nova.virt.libvirt.host [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fe801dd3490> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.929 2 DEBUG nova.virt.libvirt.host [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fe801dd3490> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.930 2 INFO nova.virt.libvirt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Connection event '1' reason 'None'
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.936 2 INFO nova.virt.libvirt.host [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Libvirt host capabilities <capabilities>
Oct 11 03:56:09 compute-0 nova_compute[259850]: 
Oct 11 03:56:09 compute-0 nova_compute[259850]:   <host>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     <uuid>e4b2deed-ff06-4afb-a523-b61a9dddb9cc</uuid>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     <cpu>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <arch>x86_64</arch>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model>EPYC-Rome-v4</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <vendor>AMD</vendor>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <microcode version='16777317'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <signature family='23' model='49' stepping='0'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <maxphysaddr mode='emulate' bits='40'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature name='x2apic'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature name='tsc-deadline'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature name='osxsave'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature name='hypervisor'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature name='tsc_adjust'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature name='spec-ctrl'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature name='stibp'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature name='arch-capabilities'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature name='ssbd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature name='cmp_legacy'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature name='topoext'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature name='virt-ssbd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature name='lbrv'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature name='tsc-scale'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature name='vmcb-clean'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature name='pause-filter'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature name='pfthreshold'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature name='svme-addr-chk'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature name='rdctl-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature name='skip-l1dfl-vmentry'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature name='mds-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature name='pschange-mc-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <pages unit='KiB' size='4'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <pages unit='KiB' size='2048'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <pages unit='KiB' size='1048576'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     </cpu>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     <power_management>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <suspend_mem/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     </power_management>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     <iommu support='no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     <migration_features>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <live/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <uri_transports>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <uri_transport>tcp</uri_transport>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <uri_transport>rdma</uri_transport>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </uri_transports>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     </migration_features>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     <topology>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <cells num='1'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <cell id='0'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:           <memory unit='KiB'>7864348</memory>
Oct 11 03:56:09 compute-0 nova_compute[259850]:           <pages unit='KiB' size='4'>1966087</pages>
Oct 11 03:56:09 compute-0 nova_compute[259850]:           <pages unit='KiB' size='2048'>0</pages>
Oct 11 03:56:09 compute-0 nova_compute[259850]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 11 03:56:09 compute-0 nova_compute[259850]:           <distances>
Oct 11 03:56:09 compute-0 nova_compute[259850]:             <sibling id='0' value='10'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:           </distances>
Oct 11 03:56:09 compute-0 nova_compute[259850]:           <cpus num='8'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:           </cpus>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         </cell>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </cells>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     </topology>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     <cache>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     </cache>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     <secmodel>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model>selinux</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <doi>0</doi>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     </secmodel>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     <secmodel>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model>dac</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <doi>0</doi>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     </secmodel>
Oct 11 03:56:09 compute-0 nova_compute[259850]:   </host>
Oct 11 03:56:09 compute-0 nova_compute[259850]: 
Oct 11 03:56:09 compute-0 nova_compute[259850]:   <guest>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     <os_type>hvm</os_type>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     <arch name='i686'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <wordsize>32</wordsize>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <domain type='qemu'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <domain type='kvm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     </arch>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     <features>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <pae/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <nonpae/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <acpi default='on' toggle='yes'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <apic default='on' toggle='no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <cpuselection/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <deviceboot/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <disksnapshot default='on' toggle='no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <externalSnapshot/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     </features>
Oct 11 03:56:09 compute-0 nova_compute[259850]:   </guest>
Oct 11 03:56:09 compute-0 nova_compute[259850]: 
Oct 11 03:56:09 compute-0 nova_compute[259850]:   <guest>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     <os_type>hvm</os_type>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     <arch name='x86_64'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <wordsize>64</wordsize>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <domain type='qemu'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <domain type='kvm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     </arch>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     <features>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <acpi default='on' toggle='yes'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <apic default='on' toggle='no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <cpuselection/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <deviceboot/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <disksnapshot default='on' toggle='no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <externalSnapshot/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     </features>
Oct 11 03:56:09 compute-0 nova_compute[259850]:   </guest>
Oct 11 03:56:09 compute-0 nova_compute[259850]: 
Oct 11 03:56:09 compute-0 nova_compute[259850]: </capabilities>
Oct 11 03:56:09 compute-0 nova_compute[259850]: 
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.942 2 WARNING nova.virt.libvirt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.942 2 DEBUG nova.virt.libvirt.volume.mount [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.944 2 DEBUG nova.virt.libvirt.host [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 11 03:56:09 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.979 2 DEBUG nova.virt.libvirt.host [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 11 03:56:09 compute-0 nova_compute[259850]: <domainCapabilities>
Oct 11 03:56:09 compute-0 nova_compute[259850]:   <path>/usr/libexec/qemu-kvm</path>
Oct 11 03:56:09 compute-0 nova_compute[259850]:   <domain>kvm</domain>
Oct 11 03:56:09 compute-0 nova_compute[259850]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 11 03:56:09 compute-0 nova_compute[259850]:   <arch>i686</arch>
Oct 11 03:56:09 compute-0 nova_compute[259850]:   <vcpu max='240'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:   <iothreads supported='yes'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:   <os supported='yes'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     <enum name='firmware'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     <loader supported='yes'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <enum name='type'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <value>rom</value>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <value>pflash</value>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <enum name='readonly'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <value>yes</value>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <value>no</value>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <enum name='secure'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <value>no</value>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     </loader>
Oct 11 03:56:09 compute-0 nova_compute[259850]:   </os>
Oct 11 03:56:09 compute-0 nova_compute[259850]:   <cpu>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     <mode name='host-passthrough' supported='yes'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <enum name='hostPassthroughMigratable'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <value>on</value>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <value>off</value>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     </mode>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     <mode name='maximum' supported='yes'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <enum name='maximumMigratable'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <value>on</value>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <value>off</value>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     </mode>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     <mode name='host-model' supported='yes'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <vendor>AMD</vendor>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature policy='require' name='x2apic'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature policy='require' name='tsc-deadline'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature policy='require' name='hypervisor'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature policy='require' name='tsc_adjust'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature policy='require' name='spec-ctrl'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature policy='require' name='stibp'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature policy='require' name='arch-capabilities'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature policy='require' name='ssbd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature policy='require' name='cmp_legacy'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature policy='require' name='overflow-recov'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature policy='require' name='succor'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature policy='require' name='ibrs'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature policy='require' name='amd-ssbd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature policy='require' name='virt-ssbd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature policy='require' name='lbrv'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature policy='require' name='tsc-scale'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature policy='require' name='vmcb-clean'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature policy='require' name='flushbyasid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature policy='require' name='pause-filter'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature policy='require' name='pfthreshold'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature policy='require' name='svme-addr-chk'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature policy='require' name='rdctl-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature policy='require' name='mds-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature policy='require' name='pschange-mc-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature policy='require' name='gds-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature policy='require' name='rfds-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <feature policy='disable' name='xsaves'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     </mode>
Oct 11 03:56:09 compute-0 nova_compute[259850]:     <mode name='custom' supported='yes'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Broadwell'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Broadwell-IBRS'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Broadwell-noTSX'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Broadwell-v1'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Broadwell-v2'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Broadwell-v3'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Broadwell-v4'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Cascadelake-Server'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Cascadelake-Server-v1'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Cascadelake-Server-v2'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Cascadelake-Server-v3'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Cascadelake-Server-v4'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Cascadelake-Server-v5'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Cooperlake'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Cooperlake-v1'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Cooperlake-v2'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Denverton'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='mpx'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Denverton-v1'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='mpx'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Denverton-v2'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Denverton-v3'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Dhyana-v2'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='EPYC-Genoa'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='amd-psfd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='auto-ibrs'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='no-nested-data-bp'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='null-sel-clr-base'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='stibp-always-on'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='EPYC-Genoa-v1'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='amd-psfd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='auto-ibrs'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='no-nested-data-bp'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='null-sel-clr-base'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='stibp-always-on'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='EPYC-Milan'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='EPYC-Milan-v1'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='EPYC-Milan-v2'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='amd-psfd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='no-nested-data-bp'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='null-sel-clr-base'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='stibp-always-on'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='EPYC-Rome'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='EPYC-Rome-v1'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='EPYC-Rome-v2'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='EPYC-Rome-v3'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='EPYC-v3'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='EPYC-v4'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='GraniteRapids'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='amx-bf16'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='amx-fp16'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='amx-int8'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='amx-tile'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-fp16'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fbsdp-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fsrc'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fzrm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='mcdt-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pbrsb-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='prefetchiti'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='psdp-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='sbdr-ssdp-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='tsx-ldtrk'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xfd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='GraniteRapids-v1'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='amx-bf16'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='amx-fp16'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='amx-int8'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='amx-tile'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-fp16'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fbsdp-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fsrc'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fzrm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='mcdt-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pbrsb-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='prefetchiti'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='psdp-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='sbdr-ssdp-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='tsx-ldtrk'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xfd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='GraniteRapids-v2'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='amx-bf16'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='amx-fp16'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='amx-int8'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='amx-tile'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx10'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx10-128'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx10-256'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx10-512'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-fp16'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='cldemote'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fbsdp-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fsrc'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fzrm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='mcdt-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='movdir64b'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='movdiri'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pbrsb-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='prefetchiti'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='psdp-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='sbdr-ssdp-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='tsx-ldtrk'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xfd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Haswell'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Haswell-IBRS'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Haswell-noTSX'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Haswell-v1'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Haswell-v2'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Haswell-v3'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Haswell-v4'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-noTSX'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-v1'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-v2'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-v3'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-v4'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-v5'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-v6'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-v7'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='IvyBridge'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='IvyBridge-IBRS'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='IvyBridge-v1'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='IvyBridge-v2'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='KnightsMill'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-4fmaps'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-4vnniw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512er'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512pf'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='KnightsMill-v1'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-4fmaps'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-4vnniw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512er'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512pf'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Opteron_G4'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fma4'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xop'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Opteron_G4-v1'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fma4'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xop'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Opteron_G5'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fma4'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='tbm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xop'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='Opteron_G5-v1'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fma4'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='tbm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xop'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='SapphireRapids'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='amx-bf16'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='amx-int8'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='amx-tile'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-fp16'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fsrc'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fzrm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='tsx-ldtrk'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xfd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='SapphireRapids-v1'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='amx-bf16'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='amx-int8'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='amx-tile'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-fp16'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fsrc'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fzrm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='tsx-ldtrk'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xfd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='SapphireRapids-v2'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='amx-bf16'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='amx-int8'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='amx-tile'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-fp16'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fbsdp-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fsrc'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='fzrm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='psdp-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='sbdr-ssdp-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='tsx-ldtrk'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xfd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 11 03:56:09 compute-0 nova_compute[259850]:       <blockers model='SapphireRapids-v3'>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='amx-bf16'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='amx-int8'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='amx-tile'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-fp16'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:09 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cldemote'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fbsdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrc'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fzrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdir64b'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdiri'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='psdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='sbdr-ssdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='tsx-ldtrk'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='SierraForest'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-ne-convert'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni-int8'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cmpccxadd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fbsdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mcdt-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pbrsb-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='psdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='sbdr-ssdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='SierraForest-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-ne-convert'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni-int8'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cmpccxadd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fbsdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mcdt-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pbrsb-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='psdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='sbdr-ssdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Client'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Client-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Client-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Client-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Client-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Client-v4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server-v4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server-v5'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Snowridge'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cldemote'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='core-capability'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdir64b'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdiri'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mpx'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='split-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Snowridge-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cldemote'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='core-capability'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdir64b'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdiri'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mpx'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='split-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Snowridge-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cldemote'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='core-capability'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdir64b'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdiri'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='split-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Snowridge-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cldemote'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='core-capability'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdir64b'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdiri'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='split-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Snowridge-v4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cldemote'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdir64b'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdiri'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='athlon'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnow'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnowext'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='athlon-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnow'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnowext'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='core2duo'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='core2duo-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='coreduo'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='coreduo-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='n270'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='n270-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='phenom'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnow'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnowext'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='phenom-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnow'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnowext'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </mode>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   </cpu>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <memoryBacking supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <enum name='sourceType'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <value>file</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <value>anonymous</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <value>memfd</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   </memoryBacking>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <devices>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <disk supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='diskDevice'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>disk</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>cdrom</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>floppy</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>lun</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='bus'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>ide</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>fdc</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>scsi</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>usb</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>sata</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='model'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio-transitional</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio-non-transitional</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </disk>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <graphics supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='type'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>vnc</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>egl-headless</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>dbus</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </graphics>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <video supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='modelType'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>vga</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>cirrus</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>none</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>bochs</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>ramfb</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </video>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <hostdev supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='mode'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>subsystem</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='startupPolicy'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>default</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>mandatory</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>requisite</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>optional</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='subsysType'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>usb</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>pci</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>scsi</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='capsType'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='pciBackend'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </hostdev>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <rng supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='model'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio-transitional</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio-non-transitional</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='backendModel'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>random</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>egd</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>builtin</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </rng>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <filesystem supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='driverType'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>path</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>handle</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtiofs</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </filesystem>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <tpm supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='model'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>tpm-tis</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>tpm-crb</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='backendModel'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>emulator</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>external</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='backendVersion'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>2.0</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </tpm>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <redirdev supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='bus'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>usb</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </redirdev>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <channel supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='type'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>pty</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>unix</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </channel>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <crypto supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='model'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='type'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>qemu</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='backendModel'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>builtin</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </crypto>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <interface supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='backendType'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>default</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>passt</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </interface>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <panic supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='model'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>isa</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>hyperv</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </panic>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   </devices>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <features>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <gic supported='no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <vmcoreinfo supported='yes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <genid supported='yes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <backingStoreInput supported='yes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <backup supported='yes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <async-teardown supported='yes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <ps2 supported='yes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <sev supported='no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <sgx supported='no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <hyperv supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='features'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>relaxed</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>vapic</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>spinlocks</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>vpindex</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>runtime</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>synic</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>stimer</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>reset</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>vendor_id</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>frequencies</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>reenlightenment</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>tlbflush</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>ipi</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>avic</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>emsr_bitmap</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>xmm_input</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </hyperv>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <launchSecurity supported='no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   </features>
Oct 11 03:56:10 compute-0 nova_compute[259850]: </domainCapabilities>
Oct 11 03:56:10 compute-0 nova_compute[259850]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 11 03:56:10 compute-0 nova_compute[259850]: 2025-10-11 03:56:09.985 2 DEBUG nova.virt.libvirt.host [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 11 03:56:10 compute-0 nova_compute[259850]: <domainCapabilities>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <path>/usr/libexec/qemu-kvm</path>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <domain>kvm</domain>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <arch>i686</arch>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <vcpu max='4096'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <iothreads supported='yes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <os supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <enum name='firmware'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <loader supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='type'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>rom</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>pflash</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='readonly'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>yes</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>no</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='secure'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>no</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </loader>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   </os>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <cpu>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <mode name='host-passthrough' supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='hostPassthroughMigratable'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>on</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>off</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </mode>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <mode name='maximum' supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='maximumMigratable'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>on</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>off</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </mode>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <mode name='host-model' supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <vendor>AMD</vendor>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='x2apic'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='tsc-deadline'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='hypervisor'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='tsc_adjust'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='spec-ctrl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='stibp'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='arch-capabilities'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='ssbd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='cmp_legacy'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='overflow-recov'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='succor'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='ibrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='amd-ssbd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='virt-ssbd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='lbrv'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='tsc-scale'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='vmcb-clean'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='flushbyasid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='pause-filter'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='pfthreshold'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='svme-addr-chk'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='rdctl-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='mds-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='pschange-mc-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='gds-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='rfds-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='disable' name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </mode>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <mode name='custom' supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Broadwell'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Broadwell-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Broadwell-noTSX'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Broadwell-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Broadwell-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Broadwell-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Broadwell-v4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cascadelake-Server'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cascadelake-Server-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cascadelake-Server-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cascadelake-Server-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cascadelake-Server-v4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cascadelake-Server-v5'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cooperlake'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cooperlake-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cooperlake-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Denverton'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mpx'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Denverton-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mpx'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Denverton-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Denverton-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Dhyana-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-Genoa'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amd-psfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='auto-ibrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='no-nested-data-bp'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='null-sel-clr-base'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='stibp-always-on'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-Genoa-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amd-psfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='auto-ibrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='no-nested-data-bp'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='null-sel-clr-base'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='stibp-always-on'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-Milan'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-Milan-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-Milan-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amd-psfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='no-nested-data-bp'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='null-sel-clr-base'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='stibp-always-on'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-Rome'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-Rome-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-Rome-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-Rome-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-v4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='GraniteRapids'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-int8'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-tile'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fbsdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrc'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fzrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mcdt-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pbrsb-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='prefetchiti'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='psdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='sbdr-ssdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='tsx-ldtrk'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='GraniteRapids-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-int8'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-tile'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fbsdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrc'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fzrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mcdt-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pbrsb-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='prefetchiti'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='psdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='sbdr-ssdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='tsx-ldtrk'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='GraniteRapids-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-int8'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-tile'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx10'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx10-128'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx10-256'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx10-512'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cldemote'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fbsdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrc'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fzrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mcdt-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdir64b'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdiri'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pbrsb-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='prefetchiti'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='psdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='sbdr-ssdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='tsx-ldtrk'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Haswell'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Haswell-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Haswell-noTSX'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Haswell-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Haswell-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Haswell-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Haswell-v4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-noTSX'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-v4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-v5'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-v6'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-v7'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='IvyBridge'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='IvyBridge-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='IvyBridge-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='IvyBridge-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='KnightsMill'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-4fmaps'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-4vnniw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512er'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512pf'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='KnightsMill-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-4fmaps'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-4vnniw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512er'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512pf'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Opteron_G4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fma4'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xop'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Opteron_G4-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fma4'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xop'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Opteron_G5'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fma4'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='tbm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xop'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Opteron_G5-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fma4'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='tbm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xop'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='SapphireRapids'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-int8'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-tile'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrc'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fzrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='tsx-ldtrk'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='SapphireRapids-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-int8'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-tile'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrc'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fzrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='tsx-ldtrk'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='SapphireRapids-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-int8'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-tile'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fbsdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrc'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fzrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='psdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='sbdr-ssdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='tsx-ldtrk'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='SapphireRapids-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-int8'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-tile'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cldemote'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fbsdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrc'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fzrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdir64b'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdiri'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='psdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='sbdr-ssdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='tsx-ldtrk'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='SierraForest'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-ne-convert'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni-int8'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cmpccxadd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fbsdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mcdt-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pbrsb-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='psdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='sbdr-ssdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='SierraForest-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-ne-convert'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni-int8'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cmpccxadd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fbsdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mcdt-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pbrsb-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='psdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='sbdr-ssdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Client'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Client-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Client-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Client-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Client-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Client-v4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server-v4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server-v5'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Snowridge'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cldemote'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='core-capability'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdir64b'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdiri'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mpx'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='split-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Snowridge-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cldemote'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='core-capability'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdir64b'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdiri'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mpx'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='split-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Snowridge-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cldemote'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='core-capability'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdir64b'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdiri'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='split-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Snowridge-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cldemote'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='core-capability'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdir64b'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdiri'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='split-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Snowridge-v4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cldemote'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdir64b'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdiri'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='athlon'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnow'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnowext'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='athlon-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnow'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnowext'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='core2duo'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='core2duo-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='coreduo'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='coreduo-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='n270'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='n270-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='phenom'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnow'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnowext'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='phenom-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnow'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnowext'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </mode>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   </cpu>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <memoryBacking supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <enum name='sourceType'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <value>file</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <value>anonymous</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <value>memfd</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   </memoryBacking>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <devices>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <disk supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='diskDevice'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>disk</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>cdrom</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>floppy</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>lun</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='bus'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>fdc</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>scsi</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>usb</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>sata</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='model'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio-transitional</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio-non-transitional</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </disk>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <graphics supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='type'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>vnc</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>egl-headless</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>dbus</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </graphics>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <video supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='modelType'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>vga</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>cirrus</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>none</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>bochs</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>ramfb</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </video>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <hostdev supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='mode'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>subsystem</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='startupPolicy'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>default</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>mandatory</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>requisite</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>optional</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='subsysType'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>usb</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>pci</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>scsi</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='capsType'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='pciBackend'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </hostdev>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <rng supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='model'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio-transitional</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio-non-transitional</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='backendModel'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>random</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>egd</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>builtin</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </rng>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <filesystem supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='driverType'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>path</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>handle</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtiofs</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </filesystem>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <tpm supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='model'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>tpm-tis</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>tpm-crb</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='backendModel'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>emulator</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>external</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='backendVersion'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>2.0</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </tpm>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <redirdev supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='bus'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>usb</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </redirdev>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <channel supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='type'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>pty</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>unix</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </channel>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <crypto supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='model'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='type'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>qemu</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='backendModel'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>builtin</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </crypto>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <interface supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='backendType'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>default</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>passt</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </interface>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <panic supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='model'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>isa</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>hyperv</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </panic>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   </devices>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <features>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <gic supported='no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <vmcoreinfo supported='yes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <genid supported='yes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <backingStoreInput supported='yes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <backup supported='yes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <async-teardown supported='yes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <ps2 supported='yes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <sev supported='no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <sgx supported='no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <hyperv supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='features'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>relaxed</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>vapic</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>spinlocks</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>vpindex</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>runtime</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>synic</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>stimer</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>reset</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>vendor_id</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>frequencies</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>reenlightenment</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>tlbflush</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>ipi</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>avic</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>emsr_bitmap</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>xmm_input</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </hyperv>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <launchSecurity supported='no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   </features>
Oct 11 03:56:10 compute-0 nova_compute[259850]: </domainCapabilities>
Oct 11 03:56:10 compute-0 nova_compute[259850]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 11 03:56:10 compute-0 nova_compute[259850]: 2025-10-11 03:56:10.011 2 DEBUG nova.virt.libvirt.host [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 11 03:56:10 compute-0 nova_compute[259850]: 2025-10-11 03:56:10.017 2 DEBUG nova.virt.libvirt.host [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 11 03:56:10 compute-0 nova_compute[259850]: <domainCapabilities>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <path>/usr/libexec/qemu-kvm</path>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <domain>kvm</domain>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <arch>x86_64</arch>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <vcpu max='240'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <iothreads supported='yes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <os supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <enum name='firmware'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <loader supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='type'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>rom</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>pflash</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='readonly'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>yes</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>no</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='secure'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>no</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </loader>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   </os>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <cpu>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <mode name='host-passthrough' supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='hostPassthroughMigratable'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>on</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>off</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </mode>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <mode name='maximum' supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='maximumMigratable'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>on</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>off</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </mode>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <mode name='host-model' supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <vendor>AMD</vendor>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='x2apic'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='tsc-deadline'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='hypervisor'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='tsc_adjust'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='spec-ctrl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='stibp'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='arch-capabilities'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='ssbd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='cmp_legacy'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='overflow-recov'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='succor'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='ibrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='amd-ssbd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='virt-ssbd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='lbrv'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='tsc-scale'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='vmcb-clean'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='flushbyasid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='pause-filter'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='pfthreshold'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='svme-addr-chk'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='rdctl-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='mds-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='pschange-mc-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='gds-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='rfds-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='disable' name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </mode>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <mode name='custom' supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Broadwell'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Broadwell-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Broadwell-noTSX'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Broadwell-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Broadwell-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Broadwell-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Broadwell-v4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cascadelake-Server'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cascadelake-Server-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cascadelake-Server-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cascadelake-Server-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cascadelake-Server-v4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cascadelake-Server-v5'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cooperlake'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cooperlake-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cooperlake-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Denverton'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mpx'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Denverton-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mpx'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Denverton-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Denverton-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Dhyana-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-Genoa'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amd-psfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='auto-ibrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='no-nested-data-bp'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='null-sel-clr-base'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='stibp-always-on'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-Genoa-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amd-psfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='auto-ibrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='no-nested-data-bp'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='null-sel-clr-base'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='stibp-always-on'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-Milan'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-Milan-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-Milan-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amd-psfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='no-nested-data-bp'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='null-sel-clr-base'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='stibp-always-on'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-Rome'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-Rome-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-Rome-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-Rome-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-v4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='GraniteRapids'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-int8'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-tile'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fbsdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrc'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fzrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mcdt-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pbrsb-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='prefetchiti'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='psdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='sbdr-ssdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='tsx-ldtrk'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='GraniteRapids-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-int8'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-tile'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fbsdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrc'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fzrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mcdt-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pbrsb-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='prefetchiti'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='psdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='sbdr-ssdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='tsx-ldtrk'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='GraniteRapids-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-int8'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-tile'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx10'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx10-128'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx10-256'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx10-512'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cldemote'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fbsdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrc'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fzrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mcdt-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdir64b'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdiri'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pbrsb-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='prefetchiti'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='psdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='sbdr-ssdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='tsx-ldtrk'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Haswell'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Haswell-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Haswell-noTSX'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Haswell-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Haswell-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Haswell-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Haswell-v4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-noTSX'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-v4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-v5'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-v6'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-v7'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='IvyBridge'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='IvyBridge-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='IvyBridge-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='IvyBridge-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='KnightsMill'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-4fmaps'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-4vnniw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512er'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512pf'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='KnightsMill-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-4fmaps'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-4vnniw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512er'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512pf'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Opteron_G4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fma4'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xop'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Opteron_G4-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fma4'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xop'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Opteron_G5'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fma4'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='tbm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xop'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Opteron_G5-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fma4'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='tbm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xop'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='SapphireRapids'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-int8'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-tile'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrc'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fzrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='tsx-ldtrk'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='SapphireRapids-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-int8'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-tile'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrc'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fzrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='tsx-ldtrk'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='SapphireRapids-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-int8'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-tile'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fbsdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrc'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fzrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='psdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='sbdr-ssdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='tsx-ldtrk'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='SapphireRapids-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-int8'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-tile'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cldemote'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fbsdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrc'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fzrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdir64b'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdiri'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='psdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='sbdr-ssdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='tsx-ldtrk'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='SierraForest'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-ne-convert'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni-int8'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cmpccxadd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fbsdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mcdt-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pbrsb-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='psdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='sbdr-ssdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='SierraForest-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-ne-convert'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni-int8'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cmpccxadd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fbsdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mcdt-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pbrsb-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='psdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='sbdr-ssdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Client'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Client-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Client-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Client-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Client-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Client-v4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server-v4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server-v5'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Snowridge'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cldemote'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='core-capability'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdir64b'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdiri'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mpx'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='split-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Snowridge-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cldemote'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='core-capability'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdir64b'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdiri'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mpx'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='split-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Snowridge-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cldemote'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='core-capability'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdir64b'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdiri'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='split-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Snowridge-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cldemote'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='core-capability'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdir64b'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdiri'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='split-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Snowridge-v4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cldemote'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdir64b'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdiri'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='athlon'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnow'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnowext'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='athlon-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnow'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnowext'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='core2duo'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='core2duo-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='coreduo'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='coreduo-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='n270'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='n270-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='phenom'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnow'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnowext'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='phenom-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnow'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnowext'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </mode>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   </cpu>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <memoryBacking supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <enum name='sourceType'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <value>file</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <value>anonymous</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <value>memfd</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   </memoryBacking>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <devices>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <disk supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='diskDevice'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>disk</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>cdrom</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>floppy</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>lun</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='bus'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>ide</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>fdc</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>scsi</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>usb</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>sata</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='model'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio-transitional</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio-non-transitional</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </disk>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <graphics supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='type'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>vnc</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>egl-headless</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>dbus</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </graphics>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <video supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='modelType'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>vga</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>cirrus</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>none</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>bochs</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>ramfb</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </video>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <hostdev supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='mode'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>subsystem</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='startupPolicy'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>default</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>mandatory</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>requisite</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>optional</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='subsysType'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>usb</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>pci</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>scsi</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='capsType'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='pciBackend'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </hostdev>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <rng supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='model'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio-transitional</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio-non-transitional</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='backendModel'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>random</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>egd</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>builtin</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </rng>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <filesystem supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='driverType'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>path</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>handle</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtiofs</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </filesystem>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <tpm supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='model'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>tpm-tis</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>tpm-crb</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='backendModel'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>emulator</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>external</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='backendVersion'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>2.0</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </tpm>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <redirdev supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='bus'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>usb</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </redirdev>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <channel supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='type'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>pty</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>unix</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </channel>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <crypto supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='model'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='type'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>qemu</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='backendModel'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>builtin</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </crypto>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <interface supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='backendType'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>default</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>passt</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </interface>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <panic supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='model'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>isa</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>hyperv</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </panic>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   </devices>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <features>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <gic supported='no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <vmcoreinfo supported='yes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <genid supported='yes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <backingStoreInput supported='yes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <backup supported='yes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <async-teardown supported='yes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <ps2 supported='yes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <sev supported='no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <sgx supported='no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <hyperv supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='features'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>relaxed</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>vapic</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>spinlocks</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>vpindex</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>runtime</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>synic</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>stimer</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>reset</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>vendor_id</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>frequencies</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>reenlightenment</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>tlbflush</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>ipi</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>avic</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>emsr_bitmap</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>xmm_input</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </hyperv>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <launchSecurity supported='no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   </features>
Oct 11 03:56:10 compute-0 nova_compute[259850]: </domainCapabilities>
Oct 11 03:56:10 compute-0 nova_compute[259850]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 11 03:56:10 compute-0 nova_compute[259850]: 2025-10-11 03:56:10.076 2 DEBUG nova.virt.libvirt.host [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 11 03:56:10 compute-0 nova_compute[259850]: <domainCapabilities>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <path>/usr/libexec/qemu-kvm</path>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <domain>kvm</domain>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <arch>x86_64</arch>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <vcpu max='4096'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <iothreads supported='yes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <os supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <enum name='firmware'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <value>efi</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <loader supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='type'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>rom</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>pflash</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='readonly'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>yes</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>no</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='secure'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>yes</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>no</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </loader>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   </os>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <cpu>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <mode name='host-passthrough' supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='hostPassthroughMigratable'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>on</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>off</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </mode>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <mode name='maximum' supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='maximumMigratable'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>on</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>off</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </mode>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <mode name='host-model' supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <vendor>AMD</vendor>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='x2apic'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='tsc-deadline'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='hypervisor'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='tsc_adjust'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='spec-ctrl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='stibp'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='arch-capabilities'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='ssbd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='cmp_legacy'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='overflow-recov'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='succor'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='ibrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='amd-ssbd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='virt-ssbd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='lbrv'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='tsc-scale'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='vmcb-clean'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='flushbyasid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='pause-filter'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='pfthreshold'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='svme-addr-chk'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='rdctl-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='mds-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='pschange-mc-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='gds-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='require' name='rfds-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <feature policy='disable' name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </mode>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <mode name='custom' supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Broadwell'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Broadwell-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Broadwell-noTSX'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Broadwell-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Broadwell-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Broadwell-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Broadwell-v4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cascadelake-Server'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cascadelake-Server-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cascadelake-Server-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cascadelake-Server-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cascadelake-Server-v4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cascadelake-Server-v5'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cooperlake'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cooperlake-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Cooperlake-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Denverton'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mpx'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Denverton-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mpx'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Denverton-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Denverton-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Dhyana-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-Genoa'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amd-psfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='auto-ibrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='no-nested-data-bp'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='null-sel-clr-base'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='stibp-always-on'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-Genoa-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amd-psfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='auto-ibrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='no-nested-data-bp'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='null-sel-clr-base'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='stibp-always-on'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-Milan'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-Milan-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-Milan-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amd-psfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='no-nested-data-bp'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='null-sel-clr-base'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='stibp-always-on'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-Rome'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-Rome-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-Rome-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-Rome-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='EPYC-v4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='GraniteRapids'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-int8'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-tile'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fbsdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrc'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fzrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mcdt-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pbrsb-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='prefetchiti'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='psdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='sbdr-ssdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='tsx-ldtrk'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='GraniteRapids-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-int8'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-tile'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fbsdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrc'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fzrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mcdt-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pbrsb-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='prefetchiti'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='psdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='sbdr-ssdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='tsx-ldtrk'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='GraniteRapids-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-int8'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-tile'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx10'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx10-128'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx10-256'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx10-512'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cldemote'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fbsdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrc'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fzrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mcdt-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdir64b'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdiri'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pbrsb-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='prefetchiti'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='psdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='sbdr-ssdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='tsx-ldtrk'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Haswell'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Haswell-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Haswell-noTSX'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Haswell-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Haswell-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Haswell-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Haswell-v4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-noTSX'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-v4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-v5'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-v6'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Icelake-Server-v7'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='IvyBridge'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='IvyBridge-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='IvyBridge-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='IvyBridge-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='KnightsMill'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-4fmaps'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-4vnniw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512er'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512pf'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='KnightsMill-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-4fmaps'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-4vnniw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512er'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512pf'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Opteron_G4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fma4'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xop'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Opteron_G4-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fma4'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xop'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Opteron_G5'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fma4'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='tbm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xop'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Opteron_G5-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fma4'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='tbm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xop'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='SapphireRapids'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-int8'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-tile'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrc'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fzrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='tsx-ldtrk'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='SapphireRapids-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-int8'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-tile'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrc'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fzrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='tsx-ldtrk'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='SapphireRapids-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-int8'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-tile'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fbsdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrc'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fzrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='psdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='sbdr-ssdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='tsx-ldtrk'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='SapphireRapids-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-int8'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='amx-tile'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-bf16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-fp16'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512-vpopcntdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bitalg'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vbmi2'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cldemote'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fbsdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrc'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fzrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='la57'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdir64b'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdiri'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='psdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='sbdr-ssdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='taa-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='tsx-ldtrk'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xfd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='SierraForest'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-ne-convert'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni-int8'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cmpccxadd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fbsdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mcdt-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pbrsb-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='psdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='sbdr-ssdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='SierraForest-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-ifma'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-ne-convert'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx-vnni-int8'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='bus-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cmpccxadd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fbsdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='fsrs'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ibrs-all'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mcdt-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pbrsb-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='psdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='sbdr-ssdp-no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='serialize'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vaes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='vpclmulqdq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Client'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Client-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Client-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Client-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Client-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Client-v4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='hle'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='rtm'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server-v4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Skylake-Server-v5'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512bw'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512cd'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512dq'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512f'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='avx512vl'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='invpcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pcid'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='pku'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Snowridge'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cldemote'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='core-capability'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdir64b'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdiri'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mpx'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='split-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Snowridge-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cldemote'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='core-capability'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdir64b'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdiri'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='mpx'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='split-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Snowridge-v2'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cldemote'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='core-capability'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdir64b'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdiri'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='split-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Snowridge-v3'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cldemote'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='core-capability'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdir64b'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdiri'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='split-lock-detect'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='Snowridge-v4'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='cldemote'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='erms'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='gfni'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdir64b'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='movdiri'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='xsaves'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='athlon'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnow'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnowext'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='athlon-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnow'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnowext'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='core2duo'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='core2duo-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='coreduo'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='coreduo-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='n270'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='n270-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='ss'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='phenom'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnow'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnowext'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <blockers model='phenom-v1'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnow'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <feature name='3dnowext'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </blockers>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </mode>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   </cpu>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <memoryBacking supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <enum name='sourceType'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <value>file</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <value>anonymous</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <value>memfd</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   </memoryBacking>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <devices>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <disk supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='diskDevice'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>disk</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>cdrom</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>floppy</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>lun</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='bus'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>fdc</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>scsi</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>usb</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>sata</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='model'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio-transitional</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio-non-transitional</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </disk>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <graphics supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='type'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>vnc</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>egl-headless</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>dbus</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </graphics>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <video supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='modelType'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>vga</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>cirrus</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>none</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>bochs</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>ramfb</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </video>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <hostdev supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='mode'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>subsystem</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='startupPolicy'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>default</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>mandatory</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>requisite</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>optional</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='subsysType'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>usb</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>pci</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>scsi</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='capsType'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='pciBackend'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </hostdev>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <rng supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='model'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio-transitional</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtio-non-transitional</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='backendModel'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>random</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>egd</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>builtin</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </rng>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <filesystem supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='driverType'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>path</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>handle</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>virtiofs</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </filesystem>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <tpm supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='model'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>tpm-tis</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>tpm-crb</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='backendModel'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>emulator</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>external</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='backendVersion'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>2.0</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </tpm>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <redirdev supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='bus'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>usb</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </redirdev>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <channel supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='type'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>pty</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>unix</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </channel>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <crypto supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='model'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='type'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>qemu</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='backendModel'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>builtin</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </crypto>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <interface supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='backendType'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>default</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>passt</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </interface>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <panic supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='model'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>isa</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>hyperv</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </panic>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   </devices>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   <features>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <gic supported='no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <vmcoreinfo supported='yes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <genid supported='yes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <backingStoreInput supported='yes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <backup supported='yes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <async-teardown supported='yes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <ps2 supported='yes'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <sev supported='no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <sgx supported='no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <hyperv supported='yes'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       <enum name='features'>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>relaxed</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>vapic</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>spinlocks</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>vpindex</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>runtime</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>synic</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>stimer</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>reset</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>vendor_id</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>frequencies</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>reenlightenment</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>tlbflush</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>ipi</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>avic</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>emsr_bitmap</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:         <value>xmm_input</value>
Oct 11 03:56:10 compute-0 nova_compute[259850]:       </enum>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     </hyperv>
Oct 11 03:56:10 compute-0 nova_compute[259850]:     <launchSecurity supported='no'/>
Oct 11 03:56:10 compute-0 nova_compute[259850]:   </features>
Oct 11 03:56:10 compute-0 nova_compute[259850]: </domainCapabilities>
Oct 11 03:56:10 compute-0 nova_compute[259850]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 11 03:56:10 compute-0 nova_compute[259850]: 2025-10-11 03:56:10.141 2 DEBUG nova.virt.libvirt.host [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 11 03:56:10 compute-0 nova_compute[259850]: 2025-10-11 03:56:10.141 2 DEBUG nova.virt.libvirt.host [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 11 03:56:10 compute-0 nova_compute[259850]: 2025-10-11 03:56:10.141 2 DEBUG nova.virt.libvirt.host [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 11 03:56:10 compute-0 nova_compute[259850]: 2025-10-11 03:56:10.142 2 INFO nova.virt.libvirt.host [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Secure Boot support detected
Oct 11 03:56:10 compute-0 nova_compute[259850]: 2025-10-11 03:56:10.145 2 INFO nova.virt.libvirt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 11 03:56:10 compute-0 nova_compute[259850]: 2025-10-11 03:56:10.145 2 INFO nova.virt.libvirt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 11 03:56:10 compute-0 nova_compute[259850]: 2025-10-11 03:56:10.158 2 DEBUG nova.virt.libvirt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct 11 03:56:10 compute-0 nova_compute[259850]: 2025-10-11 03:56:10.203 2 INFO nova.virt.node [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Determined node identity 108a560b-89c0-4926-a2fc-cb749a6f8386 from /var/lib/nova/compute_id
Oct 11 03:56:10 compute-0 nova_compute[259850]: 2025-10-11 03:56:10.230 2 WARNING nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Compute nodes ['108a560b-89c0-4926-a2fc-cb749a6f8386'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Oct 11 03:56:10 compute-0 nova_compute[259850]: 2025-10-11 03:56:10.274 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 11 03:56:10 compute-0 nova_compute[259850]: 2025-10-11 03:56:10.320 2 WARNING nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 11 03:56:10 compute-0 nova_compute[259850]: 2025-10-11 03:56:10.321 2 DEBUG oslo_concurrency.lockutils [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 03:56:10 compute-0 nova_compute[259850]: 2025-10-11 03:56:10.321 2 DEBUG oslo_concurrency.lockutils [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 03:56:10 compute-0 nova_compute[259850]: 2025-10-11 03:56:10.322 2 DEBUG oslo_concurrency.lockutils [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 03:56:10 compute-0 nova_compute[259850]: 2025-10-11 03:56:10.322 2 DEBUG nova.compute.resource_tracker [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 03:56:10 compute-0 nova_compute[259850]: 2025-10-11 03:56:10.323 2 DEBUG oslo_concurrency.processutils [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 03:56:10 compute-0 ceph-mon[74273]: pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 03:56:10 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2711953488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 03:56:10 compute-0 nova_compute[259850]: 2025-10-11 03:56:10.794 2 DEBUG oslo_concurrency.processutils [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 03:56:10 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Oct 11 03:56:10 compute-0 systemd[1]: Started libvirt nodedev daemon.
Oct 11 03:56:11 compute-0 nova_compute[259850]: 2025-10-11 03:56:11.083 2 WARNING nova.virt.libvirt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 03:56:11 compute-0 nova_compute[259850]: 2025-10-11 03:56:11.084 2 DEBUG nova.compute.resource_tracker [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5095MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 03:56:11 compute-0 nova_compute[259850]: 2025-10-11 03:56:11.084 2 DEBUG oslo_concurrency.lockutils [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 03:56:11 compute-0 nova_compute[259850]: 2025-10-11 03:56:11.084 2 DEBUG oslo_concurrency.lockutils [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 03:56:11 compute-0 nova_compute[259850]: 2025-10-11 03:56:11.142 2 WARNING nova.compute.resource_tracker [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] No compute node record for compute-0.ctlplane.example.com:108a560b-89c0-4926-a2fc-cb749a6f8386: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 108a560b-89c0-4926-a2fc-cb749a6f8386 could not be found.
Oct 11 03:56:11 compute-0 nova_compute[259850]: 2025-10-11 03:56:11.169 2 INFO nova.compute.resource_tracker [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 108a560b-89c0-4926-a2fc-cb749a6f8386
Oct 11 03:56:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:11 compute-0 nova_compute[259850]: 2025-10-11 03:56:11.242 2 DEBUG nova.compute.resource_tracker [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 03:56:11 compute-0 nova_compute[259850]: 2025-10-11 03:56:11.242 2 DEBUG nova.compute.resource_tracker [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 03:56:11 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2711953488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 03:56:12 compute-0 nova_compute[259850]: 2025-10-11 03:56:12.152 2 INFO nova.scheduler.client.report [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [req-cac402c1-bd0c-4504-bf65-57fa98b8e250] Created resource provider record via placement API for resource provider with UUID 108a560b-89c0-4926-a2fc-cb749a6f8386 and name compute-0.ctlplane.example.com.
Oct 11 03:56:12 compute-0 podman[260195]: 2025-10-11 03:56:12.403376141 +0000 UTC m=+0.103864328 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2)
Oct 11 03:56:12 compute-0 nova_compute[259850]: 2025-10-11 03:56:12.591 2 DEBUG oslo_concurrency.processutils [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 03:56:12 compute-0 ceph-mon[74273]: pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 03:56:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4067247254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 03:56:12 compute-0 nova_compute[259850]: 2025-10-11 03:56:12.990 2 DEBUG oslo_concurrency.processutils [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 03:56:12 compute-0 nova_compute[259850]: 2025-10-11 03:56:12.996 2 DEBUG nova.virt.libvirt.host [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct 11 03:56:12 compute-0 nova_compute[259850]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Oct 11 03:56:12 compute-0 nova_compute[259850]: 2025-10-11 03:56:12.997 2 INFO nova.virt.libvirt.host [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] kernel doesn't support AMD SEV
Oct 11 03:56:12 compute-0 nova_compute[259850]: 2025-10-11 03:56:12.998 2 DEBUG nova.compute.provider_tree [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Updating inventory in ProviderTree for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 03:56:12 compute-0 nova_compute[259850]: 2025-10-11 03:56:12.998 2 DEBUG nova.virt.libvirt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 03:56:13 compute-0 nova_compute[259850]: 2025-10-11 03:56:13.085 2 DEBUG nova.scheduler.client.report [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Updated inventory for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct 11 03:56:13 compute-0 nova_compute[259850]: 2025-10-11 03:56:13.085 2 DEBUG nova.compute.provider_tree [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Updating resource provider 108a560b-89c0-4926-a2fc-cb749a6f8386 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 11 03:56:13 compute-0 nova_compute[259850]: 2025-10-11 03:56:13.086 2 DEBUG nova.compute.provider_tree [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Updating inventory in ProviderTree for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 03:56:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:13 compute-0 nova_compute[259850]: 2025-10-11 03:56:13.244 2 DEBUG nova.compute.provider_tree [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Updating resource provider 108a560b-89c0-4926-a2fc-cb749a6f8386 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 11 03:56:13 compute-0 nova_compute[259850]: 2025-10-11 03:56:13.274 2 DEBUG nova.compute.resource_tracker [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 03:56:13 compute-0 nova_compute[259850]: 2025-10-11 03:56:13.274 2 DEBUG oslo_concurrency.lockutils [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.190s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 03:56:13 compute-0 nova_compute[259850]: 2025-10-11 03:56:13.274 2 DEBUG nova.service [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Oct 11 03:56:13 compute-0 nova_compute[259850]: 2025-10-11 03:56:13.399 2 DEBUG nova.service [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Oct 11 03:56:13 compute-0 nova_compute[259850]: 2025-10-11 03:56:13.400 2 DEBUG nova.servicegroup.drivers.db [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Oct 11 03:56:13 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4067247254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 03:56:14 compute-0 podman[260237]: 2025-10-11 03:56:14.371590982 +0000 UTC m=+0.070012047 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 11 03:56:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:56:14 compute-0 ceph-mon[74273]: pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:16 compute-0 ceph-mon[74273]: pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:18 compute-0 ceph-mon[74273]: pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:56:20 compute-0 ceph-mon[74273]: pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_03:56:20
Oct 11 03:56:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 03:56:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 03:56:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'backups', 'vms']
Oct 11 03:56:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 03:56:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:56:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:56:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:56:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:56:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:56:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:56:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 03:56:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:56:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 03:56:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:56:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:56:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:56:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:56:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:56:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:56:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:56:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:22 compute-0 ceph-mon[74273]: pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:56:22.944 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 03:56:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:56:22.945 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 03:56:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:56:22.945 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 03:56:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:24 compute-0 podman[260257]: 2025-10-11 03:56:24.3766412 +0000 UTC m=+0.091407608 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 03:56:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:56:24 compute-0 ceph-mon[74273]: pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:26 compute-0 ceph-mon[74273]: pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:28 compute-0 ceph-mon[74273]: pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:29 compute-0 podman[260284]: 2025-10-11 03:56:29.380267497 +0000 UTC m=+0.088481976 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 11 03:56:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:56:30 compute-0 ceph-mon[74273]: pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 03:56:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:56:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 03:56:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:56:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:56:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:56:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:56:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:56:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:56:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:56:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:56:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:56:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 03:56:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:56:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:56:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:56:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 03:56:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:56:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 03:56:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:56:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:56:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:56:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 03:56:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:32 compute-0 ceph-mon[74273]: pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 03:56:33 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3008062666' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 03:56:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 03:56:33 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3008062666' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 03:56:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 03:56:33 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3863283658' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 03:56:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 03:56:33 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3863283658' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 03:56:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 03:56:34 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2731302698' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 03:56:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 03:56:34 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2731302698' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 03:56:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:56:34 compute-0 ceph-mon[74273]: pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:34 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3008062666' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 03:56:34 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3008062666' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 03:56:34 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3863283658' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 03:56:34 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3863283658' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 03:56:34 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2731302698' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 03:56:34 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2731302698' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 03:56:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:36 compute-0 ceph-mon[74273]: pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:38 compute-0 ceph-mon[74273]: pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:56:40 compute-0 ceph-mon[74273]: pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:42 compute-0 ceph-mon[74273]: pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:43 compute-0 podman[260303]: 2025-10-11 03:56:43.377445717 +0000 UTC m=+0.078462104 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 11 03:56:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:56:44 compute-0 ceph-mon[74273]: pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:45 compute-0 podman[260323]: 2025-10-11 03:56:45.364648122 +0000 UTC m=+0.073626779 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 03:56:45 compute-0 nova_compute[259850]: 2025-10-11 03:56:45.402 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:56:45 compute-0 nova_compute[259850]: 2025-10-11 03:56:45.464 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:56:46 compute-0 ceph-mon[74273]: pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:48 compute-0 ceph-mon[74273]: pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:56:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:56:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:56:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:56:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:56:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:56:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:56:50 compute-0 ceph-mon[74273]: pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:52 compute-0 ceph-mon[74273]: pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:56:54 compute-0 sudo[260343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:56:54 compute-0 sudo[260343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:56:54 compute-0 sudo[260343]: pam_unix(sudo:session): session closed for user root
Oct 11 03:56:54 compute-0 sudo[260374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:56:54 compute-0 sudo[260374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:56:54 compute-0 sudo[260374]: pam_unix(sudo:session): session closed for user root
Oct 11 03:56:54 compute-0 sudo[260418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:56:54 compute-0 sudo[260418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:56:54 compute-0 sudo[260418]: pam_unix(sudo:session): session closed for user root
Oct 11 03:56:54 compute-0 podman[260367]: 2025-10-11 03:56:54.861797058 +0000 UTC m=+0.163914714 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 11 03:56:54 compute-0 ceph-mon[74273]: pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:54 compute-0 sudo[260445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 03:56:54 compute-0 sudo[260445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:56:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:55 compute-0 sudo[260445]: pam_unix(sudo:session): session closed for user root
Oct 11 03:56:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:56:55 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:56:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 03:56:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:56:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 03:56:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:56:55 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev ca43e96b-34f9-40bd-84f4-0d89558da3b6 does not exist
Oct 11 03:56:55 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 6763f0c6-1acc-42ac-8460-7275b1717e91 does not exist
Oct 11 03:56:55 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 48b981ff-6616-4ca7-b2ff-5e6a5bdfcf6f does not exist
Oct 11 03:56:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 03:56:55 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:56:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 03:56:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:56:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:56:55 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:56:55 compute-0 sudo[260502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:56:55 compute-0 sudo[260502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:56:55 compute-0 sudo[260502]: pam_unix(sudo:session): session closed for user root
Oct 11 03:56:55 compute-0 sudo[260527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:56:55 compute-0 sudo[260527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:56:55 compute-0 sudo[260527]: pam_unix(sudo:session): session closed for user root
Oct 11 03:56:55 compute-0 sudo[260552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:56:55 compute-0 sudo[260552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:56:55 compute-0 sudo[260552]: pam_unix(sudo:session): session closed for user root
Oct 11 03:56:55 compute-0 sudo[260577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 03:56:55 compute-0 sudo[260577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:56:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:56:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:56:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:56:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:56:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:56:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:56:56 compute-0 podman[260641]: 2025-10-11 03:56:56.02378098 +0000 UTC m=+0.046328302 container create a585d2147bffe71f3652dc66abd5f6fe260013aac60ba2274ff9ad78fcddbe71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:56:56 compute-0 systemd[1]: Started libpod-conmon-a585d2147bffe71f3652dc66abd5f6fe260013aac60ba2274ff9ad78fcddbe71.scope.
Oct 11 03:56:56 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:56:56 compute-0 podman[260641]: 2025-10-11 03:56:56.006373581 +0000 UTC m=+0.028920933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:56:56 compute-0 podman[260641]: 2025-10-11 03:56:56.114066596 +0000 UTC m=+0.136613968 container init a585d2147bffe71f3652dc66abd5f6fe260013aac60ba2274ff9ad78fcddbe71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 03:56:56 compute-0 podman[260641]: 2025-10-11 03:56:56.12027381 +0000 UTC m=+0.142821152 container start a585d2147bffe71f3652dc66abd5f6fe260013aac60ba2274ff9ad78fcddbe71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_neumann, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 03:56:56 compute-0 podman[260641]: 2025-10-11 03:56:56.123394038 +0000 UTC m=+0.145941370 container attach a585d2147bffe71f3652dc66abd5f6fe260013aac60ba2274ff9ad78fcddbe71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_neumann, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 03:56:56 compute-0 stupefied_neumann[260658]: 167 167
Oct 11 03:56:56 compute-0 systemd[1]: libpod-a585d2147bffe71f3652dc66abd5f6fe260013aac60ba2274ff9ad78fcddbe71.scope: Deactivated successfully.
Oct 11 03:56:56 compute-0 podman[260641]: 2025-10-11 03:56:56.12844656 +0000 UTC m=+0.150993932 container died a585d2147bffe71f3652dc66abd5f6fe260013aac60ba2274ff9ad78fcddbe71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:56:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-74d7f954729df7a52eecf1dddc256f787d41136e3fb0c4fe926cc9b2f02eac64-merged.mount: Deactivated successfully.
Oct 11 03:56:56 compute-0 podman[260641]: 2025-10-11 03:56:56.186909031 +0000 UTC m=+0.209456383 container remove a585d2147bffe71f3652dc66abd5f6fe260013aac60ba2274ff9ad78fcddbe71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_neumann, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:56:56 compute-0 systemd[1]: libpod-conmon-a585d2147bffe71f3652dc66abd5f6fe260013aac60ba2274ff9ad78fcddbe71.scope: Deactivated successfully.
Oct 11 03:56:56 compute-0 podman[260681]: 2025-10-11 03:56:56.408473573 +0000 UTC m=+0.066230001 container create 793dc647dee55d55463ae67f2d8532403f65efc7d6531109fe0f6f7054c041bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:56:56 compute-0 systemd[1]: Started libpod-conmon-793dc647dee55d55463ae67f2d8532403f65efc7d6531109fe0f6f7054c041bc.scope.
Oct 11 03:56:56 compute-0 podman[260681]: 2025-10-11 03:56:56.379749206 +0000 UTC m=+0.037505694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:56:56 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:56:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52cca2ea6b152ca7f333ea8d377706089dbfe177b1fc13d2279dc80928c4ef7c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:56:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52cca2ea6b152ca7f333ea8d377706089dbfe177b1fc13d2279dc80928c4ef7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:56:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52cca2ea6b152ca7f333ea8d377706089dbfe177b1fc13d2279dc80928c4ef7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:56:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52cca2ea6b152ca7f333ea8d377706089dbfe177b1fc13d2279dc80928c4ef7c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:56:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52cca2ea6b152ca7f333ea8d377706089dbfe177b1fc13d2279dc80928c4ef7c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:56:56 compute-0 podman[260681]: 2025-10-11 03:56:56.534397409 +0000 UTC m=+0.192153957 container init 793dc647dee55d55463ae67f2d8532403f65efc7d6531109fe0f6f7054c041bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shaw, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:56:56 compute-0 podman[260681]: 2025-10-11 03:56:56.549270467 +0000 UTC m=+0.207026905 container start 793dc647dee55d55463ae67f2d8532403f65efc7d6531109fe0f6f7054c041bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Oct 11 03:56:56 compute-0 podman[260681]: 2025-10-11 03:56:56.553734462 +0000 UTC m=+0.211490870 container attach 793dc647dee55d55463ae67f2d8532403f65efc7d6531109fe0f6f7054c041bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shaw, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:56:56 compute-0 ceph-mon[74273]: pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:57 compute-0 intelligent_shaw[260698]: --> passed data devices: 0 physical, 3 LVM
Oct 11 03:56:57 compute-0 intelligent_shaw[260698]: --> relative data size: 1.0
Oct 11 03:56:57 compute-0 intelligent_shaw[260698]: --> All data devices are unavailable
Oct 11 03:56:57 compute-0 systemd[1]: libpod-793dc647dee55d55463ae67f2d8532403f65efc7d6531109fe0f6f7054c041bc.scope: Deactivated successfully.
Oct 11 03:56:57 compute-0 systemd[1]: libpod-793dc647dee55d55463ae67f2d8532403f65efc7d6531109fe0f6f7054c041bc.scope: Consumed 1.117s CPU time.
Oct 11 03:56:57 compute-0 podman[260681]: 2025-10-11 03:56:57.715062916 +0000 UTC m=+1.372819354 container died 793dc647dee55d55463ae67f2d8532403f65efc7d6531109fe0f6f7054c041bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 03:56:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-52cca2ea6b152ca7f333ea8d377706089dbfe177b1fc13d2279dc80928c4ef7c-merged.mount: Deactivated successfully.
Oct 11 03:56:57 compute-0 podman[260681]: 2025-10-11 03:56:57.782752737 +0000 UTC m=+1.440509145 container remove 793dc647dee55d55463ae67f2d8532403f65efc7d6531109fe0f6f7054c041bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shaw, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 03:56:57 compute-0 systemd[1]: libpod-conmon-793dc647dee55d55463ae67f2d8532403f65efc7d6531109fe0f6f7054c041bc.scope: Deactivated successfully.
Oct 11 03:56:57 compute-0 sudo[260577]: pam_unix(sudo:session): session closed for user root
Oct 11 03:56:57 compute-0 sudo[260739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:56:57 compute-0 sudo[260739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:56:57 compute-0 sudo[260739]: pam_unix(sudo:session): session closed for user root
Oct 11 03:56:57 compute-0 sudo[260764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:56:57 compute-0 sudo[260764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:56:57 compute-0 sudo[260764]: pam_unix(sudo:session): session closed for user root
Oct 11 03:56:58 compute-0 sudo[260789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:56:58 compute-0 sudo[260789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:56:58 compute-0 sudo[260789]: pam_unix(sudo:session): session closed for user root
Oct 11 03:56:58 compute-0 sudo[260814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 03:56:58 compute-0 sudo[260814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:56:58 compute-0 podman[260882]: 2025-10-11 03:56:58.478494545 +0000 UTC m=+0.046974170 container create 0541962df2375216d647c8096220903b2974f7ecb9002c065b8c3d97cb29a0ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 03:56:58 compute-0 systemd[1]: Started libpod-conmon-0541962df2375216d647c8096220903b2974f7ecb9002c065b8c3d97cb29a0ff.scope.
Oct 11 03:56:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:56:58 compute-0 podman[260882]: 2025-10-11 03:56:58.457973229 +0000 UTC m=+0.026452854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:56:58 compute-0 podman[260882]: 2025-10-11 03:56:58.563294207 +0000 UTC m=+0.131773872 container init 0541962df2375216d647c8096220903b2974f7ecb9002c065b8c3d97cb29a0ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wu, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 03:56:58 compute-0 podman[260882]: 2025-10-11 03:56:58.571333253 +0000 UTC m=+0.139812878 container start 0541962df2375216d647c8096220903b2974f7ecb9002c065b8c3d97cb29a0ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wu, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:56:58 compute-0 podman[260882]: 2025-10-11 03:56:58.574617725 +0000 UTC m=+0.143097390 container attach 0541962df2375216d647c8096220903b2974f7ecb9002c065b8c3d97cb29a0ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wu, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 03:56:58 compute-0 goofy_wu[260898]: 167 167
Oct 11 03:56:58 compute-0 podman[260882]: 2025-10-11 03:56:58.576958371 +0000 UTC m=+0.145438006 container died 0541962df2375216d647c8096220903b2974f7ecb9002c065b8c3d97cb29a0ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wu, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 03:56:58 compute-0 systemd[1]: libpod-0541962df2375216d647c8096220903b2974f7ecb9002c065b8c3d97cb29a0ff.scope: Deactivated successfully.
Oct 11 03:56:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-099011098ce99f81e28c6a665c00245393f82905b304fda45c1a0006111e07d3-merged.mount: Deactivated successfully.
Oct 11 03:56:58 compute-0 podman[260882]: 2025-10-11 03:56:58.613589879 +0000 UTC m=+0.182069524 container remove 0541962df2375216d647c8096220903b2974f7ecb9002c065b8c3d97cb29a0ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 03:56:58 compute-0 systemd[1]: libpod-conmon-0541962df2375216d647c8096220903b2974f7ecb9002c065b8c3d97cb29a0ff.scope: Deactivated successfully.
Oct 11 03:56:58 compute-0 podman[260923]: 2025-10-11 03:56:58.803833192 +0000 UTC m=+0.048095402 container create f412bf97e03ccc8de28f44b50768e1bb32fe4e43f1526ce8e91a207043801e91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 03:56:58 compute-0 systemd[1]: Started libpod-conmon-f412bf97e03ccc8de28f44b50768e1bb32fe4e43f1526ce8e91a207043801e91.scope.
Oct 11 03:56:58 compute-0 podman[260923]: 2025-10-11 03:56:58.784511169 +0000 UTC m=+0.028773419 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:56:58 compute-0 ceph-mon[74273]: pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:56:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594d6c5b691c9aecbcb7b34fa165dc9d0514caa4993ca26a3c5b9feee2d91952/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:56:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594d6c5b691c9aecbcb7b34fa165dc9d0514caa4993ca26a3c5b9feee2d91952/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:56:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594d6c5b691c9aecbcb7b34fa165dc9d0514caa4993ca26a3c5b9feee2d91952/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:56:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594d6c5b691c9aecbcb7b34fa165dc9d0514caa4993ca26a3c5b9feee2d91952/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:56:58 compute-0 podman[260923]: 2025-10-11 03:56:58.914590292 +0000 UTC m=+0.158852602 container init f412bf97e03ccc8de28f44b50768e1bb32fe4e43f1526ce8e91a207043801e91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 03:56:58 compute-0 podman[260923]: 2025-10-11 03:56:58.92946011 +0000 UTC m=+0.173722360 container start f412bf97e03ccc8de28f44b50768e1bb32fe4e43f1526ce8e91a207043801e91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ganguly, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:56:58 compute-0 podman[260923]: 2025-10-11 03:56:58.933669968 +0000 UTC m=+0.177932228 container attach f412bf97e03ccc8de28f44b50768e1bb32fe4e43f1526ce8e91a207043801e91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:56:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:56:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:56:59 compute-0 funny_ganguly[260939]: {
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:     "0": [
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:         {
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "devices": [
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "/dev/loop3"
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             ],
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "lv_name": "ceph_lv0",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "lv_size": "21470642176",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "name": "ceph_lv0",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "tags": {
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.cluster_name": "ceph",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.crush_device_class": "",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.encrypted": "0",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.osd_id": "0",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.type": "block",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.vdo": "0"
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             },
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "type": "block",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "vg_name": "ceph_vg0"
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:         }
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:     ],
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:     "1": [
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:         {
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "devices": [
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "/dev/loop4"
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             ],
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "lv_name": "ceph_lv1",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "lv_size": "21470642176",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "name": "ceph_lv1",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "tags": {
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.cluster_name": "ceph",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.crush_device_class": "",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.encrypted": "0",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.osd_id": "1",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.type": "block",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.vdo": "0"
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             },
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "type": "block",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "vg_name": "ceph_vg1"
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:         }
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:     ],
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:     "2": [
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:         {
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "devices": [
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "/dev/loop5"
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             ],
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "lv_name": "ceph_lv2",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "lv_size": "21470642176",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "name": "ceph_lv2",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "tags": {
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.cluster_name": "ceph",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.crush_device_class": "",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.encrypted": "0",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.osd_id": "2",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.type": "block",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:                 "ceph.vdo": "0"
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             },
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "type": "block",
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:             "vg_name": "ceph_vg2"
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:         }
Oct 11 03:56:59 compute-0 funny_ganguly[260939]:     ]
Oct 11 03:56:59 compute-0 funny_ganguly[260939]: }
Oct 11 03:56:59 compute-0 systemd[1]: libpod-f412bf97e03ccc8de28f44b50768e1bb32fe4e43f1526ce8e91a207043801e91.scope: Deactivated successfully.
Oct 11 03:56:59 compute-0 podman[260923]: 2025-10-11 03:56:59.66371247 +0000 UTC m=+0.907974690 container died f412bf97e03ccc8de28f44b50768e1bb32fe4e43f1526ce8e91a207043801e91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:56:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-594d6c5b691c9aecbcb7b34fa165dc9d0514caa4993ca26a3c5b9feee2d91952-merged.mount: Deactivated successfully.
Oct 11 03:56:59 compute-0 podman[260923]: 2025-10-11 03:56:59.72428684 +0000 UTC m=+0.968549070 container remove f412bf97e03ccc8de28f44b50768e1bb32fe4e43f1526ce8e91a207043801e91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 03:56:59 compute-0 systemd[1]: libpod-conmon-f412bf97e03ccc8de28f44b50768e1bb32fe4e43f1526ce8e91a207043801e91.scope: Deactivated successfully.
Oct 11 03:56:59 compute-0 sudo[260814]: pam_unix(sudo:session): session closed for user root
Oct 11 03:56:59 compute-0 podman[260949]: 2025-10-11 03:56:59.763925143 +0000 UTC m=+0.066018594 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 03:56:59 compute-0 sudo[260976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:56:59 compute-0 sudo[260976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:56:59 compute-0 sudo[260976]: pam_unix(sudo:session): session closed for user root
Oct 11 03:56:59 compute-0 sudo[261001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:56:59 compute-0 sudo[261001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:56:59 compute-0 sudo[261001]: pam_unix(sudo:session): session closed for user root
Oct 11 03:56:59 compute-0 sudo[261026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:56:59 compute-0 sudo[261026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:56:59 compute-0 sudo[261026]: pam_unix(sudo:session): session closed for user root
Oct 11 03:57:00 compute-0 sudo[261051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 03:57:00 compute-0 sudo[261051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:57:00 compute-0 podman[261118]: 2025-10-11 03:57:00.403233817 +0000 UTC m=+0.042058502 container create f69b594995dc21f0809d5c06658f247f467e871c1a0f1f60ede9dc9fd27f42c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Oct 11 03:57:00 compute-0 systemd[1]: Started libpod-conmon-f69b594995dc21f0809d5c06658f247f467e871c1a0f1f60ede9dc9fd27f42c3.scope.
Oct 11 03:57:00 compute-0 podman[261118]: 2025-10-11 03:57:00.385235742 +0000 UTC m=+0.024060447 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:57:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:57:00 compute-0 podman[261118]: 2025-10-11 03:57:00.509632335 +0000 UTC m=+0.148457090 container init f69b594995dc21f0809d5c06658f247f467e871c1a0f1f60ede9dc9fd27f42c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_heisenberg, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 03:57:00 compute-0 podman[261118]: 2025-10-11 03:57:00.517825895 +0000 UTC m=+0.156650610 container start f69b594995dc21f0809d5c06658f247f467e871c1a0f1f60ede9dc9fd27f42c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 03:57:00 compute-0 podman[261118]: 2025-10-11 03:57:00.52191634 +0000 UTC m=+0.160741045 container attach f69b594995dc21f0809d5c06658f247f467e871c1a0f1f60ede9dc9fd27f42c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_heisenberg, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:57:00 compute-0 great_heisenberg[261134]: 167 167
Oct 11 03:57:00 compute-0 systemd[1]: libpod-f69b594995dc21f0809d5c06658f247f467e871c1a0f1f60ede9dc9fd27f42c3.scope: Deactivated successfully.
Oct 11 03:57:00 compute-0 podman[261118]: 2025-10-11 03:57:00.528209997 +0000 UTC m=+0.167034682 container died f69b594995dc21f0809d5c06658f247f467e871c1a0f1f60ede9dc9fd27f42c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_heisenberg, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:57:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-02e94fc88cd6c33b8595eda5847f8aca7d3f6b4428f6f17a9e13d3a4892b70e6-merged.mount: Deactivated successfully.
Oct 11 03:57:00 compute-0 podman[261118]: 2025-10-11 03:57:00.577312536 +0000 UTC m=+0.216137251 container remove f69b594995dc21f0809d5c06658f247f467e871c1a0f1f60ede9dc9fd27f42c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_heisenberg, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:57:00 compute-0 systemd[1]: libpod-conmon-f69b594995dc21f0809d5c06658f247f467e871c1a0f1f60ede9dc9fd27f42c3.scope: Deactivated successfully.
Oct 11 03:57:00 compute-0 podman[261157]: 2025-10-11 03:57:00.830232149 +0000 UTC m=+0.063855235 container create 8450db808235d468e45aaaab631057e2ca5e39f5d36fb9031d4b4f228b6f2b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_curie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:57:00 compute-0 systemd[1]: Started libpod-conmon-8450db808235d468e45aaaab631057e2ca5e39f5d36fb9031d4b4f228b6f2b02.scope.
Oct 11 03:57:00 compute-0 podman[261157]: 2025-10-11 03:57:00.804793374 +0000 UTC m=+0.038416510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:57:00 compute-0 ceph-mon[74273]: pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:57:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f9a71ecde29fd039a3bd0a93363b8f8aca67ca94c3a42e442c18f974cf08410/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:57:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f9a71ecde29fd039a3bd0a93363b8f8aca67ca94c3a42e442c18f974cf08410/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:57:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f9a71ecde29fd039a3bd0a93363b8f8aca67ca94c3a42e442c18f974cf08410/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:57:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f9a71ecde29fd039a3bd0a93363b8f8aca67ca94c3a42e442c18f974cf08410/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:57:00 compute-0 podman[261157]: 2025-10-11 03:57:00.926986226 +0000 UTC m=+0.160609302 container init 8450db808235d468e45aaaab631057e2ca5e39f5d36fb9031d4b4f228b6f2b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:57:00 compute-0 podman[261157]: 2025-10-11 03:57:00.939325652 +0000 UTC m=+0.172948738 container start 8450db808235d468e45aaaab631057e2ca5e39f5d36fb9031d4b4f228b6f2b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_curie, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 03:57:00 compute-0 podman[261157]: 2025-10-11 03:57:00.943456968 +0000 UTC m=+0.177080124 container attach 8450db808235d468e45aaaab631057e2ca5e39f5d36fb9031d4b4f228b6f2b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:57:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:01 compute-0 agitated_curie[261173]: {
Oct 11 03:57:01 compute-0 agitated_curie[261173]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 03:57:01 compute-0 agitated_curie[261173]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:57:01 compute-0 agitated_curie[261173]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 03:57:01 compute-0 agitated_curie[261173]:         "osd_id": 1,
Oct 11 03:57:01 compute-0 agitated_curie[261173]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:57:01 compute-0 agitated_curie[261173]:         "type": "bluestore"
Oct 11 03:57:01 compute-0 agitated_curie[261173]:     },
Oct 11 03:57:01 compute-0 agitated_curie[261173]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 03:57:01 compute-0 agitated_curie[261173]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:57:01 compute-0 agitated_curie[261173]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 03:57:01 compute-0 agitated_curie[261173]:         "osd_id": 2,
Oct 11 03:57:01 compute-0 agitated_curie[261173]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:57:01 compute-0 agitated_curie[261173]:         "type": "bluestore"
Oct 11 03:57:01 compute-0 agitated_curie[261173]:     },
Oct 11 03:57:01 compute-0 agitated_curie[261173]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 03:57:01 compute-0 agitated_curie[261173]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:57:01 compute-0 agitated_curie[261173]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 03:57:01 compute-0 agitated_curie[261173]:         "osd_id": 0,
Oct 11 03:57:01 compute-0 agitated_curie[261173]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:57:01 compute-0 agitated_curie[261173]:         "type": "bluestore"
Oct 11 03:57:01 compute-0 agitated_curie[261173]:     }
Oct 11 03:57:01 compute-0 agitated_curie[261173]: }
Oct 11 03:57:01 compute-0 systemd[1]: libpod-8450db808235d468e45aaaab631057e2ca5e39f5d36fb9031d4b4f228b6f2b02.scope: Deactivated successfully.
Oct 11 03:57:01 compute-0 podman[261157]: 2025-10-11 03:57:01.990955745 +0000 UTC m=+1.224578801 container died 8450db808235d468e45aaaab631057e2ca5e39f5d36fb9031d4b4f228b6f2b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 03:57:01 compute-0 systemd[1]: libpod-8450db808235d468e45aaaab631057e2ca5e39f5d36fb9031d4b4f228b6f2b02.scope: Consumed 1.060s CPU time.
Oct 11 03:57:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f9a71ecde29fd039a3bd0a93363b8f8aca67ca94c3a42e442c18f974cf08410-merged.mount: Deactivated successfully.
Oct 11 03:57:02 compute-0 podman[261157]: 2025-10-11 03:57:02.056663021 +0000 UTC m=+1.290286077 container remove 8450db808235d468e45aaaab631057e2ca5e39f5d36fb9031d4b4f228b6f2b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_curie, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:57:02 compute-0 systemd[1]: libpod-conmon-8450db808235d468e45aaaab631057e2ca5e39f5d36fb9031d4b4f228b6f2b02.scope: Deactivated successfully.
Oct 11 03:57:02 compute-0 sudo[261051]: pam_unix(sudo:session): session closed for user root
Oct 11 03:57:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:57:02 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:57:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:57:02 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:57:02 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev a6c567e4-635f-43e8-9c7d-236599ab126d does not exist
Oct 11 03:57:02 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 629a0973-9170-4e38-8e26-351f05c9dce3 does not exist
Oct 11 03:57:02 compute-0 sudo[261220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:57:02 compute-0 sudo[261220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:57:02 compute-0 sudo[261220]: pam_unix(sudo:session): session closed for user root
Oct 11 03:57:02 compute-0 sudo[261245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 03:57:02 compute-0 sudo[261245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:57:02 compute-0 sudo[261245]: pam_unix(sudo:session): session closed for user root
Oct 11 03:57:02 compute-0 ceph-mon[74273]: pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:57:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:57:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:57:04 compute-0 ceph-mon[74273]: pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:06 compute-0 ceph-mon[74273]: pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:08 compute-0 ceph-mon[74273]: pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:09 compute-0 nova_compute[259850]: 2025-10-11 03:57:09.061 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:57:09 compute-0 nova_compute[259850]: 2025-10-11 03:57:09.063 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:57:09 compute-0 nova_compute[259850]: 2025-10-11 03:57:09.063 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 03:57:09 compute-0 nova_compute[259850]: 2025-10-11 03:57:09.063 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 03:57:09 compute-0 nova_compute[259850]: 2025-10-11 03:57:09.107 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 03:57:09 compute-0 nova_compute[259850]: 2025-10-11 03:57:09.107 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:57:09 compute-0 nova_compute[259850]: 2025-10-11 03:57:09.108 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:57:09 compute-0 nova_compute[259850]: 2025-10-11 03:57:09.108 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:57:09 compute-0 nova_compute[259850]: 2025-10-11 03:57:09.108 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:57:09 compute-0 nova_compute[259850]: 2025-10-11 03:57:09.108 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:57:09 compute-0 nova_compute[259850]: 2025-10-11 03:57:09.108 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:57:09 compute-0 nova_compute[259850]: 2025-10-11 03:57:09.109 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 03:57:09 compute-0 nova_compute[259850]: 2025-10-11 03:57:09.109 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:57:09 compute-0 nova_compute[259850]: 2025-10-11 03:57:09.148 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 03:57:09 compute-0 nova_compute[259850]: 2025-10-11 03:57:09.149 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 03:57:09 compute-0 nova_compute[259850]: 2025-10-11 03:57:09.149 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 03:57:09 compute-0 nova_compute[259850]: 2025-10-11 03:57:09.149 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 03:57:09 compute-0 nova_compute[259850]: 2025-10-11 03:57:09.150 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 03:57:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:57:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 03:57:09 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3587846884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 03:57:09 compute-0 nova_compute[259850]: 2025-10-11 03:57:09.615 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 03:57:09 compute-0 nova_compute[259850]: 2025-10-11 03:57:09.769 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 03:57:09 compute-0 nova_compute[259850]: 2025-10-11 03:57:09.771 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5179MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 03:57:09 compute-0 nova_compute[259850]: 2025-10-11 03:57:09.771 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 03:57:09 compute-0 nova_compute[259850]: 2025-10-11 03:57:09.771 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 03:57:09 compute-0 nova_compute[259850]: 2025-10-11 03:57:09.868 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 03:57:09 compute-0 nova_compute[259850]: 2025-10-11 03:57:09.868 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 03:57:09 compute-0 nova_compute[259850]: 2025-10-11 03:57:09.899 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 03:57:09 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3587846884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 03:57:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 03:57:10 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1556400347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 03:57:10 compute-0 nova_compute[259850]: 2025-10-11 03:57:10.351 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 03:57:10 compute-0 nova_compute[259850]: 2025-10-11 03:57:10.359 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 03:57:10 compute-0 nova_compute[259850]: 2025-10-11 03:57:10.385 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 03:57:10 compute-0 nova_compute[259850]: 2025-10-11 03:57:10.387 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 03:57:10 compute-0 nova_compute[259850]: 2025-10-11 03:57:10.387 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 03:57:10 compute-0 ceph-mon[74273]: pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:10 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1556400347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 03:57:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:11 compute-0 ceph-mon[74273]: pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:14 compute-0 ceph-mon[74273]: pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:14 compute-0 podman[261314]: 2025-10-11 03:57:14.4155015 +0000 UTC m=+0.102249502 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 11 03:57:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:57:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:16 compute-0 ceph-mon[74273]: pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:16 compute-0 podman[261334]: 2025-10-11 03:57:16.356893081 +0000 UTC m=+0.066211260 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 03:57:16 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 03:57:16 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5776 writes, 24K keys, 5776 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5776 writes, 967 syncs, 5.97 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5651f29fd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 03:57:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:18 compute-0 ceph-mon[74273]: pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Oct 11 03:57:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2717351226' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 11 03:57:19 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14347 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 11 03:57:19 compute-0 ceph-mgr[74563]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 11 03:57:19 compute-0 ceph-mgr[74563]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 11 03:57:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:19 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2717351226' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 11 03:57:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:57:20 compute-0 ceph-mon[74273]: from='client.14347 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 11 03:57:20 compute-0 ceph-mon[74273]: pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_03:57:20
Oct 11 03:57:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 03:57:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 03:57:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'default.rgw.log', 'images', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'volumes', 'backups']
Oct 11 03:57:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 03:57:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:57:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:57:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:57:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:57:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:57:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:57:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 03:57:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:57:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 03:57:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:57:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:57:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:57:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:57:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:57:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:57:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:57:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:21 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 03:57:21 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 6993 writes, 29K keys, 6993 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6993 writes, 1279 syncs, 5.47 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 278 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a3531f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a3531f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a3531f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a3531f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a3531f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a3531f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a3531f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a353090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a353090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a353090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a3531f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56493a3531f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 03:57:22 compute-0 ceph-mon[74273]: pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:57:22.945 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 03:57:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:57:22.946 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 03:57:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:57:22.946 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 03:57:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:24 compute-0 ceph-mon[74273]: pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:57:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:25 compute-0 podman[261353]: 2025-10-11 03:57:25.393144295 +0000 UTC m=+0.108184369 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 11 03:57:26 compute-0 ceph-mon[74273]: pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:27 compute-0 ceph-osd[89722]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 03:57:27 compute-0 ceph-osd[89722]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5665 writes, 23K keys, 5665 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5665 writes, 887 syncs, 6.39 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041ccdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041ccdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041ccdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041ccdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041ccdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041ccdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041ccdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041cc430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041cc430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041cc430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041ccdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f1041ccdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 03:57:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:27 compute-0 ceph-mgr[74563]: [devicehealth INFO root] Check health
Oct 11 03:57:28 compute-0 ceph-mon[74273]: pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:57:30 compute-0 podman[261379]: 2025-10-11 03:57:30.366963683 +0000 UTC m=+0.080393758 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct 11 03:57:30 compute-0 ceph-mon[74273]: pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 03:57:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:57:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 03:57:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:57:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:57:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:57:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:57:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:57:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:57:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:57:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:57:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:57:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 03:57:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:57:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:57:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:57:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 03:57:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:57:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 03:57:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:57:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:57:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:57:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 03:57:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:32 compute-0 ceph-mon[74273]: pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:34 compute-0 ceph-mon[74273]: pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:57:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:36 compute-0 ceph-mon[74273]: pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:38 compute-0 ceph-mon[74273]: pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:57:40 compute-0 ceph-mon[74273]: pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:42 compute-0 ceph-mon[74273]: pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Oct 11 03:57:42 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1696741228' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 11 03:57:42 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14353 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 11 03:57:42 compute-0 ceph-mgr[74563]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 11 03:57:42 compute-0 ceph-mgr[74563]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 11 03:57:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:43 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1696741228' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 11 03:57:43 compute-0 ceph-mon[74273]: from='client.14353 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 11 03:57:44 compute-0 ceph-mon[74273]: pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:57:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:45 compute-0 podman[261400]: 2025-10-11 03:57:45.392568744 +0000 UTC m=+0.085026168 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:57:45.494085) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155065494200, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1549, "num_deletes": 251, "total_data_size": 2466824, "memory_usage": 2497904, "flush_reason": "Manual Compaction"}
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155065513509, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2432384, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14735, "largest_seqno": 16283, "table_properties": {"data_size": 2425224, "index_size": 4231, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14580, "raw_average_key_size": 19, "raw_value_size": 2410871, "raw_average_value_size": 3253, "num_data_blocks": 193, "num_entries": 741, "num_filter_entries": 741, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760154899, "oldest_key_time": 1760154899, "file_creation_time": 1760155065, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 19512 microseconds, and 11378 cpu microseconds.
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:57:45.513594) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2432384 bytes OK
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:57:45.513629) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:57:45.515346) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:57:45.515376) EVENT_LOG_v1 {"time_micros": 1760155065515365, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:57:45.515403) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2460098, prev total WAL file size 2460098, number of live WAL files 2.
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:57:45.516913) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2375KB)], [35(6800KB)]
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155065516964, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9396585, "oldest_snapshot_seqno": -1}
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 3973 keys, 7646198 bytes, temperature: kUnknown
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155065561664, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7646198, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7617451, "index_size": 17693, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 97025, "raw_average_key_size": 24, "raw_value_size": 7543352, "raw_average_value_size": 1898, "num_data_blocks": 748, "num_entries": 3973, "num_filter_entries": 3973, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153731, "oldest_key_time": 0, "file_creation_time": 1760155065, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:57:45.561987) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7646198 bytes
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:57:45.563226) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 209.6 rd, 170.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 6.6 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(7.0) write-amplify(3.1) OK, records in: 4487, records dropped: 514 output_compression: NoCompression
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:57:45.563246) EVENT_LOG_v1 {"time_micros": 1760155065563234, "job": 16, "event": "compaction_finished", "compaction_time_micros": 44827, "compaction_time_cpu_micros": 32018, "output_level": 6, "num_output_files": 1, "total_output_size": 7646198, "num_input_records": 4487, "num_output_records": 3973, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155065563702, "job": 16, "event": "table_file_deletion", "file_number": 37}
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155065564821, "job": 16, "event": "table_file_deletion", "file_number": 35}
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:57:45.516797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:57:45.564947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:57:45.564954) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:57:45.564957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:57:45.564960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:57:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-03:57:45.564962) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 03:57:46 compute-0 ceph-mon[74273]: pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:47 compute-0 podman[261420]: 2025-10-11 03:57:47.392706749 +0000 UTC m=+0.088508294 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 11 03:57:48 compute-0 ceph-mon[74273]: pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:57:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 03:57:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3764145870' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 03:57:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 03:57:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3764145870' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 03:57:50 compute-0 ceph-mon[74273]: pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3764145870' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 03:57:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3764145870' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 03:57:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:57:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:57:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:57:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:57:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:57:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:57:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:52 compute-0 ceph-mon[74273]: pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:54 compute-0 ceph-mon[74273]: pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:57:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:56 compute-0 podman[261440]: 2025-10-11 03:57:56.439511263 +0000 UTC m=+0.137879274 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 03:57:56 compute-0 ceph-mon[74273]: pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:58 compute-0 ceph-mon[74273]: pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:57:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:58:00 compute-0 ceph-mon[74273]: pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:01 compute-0 podman[261467]: 2025-10-11 03:58:01.369516117 +0000 UTC m=+0.074772521 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 11 03:58:02 compute-0 sudo[261488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:58:02 compute-0 sudo[261488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:58:02 compute-0 sudo[261488]: pam_unix(sudo:session): session closed for user root
Oct 11 03:58:02 compute-0 sudo[261513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:58:02 compute-0 sudo[261513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:58:02 compute-0 sudo[261513]: pam_unix(sudo:session): session closed for user root
Oct 11 03:58:02 compute-0 sudo[261538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:58:02 compute-0 sudo[261538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:58:02 compute-0 sudo[261538]: pam_unix(sudo:session): session closed for user root
Oct 11 03:58:02 compute-0 ceph-mon[74273]: pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:02 compute-0 sudo[261563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 11 03:58:02 compute-0 sudo[261563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:58:02 compute-0 sudo[261563]: pam_unix(sudo:session): session closed for user root
Oct 11 03:58:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:58:03 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:58:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:58:03 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:58:03 compute-0 sudo[261608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:58:03 compute-0 sudo[261608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:58:03 compute-0 sudo[261608]: pam_unix(sudo:session): session closed for user root
Oct 11 03:58:03 compute-0 sudo[261633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:58:03 compute-0 sudo[261633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:58:03 compute-0 sudo[261633]: pam_unix(sudo:session): session closed for user root
Oct 11 03:58:03 compute-0 sudo[261658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:58:03 compute-0 sudo[261658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:58:03 compute-0 sudo[261658]: pam_unix(sudo:session): session closed for user root
Oct 11 03:58:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:03 compute-0 sudo[261683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 03:58:03 compute-0 sudo[261683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:58:03 compute-0 sudo[261683]: pam_unix(sudo:session): session closed for user root
Oct 11 03:58:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 03:58:04 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 03:58:04 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:58:04 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:58:04 compute-0 ceph-mon[74273]: pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:58:04 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:58:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 03:58:04 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:58:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 03:58:04 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:58:04 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev c32dcd68-cdd8-4d56-bf75-bcd09d6d4ca9 does not exist
Oct 11 03:58:04 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 7b74e8f2-0a0b-4960-95d5-7ad3bc39737f does not exist
Oct 11 03:58:04 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 6f873b38-5759-48d9-875b-5bf970777927 does not exist
Oct 11 03:58:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 03:58:04 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:58:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 03:58:04 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:58:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:58:04 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:58:04 compute-0 sudo[261740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:58:04 compute-0 sudo[261740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:58:04 compute-0 sudo[261740]: pam_unix(sudo:session): session closed for user root
Oct 11 03:58:04 compute-0 sudo[261765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:58:04 compute-0 sudo[261765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:58:04 compute-0 sudo[261765]: pam_unix(sudo:session): session closed for user root
Oct 11 03:58:04 compute-0 sudo[261790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:58:04 compute-0 sudo[261790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:58:04 compute-0 sudo[261790]: pam_unix(sudo:session): session closed for user root
Oct 11 03:58:04 compute-0 sudo[261815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 03:58:04 compute-0 sudo[261815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:58:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:58:04 compute-0 podman[261879]: 2025-10-11 03:58:04.875031253 +0000 UTC m=+0.067413205 container create 6e7d997bc9f16fafd8fe22dbd7f8f0f2684d157f7daa144e64b36d7bc16e26db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_shaw, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:58:04 compute-0 systemd[1]: Started libpod-conmon-6e7d997bc9f16fafd8fe22dbd7f8f0f2684d157f7daa144e64b36d7bc16e26db.scope.
Oct 11 03:58:04 compute-0 podman[261879]: 2025-10-11 03:58:04.84809925 +0000 UTC m=+0.040481252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:58:04 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:58:04 compute-0 podman[261879]: 2025-10-11 03:58:04.975975124 +0000 UTC m=+0.168357136 container init 6e7d997bc9f16fafd8fe22dbd7f8f0f2684d157f7daa144e64b36d7bc16e26db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_shaw, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 03:58:04 compute-0 podman[261879]: 2025-10-11 03:58:04.986635942 +0000 UTC m=+0.179017894 container start 6e7d997bc9f16fafd8fe22dbd7f8f0f2684d157f7daa144e64b36d7bc16e26db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_shaw, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:58:04 compute-0 podman[261879]: 2025-10-11 03:58:04.990477699 +0000 UTC m=+0.182859691 container attach 6e7d997bc9f16fafd8fe22dbd7f8f0f2684d157f7daa144e64b36d7bc16e26db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:58:04 compute-0 jolly_shaw[261896]: 167 167
Oct 11 03:58:04 compute-0 systemd[1]: libpod-6e7d997bc9f16fafd8fe22dbd7f8f0f2684d157f7daa144e64b36d7bc16e26db.scope: Deactivated successfully.
Oct 11 03:58:04 compute-0 podman[261879]: 2025-10-11 03:58:04.995340875 +0000 UTC m=+0.187722857 container died 6e7d997bc9f16fafd8fe22dbd7f8f0f2684d157f7daa144e64b36d7bc16e26db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 03:58:05 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 03:58:05 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:58:05 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:58:05 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:58:05 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:58:05 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:58:05 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:58:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ecf7da694e9276a8280ff29e5f93d0a9459a80dcd0e14a611bdbe0605935f29-merged.mount: Deactivated successfully.
Oct 11 03:58:05 compute-0 podman[261879]: 2025-10-11 03:58:05.057802941 +0000 UTC m=+0.250184873 container remove 6e7d997bc9f16fafd8fe22dbd7f8f0f2684d157f7daa144e64b36d7bc16e26db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:58:05 compute-0 systemd[1]: libpod-conmon-6e7d997bc9f16fafd8fe22dbd7f8f0f2684d157f7daa144e64b36d7bc16e26db.scope: Deactivated successfully.
Oct 11 03:58:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:05 compute-0 podman[261919]: 2025-10-11 03:58:05.301273545 +0000 UTC m=+0.059689800 container create b3bfe1af50084af2701e5c3cd8bfa34f086adfe25c92b452d392ca8f2246396b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dirac, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 03:58:05 compute-0 systemd[1]: Started libpod-conmon-b3bfe1af50084af2701e5c3cd8bfa34f086adfe25c92b452d392ca8f2246396b.scope.
Oct 11 03:58:05 compute-0 podman[261919]: 2025-10-11 03:58:05.280758211 +0000 UTC m=+0.039174566 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:58:05 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:58:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba0431dc17ab8fabcb6b23c3c086d244b32ef9d0acf2cb0c5d5a40111f88f15e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:58:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba0431dc17ab8fabcb6b23c3c086d244b32ef9d0acf2cb0c5d5a40111f88f15e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:58:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba0431dc17ab8fabcb6b23c3c086d244b32ef9d0acf2cb0c5d5a40111f88f15e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:58:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba0431dc17ab8fabcb6b23c3c086d244b32ef9d0acf2cb0c5d5a40111f88f15e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:58:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba0431dc17ab8fabcb6b23c3c086d244b32ef9d0acf2cb0c5d5a40111f88f15e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:58:05 compute-0 podman[261919]: 2025-10-11 03:58:05.410019474 +0000 UTC m=+0.168435749 container init b3bfe1af50084af2701e5c3cd8bfa34f086adfe25c92b452d392ca8f2246396b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dirac, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:58:05 compute-0 podman[261919]: 2025-10-11 03:58:05.423302275 +0000 UTC m=+0.181718550 container start b3bfe1af50084af2701e5c3cd8bfa34f086adfe25c92b452d392ca8f2246396b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dirac, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:58:05 compute-0 podman[261919]: 2025-10-11 03:58:05.427628516 +0000 UTC m=+0.186044861 container attach b3bfe1af50084af2701e5c3cd8bfa34f086adfe25c92b452d392ca8f2246396b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:58:06 compute-0 ceph-mon[74273]: pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:06 compute-0 sleepy_dirac[261935]: --> passed data devices: 0 physical, 3 LVM
Oct 11 03:58:06 compute-0 sleepy_dirac[261935]: --> relative data size: 1.0
Oct 11 03:58:06 compute-0 sleepy_dirac[261935]: --> All data devices are unavailable
Oct 11 03:58:06 compute-0 systemd[1]: libpod-b3bfe1af50084af2701e5c3cd8bfa34f086adfe25c92b452d392ca8f2246396b.scope: Deactivated successfully.
Oct 11 03:58:06 compute-0 podman[261919]: 2025-10-11 03:58:06.632884969 +0000 UTC m=+1.391301224 container died b3bfe1af50084af2701e5c3cd8bfa34f086adfe25c92b452d392ca8f2246396b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dirac, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:58:06 compute-0 systemd[1]: libpod-b3bfe1af50084af2701e5c3cd8bfa34f086adfe25c92b452d392ca8f2246396b.scope: Consumed 1.168s CPU time.
Oct 11 03:58:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba0431dc17ab8fabcb6b23c3c086d244b32ef9d0acf2cb0c5d5a40111f88f15e-merged.mount: Deactivated successfully.
Oct 11 03:58:06 compute-0 podman[261919]: 2025-10-11 03:58:06.704990464 +0000 UTC m=+1.463406719 container remove b3bfe1af50084af2701e5c3cd8bfa34f086adfe25c92b452d392ca8f2246396b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 03:58:06 compute-0 systemd[1]: libpod-conmon-b3bfe1af50084af2701e5c3cd8bfa34f086adfe25c92b452d392ca8f2246396b.scope: Deactivated successfully.
Oct 11 03:58:06 compute-0 sudo[261815]: pam_unix(sudo:session): session closed for user root
Oct 11 03:58:06 compute-0 sudo[261978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:58:06 compute-0 sudo[261978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:58:06 compute-0 sudo[261978]: pam_unix(sudo:session): session closed for user root
Oct 11 03:58:06 compute-0 sudo[262003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:58:06 compute-0 sudo[262003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:58:06 compute-0 sudo[262003]: pam_unix(sudo:session): session closed for user root
Oct 11 03:58:07 compute-0 sudo[262028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:58:07 compute-0 sudo[262028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:58:07 compute-0 sudo[262028]: pam_unix(sudo:session): session closed for user root
Oct 11 03:58:07 compute-0 sudo[262053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 03:58:07 compute-0 sudo[262053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:58:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:07 compute-0 podman[262118]: 2025-10-11 03:58:07.581819678 +0000 UTC m=+0.063992069 container create ad28098084aca6004703b2dff72e5b224ff1bfa69187064d5fcddffd80d119c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wozniak, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:58:07 compute-0 systemd[1]: Started libpod-conmon-ad28098084aca6004703b2dff72e5b224ff1bfa69187064d5fcddffd80d119c5.scope.
Oct 11 03:58:07 compute-0 podman[262118]: 2025-10-11 03:58:07.555799831 +0000 UTC m=+0.037972272 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:58:07 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:58:07 compute-0 podman[262118]: 2025-10-11 03:58:07.682935444 +0000 UTC m=+0.165107895 container init ad28098084aca6004703b2dff72e5b224ff1bfa69187064d5fcddffd80d119c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:58:07 compute-0 podman[262118]: 2025-10-11 03:58:07.693353015 +0000 UTC m=+0.175525416 container start ad28098084aca6004703b2dff72e5b224ff1bfa69187064d5fcddffd80d119c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:58:07 compute-0 podman[262118]: 2025-10-11 03:58:07.696930485 +0000 UTC m=+0.179102936 container attach ad28098084aca6004703b2dff72e5b224ff1bfa69187064d5fcddffd80d119c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 03:58:07 compute-0 beautiful_wozniak[262135]: 167 167
Oct 11 03:58:07 compute-0 podman[262118]: 2025-10-11 03:58:07.699473986 +0000 UTC m=+0.181646387 container died ad28098084aca6004703b2dff72e5b224ff1bfa69187064d5fcddffd80d119c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 03:58:07 compute-0 systemd[1]: libpod-ad28098084aca6004703b2dff72e5b224ff1bfa69187064d5fcddffd80d119c5.scope: Deactivated successfully.
Oct 11 03:58:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce9e8022bce6d5dda4fbbbe2957dff7be7e76b17ae27141cbf9c93e5fcc3ad20-merged.mount: Deactivated successfully.
Oct 11 03:58:07 compute-0 podman[262118]: 2025-10-11 03:58:07.749888394 +0000 UTC m=+0.232060785 container remove ad28098084aca6004703b2dff72e5b224ff1bfa69187064d5fcddffd80d119c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:58:07 compute-0 systemd[1]: libpod-conmon-ad28098084aca6004703b2dff72e5b224ff1bfa69187064d5fcddffd80d119c5.scope: Deactivated successfully.
Oct 11 03:58:07 compute-0 podman[262159]: 2025-10-11 03:58:07.962454874 +0000 UTC m=+0.063852785 container create 4fc3f7914cc9cdebc238f4e31c1a7ff3bdf06fb3f6d1a1294f61f2e52650496e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 03:58:08 compute-0 systemd[1]: Started libpod-conmon-4fc3f7914cc9cdebc238f4e31c1a7ff3bdf06fb3f6d1a1294f61f2e52650496e.scope.
Oct 11 03:58:08 compute-0 podman[262159]: 2025-10-11 03:58:07.935861631 +0000 UTC m=+0.037259612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:58:08 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:58:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5df0c56aeab5e21df80fa4edebe086db901cb177d89efa0492f87c2cd5a59c76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:58:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5df0c56aeab5e21df80fa4edebe086db901cb177d89efa0492f87c2cd5a59c76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:58:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5df0c56aeab5e21df80fa4edebe086db901cb177d89efa0492f87c2cd5a59c76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:58:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5df0c56aeab5e21df80fa4edebe086db901cb177d89efa0492f87c2cd5a59c76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:58:08 compute-0 podman[262159]: 2025-10-11 03:58:08.071025699 +0000 UTC m=+0.172423650 container init 4fc3f7914cc9cdebc238f4e31c1a7ff3bdf06fb3f6d1a1294f61f2e52650496e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:58:08 compute-0 podman[262159]: 2025-10-11 03:58:08.090141353 +0000 UTC m=+0.191539264 container start 4fc3f7914cc9cdebc238f4e31c1a7ff3bdf06fb3f6d1a1294f61f2e52650496e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:58:08 compute-0 podman[262159]: 2025-10-11 03:58:08.094639999 +0000 UTC m=+0.196037970 container attach 4fc3f7914cc9cdebc238f4e31c1a7ff3bdf06fb3f6d1a1294f61f2e52650496e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 03:58:08 compute-0 ceph-mon[74273]: pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]: {
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:     "0": [
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:         {
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "devices": [
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "/dev/loop3"
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             ],
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "lv_name": "ceph_lv0",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "lv_size": "21470642176",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "name": "ceph_lv0",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "tags": {
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.cluster_name": "ceph",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.crush_device_class": "",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.encrypted": "0",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.osd_id": "0",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.type": "block",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.vdo": "0"
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             },
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "type": "block",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "vg_name": "ceph_vg0"
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:         }
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:     ],
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:     "1": [
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:         {
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "devices": [
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "/dev/loop4"
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             ],
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "lv_name": "ceph_lv1",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "lv_size": "21470642176",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "name": "ceph_lv1",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "tags": {
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.cluster_name": "ceph",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.crush_device_class": "",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.encrypted": "0",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.osd_id": "1",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.type": "block",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.vdo": "0"
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             },
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "type": "block",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "vg_name": "ceph_vg1"
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:         }
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:     ],
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:     "2": [
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:         {
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "devices": [
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "/dev/loop5"
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             ],
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "lv_name": "ceph_lv2",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "lv_size": "21470642176",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "name": "ceph_lv2",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "tags": {
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.cluster_name": "ceph",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.crush_device_class": "",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.encrypted": "0",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.osd_id": "2",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.type": "block",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:                 "ceph.vdo": "0"
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             },
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "type": "block",
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:             "vg_name": "ceph_vg2"
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:         }
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]:     ]
Oct 11 03:58:08 compute-0 thirsty_hertz[262175]: }
Oct 11 03:58:08 compute-0 systemd[1]: libpod-4fc3f7914cc9cdebc238f4e31c1a7ff3bdf06fb3f6d1a1294f61f2e52650496e.scope: Deactivated successfully.
Oct 11 03:58:08 compute-0 podman[262159]: 2025-10-11 03:58:08.843873247 +0000 UTC m=+0.945271168 container died 4fc3f7914cc9cdebc238f4e31c1a7ff3bdf06fb3f6d1a1294f61f2e52650496e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:58:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-5df0c56aeab5e21df80fa4edebe086db901cb177d89efa0492f87c2cd5a59c76-merged.mount: Deactivated successfully.
Oct 11 03:58:08 compute-0 podman[262159]: 2025-10-11 03:58:08.926125236 +0000 UTC m=+1.027523157 container remove 4fc3f7914cc9cdebc238f4e31c1a7ff3bdf06fb3f6d1a1294f61f2e52650496e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 03:58:08 compute-0 systemd[1]: libpod-conmon-4fc3f7914cc9cdebc238f4e31c1a7ff3bdf06fb3f6d1a1294f61f2e52650496e.scope: Deactivated successfully.
Oct 11 03:58:08 compute-0 sudo[262053]: pam_unix(sudo:session): session closed for user root
Oct 11 03:58:09 compute-0 sudo[262198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:58:09 compute-0 sudo[262198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:58:09 compute-0 sudo[262198]: pam_unix(sudo:session): session closed for user root
Oct 11 03:58:09 compute-0 sudo[262223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:58:09 compute-0 sudo[262223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:58:09 compute-0 sudo[262223]: pam_unix(sudo:session): session closed for user root
Oct 11 03:58:09 compute-0 sudo[262248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:58:09 compute-0 sudo[262248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:58:09 compute-0 sudo[262248]: pam_unix(sudo:session): session closed for user root
Oct 11 03:58:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:09 compute-0 sudo[262273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 03:58:09 compute-0 sudo[262273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:58:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:58:09 compute-0 podman[262334]: 2025-10-11 03:58:09.811834368 +0000 UTC m=+0.073889716 container create 7351746cd3097dff8d843077d356671e6cc1d2df1df15a59dcdeb376aac36cda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rubin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:58:09 compute-0 systemd[1]: Started libpod-conmon-7351746cd3097dff8d843077d356671e6cc1d2df1df15a59dcdeb376aac36cda.scope.
Oct 11 03:58:09 compute-0 podman[262334]: 2025-10-11 03:58:09.783810025 +0000 UTC m=+0.045865433 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:58:09 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:58:09 compute-0 podman[262334]: 2025-10-11 03:58:09.917490171 +0000 UTC m=+0.179545569 container init 7351746cd3097dff8d843077d356671e6cc1d2df1df15a59dcdeb376aac36cda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rubin, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:58:09 compute-0 podman[262334]: 2025-10-11 03:58:09.928084677 +0000 UTC m=+0.190140035 container start 7351746cd3097dff8d843077d356671e6cc1d2df1df15a59dcdeb376aac36cda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rubin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 03:58:09 compute-0 podman[262334]: 2025-10-11 03:58:09.932225683 +0000 UTC m=+0.194281101 container attach 7351746cd3097dff8d843077d356671e6cc1d2df1df15a59dcdeb376aac36cda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rubin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 03:58:09 compute-0 angry_rubin[262350]: 167 167
Oct 11 03:58:09 compute-0 systemd[1]: libpod-7351746cd3097dff8d843077d356671e6cc1d2df1df15a59dcdeb376aac36cda.scope: Deactivated successfully.
Oct 11 03:58:09 compute-0 podman[262334]: 2025-10-11 03:58:09.937367136 +0000 UTC m=+0.199422484 container died 7351746cd3097dff8d843077d356671e6cc1d2df1df15a59dcdeb376aac36cda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:58:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-52d215245fb84b7ec29507823be0554a7475cbc041341d964ffa85322f86aec7-merged.mount: Deactivated successfully.
Oct 11 03:58:09 compute-0 podman[262334]: 2025-10-11 03:58:09.986436608 +0000 UTC m=+0.248491966 container remove 7351746cd3097dff8d843077d356671e6cc1d2df1df15a59dcdeb376aac36cda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rubin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:58:09 compute-0 systemd[1]: libpod-conmon-7351746cd3097dff8d843077d356671e6cc1d2df1df15a59dcdeb376aac36cda.scope: Deactivated successfully.
Oct 11 03:58:10 compute-0 podman[262373]: 2025-10-11 03:58:10.16437254 +0000 UTC m=+0.056640984 container create 97c5a40f370763ae7cfc3eed3a6a4bfcef418acf9c9c4925ca974ddfc90b070d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:58:10 compute-0 systemd[1]: Started libpod-conmon-97c5a40f370763ae7cfc3eed3a6a4bfcef418acf9c9c4925ca974ddfc90b070d.scope.
Oct 11 03:58:10 compute-0 podman[262373]: 2025-10-11 03:58:10.135819232 +0000 UTC m=+0.028087686 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:58:10 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:58:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3536fed2e149e3795aeb11bf51283f0ff7c00c65500b2571917f0fcbab6cca08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:58:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3536fed2e149e3795aeb11bf51283f0ff7c00c65500b2571917f0fcbab6cca08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:58:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3536fed2e149e3795aeb11bf51283f0ff7c00c65500b2571917f0fcbab6cca08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:58:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3536fed2e149e3795aeb11bf51283f0ff7c00c65500b2571917f0fcbab6cca08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:58:10 compute-0 podman[262373]: 2025-10-11 03:58:10.287859372 +0000 UTC m=+0.180127816 container init 97c5a40f370763ae7cfc3eed3a6a4bfcef418acf9c9c4925ca974ddfc90b070d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shannon, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:58:10 compute-0 podman[262373]: 2025-10-11 03:58:10.301350729 +0000 UTC m=+0.193619173 container start 97c5a40f370763ae7cfc3eed3a6a4bfcef418acf9c9c4925ca974ddfc90b070d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shannon, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 03:58:10 compute-0 podman[262373]: 2025-10-11 03:58:10.305174035 +0000 UTC m=+0.197442509 container attach 97c5a40f370763ae7cfc3eed3a6a4bfcef418acf9c9c4925ca974ddfc90b070d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shannon, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 03:58:10 compute-0 ceph-mon[74273]: pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:10 compute-0 nova_compute[259850]: 2025-10-11 03:58:10.379 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:58:10 compute-0 nova_compute[259850]: 2025-10-11 03:58:10.381 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:58:10 compute-0 nova_compute[259850]: 2025-10-11 03:58:10.401 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:58:10 compute-0 nova_compute[259850]: 2025-10-11 03:58:10.402 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 03:58:10 compute-0 nova_compute[259850]: 2025-10-11 03:58:10.402 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 03:58:10 compute-0 nova_compute[259850]: 2025-10-11 03:58:10.421 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 03:58:10 compute-0 nova_compute[259850]: 2025-10-11 03:58:10.422 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:58:10 compute-0 nova_compute[259850]: 2025-10-11 03:58:10.422 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:58:10 compute-0 nova_compute[259850]: 2025-10-11 03:58:10.422 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:58:10 compute-0 nova_compute[259850]: 2025-10-11 03:58:10.423 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:58:10 compute-0 nova_compute[259850]: 2025-10-11 03:58:10.423 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:58:10 compute-0 nova_compute[259850]: 2025-10-11 03:58:10.423 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 03:58:10 compute-0 nova_compute[259850]: 2025-10-11 03:58:10.424 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:58:10 compute-0 nova_compute[259850]: 2025-10-11 03:58:10.452 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 03:58:10 compute-0 nova_compute[259850]: 2025-10-11 03:58:10.452 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 03:58:10 compute-0 nova_compute[259850]: 2025-10-11 03:58:10.452 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 03:58:10 compute-0 nova_compute[259850]: 2025-10-11 03:58:10.453 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 03:58:10 compute-0 nova_compute[259850]: 2025-10-11 03:58:10.453 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 03:58:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 03:58:10 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1861483012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 03:58:10 compute-0 nova_compute[259850]: 2025-10-11 03:58:10.926 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 03:58:11 compute-0 nova_compute[259850]: 2025-10-11 03:58:11.091 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 03:58:11 compute-0 nova_compute[259850]: 2025-10-11 03:58:11.092 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5114MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 03:58:11 compute-0 nova_compute[259850]: 2025-10-11 03:58:11.093 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 03:58:11 compute-0 nova_compute[259850]: 2025-10-11 03:58:11.093 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 03:58:11 compute-0 nova_compute[259850]: 2025-10-11 03:58:11.177 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 03:58:11 compute-0 nova_compute[259850]: 2025-10-11 03:58:11.178 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 03:58:11 compute-0 nova_compute[259850]: 2025-10-11 03:58:11.194 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 03:58:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:11 compute-0 practical_shannon[262390]: {
Oct 11 03:58:11 compute-0 practical_shannon[262390]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 03:58:11 compute-0 practical_shannon[262390]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:58:11 compute-0 practical_shannon[262390]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 03:58:11 compute-0 practical_shannon[262390]:         "osd_id": 1,
Oct 11 03:58:11 compute-0 practical_shannon[262390]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:58:11 compute-0 practical_shannon[262390]:         "type": "bluestore"
Oct 11 03:58:11 compute-0 practical_shannon[262390]:     },
Oct 11 03:58:11 compute-0 practical_shannon[262390]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 03:58:11 compute-0 practical_shannon[262390]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:58:11 compute-0 practical_shannon[262390]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 03:58:11 compute-0 practical_shannon[262390]:         "osd_id": 2,
Oct 11 03:58:11 compute-0 practical_shannon[262390]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:58:11 compute-0 practical_shannon[262390]:         "type": "bluestore"
Oct 11 03:58:11 compute-0 practical_shannon[262390]:     },
Oct 11 03:58:11 compute-0 practical_shannon[262390]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 03:58:11 compute-0 practical_shannon[262390]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:58:11 compute-0 practical_shannon[262390]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 03:58:11 compute-0 practical_shannon[262390]:         "osd_id": 0,
Oct 11 03:58:11 compute-0 practical_shannon[262390]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:58:11 compute-0 practical_shannon[262390]:         "type": "bluestore"
Oct 11 03:58:11 compute-0 practical_shannon[262390]:     }
Oct 11 03:58:11 compute-0 practical_shannon[262390]: }
Oct 11 03:58:11 compute-0 systemd[1]: libpod-97c5a40f370763ae7cfc3eed3a6a4bfcef418acf9c9c4925ca974ddfc90b070d.scope: Deactivated successfully.
Oct 11 03:58:11 compute-0 systemd[1]: libpod-97c5a40f370763ae7cfc3eed3a6a4bfcef418acf9c9c4925ca974ddfc90b070d.scope: Consumed 1.031s CPU time.
Oct 11 03:58:11 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1861483012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 03:58:11 compute-0 podman[262465]: 2025-10-11 03:58:11.368740647 +0000 UTC m=+0.023456706 container died 97c5a40f370763ae7cfc3eed3a6a4bfcef418acf9c9c4925ca974ddfc90b070d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shannon, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:58:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-3536fed2e149e3795aeb11bf51283f0ff7c00c65500b2571917f0fcbab6cca08-merged.mount: Deactivated successfully.
Oct 11 03:58:11 compute-0 podman[262465]: 2025-10-11 03:58:11.421737608 +0000 UTC m=+0.076453647 container remove 97c5a40f370763ae7cfc3eed3a6a4bfcef418acf9c9c4925ca974ddfc90b070d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:58:11 compute-0 systemd[1]: libpod-conmon-97c5a40f370763ae7cfc3eed3a6a4bfcef418acf9c9c4925ca974ddfc90b070d.scope: Deactivated successfully.
Oct 11 03:58:11 compute-0 sudo[262273]: pam_unix(sudo:session): session closed for user root
Oct 11 03:58:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:58:11 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:58:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:58:11 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:58:11 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 76736ad9-a048-487e-bbfa-5d2bf27adffa does not exist
Oct 11 03:58:11 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 9e37bf3b-418b-4499-b940-38487844d961 does not exist
Oct 11 03:58:11 compute-0 sudo[262478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:58:11 compute-0 sudo[262478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:58:11 compute-0 sudo[262478]: pam_unix(sudo:session): session closed for user root
Oct 11 03:58:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 03:58:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/49016571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 03:58:11 compute-0 nova_compute[259850]: 2025-10-11 03:58:11.622 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 03:58:11 compute-0 nova_compute[259850]: 2025-10-11 03:58:11.632 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 03:58:11 compute-0 sudo[262503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 03:58:11 compute-0 nova_compute[259850]: 2025-10-11 03:58:11.653 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 03:58:11 compute-0 sudo[262503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:58:11 compute-0 nova_compute[259850]: 2025-10-11 03:58:11.657 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 03:58:11 compute-0 nova_compute[259850]: 2025-10-11 03:58:11.657 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 03:58:11 compute-0 sudo[262503]: pam_unix(sudo:session): session closed for user root
Oct 11 03:58:12 compute-0 nova_compute[259850]: 2025-10-11 03:58:12.294 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:58:12 compute-0 ceph-mon[74273]: pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:58:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:58:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/49016571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 03:58:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:14 compute-0 ceph-mon[74273]: pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:58:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:16 compute-0 podman[262530]: 2025-10-11 03:58:16.384271303 +0000 UTC m=+0.082303782 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:58:16 compute-0 ceph-mon[74273]: pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:18 compute-0 podman[262550]: 2025-10-11 03:58:18.375591203 +0000 UTC m=+0.074190634 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 11 03:58:18 compute-0 ceph-mon[74273]: pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:58:20 compute-0 ceph-mon[74273]: pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_03:58:20
Oct 11 03:58:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 03:58:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 03:58:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['images', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', '.mgr', 'volumes', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta']
Oct 11 03:58:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 03:58:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:58:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:58:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:58:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:58:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:58:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:58:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 03:58:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:58:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 03:58:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:58:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:58:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:58:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:58:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:58:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:58:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:58:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:22 compute-0 ceph-mon[74273]: pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:58:22.947 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 03:58:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:58:22.947 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 03:58:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:58:22.948 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 03:58:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 29 op/s
Oct 11 03:58:24 compute-0 ceph-mon[74273]: pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 29 op/s
Oct 11 03:58:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:58:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 29 op/s
Oct 11 03:58:26 compute-0 ceph-mon[74273]: pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 29 op/s
Oct 11 03:58:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 29 op/s
Oct 11 03:58:27 compute-0 podman[262571]: 2025-10-11 03:58:27.424079602 +0000 UTC m=+0.129239102 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 11 03:58:28 compute-0 ceph-mon[74273]: pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 29 op/s
Oct 11 03:58:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 03:58:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:58:30 compute-0 ceph-mon[74273]: pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 03:58:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:58:30.904 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:61:6f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '92:f1:b6:e4:f1:16'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 03:58:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:58:30.905 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 03:58:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:58:30.906 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8a473e03-2208-47ae-afcd-05ad744a5969, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 03:58:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 03:58:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:58:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 03:58:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:58:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:58:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:58:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:58:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:58:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:58:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:58:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:58:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:58:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 03:58:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:58:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:58:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:58:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 03:58:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:58:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 03:58:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:58:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:58:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:58:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 03:58:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 03:58:32 compute-0 podman[262597]: 2025-10-11 03:58:32.388699024 +0000 UTC m=+0.089630676 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 11 03:58:32 compute-0 ceph-mon[74273]: pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 03:58:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 03:58:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:58:34 compute-0 ceph-mon[74273]: pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 03:58:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Oct 11 03:58:36 compute-0 ceph-mon[74273]: pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Oct 11 03:58:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Oct 11 03:58:38 compute-0 ceph-mon[74273]: pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Oct 11 03:58:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Oct 11 03:58:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:58:40 compute-0 ceph-mon[74273]: pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Oct 11 03:58:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:42 compute-0 ceph-mon[74273]: pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:58:44 compute-0 ceph-mon[74273]: pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:46 compute-0 ceph-mon[74273]: pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:47 compute-0 podman[262616]: 2025-10-11 03:58:47.37849795 +0000 UTC m=+0.078846685 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 11 03:58:48 compute-0 ceph-mon[74273]: pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:49 compute-0 podman[262636]: 2025-10-11 03:58:49.359789969 +0000 UTC m=+0.062845147 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 11 03:58:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:58:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 03:58:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/35206798' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 03:58:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 03:58:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/35206798' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 03:58:50 compute-0 ceph-mon[74273]: pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/35206798' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 03:58:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/35206798' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 03:58:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:58:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:58:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:58:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:58:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:58:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:58:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:52 compute-0 ceph-mon[74273]: pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:58:54 compute-0 ceph-mon[74273]: pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:56 compute-0 ceph-mon[74273]: pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:58 compute-0 podman[262656]: 2025-10-11 03:58:58.396511891 +0000 UTC m=+0.113004949 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=ovn_controller)
Oct 11 03:58:58 compute-0 ceph-mon[74273]: pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:58:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:59:00 compute-0 ceph-mon[74273]: pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:02 compute-0 ceph-mon[74273]: pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:03 compute-0 podman[262682]: 2025-10-11 03:59:03.369357636 +0000 UTC m=+0.076152339 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Oct 11 03:59:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:59:04 compute-0 ceph-mon[74273]: pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:06 compute-0 ceph-mon[74273]: pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:08 compute-0 ceph-mon[74273]: pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:09 compute-0 nova_compute[259850]: 2025-10-11 03:59:09.055 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:59:09 compute-0 nova_compute[259850]: 2025-10-11 03:59:09.058 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:59:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:59:10 compute-0 nova_compute[259850]: 2025-10-11 03:59:10.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:59:10 compute-0 nova_compute[259850]: 2025-10-11 03:59:10.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:59:10 compute-0 ceph-mon[74273]: pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:11 compute-0 nova_compute[259850]: 2025-10-11 03:59:11.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:59:11 compute-0 nova_compute[259850]: 2025-10-11 03:59:11.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 03:59:11 compute-0 nova_compute[259850]: 2025-10-11 03:59:11.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 03:59:11 compute-0 nova_compute[259850]: 2025-10-11 03:59:11.123 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 03:59:11 compute-0 nova_compute[259850]: 2025-10-11 03:59:11.124 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:59:11 compute-0 nova_compute[259850]: 2025-10-11 03:59:11.124 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:59:11 compute-0 nova_compute[259850]: 2025-10-11 03:59:11.124 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 03:59:11 compute-0 nova_compute[259850]: 2025-10-11 03:59:11.124 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:59:11 compute-0 nova_compute[259850]: 2025-10-11 03:59:11.201 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 03:59:11 compute-0 nova_compute[259850]: 2025-10-11 03:59:11.202 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 03:59:11 compute-0 nova_compute[259850]: 2025-10-11 03:59:11.202 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 03:59:11 compute-0 nova_compute[259850]: 2025-10-11 03:59:11.202 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 03:59:11 compute-0 nova_compute[259850]: 2025-10-11 03:59:11.203 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 03:59:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 03:59:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/242885604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 03:59:11 compute-0 nova_compute[259850]: 2025-10-11 03:59:11.655 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 03:59:11 compute-0 sudo[262723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:59:11 compute-0 sudo[262723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:59:11 compute-0 sudo[262723]: pam_unix(sudo:session): session closed for user root
Oct 11 03:59:11 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/242885604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 03:59:11 compute-0 sudo[262748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:59:11 compute-0 sudo[262748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:59:11 compute-0 sudo[262748]: pam_unix(sudo:session): session closed for user root
Oct 11 03:59:11 compute-0 nova_compute[259850]: 2025-10-11 03:59:11.849 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 03:59:11 compute-0 nova_compute[259850]: 2025-10-11 03:59:11.851 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5184MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 03:59:11 compute-0 nova_compute[259850]: 2025-10-11 03:59:11.851 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 03:59:11 compute-0 nova_compute[259850]: 2025-10-11 03:59:11.851 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 03:59:11 compute-0 sudo[262773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:59:11 compute-0 sudo[262773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:59:11 compute-0 sudo[262773]: pam_unix(sudo:session): session closed for user root
Oct 11 03:59:11 compute-0 sudo[262798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 03:59:11 compute-0 sudo[262798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:59:12 compute-0 nova_compute[259850]: 2025-10-11 03:59:12.003 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 03:59:12 compute-0 nova_compute[259850]: 2025-10-11 03:59:12.003 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 03:59:12 compute-0 nova_compute[259850]: 2025-10-11 03:59:12.020 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 03:59:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 03:59:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4188473971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 03:59:12 compute-0 nova_compute[259850]: 2025-10-11 03:59:12.442 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 03:59:12 compute-0 nova_compute[259850]: 2025-10-11 03:59:12.451 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 03:59:12 compute-0 nova_compute[259850]: 2025-10-11 03:59:12.469 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 03:59:12 compute-0 nova_compute[259850]: 2025-10-11 03:59:12.471 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 03:59:12 compute-0 nova_compute[259850]: 2025-10-11 03:59:12.472 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 03:59:12 compute-0 sudo[262798]: pam_unix(sudo:session): session closed for user root
Oct 11 03:59:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:59:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:59:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 03:59:12 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:59:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 03:59:12 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:59:12 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev c593aede-5758-44cf-953f-4bb9fe1d0a5c does not exist
Oct 11 03:59:12 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 07b6fa93-d610-4462-a58d-94fb8ec141bd does not exist
Oct 11 03:59:12 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev ec4b3786-6b56-4eec-b458-cb11bba57d6a does not exist
Oct 11 03:59:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 03:59:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:59:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 03:59:12 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:59:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 03:59:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:59:12 compute-0 sudo[262875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:59:12 compute-0 sudo[262875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:59:12 compute-0 sudo[262875]: pam_unix(sudo:session): session closed for user root
Oct 11 03:59:12 compute-0 sudo[262900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:59:12 compute-0 sudo[262900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:59:12 compute-0 sudo[262900]: pam_unix(sudo:session): session closed for user root
Oct 11 03:59:12 compute-0 ceph-mon[74273]: pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4188473971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 03:59:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:59:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 03:59:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:59:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 03:59:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 03:59:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 03:59:12 compute-0 sudo[262925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:59:12 compute-0 sudo[262925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:59:12 compute-0 sudo[262925]: pam_unix(sudo:session): session closed for user root
Oct 11 03:59:12 compute-0 sudo[262950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 03:59:12 compute-0 sudo[262950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:59:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:13 compute-0 podman[263017]: 2025-10-11 03:59:13.453075429 +0000 UTC m=+0.073539026 container create 2252e39c79729c8c1c8ce6cf6ee27e628df1570d0b259fad56d2d85f14067fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:59:13 compute-0 systemd[1]: Started libpod-conmon-2252e39c79729c8c1c8ce6cf6ee27e628df1570d0b259fad56d2d85f14067fe8.scope.
Oct 11 03:59:13 compute-0 podman[263017]: 2025-10-11 03:59:13.423301847 +0000 UTC m=+0.043765504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:59:13 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:59:13 compute-0 podman[263017]: 2025-10-11 03:59:13.551560291 +0000 UTC m=+0.172023938 container init 2252e39c79729c8c1c8ce6cf6ee27e628df1570d0b259fad56d2d85f14067fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:59:13 compute-0 podman[263017]: 2025-10-11 03:59:13.563203326 +0000 UTC m=+0.183666923 container start 2252e39c79729c8c1c8ce6cf6ee27e628df1570d0b259fad56d2d85f14067fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 03:59:13 compute-0 podman[263017]: 2025-10-11 03:59:13.56761728 +0000 UTC m=+0.188080937 container attach 2252e39c79729c8c1c8ce6cf6ee27e628df1570d0b259fad56d2d85f14067fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 03:59:13 compute-0 hopeful_proskuriakova[263034]: 167 167
Oct 11 03:59:13 compute-0 systemd[1]: libpod-2252e39c79729c8c1c8ce6cf6ee27e628df1570d0b259fad56d2d85f14067fe8.scope: Deactivated successfully.
Oct 11 03:59:13 compute-0 podman[263017]: 2025-10-11 03:59:13.571490588 +0000 UTC m=+0.191954175 container died 2252e39c79729c8c1c8ce6cf6ee27e628df1570d0b259fad56d2d85f14067fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:59:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb616e625583f0d1e49ef57e2a82df2bad3336f980a0123e5b656900fa712ffe-merged.mount: Deactivated successfully.
Oct 11 03:59:13 compute-0 podman[263017]: 2025-10-11 03:59:13.62631364 +0000 UTC m=+0.246777237 container remove 2252e39c79729c8c1c8ce6cf6ee27e628df1570d0b259fad56d2d85f14067fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:59:13 compute-0 systemd[1]: libpod-conmon-2252e39c79729c8c1c8ce6cf6ee27e628df1570d0b259fad56d2d85f14067fe8.scope: Deactivated successfully.
Oct 11 03:59:13 compute-0 podman[263058]: 2025-10-11 03:59:13.855028032 +0000 UTC m=+0.059485044 container create 81a343e2b5946ac807acbebc6bd8234847dab3157d6f22a6508ca25664bda752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_davinci, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:59:13 compute-0 systemd[1]: Started libpod-conmon-81a343e2b5946ac807acbebc6bd8234847dab3157d6f22a6508ca25664bda752.scope.
Oct 11 03:59:13 compute-0 podman[263058]: 2025-10-11 03:59:13.833819309 +0000 UTC m=+0.038276361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:59:13 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:59:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64378c9b437332ae430bc2a32f1a478531a27714c880060b6741c670e9fde8c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:59:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64378c9b437332ae430bc2a32f1a478531a27714c880060b6741c670e9fde8c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:59:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64378c9b437332ae430bc2a32f1a478531a27714c880060b6741c670e9fde8c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:59:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64378c9b437332ae430bc2a32f1a478531a27714c880060b6741c670e9fde8c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:59:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64378c9b437332ae430bc2a32f1a478531a27714c880060b6741c670e9fde8c3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 03:59:13 compute-0 podman[263058]: 2025-10-11 03:59:13.953678149 +0000 UTC m=+0.158135201 container init 81a343e2b5946ac807acbebc6bd8234847dab3157d6f22a6508ca25664bda752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_davinci, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 03:59:13 compute-0 podman[263058]: 2025-10-11 03:59:13.970397766 +0000 UTC m=+0.174854798 container start 81a343e2b5946ac807acbebc6bd8234847dab3157d6f22a6508ca25664bda752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 03:59:13 compute-0 podman[263058]: 2025-10-11 03:59:13.973837542 +0000 UTC m=+0.178294594 container attach 81a343e2b5946ac807acbebc6bd8234847dab3157d6f22a6508ca25664bda752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_davinci, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:59:14 compute-0 nova_compute[259850]: 2025-10-11 03:59:14.408 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 03:59:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:59:14 compute-0 ceph-mon[74273]: pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:15 compute-0 vigorous_davinci[263075]: --> passed data devices: 0 physical, 3 LVM
Oct 11 03:59:15 compute-0 vigorous_davinci[263075]: --> relative data size: 1.0
Oct 11 03:59:15 compute-0 vigorous_davinci[263075]: --> All data devices are unavailable
Oct 11 03:59:15 compute-0 systemd[1]: libpod-81a343e2b5946ac807acbebc6bd8234847dab3157d6f22a6508ca25664bda752.scope: Deactivated successfully.
Oct 11 03:59:15 compute-0 systemd[1]: libpod-81a343e2b5946ac807acbebc6bd8234847dab3157d6f22a6508ca25664bda752.scope: Consumed 1.056s CPU time.
Oct 11 03:59:15 compute-0 conmon[263075]: conmon 81a343e2b5946ac807ac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-81a343e2b5946ac807acbebc6bd8234847dab3157d6f22a6508ca25664bda752.scope/container/memory.events
Oct 11 03:59:15 compute-0 podman[263058]: 2025-10-11 03:59:15.06976655 +0000 UTC m=+1.274223592 container died 81a343e2b5946ac807acbebc6bd8234847dab3157d6f22a6508ca25664bda752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_davinci, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 03:59:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-64378c9b437332ae430bc2a32f1a478531a27714c880060b6741c670e9fde8c3-merged.mount: Deactivated successfully.
Oct 11 03:59:15 compute-0 podman[263058]: 2025-10-11 03:59:15.165103974 +0000 UTC m=+1.369561016 container remove 81a343e2b5946ac807acbebc6bd8234847dab3157d6f22a6508ca25664bda752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:59:15 compute-0 systemd[1]: libpod-conmon-81a343e2b5946ac807acbebc6bd8234847dab3157d6f22a6508ca25664bda752.scope: Deactivated successfully.
Oct 11 03:59:15 compute-0 sudo[262950]: pam_unix(sudo:session): session closed for user root
Oct 11 03:59:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:15 compute-0 sudo[263118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:59:15 compute-0 sudo[263118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:59:15 compute-0 sudo[263118]: pam_unix(sudo:session): session closed for user root
Oct 11 03:59:15 compute-0 sudo[263143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:59:15 compute-0 sudo[263143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:59:15 compute-0 sudo[263143]: pam_unix(sudo:session): session closed for user root
Oct 11 03:59:15 compute-0 sudo[263168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:59:15 compute-0 sudo[263168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:59:15 compute-0 sudo[263168]: pam_unix(sudo:session): session closed for user root
Oct 11 03:59:15 compute-0 sudo[263193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 03:59:15 compute-0 sudo[263193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:59:15 compute-0 podman[263258]: 2025-10-11 03:59:15.949554986 +0000 UTC m=+0.058261460 container create 71d7dbb8a9a68c746c3000ef8e3793fe3b6bccceb69b8e88220a39e1f19d4ae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_khorana, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:59:15 compute-0 systemd[1]: Started libpod-conmon-71d7dbb8a9a68c746c3000ef8e3793fe3b6bccceb69b8e88220a39e1f19d4ae3.scope.
Oct 11 03:59:16 compute-0 podman[263258]: 2025-10-11 03:59:15.929969208 +0000 UTC m=+0.038675802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:59:16 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:59:16 compute-0 podman[263258]: 2025-10-11 03:59:16.059657682 +0000 UTC m=+0.168364206 container init 71d7dbb8a9a68c746c3000ef8e3793fe3b6bccceb69b8e88220a39e1f19d4ae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_khorana, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:59:16 compute-0 podman[263258]: 2025-10-11 03:59:16.067416849 +0000 UTC m=+0.176123353 container start 71d7dbb8a9a68c746c3000ef8e3793fe3b6bccceb69b8e88220a39e1f19d4ae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 03:59:16 compute-0 inspiring_khorana[263274]: 167 167
Oct 11 03:59:16 compute-0 systemd[1]: libpod-71d7dbb8a9a68c746c3000ef8e3793fe3b6bccceb69b8e88220a39e1f19d4ae3.scope: Deactivated successfully.
Oct 11 03:59:16 compute-0 podman[263258]: 2025-10-11 03:59:16.073997013 +0000 UTC m=+0.182703507 container attach 71d7dbb8a9a68c746c3000ef8e3793fe3b6bccceb69b8e88220a39e1f19d4ae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 03:59:16 compute-0 podman[263258]: 2025-10-11 03:59:16.075072693 +0000 UTC m=+0.183779197 container died 71d7dbb8a9a68c746c3000ef8e3793fe3b6bccceb69b8e88220a39e1f19d4ae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_khorana, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:59:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6e373fa0b7947e105e6cee042a5e64e7c8456700157da3e4e90d945b052a5c2-merged.mount: Deactivated successfully.
Oct 11 03:59:16 compute-0 podman[263258]: 2025-10-11 03:59:16.123781935 +0000 UTC m=+0.232488429 container remove 71d7dbb8a9a68c746c3000ef8e3793fe3b6bccceb69b8e88220a39e1f19d4ae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 03:59:16 compute-0 systemd[1]: libpod-conmon-71d7dbb8a9a68c746c3000ef8e3793fe3b6bccceb69b8e88220a39e1f19d4ae3.scope: Deactivated successfully.
Oct 11 03:59:16 compute-0 podman[263298]: 2025-10-11 03:59:16.337999731 +0000 UTC m=+0.056929992 container create 6e4629a7c6c70d67ba0caefbdbe6d1ace9cab5e31772f33aec3ccc2a3b68eb04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Oct 11 03:59:16 compute-0 systemd[1]: Started libpod-conmon-6e4629a7c6c70d67ba0caefbdbe6d1ace9cab5e31772f33aec3ccc2a3b68eb04.scope.
Oct 11 03:59:16 compute-0 podman[263298]: 2025-10-11 03:59:16.314113764 +0000 UTC m=+0.033044025 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:59:16 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:59:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9bd9142ace4d031dd4def95ae5cff46c1caf33525a91dc4ff6f1c6d5b21df7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:59:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9bd9142ace4d031dd4def95ae5cff46c1caf33525a91dc4ff6f1c6d5b21df7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:59:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9bd9142ace4d031dd4def95ae5cff46c1caf33525a91dc4ff6f1c6d5b21df7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:59:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9bd9142ace4d031dd4def95ae5cff46c1caf33525a91dc4ff6f1c6d5b21df7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:59:16 compute-0 podman[263298]: 2025-10-11 03:59:16.456302547 +0000 UTC m=+0.175232778 container init 6e4629a7c6c70d67ba0caefbdbe6d1ace9cab5e31772f33aec3ccc2a3b68eb04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:59:16 compute-0 podman[263298]: 2025-10-11 03:59:16.46354846 +0000 UTC m=+0.182478721 container start 6e4629a7c6c70d67ba0caefbdbe6d1ace9cab5e31772f33aec3ccc2a3b68eb04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 03:59:16 compute-0 podman[263298]: 2025-10-11 03:59:16.471142552 +0000 UTC m=+0.190072813 container attach 6e4629a7c6c70d67ba0caefbdbe6d1ace9cab5e31772f33aec3ccc2a3b68eb04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 03:59:16 compute-0 ceph-mon[74273]: pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:17 compute-0 modest_lumiere[263316]: {
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:     "0": [
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:         {
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "devices": [
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "/dev/loop3"
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             ],
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "lv_name": "ceph_lv0",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "lv_size": "21470642176",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "name": "ceph_lv0",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "tags": {
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.cluster_name": "ceph",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.crush_device_class": "",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.encrypted": "0",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.osd_id": "0",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.type": "block",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.vdo": "0"
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             },
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "type": "block",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "vg_name": "ceph_vg0"
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:         }
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:     ],
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:     "1": [
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:         {
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "devices": [
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "/dev/loop4"
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             ],
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "lv_name": "ceph_lv1",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "lv_size": "21470642176",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "name": "ceph_lv1",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "tags": {
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.cluster_name": "ceph",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.crush_device_class": "",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.encrypted": "0",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.osd_id": "1",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.type": "block",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.vdo": "0"
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             },
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "type": "block",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "vg_name": "ceph_vg1"
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:         }
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:     ],
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:     "2": [
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:         {
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "devices": [
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "/dev/loop5"
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             ],
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "lv_name": "ceph_lv2",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "lv_size": "21470642176",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "name": "ceph_lv2",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "tags": {
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.cluster_name": "ceph",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.crush_device_class": "",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.encrypted": "0",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.osd_id": "2",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.type": "block",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:                 "ceph.vdo": "0"
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             },
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "type": "block",
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:             "vg_name": "ceph_vg2"
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:         }
Oct 11 03:59:17 compute-0 modest_lumiere[263316]:     ]
Oct 11 03:59:17 compute-0 modest_lumiere[263316]: }
Oct 11 03:59:17 compute-0 systemd[1]: libpod-6e4629a7c6c70d67ba0caefbdbe6d1ace9cab5e31772f33aec3ccc2a3b68eb04.scope: Deactivated successfully.
Oct 11 03:59:17 compute-0 podman[263298]: 2025-10-11 03:59:17.250867943 +0000 UTC m=+0.969798184 container died 6e4629a7c6c70d67ba0caefbdbe6d1ace9cab5e31772f33aec3ccc2a3b68eb04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 03:59:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-af9bd9142ace4d031dd4def95ae5cff46c1caf33525a91dc4ff6f1c6d5b21df7-merged.mount: Deactivated successfully.
Oct 11 03:59:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:17 compute-0 podman[263298]: 2025-10-11 03:59:17.314706737 +0000 UTC m=+1.033636968 container remove 6e4629a7c6c70d67ba0caefbdbe6d1ace9cab5e31772f33aec3ccc2a3b68eb04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 03:59:17 compute-0 systemd[1]: libpod-conmon-6e4629a7c6c70d67ba0caefbdbe6d1ace9cab5e31772f33aec3ccc2a3b68eb04.scope: Deactivated successfully.
Oct 11 03:59:17 compute-0 sudo[263193]: pam_unix(sudo:session): session closed for user root
Oct 11 03:59:17 compute-0 sudo[263337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:59:17 compute-0 sudo[263337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:59:17 compute-0 sudo[263337]: pam_unix(sudo:session): session closed for user root
Oct 11 03:59:17 compute-0 podman[263361]: 2025-10-11 03:59:17.552269706 +0000 UTC m=+0.059848654 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 11 03:59:17 compute-0 sudo[263368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 03:59:17 compute-0 sudo[263368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:59:17 compute-0 sudo[263368]: pam_unix(sudo:session): session closed for user root
Oct 11 03:59:17 compute-0 sudo[263407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:59:17 compute-0 sudo[263407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:59:17 compute-0 sudo[263407]: pam_unix(sudo:session): session closed for user root
Oct 11 03:59:17 compute-0 sudo[263432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 03:59:17 compute-0 sudo[263432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:59:18 compute-0 podman[263497]: 2025-10-11 03:59:18.081799314 +0000 UTC m=+0.047855388 container create 8bebcbebdefbb2c15262057b6fa39b5ded1961f6269c46baa15030fc9fb2c677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_moser, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 03:59:18 compute-0 systemd[1]: Started libpod-conmon-8bebcbebdefbb2c15262057b6fa39b5ded1961f6269c46baa15030fc9fb2c677.scope.
Oct 11 03:59:18 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:59:18 compute-0 podman[263497]: 2025-10-11 03:59:18.062827904 +0000 UTC m=+0.028883968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:59:18 compute-0 podman[263497]: 2025-10-11 03:59:18.176630294 +0000 UTC m=+0.142686378 container init 8bebcbebdefbb2c15262057b6fa39b5ded1961f6269c46baa15030fc9fb2c677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_moser, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 03:59:18 compute-0 podman[263497]: 2025-10-11 03:59:18.189474003 +0000 UTC m=+0.155530067 container start 8bebcbebdefbb2c15262057b6fa39b5ded1961f6269c46baa15030fc9fb2c677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_moser, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 03:59:18 compute-0 podman[263497]: 2025-10-11 03:59:18.192760505 +0000 UTC m=+0.158816589 container attach 8bebcbebdefbb2c15262057b6fa39b5ded1961f6269c46baa15030fc9fb2c677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_moser, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 03:59:18 compute-0 jovial_moser[263513]: 167 167
Oct 11 03:59:18 compute-0 systemd[1]: libpod-8bebcbebdefbb2c15262057b6fa39b5ded1961f6269c46baa15030fc9fb2c677.scope: Deactivated successfully.
Oct 11 03:59:18 compute-0 podman[263497]: 2025-10-11 03:59:18.198366632 +0000 UTC m=+0.164422726 container died 8bebcbebdefbb2c15262057b6fa39b5ded1961f6269c46baa15030fc9fb2c677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_moser, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:59:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b083639aeddc2c271400ea85d6e396e50054dad4f52d47861fd35f18fb5147b-merged.mount: Deactivated successfully.
Oct 11 03:59:18 compute-0 podman[263497]: 2025-10-11 03:59:18.260080747 +0000 UTC m=+0.226136801 container remove 8bebcbebdefbb2c15262057b6fa39b5ded1961f6269c46baa15030fc9fb2c677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Oct 11 03:59:18 compute-0 systemd[1]: libpod-conmon-8bebcbebdefbb2c15262057b6fa39b5ded1961f6269c46baa15030fc9fb2c677.scope: Deactivated successfully.
Oct 11 03:59:18 compute-0 podman[263538]: 2025-10-11 03:59:18.496942736 +0000 UTC m=+0.070308956 container create 4b6ba4074a68c867dda28d28cdf759d1fcccba07f689090caa9509fa6e0015e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 03:59:18 compute-0 systemd[1]: Started libpod-conmon-4b6ba4074a68c867dda28d28cdf759d1fcccba07f689090caa9509fa6e0015e1.scope.
Oct 11 03:59:18 compute-0 podman[263538]: 2025-10-11 03:59:18.470547608 +0000 UTC m=+0.043913888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 03:59:18 compute-0 systemd[1]: Started libcrun container.
Oct 11 03:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6518f05ccf71c434af4e71dac00eb1f44b16808d0b9686051a728bfb8c7ffde/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 03:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6518f05ccf71c434af4e71dac00eb1f44b16808d0b9686051a728bfb8c7ffde/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 03:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6518f05ccf71c434af4e71dac00eb1f44b16808d0b9686051a728bfb8c7ffde/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 03:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6518f05ccf71c434af4e71dac00eb1f44b16808d0b9686051a728bfb8c7ffde/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 03:59:18 compute-0 podman[263538]: 2025-10-11 03:59:18.600737817 +0000 UTC m=+0.174104087 container init 4b6ba4074a68c867dda28d28cdf759d1fcccba07f689090caa9509fa6e0015e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_khorana, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 03:59:18 compute-0 podman[263538]: 2025-10-11 03:59:18.613077242 +0000 UTC m=+0.186443472 container start 4b6ba4074a68c867dda28d28cdf759d1fcccba07f689090caa9509fa6e0015e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_khorana, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 03:59:18 compute-0 podman[263538]: 2025-10-11 03:59:18.622047612 +0000 UTC m=+0.195413842 container attach 4b6ba4074a68c867dda28d28cdf759d1fcccba07f689090caa9509fa6e0015e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_khorana, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 03:59:18 compute-0 ceph-mon[74273]: pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:59:19 compute-0 funny_khorana[263554]: {
Oct 11 03:59:19 compute-0 funny_khorana[263554]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 03:59:19 compute-0 funny_khorana[263554]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:59:19 compute-0 funny_khorana[263554]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 03:59:19 compute-0 funny_khorana[263554]:         "osd_id": 1,
Oct 11 03:59:19 compute-0 funny_khorana[263554]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 03:59:19 compute-0 funny_khorana[263554]:         "type": "bluestore"
Oct 11 03:59:19 compute-0 funny_khorana[263554]:     },
Oct 11 03:59:19 compute-0 funny_khorana[263554]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 03:59:19 compute-0 funny_khorana[263554]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:59:19 compute-0 funny_khorana[263554]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 03:59:19 compute-0 funny_khorana[263554]:         "osd_id": 2,
Oct 11 03:59:19 compute-0 funny_khorana[263554]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 03:59:19 compute-0 funny_khorana[263554]:         "type": "bluestore"
Oct 11 03:59:19 compute-0 funny_khorana[263554]:     },
Oct 11 03:59:19 compute-0 funny_khorana[263554]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 03:59:19 compute-0 funny_khorana[263554]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 03:59:19 compute-0 funny_khorana[263554]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 03:59:19 compute-0 funny_khorana[263554]:         "osd_id": 0,
Oct 11 03:59:19 compute-0 funny_khorana[263554]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 03:59:19 compute-0 funny_khorana[263554]:         "type": "bluestore"
Oct 11 03:59:19 compute-0 funny_khorana[263554]:     }
Oct 11 03:59:19 compute-0 funny_khorana[263554]: }
Oct 11 03:59:19 compute-0 systemd[1]: libpod-4b6ba4074a68c867dda28d28cdf759d1fcccba07f689090caa9509fa6e0015e1.scope: Deactivated successfully.
Oct 11 03:59:19 compute-0 systemd[1]: libpod-4b6ba4074a68c867dda28d28cdf759d1fcccba07f689090caa9509fa6e0015e1.scope: Consumed 1.124s CPU time.
Oct 11 03:59:19 compute-0 podman[263538]: 2025-10-11 03:59:19.730602601 +0000 UTC m=+1.303968891 container died 4b6ba4074a68c867dda28d28cdf759d1fcccba07f689090caa9509fa6e0015e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 03:59:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6518f05ccf71c434af4e71dac00eb1f44b16808d0b9686051a728bfb8c7ffde-merged.mount: Deactivated successfully.
Oct 11 03:59:19 compute-0 podman[263538]: 2025-10-11 03:59:19.800185376 +0000 UTC m=+1.373551576 container remove 4b6ba4074a68c867dda28d28cdf759d1fcccba07f689090caa9509fa6e0015e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_khorana, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 03:59:19 compute-0 systemd[1]: libpod-conmon-4b6ba4074a68c867dda28d28cdf759d1fcccba07f689090caa9509fa6e0015e1.scope: Deactivated successfully.
Oct 11 03:59:19 compute-0 sudo[263432]: pam_unix(sudo:session): session closed for user root
Oct 11 03:59:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 03:59:19 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:59:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 03:59:19 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:59:19 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev d7fd1880-c0f9-47a2-8820-549b08b0cb25 does not exist
Oct 11 03:59:19 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 2d824193-d7a3-40a4-8c93-125a01c0c24a does not exist
Oct 11 03:59:19 compute-0 podman[263587]: 2025-10-11 03:59:19.880612024 +0000 UTC m=+0.105593232 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 03:59:19 compute-0 sudo[263619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 03:59:19 compute-0 sudo[263619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:59:19 compute-0 sudo[263619]: pam_unix(sudo:session): session closed for user root
Oct 11 03:59:20 compute-0 sudo[263644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 03:59:20 compute-0 sudo[263644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 03:59:20 compute-0 sudo[263644]: pam_unix(sudo:session): session closed for user root
Oct 11 03:59:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_03:59:20
Oct 11 03:59:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 03:59:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 03:59:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['vms', 'backups', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'images', 'default.rgw.log', 'volumes']
Oct 11 03:59:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 03:59:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:59:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:59:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:59:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:59:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:59:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:59:20 compute-0 ceph-mon[74273]: pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:20 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:59:20 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 03:59:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 03:59:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:59:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 03:59:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 03:59:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:59:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 03:59:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:59:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 03:59:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:59:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 03:59:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:22 compute-0 ceph-mon[74273]: pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:59:22.948 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 03:59:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:59:22.948 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 03:59:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 03:59:22.948 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 03:59:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:59:24 compute-0 ceph-mon[74273]: pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:26 compute-0 ceph-mon[74273]: pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:28 compute-0 ceph-mon[74273]: pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:29 compute-0 podman[263669]: 2025-10-11 03:59:29.411771515 +0000 UTC m=+0.118810331 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251009)
Oct 11 03:59:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:59:30 compute-0 ceph-mon[74273]: pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 03:59:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:59:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 03:59:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:59:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:59:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:59:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:59:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:59:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:59:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:59:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:59:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:59:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 03:59:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:59:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:59:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:59:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 03:59:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:59:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 03:59:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:59:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 03:59:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 03:59:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 03:59:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:32 compute-0 PackageKit[191263]: daemon quit
Oct 11 03:59:32 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 11 03:59:32 compute-0 ceph-mon[74273]: pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:34 compute-0 podman[263696]: 2025-10-11 03:59:34.379136473 +0000 UTC m=+0.077413225 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 11 03:59:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:59:34 compute-0 ceph-mon[74273]: pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:36 compute-0 ceph-mon[74273]: pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:38 compute-0 ceph-mon[74273]: pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:59:40 compute-0 ceph-mon[74273]: pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:42 compute-0 ceph-mon[74273]: pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:44 compute-0 ceph-mon[74273]: pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:59:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:46 compute-0 ceph-mon[74273]: pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:48 compute-0 ceph-mon[74273]: pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:48 compute-0 podman[263716]: 2025-10-11 03:59:48.379208182 +0000 UTC m=+0.086947096 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 11 03:59:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:59:50 compute-0 ceph-mon[74273]: pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:50 compute-0 podman[263736]: 2025-10-11 03:59:50.39654685 +0000 UTC m=+0.098542021 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 03:59:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 03:59:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1486504199' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 03:59:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 03:59:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1486504199' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 03:59:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:59:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:59:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:59:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:59:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 03:59:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 03:59:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1486504199' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 03:59:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1486504199' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 03:59:52 compute-0 ceph-mon[74273]: pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:54 compute-0 ceph-mon[74273]: pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 03:59:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:56 compute-0 ceph-mon[74273]: pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:58 compute-0 ceph-mon[74273]: pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 03:59:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:00:00 compute-0 podman[263756]: 2025-10-11 04:00:00.402530769 +0000 UTC m=+0.105888958 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=ovn_controller)
Oct 11 04:00:00 compute-0 ceph-mon[74273]: pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:02 compute-0 ceph-mon[74273]: pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:04 compute-0 ceph-mon[74273]: pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:00:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:05 compute-0 podman[263782]: 2025-10-11 04:00:05.380554962 +0000 UTC m=+0.084869767 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 11 04:00:06 compute-0 ceph-mon[74273]: pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:08 compute-0 ceph-mon[74273]: pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:00:10 compute-0 nova_compute[259850]: 2025-10-11 04:00:10.062 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:00:10 compute-0 nova_compute[259850]: 2025-10-11 04:00:10.062 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:00:10 compute-0 ceph-mon[74273]: pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:11 compute-0 nova_compute[259850]: 2025-10-11 04:00:11.055 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:00:11 compute-0 nova_compute[259850]: 2025-10-11 04:00:11.058 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:00:11 compute-0 nova_compute[259850]: 2025-10-11 04:00:11.059 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:00:11 compute-0 nova_compute[259850]: 2025-10-11 04:00:11.059 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:00:11 compute-0 nova_compute[259850]: 2025-10-11 04:00:11.082 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:00:11 compute-0 nova_compute[259850]: 2025-10-11 04:00:11.083 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:00:11 compute-0 nova_compute[259850]: 2025-10-11 04:00:11.084 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:00:11 compute-0 nova_compute[259850]: 2025-10-11 04:00:11.085 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:00:11 compute-0 nova_compute[259850]: 2025-10-11 04:00:11.085 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:00:11 compute-0 nova_compute[259850]: 2025-10-11 04:00:11.116 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:00:11 compute-0 nova_compute[259850]: 2025-10-11 04:00:11.117 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:00:11 compute-0 nova_compute[259850]: 2025-10-11 04:00:11.118 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:00:11 compute-0 nova_compute[259850]: 2025-10-11 04:00:11.118 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:00:11 compute-0 nova_compute[259850]: 2025-10-11 04:00:11.119 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:00:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:00:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1316456627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:00:11 compute-0 nova_compute[259850]: 2025-10-11 04:00:11.571 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:00:11 compute-0 nova_compute[259850]: 2025-10-11 04:00:11.814 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:00:11 compute-0 nova_compute[259850]: 2025-10-11 04:00:11.817 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5173MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:00:11 compute-0 nova_compute[259850]: 2025-10-11 04:00:11.817 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:00:11 compute-0 nova_compute[259850]: 2025-10-11 04:00:11.818 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:00:11 compute-0 nova_compute[259850]: 2025-10-11 04:00:11.919 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:00:11 compute-0 nova_compute[259850]: 2025-10-11 04:00:11.920 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:00:11 compute-0 nova_compute[259850]: 2025-10-11 04:00:11.948 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:00:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:00:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1836797677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:00:12 compute-0 nova_compute[259850]: 2025-10-11 04:00:12.442 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:00:12 compute-0 nova_compute[259850]: 2025-10-11 04:00:12.451 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:00:12 compute-0 nova_compute[259850]: 2025-10-11 04:00:12.476 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:00:12 compute-0 nova_compute[259850]: 2025-10-11 04:00:12.478 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:00:12 compute-0 nova_compute[259850]: 2025-10-11 04:00:12.478 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:00:12 compute-0 ceph-mon[74273]: pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1316456627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:00:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1836797677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:00:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:13 compute-0 nova_compute[259850]: 2025-10-11 04:00:13.475 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:00:13 compute-0 nova_compute[259850]: 2025-10-11 04:00:13.500 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:00:14 compute-0 nova_compute[259850]: 2025-10-11 04:00:14.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:00:14 compute-0 ceph-mon[74273]: pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:00:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:16 compute-0 ceph-mon[74273]: pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:18 compute-0 ceph-mon[74273]: pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:19 compute-0 podman[263846]: 2025-10-11 04:00:19.397193574 +0000 UTC m=+0.100312611 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 11 04:00:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:00:20 compute-0 sudo[263867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:00:20 compute-0 sudo[263867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:00:20 compute-0 sudo[263867]: pam_unix(sudo:session): session closed for user root
Oct 11 04:00:20 compute-0 sudo[263892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:00:20 compute-0 sudo[263892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:00:20 compute-0 sudo[263892]: pam_unix(sudo:session): session closed for user root
Oct 11 04:00:20 compute-0 sudo[263917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:00:20 compute-0 sudo[263917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:00:20 compute-0 sudo[263917]: pam_unix(sudo:session): session closed for user root
Oct 11 04:00:20 compute-0 sudo[263942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 04:00:20 compute-0 sudo[263942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:00:20 compute-0 ceph-mon[74273]: pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:20 compute-0 podman[263969]: 2025-10-11 04:00:20.640602686 +0000 UTC m=+0.057554989 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 04:00:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:00:20
Oct 11 04:00:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:00:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:00:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'backups', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'vms', 'volumes', 'cephfs.cephfs.data']
Oct 11 04:00:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:00:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:00:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:00:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:00:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:00:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:00:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:00:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:00:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:00:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:00:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:00:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:00:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:00:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:00:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:00:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:00:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:00:21 compute-0 podman[264058]: 2025-10-11 04:00:21.093119606 +0000 UTC m=+0.107017509 container exec 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 04:00:21 compute-0 podman[264058]: 2025-10-11 04:00:21.213615143 +0000 UTC m=+0.227513046 container exec_died 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 04:00:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:21 compute-0 sudo[263942]: pam_unix(sudo:session): session closed for user root
Oct 11 04:00:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:00:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:00:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:00:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:00:22 compute-0 sudo[264222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:00:22 compute-0 sudo[264222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:00:22 compute-0 sudo[264222]: pam_unix(sudo:session): session closed for user root
Oct 11 04:00:22 compute-0 sudo[264247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:00:22 compute-0 sudo[264247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:00:22 compute-0 sudo[264247]: pam_unix(sudo:session): session closed for user root
Oct 11 04:00:22 compute-0 sudo[264272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:00:22 compute-0 sudo[264272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:00:22 compute-0 sudo[264272]: pam_unix(sudo:session): session closed for user root
Oct 11 04:00:22 compute-0 sudo[264297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:00:22 compute-0 sudo[264297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:00:22 compute-0 sudo[264297]: pam_unix(sudo:session): session closed for user root
Oct 11 04:00:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:00:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:00:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:00:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:00:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:00:22 compute-0 ceph-mon[74273]: pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:00:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:00:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:00:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:00:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:00:22.948 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:00:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:00:22.950 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:00:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:00:22.950 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:00:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:00:22 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev e14c5639-5efe-4f5d-95c5-0593b99a51ab does not exist
Oct 11 04:00:22 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 30a39598-f816-4004-a06c-89cd8a107cc7 does not exist
Oct 11 04:00:22 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 89370590-b632-4be6-8aab-7b517c0c2f4b does not exist
Oct 11 04:00:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:00:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:00:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:00:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:00:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:00:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:00:23 compute-0 sudo[264351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:00:23 compute-0 sudo[264351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:00:23 compute-0 sudo[264351]: pam_unix(sudo:session): session closed for user root
Oct 11 04:00:23 compute-0 sudo[264376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:00:23 compute-0 sudo[264376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:00:23 compute-0 sudo[264376]: pam_unix(sudo:session): session closed for user root
Oct 11 04:00:23 compute-0 sudo[264401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:00:23 compute-0 sudo[264401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:00:23 compute-0 sudo[264401]: pam_unix(sudo:session): session closed for user root
Oct 11 04:00:23 compute-0 sudo[264426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:00:23 compute-0 sudo[264426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:00:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:23 compute-0 podman[264491]: 2025-10-11 04:00:23.661810583 +0000 UTC m=+0.061566982 container create 09d39811b202f17bceeacb75991889862eccea3a0348205378da0f033c0500b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_knuth, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:00:23 compute-0 systemd[1]: Started libpod-conmon-09d39811b202f17bceeacb75991889862eccea3a0348205378da0f033c0500b3.scope.
Oct 11 04:00:23 compute-0 podman[264491]: 2025-10-11 04:00:23.627602872 +0000 UTC m=+0.027359301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:00:23 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:00:23 compute-0 podman[264491]: 2025-10-11 04:00:23.759465097 +0000 UTC m=+0.159221526 container init 09d39811b202f17bceeacb75991889862eccea3a0348205378da0f033c0500b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_knuth, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:00:23 compute-0 podman[264491]: 2025-10-11 04:00:23.772276977 +0000 UTC m=+0.172033366 container start 09d39811b202f17bceeacb75991889862eccea3a0348205378da0f033c0500b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_knuth, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:00:23 compute-0 podman[264491]: 2025-10-11 04:00:23.776401963 +0000 UTC m=+0.176158392 container attach 09d39811b202f17bceeacb75991889862eccea3a0348205378da0f033c0500b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_knuth, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:00:23 compute-0 nifty_knuth[264507]: 167 167
Oct 11 04:00:23 compute-0 systemd[1]: libpod-09d39811b202f17bceeacb75991889862eccea3a0348205378da0f033c0500b3.scope: Deactivated successfully.
Oct 11 04:00:23 compute-0 podman[264491]: 2025-10-11 04:00:23.779951563 +0000 UTC m=+0.179707932 container died 09d39811b202f17bceeacb75991889862eccea3a0348205378da0f033c0500b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_knuth, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:00:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1ba9f0b7d98a4f138e775063bd2cd761d0cb3ed37197f23d61ebfc3fd47a203-merged.mount: Deactivated successfully.
Oct 11 04:00:23 compute-0 podman[264491]: 2025-10-11 04:00:23.830042201 +0000 UTC m=+0.229798570 container remove 09d39811b202f17bceeacb75991889862eccea3a0348205378da0f033c0500b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_knuth, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:00:23 compute-0 systemd[1]: libpod-conmon-09d39811b202f17bceeacb75991889862eccea3a0348205378da0f033c0500b3.scope: Deactivated successfully.
Oct 11 04:00:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:00:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:00:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:00:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:00:24 compute-0 podman[264530]: 2025-10-11 04:00:24.078087204 +0000 UTC m=+0.064484014 container create e7bdcfa9a914cb82ee5ef35b0e76d3aebafa54f12787d89dce27d91cb8395773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_thompson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:00:24 compute-0 systemd[1]: Started libpod-conmon-e7bdcfa9a914cb82ee5ef35b0e76d3aebafa54f12787d89dce27d91cb8395773.scope.
Oct 11 04:00:24 compute-0 podman[264530]: 2025-10-11 04:00:24.052425562 +0000 UTC m=+0.038822422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:00:24 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:00:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a480a7a4d834de03c60fe0ebcad69170bf3452dfb934e27c0ba79d4bea4910/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:00:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a480a7a4d834de03c60fe0ebcad69170bf3452dfb934e27c0ba79d4bea4910/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:00:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a480a7a4d834de03c60fe0ebcad69170bf3452dfb934e27c0ba79d4bea4910/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:00:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a480a7a4d834de03c60fe0ebcad69170bf3452dfb934e27c0ba79d4bea4910/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:00:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a480a7a4d834de03c60fe0ebcad69170bf3452dfb934e27c0ba79d4bea4910/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:00:24 compute-0 podman[264530]: 2025-10-11 04:00:24.202401838 +0000 UTC m=+0.188798698 container init e7bdcfa9a914cb82ee5ef35b0e76d3aebafa54f12787d89dce27d91cb8395773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_thompson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:00:24 compute-0 podman[264530]: 2025-10-11 04:00:24.218292675 +0000 UTC m=+0.204689495 container start e7bdcfa9a914cb82ee5ef35b0e76d3aebafa54f12787d89dce27d91cb8395773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:00:24 compute-0 podman[264530]: 2025-10-11 04:00:24.221620359 +0000 UTC m=+0.208017129 container attach e7bdcfa9a914cb82ee5ef35b0e76d3aebafa54f12787d89dce27d91cb8395773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:00:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:00:24 compute-0 ceph-mon[74273]: pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:25 compute-0 suspicious_thompson[264547]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:00:25 compute-0 suspicious_thompson[264547]: --> relative data size: 1.0
Oct 11 04:00:25 compute-0 suspicious_thompson[264547]: --> All data devices are unavailable
Oct 11 04:00:25 compute-0 systemd[1]: libpod-e7bdcfa9a914cb82ee5ef35b0e76d3aebafa54f12787d89dce27d91cb8395773.scope: Deactivated successfully.
Oct 11 04:00:25 compute-0 podman[264530]: 2025-10-11 04:00:25.354655559 +0000 UTC m=+1.341052399 container died e7bdcfa9a914cb82ee5ef35b0e76d3aebafa54f12787d89dce27d91cb8395773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_thompson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:00:25 compute-0 systemd[1]: libpod-e7bdcfa9a914cb82ee5ef35b0e76d3aebafa54f12787d89dce27d91cb8395773.scope: Consumed 1.084s CPU time.
Oct 11 04:00:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-82a480a7a4d834de03c60fe0ebcad69170bf3452dfb934e27c0ba79d4bea4910-merged.mount: Deactivated successfully.
Oct 11 04:00:25 compute-0 podman[264530]: 2025-10-11 04:00:25.434528164 +0000 UTC m=+1.420924974 container remove e7bdcfa9a914cb82ee5ef35b0e76d3aebafa54f12787d89dce27d91cb8395773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_thompson, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:00:25 compute-0 systemd[1]: libpod-conmon-e7bdcfa9a914cb82ee5ef35b0e76d3aebafa54f12787d89dce27d91cb8395773.scope: Deactivated successfully.
Oct 11 04:00:25 compute-0 sudo[264426]: pam_unix(sudo:session): session closed for user root
Oct 11 04:00:25 compute-0 sudo[264586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:00:25 compute-0 sudo[264586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:00:25 compute-0 sudo[264586]: pam_unix(sudo:session): session closed for user root
Oct 11 04:00:25 compute-0 sudo[264611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:00:25 compute-0 sudo[264611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:00:25 compute-0 sudo[264611]: pam_unix(sudo:session): session closed for user root
Oct 11 04:00:25 compute-0 sudo[264636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:00:25 compute-0 sudo[264636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:00:25 compute-0 sudo[264636]: pam_unix(sudo:session): session closed for user root
Oct 11 04:00:25 compute-0 sudo[264661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 04:00:25 compute-0 sudo[264661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:00:26 compute-0 podman[264724]: 2025-10-11 04:00:26.290335531 +0000 UTC m=+0.066901762 container create cc52cf2e4e8acbde5bed0b8c8bb3ac1d26d4e358c7bc9fe571d213aedc7a980c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:00:26 compute-0 systemd[1]: Started libpod-conmon-cc52cf2e4e8acbde5bed0b8c8bb3ac1d26d4e358c7bc9fe571d213aedc7a980c.scope.
Oct 11 04:00:26 compute-0 podman[264724]: 2025-10-11 04:00:26.263399764 +0000 UTC m=+0.039966045 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:00:26 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:00:26 compute-0 podman[264724]: 2025-10-11 04:00:26.405480888 +0000 UTC m=+0.182047149 container init cc52cf2e4e8acbde5bed0b8c8bb3ac1d26d4e358c7bc9fe571d213aedc7a980c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_beaver, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:00:26 compute-0 podman[264724]: 2025-10-11 04:00:26.415812988 +0000 UTC m=+0.192379239 container start cc52cf2e4e8acbde5bed0b8c8bb3ac1d26d4e358c7bc9fe571d213aedc7a980c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 04:00:26 compute-0 podman[264724]: 2025-10-11 04:00:26.41977529 +0000 UTC m=+0.196341521 container attach cc52cf2e4e8acbde5bed0b8c8bb3ac1d26d4e358c7bc9fe571d213aedc7a980c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_beaver, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:00:26 compute-0 elegant_beaver[264741]: 167 167
Oct 11 04:00:26 compute-0 systemd[1]: libpod-cc52cf2e4e8acbde5bed0b8c8bb3ac1d26d4e358c7bc9fe571d213aedc7a980c.scope: Deactivated successfully.
Oct 11 04:00:26 compute-0 podman[264724]: 2025-10-11 04:00:26.423478354 +0000 UTC m=+0.200044555 container died cc52cf2e4e8acbde5bed0b8c8bb3ac1d26d4e358c7bc9fe571d213aedc7a980c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:00:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-319c7c72bd8c6755773242968f8546185015ad26f1cfe342c18c6419cc249fd1-merged.mount: Deactivated successfully.
Oct 11 04:00:26 compute-0 podman[264724]: 2025-10-11 04:00:26.457961833 +0000 UTC m=+0.234528044 container remove cc52cf2e4e8acbde5bed0b8c8bb3ac1d26d4e358c7bc9fe571d213aedc7a980c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:00:26 compute-0 systemd[1]: libpod-conmon-cc52cf2e4e8acbde5bed0b8c8bb3ac1d26d4e358c7bc9fe571d213aedc7a980c.scope: Deactivated successfully.
Oct 11 04:00:26 compute-0 podman[264765]: 2025-10-11 04:00:26.673832581 +0000 UTC m=+0.065403339 container create 37857ff4c94f42d73c9cad7e3ef6fc86840c3f742661e7661aab59030c4cf31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_maxwell, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 04:00:26 compute-0 systemd[1]: Started libpod-conmon-37857ff4c94f42d73c9cad7e3ef6fc86840c3f742661e7661aab59030c4cf31c.scope.
Oct 11 04:00:26 compute-0 podman[264765]: 2025-10-11 04:00:26.64746952 +0000 UTC m=+0.039040328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:00:26 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:00:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fed9426cc6f2d860e47d4d85368453b3a02031a377257b3a63267bf38ea1c63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:00:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fed9426cc6f2d860e47d4d85368453b3a02031a377257b3a63267bf38ea1c63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:00:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fed9426cc6f2d860e47d4d85368453b3a02031a377257b3a63267bf38ea1c63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:00:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fed9426cc6f2d860e47d4d85368453b3a02031a377257b3a63267bf38ea1c63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:00:26 compute-0 podman[264765]: 2025-10-11 04:00:26.774125191 +0000 UTC m=+0.165695979 container init 37857ff4c94f42d73c9cad7e3ef6fc86840c3f742661e7661aab59030c4cf31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Oct 11 04:00:26 compute-0 podman[264765]: 2025-10-11 04:00:26.784375609 +0000 UTC m=+0.175946367 container start 37857ff4c94f42d73c9cad7e3ef6fc86840c3f742661e7661aab59030c4cf31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 04:00:26 compute-0 podman[264765]: 2025-10-11 04:00:26.788384882 +0000 UTC m=+0.179955670 container attach 37857ff4c94f42d73c9cad7e3ef6fc86840c3f742661e7661aab59030c4cf31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_maxwell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 04:00:26 compute-0 ceph-mon[74273]: pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]: {
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:     "0": [
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:         {
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "devices": [
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "/dev/loop3"
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             ],
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "lv_name": "ceph_lv0",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "lv_size": "21470642176",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "name": "ceph_lv0",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "tags": {
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.cluster_name": "ceph",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.crush_device_class": "",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.encrypted": "0",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.osd_id": "0",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.type": "block",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.vdo": "0"
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             },
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "type": "block",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "vg_name": "ceph_vg0"
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:         }
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:     ],
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:     "1": [
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:         {
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "devices": [
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "/dev/loop4"
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             ],
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "lv_name": "ceph_lv1",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "lv_size": "21470642176",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "name": "ceph_lv1",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "tags": {
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.cluster_name": "ceph",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.crush_device_class": "",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.encrypted": "0",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.osd_id": "1",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.type": "block",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.vdo": "0"
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             },
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "type": "block",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "vg_name": "ceph_vg1"
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:         }
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:     ],
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:     "2": [
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:         {
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "devices": [
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "/dev/loop5"
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             ],
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "lv_name": "ceph_lv2",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "lv_size": "21470642176",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "name": "ceph_lv2",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "tags": {
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.cluster_name": "ceph",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.crush_device_class": "",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.encrypted": "0",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.osd_id": "2",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.type": "block",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:                 "ceph.vdo": "0"
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             },
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "type": "block",
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:             "vg_name": "ceph_vg2"
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:         }
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]:     ]
Oct 11 04:00:27 compute-0 fervent_maxwell[264782]: }
Oct 11 04:00:27 compute-0 systemd[1]: libpod-37857ff4c94f42d73c9cad7e3ef6fc86840c3f742661e7661aab59030c4cf31c.scope: Deactivated successfully.
Oct 11 04:00:27 compute-0 podman[264765]: 2025-10-11 04:00:27.646118422 +0000 UTC m=+1.037689150 container died 37857ff4c94f42d73c9cad7e3ef6fc86840c3f742661e7661aab59030c4cf31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:00:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fed9426cc6f2d860e47d4d85368453b3a02031a377257b3a63267bf38ea1c63-merged.mount: Deactivated successfully.
Oct 11 04:00:27 compute-0 podman[264765]: 2025-10-11 04:00:27.714042011 +0000 UTC m=+1.105612739 container remove 37857ff4c94f42d73c9cad7e3ef6fc86840c3f742661e7661aab59030c4cf31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_maxwell, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:00:27 compute-0 systemd[1]: libpod-conmon-37857ff4c94f42d73c9cad7e3ef6fc86840c3f742661e7661aab59030c4cf31c.scope: Deactivated successfully.
Oct 11 04:00:27 compute-0 sudo[264661]: pam_unix(sudo:session): session closed for user root
Oct 11 04:00:27 compute-0 sudo[264803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:00:27 compute-0 sudo[264803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:00:27 compute-0 sudo[264803]: pam_unix(sudo:session): session closed for user root
Oct 11 04:00:27 compute-0 sudo[264828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:00:27 compute-0 sudo[264828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:00:27 compute-0 sudo[264828]: pam_unix(sudo:session): session closed for user root
Oct 11 04:00:28 compute-0 sudo[264853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:00:28 compute-0 sudo[264853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:00:28 compute-0 sudo[264853]: pam_unix(sudo:session): session closed for user root
Oct 11 04:00:28 compute-0 sudo[264878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 04:00:28 compute-0 sudo[264878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:00:28 compute-0 podman[264945]: 2025-10-11 04:00:28.548829617 +0000 UTC m=+0.057553868 container create 1300b60929504f58c95c7590dd541a4376fac776dc533d1848582ed990fd42d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gates, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:00:28 compute-0 systemd[1]: Started libpod-conmon-1300b60929504f58c95c7590dd541a4376fac776dc533d1848582ed990fd42d2.scope.
Oct 11 04:00:28 compute-0 podman[264945]: 2025-10-11 04:00:28.519263606 +0000 UTC m=+0.027987917 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:00:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:00:28 compute-0 podman[264945]: 2025-10-11 04:00:28.635809593 +0000 UTC m=+0.144533824 container init 1300b60929504f58c95c7590dd541a4376fac776dc533d1848582ed990fd42d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gates, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:00:28 compute-0 podman[264945]: 2025-10-11 04:00:28.646749 +0000 UTC m=+0.155473221 container start 1300b60929504f58c95c7590dd541a4376fac776dc533d1848582ed990fd42d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gates, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 04:00:28 compute-0 podman[264945]: 2025-10-11 04:00:28.650715562 +0000 UTC m=+0.159439873 container attach 1300b60929504f58c95c7590dd541a4376fac776dc533d1848582ed990fd42d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gates, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:00:28 compute-0 xenodochial_gates[264961]: 167 167
Oct 11 04:00:28 compute-0 systemd[1]: libpod-1300b60929504f58c95c7590dd541a4376fac776dc533d1848582ed990fd42d2.scope: Deactivated successfully.
Oct 11 04:00:28 compute-0 podman[264945]: 2025-10-11 04:00:28.654689273 +0000 UTC m=+0.163413494 container died 1300b60929504f58c95c7590dd541a4376fac776dc533d1848582ed990fd42d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gates, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct 11 04:00:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6151a2d1148e014fbc948ea1e11c712ca0459409845786e541209c3969e7d30-merged.mount: Deactivated successfully.
Oct 11 04:00:28 compute-0 podman[264945]: 2025-10-11 04:00:28.692811615 +0000 UTC m=+0.201535866 container remove 1300b60929504f58c95c7590dd541a4376fac776dc533d1848582ed990fd42d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gates, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 04:00:28 compute-0 systemd[1]: libpod-conmon-1300b60929504f58c95c7590dd541a4376fac776dc533d1848582ed990fd42d2.scope: Deactivated successfully.
Oct 11 04:00:28 compute-0 podman[264984]: 2025-10-11 04:00:28.859403688 +0000 UTC m=+0.039150132 container create 4669ddd92e713e2b2644f47dfa25c4e35b2b9bbba27698925f1b4b023ff130cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_dhawan, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:00:28 compute-0 systemd[1]: Started libpod-conmon-4669ddd92e713e2b2644f47dfa25c4e35b2b9bbba27698925f1b4b023ff130cf.scope.
Oct 11 04:00:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:00:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fb308b741573d9a3aa468b47cb3437faab630b2f59d585664f753efe3a52c8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:00:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fb308b741573d9a3aa468b47cb3437faab630b2f59d585664f753efe3a52c8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:00:28 compute-0 podman[264984]: 2025-10-11 04:00:28.843526192 +0000 UTC m=+0.023272696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:00:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fb308b741573d9a3aa468b47cb3437faab630b2f59d585664f753efe3a52c8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:00:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fb308b741573d9a3aa468b47cb3437faab630b2f59d585664f753efe3a52c8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:00:28 compute-0 podman[264984]: 2025-10-11 04:00:28.950500649 +0000 UTC m=+0.130247143 container init 4669ddd92e713e2b2644f47dfa25c4e35b2b9bbba27698925f1b4b023ff130cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_dhawan, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 04:00:28 compute-0 podman[264984]: 2025-10-11 04:00:28.966303183 +0000 UTC m=+0.146049587 container start 4669ddd92e713e2b2644f47dfa25c4e35b2b9bbba27698925f1b4b023ff130cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_dhawan, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 04:00:28 compute-0 podman[264984]: 2025-10-11 04:00:28.969204614 +0000 UTC m=+0.148951028 container attach 4669ddd92e713e2b2644f47dfa25c4e35b2b9bbba27698925f1b4b023ff130cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_dhawan, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:00:28 compute-0 ceph-mon[74273]: pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:00:30 compute-0 stoic_dhawan[265000]: {
Oct 11 04:00:30 compute-0 stoic_dhawan[265000]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 04:00:30 compute-0 stoic_dhawan[265000]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:00:30 compute-0 stoic_dhawan[265000]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:00:30 compute-0 stoic_dhawan[265000]:         "osd_id": 1,
Oct 11 04:00:30 compute-0 stoic_dhawan[265000]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:00:30 compute-0 stoic_dhawan[265000]:         "type": "bluestore"
Oct 11 04:00:30 compute-0 stoic_dhawan[265000]:     },
Oct 11 04:00:30 compute-0 stoic_dhawan[265000]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 04:00:30 compute-0 stoic_dhawan[265000]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:00:30 compute-0 stoic_dhawan[265000]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:00:30 compute-0 stoic_dhawan[265000]:         "osd_id": 2,
Oct 11 04:00:30 compute-0 stoic_dhawan[265000]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:00:30 compute-0 stoic_dhawan[265000]:         "type": "bluestore"
Oct 11 04:00:30 compute-0 stoic_dhawan[265000]:     },
Oct 11 04:00:30 compute-0 stoic_dhawan[265000]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 04:00:30 compute-0 stoic_dhawan[265000]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:00:30 compute-0 stoic_dhawan[265000]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:00:30 compute-0 stoic_dhawan[265000]:         "osd_id": 0,
Oct 11 04:00:30 compute-0 stoic_dhawan[265000]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:00:30 compute-0 stoic_dhawan[265000]:         "type": "bluestore"
Oct 11 04:00:30 compute-0 stoic_dhawan[265000]:     }
Oct 11 04:00:30 compute-0 stoic_dhawan[265000]: }
Oct 11 04:00:30 compute-0 ceph-mon[74273]: pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:30 compute-0 systemd[1]: libpod-4669ddd92e713e2b2644f47dfa25c4e35b2b9bbba27698925f1b4b023ff130cf.scope: Deactivated successfully.
Oct 11 04:00:30 compute-0 systemd[1]: libpod-4669ddd92e713e2b2644f47dfa25c4e35b2b9bbba27698925f1b4b023ff130cf.scope: Consumed 1.136s CPU time.
Oct 11 04:00:30 compute-0 podman[264984]: 2025-10-11 04:00:30.095568237 +0000 UTC m=+1.275314691 container died 4669ddd92e713e2b2644f47dfa25c4e35b2b9bbba27698925f1b4b023ff130cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:00:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fb308b741573d9a3aa468b47cb3437faab630b2f59d585664f753efe3a52c8a-merged.mount: Deactivated successfully.
Oct 11 04:00:30 compute-0 podman[264984]: 2025-10-11 04:00:30.171245644 +0000 UTC m=+1.350992088 container remove 4669ddd92e713e2b2644f47dfa25c4e35b2b9bbba27698925f1b4b023ff130cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:00:30 compute-0 systemd[1]: libpod-conmon-4669ddd92e713e2b2644f47dfa25c4e35b2b9bbba27698925f1b4b023ff130cf.scope: Deactivated successfully.
Oct 11 04:00:30 compute-0 sudo[264878]: pam_unix(sudo:session): session closed for user root
Oct 11 04:00:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:00:30 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:00:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:00:30 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:00:30 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev f9ec5a4d-4b4a-4827-aeaa-43d5f7b2efd4 does not exist
Oct 11 04:00:30 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 7c6b5cc2-3624-4a9f-8948-75ca3fa15675 does not exist
Oct 11 04:00:30 compute-0 sudo[265047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:00:30 compute-0 sudo[265047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:00:30 compute-0 sudo[265047]: pam_unix(sudo:session): session closed for user root
Oct 11 04:00:30 compute-0 sudo[265072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:00:30 compute-0 sudo[265072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:00:30 compute-0 sudo[265072]: pam_unix(sudo:session): session closed for user root
Oct 11 04:00:30 compute-0 podman[265096]: 2025-10-11 04:00:30.624271149 +0000 UTC m=+0.146743046 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 11 04:00:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:00:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:00:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:00:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:00:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:00:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:00:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:00:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:00:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:00:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:00:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:00:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:00:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:00:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:00:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:00:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:00:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:00:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:00:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:00:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:00:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:00:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:00:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:00:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:00:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:00:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:32 compute-0 ceph-mon[74273]: pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:34 compute-0 ceph-mon[74273]: pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:00:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:36 compute-0 ceph-mon[74273]: pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:36 compute-0 podman[265123]: 2025-10-11 04:00:36.400619633 +0000 UTC m=+0.094061715 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 11 04:00:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:38 compute-0 ceph-mon[74273]: pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:00:40 compute-0 ceph-mon[74273]: pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:42 compute-0 ceph-mon[74273]: pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:44 compute-0 ceph-mon[74273]: pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:00:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:46 compute-0 ceph-mon[74273]: pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:48 compute-0 ceph-mon[74273]: pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:00:50 compute-0 podman[265144]: 2025-10-11 04:00:50.395499902 +0000 UTC m=+0.094959910 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 04:00:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:00:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2296793959' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:00:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:00:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2296793959' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:00:50 compute-0 ceph-mon[74273]: pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2296793959' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:00:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2296793959' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:00:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:00:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:00:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:00:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:00:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:00:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:00:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:51 compute-0 podman[265164]: 2025-10-11 04:00:51.381247572 +0000 UTC m=+0.079475865 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 04:00:52 compute-0 ceph-mon[74273]: pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:54 compute-0 ceph-mon[74273]: pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:00:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:56 compute-0 ceph-mon[74273]: pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:58 compute-0 ceph-mon[74273]: pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:00:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:01:00 compute-0 ceph-mon[74273]: pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:01 compute-0 CROND[265186]: (root) CMD (run-parts /etc/cron.hourly)
Oct 11 04:01:01 compute-0 run-parts[265189]: (/etc/cron.hourly) starting 0anacron
Oct 11 04:01:01 compute-0 anacron[265197]: Anacron started on 2025-10-11
Oct 11 04:01:01 compute-0 anacron[265197]: Job `cron.monthly' locked by another anacron - skipping
Oct 11 04:01:01 compute-0 anacron[265197]: Will run job `cron.daily' in 25 min.
Oct 11 04:01:01 compute-0 anacron[265197]: Jobs will be executed sequentially
Oct 11 04:01:01 compute-0 run-parts[265199]: (/etc/cron.hourly) finished 0anacron
Oct 11 04:01:01 compute-0 CROND[265185]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 11 04:01:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:01 compute-0 podman[265200]: 2025-10-11 04:01:01.390281506 +0000 UTC m=+0.096165824 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Oct 11 04:01:02 compute-0 ceph-mon[74273]: pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:04 compute-0 ceph-mon[74273]: pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:01:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:06 compute-0 ceph-mon[74273]: pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:07 compute-0 podman[265226]: 2025-10-11 04:01:07.362356422 +0000 UTC m=+0.062484098 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 11 04:01:08 compute-0 ceph-mon[74273]: pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:09 compute-0 nova_compute[259850]: 2025-10-11 04:01:09.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:01:09 compute-0 nova_compute[259850]: 2025-10-11 04:01:09.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 11 04:01:09 compute-0 nova_compute[259850]: 2025-10-11 04:01:09.099 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 11 04:01:09 compute-0 nova_compute[259850]: 2025-10-11 04:01:09.101 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:01:09 compute-0 nova_compute[259850]: 2025-10-11 04:01:09.101 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 11 04:01:09 compute-0 nova_compute[259850]: 2025-10-11 04:01:09.137 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:01:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:01:10 compute-0 ceph-mon[74273]: pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:11 compute-0 nova_compute[259850]: 2025-10-11 04:01:11.168 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:01:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:12 compute-0 nova_compute[259850]: 2025-10-11 04:01:12.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:01:12 compute-0 nova_compute[259850]: 2025-10-11 04:01:12.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:01:12 compute-0 nova_compute[259850]: 2025-10-11 04:01:12.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:01:12 compute-0 ceph-mon[74273]: pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:13 compute-0 nova_compute[259850]: 2025-10-11 04:01:13.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:01:13 compute-0 nova_compute[259850]: 2025-10-11 04:01:13.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:01:13 compute-0 nova_compute[259850]: 2025-10-11 04:01:13.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:01:13 compute-0 nova_compute[259850]: 2025-10-11 04:01:13.084 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:01:13 compute-0 nova_compute[259850]: 2025-10-11 04:01:13.084 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:01:13 compute-0 nova_compute[259850]: 2025-10-11 04:01:13.085 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:01:13 compute-0 nova_compute[259850]: 2025-10-11 04:01:13.086 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:01:13 compute-0 nova_compute[259850]: 2025-10-11 04:01:13.121 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:01:13 compute-0 nova_compute[259850]: 2025-10-11 04:01:13.121 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:01:13 compute-0 nova_compute[259850]: 2025-10-11 04:01:13.121 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:01:13 compute-0 nova_compute[259850]: 2025-10-11 04:01:13.122 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:01:13 compute-0 nova_compute[259850]: 2025-10-11 04:01:13.122 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:01:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:01:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/848289882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:01:13 compute-0 nova_compute[259850]: 2025-10-11 04:01:13.613 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:01:13 compute-0 nova_compute[259850]: 2025-10-11 04:01:13.855 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:01:13 compute-0 nova_compute[259850]: 2025-10-11 04:01:13.856 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5168MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:01:13 compute-0 nova_compute[259850]: 2025-10-11 04:01:13.857 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:01:13 compute-0 nova_compute[259850]: 2025-10-11 04:01:13.857 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:01:14 compute-0 nova_compute[259850]: 2025-10-11 04:01:14.147 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:01:14 compute-0 nova_compute[259850]: 2025-10-11 04:01:14.148 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:01:14 compute-0 nova_compute[259850]: 2025-10-11 04:01:14.261 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Refreshing inventories for resource provider 108a560b-89c0-4926-a2fc-cb749a6f8386 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 04:01:14 compute-0 nova_compute[259850]: 2025-10-11 04:01:14.404 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Updating ProviderTree inventory for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 04:01:14 compute-0 nova_compute[259850]: 2025-10-11 04:01:14.404 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Updating inventory in ProviderTree for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 04:01:14 compute-0 nova_compute[259850]: 2025-10-11 04:01:14.425 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Refreshing aggregate associations for resource provider 108a560b-89c0-4926-a2fc-cb749a6f8386, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 04:01:14 compute-0 nova_compute[259850]: 2025-10-11 04:01:14.454 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Refreshing trait associations for resource provider 108a560b-89c0-4926-a2fc-cb749a6f8386, traits: COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE41,COMPUTE_NODE,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_BMI2,HW_CPU_X86_MMX,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_ABM,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 04:01:14 compute-0 nova_compute[259850]: 2025-10-11 04:01:14.474 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:01:14 compute-0 ceph-mon[74273]: pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:14 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/848289882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:01:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:01:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:01:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2866474268' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:01:14 compute-0 nova_compute[259850]: 2025-10-11 04:01:14.977 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:01:14 compute-0 nova_compute[259850]: 2025-10-11 04:01:14.982 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:01:15 compute-0 nova_compute[259850]: 2025-10-11 04:01:15.004 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:01:15 compute-0 nova_compute[259850]: 2025-10-11 04:01:15.005 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:01:15 compute-0 nova_compute[259850]: 2025-10-11 04:01:15.006 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:01:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:15 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2866474268' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:01:16 compute-0 ceph-mon[74273]: pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:16 compute-0 nova_compute[259850]: 2025-10-11 04:01:16.982 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:01:16 compute-0 nova_compute[259850]: 2025-10-11 04:01:16.982 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:01:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:18 compute-0 ceph-mon[74273]: pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:01:20 compute-0 ceph-mon[74273]: pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:01:20
Oct 11 04:01:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:01:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:01:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['volumes', 'images', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', '.mgr', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data']
Oct 11 04:01:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:01:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:01:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:01:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:01:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:01:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:01:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:01:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:01:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:01:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:01:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:01:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:01:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:01:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:01:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:01:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:01:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:01:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:21 compute-0 podman[265290]: 2025-10-11 04:01:21.376435647 +0000 UTC m=+0.077781047 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=multipathd)
Oct 11 04:01:22 compute-0 podman[265310]: 2025-10-11 04:01:22.367551318 +0000 UTC m=+0.077368196 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:01:22 compute-0 ceph-mon[74273]: pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:01:22.950 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:01:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:01:22.950 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:01:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:01:22.951 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:01:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:01:24 compute-0 ceph-mon[74273]: pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:26 compute-0 ceph-mon[74273]: pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:28 compute-0 ceph-mon[74273]: pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:01:30 compute-0 sudo[265331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:01:30 compute-0 sudo[265331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:01:30 compute-0 sudo[265331]: pam_unix(sudo:session): session closed for user root
Oct 11 04:01:30 compute-0 sudo[265356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:01:30 compute-0 sudo[265356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:01:30 compute-0 sudo[265356]: pam_unix(sudo:session): session closed for user root
Oct 11 04:01:30 compute-0 ceph-mon[74273]: pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:30 compute-0 sudo[265381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:01:30 compute-0 sudo[265381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:01:30 compute-0 sudo[265381]: pam_unix(sudo:session): session closed for user root
Oct 11 04:01:30 compute-0 sudo[265406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:01:30 compute-0 sudo[265406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:01:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:01:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:01:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:01:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:01:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:01:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:01:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:01:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:01:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:01:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:01:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:01:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:01:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:01:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:01:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:01:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:01:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:01:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:01:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:01:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:01:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:01:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:01:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:01:31 compute-0 sudo[265406]: pam_unix(sudo:session): session closed for user root
Oct 11 04:01:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:01:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:01:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:01:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:01:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:01:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:01:31 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 9d92a289-db25-45bd-8fc4-20465ecbc960 does not exist
Oct 11 04:01:31 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 6211b232-5941-4672-aa20-cf382b3d9448 does not exist
Oct 11 04:01:31 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 5a9a115b-795e-47a8-b7d2-eea07517dda4 does not exist
Oct 11 04:01:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:01:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:01:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:01:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:01:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:01:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:01:31 compute-0 sudo[265463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:01:31 compute-0 sudo[265463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:01:31 compute-0 sudo[265463]: pam_unix(sudo:session): session closed for user root
Oct 11 04:01:31 compute-0 sudo[265492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:01:31 compute-0 sudo[265492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:01:31 compute-0 sudo[265492]: pam_unix(sudo:session): session closed for user root
Oct 11 04:01:31 compute-0 podman[265487]: 2025-10-11 04:01:31.646494581 +0000 UTC m=+0.130217982 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:01:31 compute-0 sudo[265532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:01:31 compute-0 sudo[265532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:01:31 compute-0 sudo[265532]: pam_unix(sudo:session): session closed for user root
Oct 11 04:01:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:01:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:01:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:01:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:01:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:01:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:01:31 compute-0 sudo[265564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:01:31 compute-0 sudo[265564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:01:32 compute-0 podman[265627]: 2025-10-11 04:01:32.222060539 +0000 UTC m=+0.074181946 container create 14e3a9e31d0739deb392b3477161054576a3972b0c07925310e6ea2a75ed898d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:01:32 compute-0 systemd[1]: Started libpod-conmon-14e3a9e31d0739deb392b3477161054576a3972b0c07925310e6ea2a75ed898d.scope.
Oct 11 04:01:32 compute-0 podman[265627]: 2025-10-11 04:01:32.192015625 +0000 UTC m=+0.044137042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:01:32 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:01:32 compute-0 podman[265627]: 2025-10-11 04:01:32.332606227 +0000 UTC m=+0.184727684 container init 14e3a9e31d0739deb392b3477161054576a3972b0c07925310e6ea2a75ed898d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hawking, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:01:32 compute-0 podman[265627]: 2025-10-11 04:01:32.345584281 +0000 UTC m=+0.197705688 container start 14e3a9e31d0739deb392b3477161054576a3972b0c07925310e6ea2a75ed898d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 04:01:32 compute-0 podman[265627]: 2025-10-11 04:01:32.350126659 +0000 UTC m=+0.202248066 container attach 14e3a9e31d0739deb392b3477161054576a3972b0c07925310e6ea2a75ed898d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:01:32 compute-0 heuristic_hawking[265643]: 167 167
Oct 11 04:01:32 compute-0 systemd[1]: libpod-14e3a9e31d0739deb392b3477161054576a3972b0c07925310e6ea2a75ed898d.scope: Deactivated successfully.
Oct 11 04:01:32 compute-0 podman[265627]: 2025-10-11 04:01:32.355268044 +0000 UTC m=+0.207389451 container died 14e3a9e31d0739deb392b3477161054576a3972b0c07925310e6ea2a75ed898d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:01:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-523a150f1b068525f63cffe3f8b6191190dc74885e0a965ddf6ceaf281654513-merged.mount: Deactivated successfully.
Oct 11 04:01:32 compute-0 podman[265627]: 2025-10-11 04:01:32.407253485 +0000 UTC m=+0.259374892 container remove 14e3a9e31d0739deb392b3477161054576a3972b0c07925310e6ea2a75ed898d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:01:32 compute-0 systemd[1]: libpod-conmon-14e3a9e31d0739deb392b3477161054576a3972b0c07925310e6ea2a75ed898d.scope: Deactivated successfully.
Oct 11 04:01:32 compute-0 podman[265667]: 2025-10-11 04:01:32.618052211 +0000 UTC m=+0.064411082 container create f160e47e48c6b754fdb84115418deebfe4c1f7c8e2fa30fc57b16761bdfe62ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_engelbart, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 04:01:32 compute-0 systemd[1]: Started libpod-conmon-f160e47e48c6b754fdb84115418deebfe4c1f7c8e2fa30fc57b16761bdfe62ee.scope.
Oct 11 04:01:32 compute-0 podman[265667]: 2025-10-11 04:01:32.590011992 +0000 UTC m=+0.036370913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:01:32 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:01:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8396f75a30b3693742c83e4b80ee4d03c924d64bf14548d387d989f1a2fe6443/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:01:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8396f75a30b3693742c83e4b80ee4d03c924d64bf14548d387d989f1a2fe6443/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:01:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8396f75a30b3693742c83e4b80ee4d03c924d64bf14548d387d989f1a2fe6443/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:01:32 compute-0 ceph-mon[74273]: pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8396f75a30b3693742c83e4b80ee4d03c924d64bf14548d387d989f1a2fe6443/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:01:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8396f75a30b3693742c83e4b80ee4d03c924d64bf14548d387d989f1a2fe6443/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:01:32 compute-0 podman[265667]: 2025-10-11 04:01:32.740312657 +0000 UTC m=+0.186671548 container init f160e47e48c6b754fdb84115418deebfe4c1f7c8e2fa30fc57b16761bdfe62ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:01:32 compute-0 podman[265667]: 2025-10-11 04:01:32.759117076 +0000 UTC m=+0.205475927 container start f160e47e48c6b754fdb84115418deebfe4c1f7c8e2fa30fc57b16761bdfe62ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_engelbart, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:01:32 compute-0 podman[265667]: 2025-10-11 04:01:32.763064907 +0000 UTC m=+0.209423798 container attach f160e47e48c6b754fdb84115418deebfe4c1f7c8e2fa30fc57b16761bdfe62ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_engelbart, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct 11 04:01:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:33.728326) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155293728398, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2053, "num_deletes": 251, "total_data_size": 3453816, "memory_usage": 3510128, "flush_reason": "Manual Compaction"}
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155293749901, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3378188, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16284, "largest_seqno": 18336, "table_properties": {"data_size": 3368892, "index_size": 5854, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18466, "raw_average_key_size": 19, "raw_value_size": 3350391, "raw_average_value_size": 3598, "num_data_blocks": 265, "num_entries": 931, "num_filter_entries": 931, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760155066, "oldest_key_time": 1760155066, "file_creation_time": 1760155293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 21648 microseconds, and 12671 cpu microseconds.
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:33.749974) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3378188 bytes OK
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:33.750003) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:33.751930) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:33.751956) EVENT_LOG_v1 {"time_micros": 1760155293751947, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:33.751982) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3445218, prev total WAL file size 3445218, number of live WAL files 2.
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:33.753775) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3299KB)], [38(7466KB)]
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155293753838, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11024386, "oldest_snapshot_seqno": -1}
Oct 11 04:01:33 compute-0 laughing_engelbart[265685]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:01:33 compute-0 laughing_engelbart[265685]: --> relative data size: 1.0
Oct 11 04:01:33 compute-0 laughing_engelbart[265685]: --> All data devices are unavailable
Oct 11 04:01:33 compute-0 systemd[1]: libpod-f160e47e48c6b754fdb84115418deebfe4c1f7c8e2fa30fc57b16761bdfe62ee.scope: Deactivated successfully.
Oct 11 04:01:33 compute-0 podman[265667]: 2025-10-11 04:01:33.79838089 +0000 UTC m=+1.244739741 container died f160e47e48c6b754fdb84115418deebfe4c1f7c8e2fa30fc57b16761bdfe62ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_engelbart, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4390 keys, 9249262 bytes, temperature: kUnknown
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155293805579, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9249262, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9216372, "index_size": 20812, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11013, "raw_key_size": 106064, "raw_average_key_size": 24, "raw_value_size": 9133582, "raw_average_value_size": 2080, "num_data_blocks": 884, "num_entries": 4390, "num_filter_entries": 4390, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153731, "oldest_key_time": 0, "file_creation_time": 1760155293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:33.806094) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9249262 bytes
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:33.807548) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 211.9 rd, 177.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.3 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 4904, records dropped: 514 output_compression: NoCompression
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:33.807568) EVENT_LOG_v1 {"time_micros": 1760155293807557, "job": 18, "event": "compaction_finished", "compaction_time_micros": 52026, "compaction_time_cpu_micros": 30282, "output_level": 6, "num_output_files": 1, "total_output_size": 9249262, "num_input_records": 4904, "num_output_records": 4390, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155293808258, "job": 18, "event": "table_file_deletion", "file_number": 40}
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155293809642, "job": 18, "event": "table_file_deletion", "file_number": 38}
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:33.753615) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:33.809735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:33.809742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:33.809743) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:33.809745) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:01:33 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:33.809746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:01:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-8396f75a30b3693742c83e4b80ee4d03c924d64bf14548d387d989f1a2fe6443-merged.mount: Deactivated successfully.
Oct 11 04:01:33 compute-0 podman[265667]: 2025-10-11 04:01:33.881867607 +0000 UTC m=+1.328226488 container remove f160e47e48c6b754fdb84115418deebfe4c1f7c8e2fa30fc57b16761bdfe62ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:01:33 compute-0 systemd[1]: libpod-conmon-f160e47e48c6b754fdb84115418deebfe4c1f7c8e2fa30fc57b16761bdfe62ee.scope: Deactivated successfully.
Oct 11 04:01:33 compute-0 sudo[265564]: pam_unix(sudo:session): session closed for user root
Oct 11 04:01:34 compute-0 sudo[265728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:01:34 compute-0 sudo[265728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:01:34 compute-0 sudo[265728]: pam_unix(sudo:session): session closed for user root
Oct 11 04:01:34 compute-0 sudo[265753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:01:34 compute-0 sudo[265753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:01:34 compute-0 sudo[265753]: pam_unix(sudo:session): session closed for user root
Oct 11 04:01:34 compute-0 sudo[265778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:01:34 compute-0 sudo[265778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:01:34 compute-0 sudo[265778]: pam_unix(sudo:session): session closed for user root
Oct 11 04:01:34 compute-0 sudo[265803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 04:01:34 compute-0 sudo[265803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:01:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:01:34 compute-0 podman[265868]: 2025-10-11 04:01:34.709627876 +0000 UTC m=+0.061388597 container create b6a621b01ae42f20f0e7a6b4cad888b39342e72c80c7fbf4d5e5088ffe6e6367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nightingale, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:01:34 compute-0 ceph-mon[74273]: pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:34 compute-0 systemd[1]: Started libpod-conmon-b6a621b01ae42f20f0e7a6b4cad888b39342e72c80c7fbf4d5e5088ffe6e6367.scope.
Oct 11 04:01:34 compute-0 podman[265868]: 2025-10-11 04:01:34.67631662 +0000 UTC m=+0.028077431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:01:34 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:01:34 compute-0 podman[265868]: 2025-10-11 04:01:34.79335505 +0000 UTC m=+0.145115801 container init b6a621b01ae42f20f0e7a6b4cad888b39342e72c80c7fbf4d5e5088ffe6e6367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nightingale, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 04:01:34 compute-0 podman[265868]: 2025-10-11 04:01:34.806053027 +0000 UTC m=+0.157813778 container start b6a621b01ae42f20f0e7a6b4cad888b39342e72c80c7fbf4d5e5088ffe6e6367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nightingale, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:01:34 compute-0 amazing_nightingale[265885]: 167 167
Oct 11 04:01:34 compute-0 podman[265868]: 2025-10-11 04:01:34.810140461 +0000 UTC m=+0.161901212 container attach b6a621b01ae42f20f0e7a6b4cad888b39342e72c80c7fbf4d5e5088ffe6e6367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:01:34 compute-0 systemd[1]: libpod-b6a621b01ae42f20f0e7a6b4cad888b39342e72c80c7fbf4d5e5088ffe6e6367.scope: Deactivated successfully.
Oct 11 04:01:34 compute-0 podman[265868]: 2025-10-11 04:01:34.810997795 +0000 UTC m=+0.162758536 container died b6a621b01ae42f20f0e7a6b4cad888b39342e72c80c7fbf4d5e5088ffe6e6367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nightingale, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 04:01:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cff6eb46523ff7fb4df1f21816d4bd521ce837fc8deb4b6c8899f6e6a59e8eb-merged.mount: Deactivated successfully.
Oct 11 04:01:34 compute-0 podman[265868]: 2025-10-11 04:01:34.859383426 +0000 UTC m=+0.211144167 container remove b6a621b01ae42f20f0e7a6b4cad888b39342e72c80c7fbf4d5e5088ffe6e6367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 04:01:34 compute-0 systemd[1]: libpod-conmon-b6a621b01ae42f20f0e7a6b4cad888b39342e72c80c7fbf4d5e5088ffe6e6367.scope: Deactivated successfully.
Oct 11 04:01:35 compute-0 podman[265909]: 2025-10-11 04:01:35.111712469 +0000 UTC m=+0.058624699 container create 691a944da923a31568b86be8e9c91f441de6b4df499731cf0d1f81e64ccf8e45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lederberg, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 04:01:35 compute-0 systemd[1]: Started libpod-conmon-691a944da923a31568b86be8e9c91f441de6b4df499731cf0d1f81e64ccf8e45.scope.
Oct 11 04:01:35 compute-0 podman[265909]: 2025-10-11 04:01:35.093236599 +0000 UTC m=+0.040148839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:01:35 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6223f3cf15b1d6d517154a6ea7912b4391e37514e02f1f06b03a4a75de07f724/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6223f3cf15b1d6d517154a6ea7912b4391e37514e02f1f06b03a4a75de07f724/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6223f3cf15b1d6d517154a6ea7912b4391e37514e02f1f06b03a4a75de07f724/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6223f3cf15b1d6d517154a6ea7912b4391e37514e02f1f06b03a4a75de07f724/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:01:35 compute-0 podman[265909]: 2025-10-11 04:01:35.207872732 +0000 UTC m=+0.154784982 container init 691a944da923a31568b86be8e9c91f441de6b4df499731cf0d1f81e64ccf8e45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lederberg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 04:01:35 compute-0 podman[265909]: 2025-10-11 04:01:35.220510907 +0000 UTC m=+0.167423127 container start 691a944da923a31568b86be8e9c91f441de6b4df499731cf0d1f81e64ccf8e45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lederberg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 04:01:35 compute-0 podman[265909]: 2025-10-11 04:01:35.224065437 +0000 UTC m=+0.170977657 container attach 691a944da923a31568b86be8e9c91f441de6b4df499731cf0d1f81e64ccf8e45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Oct 11 04:01:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:35 compute-0 kind_lederberg[265926]: {
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:     "0": [
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:         {
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "devices": [
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "/dev/loop3"
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             ],
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "lv_name": "ceph_lv0",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "lv_size": "21470642176",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "name": "ceph_lv0",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "tags": {
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.cluster_name": "ceph",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.crush_device_class": "",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.encrypted": "0",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.osd_id": "0",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.type": "block",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.vdo": "0"
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             },
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "type": "block",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "vg_name": "ceph_vg0"
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:         }
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:     ],
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:     "1": [
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:         {
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "devices": [
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "/dev/loop4"
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             ],
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "lv_name": "ceph_lv1",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "lv_size": "21470642176",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "name": "ceph_lv1",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "tags": {
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.cluster_name": "ceph",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.crush_device_class": "",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.encrypted": "0",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.osd_id": "1",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.type": "block",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.vdo": "0"
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             },
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "type": "block",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "vg_name": "ceph_vg1"
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:         }
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:     ],
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:     "2": [
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:         {
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "devices": [
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "/dev/loop5"
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             ],
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "lv_name": "ceph_lv2",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "lv_size": "21470642176",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "name": "ceph_lv2",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "tags": {
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.cluster_name": "ceph",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.crush_device_class": "",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.encrypted": "0",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.osd_id": "2",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.type": "block",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:                 "ceph.vdo": "0"
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             },
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "type": "block",
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:             "vg_name": "ceph_vg2"
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:         }
Oct 11 04:01:35 compute-0 kind_lederberg[265926]:     ]
Oct 11 04:01:35 compute-0 kind_lederberg[265926]: }
Oct 11 04:01:35 compute-0 systemd[1]: libpod-691a944da923a31568b86be8e9c91f441de6b4df499731cf0d1f81e64ccf8e45.scope: Deactivated successfully.
Oct 11 04:01:35 compute-0 podman[265909]: 2025-10-11 04:01:35.96842644 +0000 UTC m=+0.915338700 container died 691a944da923a31568b86be8e9c91f441de6b4df499731cf0d1f81e64ccf8e45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 04:01:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-6223f3cf15b1d6d517154a6ea7912b4391e37514e02f1f06b03a4a75de07f724-merged.mount: Deactivated successfully.
Oct 11 04:01:36 compute-0 podman[265909]: 2025-10-11 04:01:36.047015679 +0000 UTC m=+0.993927909 container remove 691a944da923a31568b86be8e9c91f441de6b4df499731cf0d1f81e64ccf8e45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lederberg, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:01:36 compute-0 systemd[1]: libpod-conmon-691a944da923a31568b86be8e9c91f441de6b4df499731cf0d1f81e64ccf8e45.scope: Deactivated successfully.
Oct 11 04:01:36 compute-0 sudo[265803]: pam_unix(sudo:session): session closed for user root
Oct 11 04:01:36 compute-0 sudo[265948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:01:36 compute-0 sudo[265948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:01:36 compute-0 sudo[265948]: pam_unix(sudo:session): session closed for user root
Oct 11 04:01:36 compute-0 sudo[265973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:01:36 compute-0 sudo[265973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:01:36 compute-0 sudo[265973]: pam_unix(sudo:session): session closed for user root
Oct 11 04:01:36 compute-0 sudo[265998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:01:36 compute-0 sudo[265998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:01:36 compute-0 sudo[265998]: pam_unix(sudo:session): session closed for user root
Oct 11 04:01:36 compute-0 sudo[266023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 04:01:36 compute-0 sudo[266023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:01:36 compute-0 ceph-mon[74273]: pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:36 compute-0 podman[266089]: 2025-10-11 04:01:36.780466787 +0000 UTC m=+0.044391799 container create bb853bcd7496c2b56e0ab891724c707cff11ad59f05da2af1a4d9bccea5cea5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_perlman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:01:36 compute-0 systemd[1]: Started libpod-conmon-bb853bcd7496c2b56e0ab891724c707cff11ad59f05da2af1a4d9bccea5cea5f.scope.
Oct 11 04:01:36 compute-0 podman[266089]: 2025-10-11 04:01:36.763356096 +0000 UTC m=+0.027281098 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:01:36 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:01:36 compute-0 podman[266089]: 2025-10-11 04:01:36.875686674 +0000 UTC m=+0.139611656 container init bb853bcd7496c2b56e0ab891724c707cff11ad59f05da2af1a4d9bccea5cea5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_perlman, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:01:36 compute-0 podman[266089]: 2025-10-11 04:01:36.886556169 +0000 UTC m=+0.150481181 container start bb853bcd7496c2b56e0ab891724c707cff11ad59f05da2af1a4d9bccea5cea5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_perlman, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:01:36 compute-0 podman[266089]: 2025-10-11 04:01:36.890543771 +0000 UTC m=+0.154468803 container attach bb853bcd7496c2b56e0ab891724c707cff11ad59f05da2af1a4d9bccea5cea5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_perlman, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:01:36 compute-0 distracted_perlman[266105]: 167 167
Oct 11 04:01:36 compute-0 systemd[1]: libpod-bb853bcd7496c2b56e0ab891724c707cff11ad59f05da2af1a4d9bccea5cea5f.scope: Deactivated successfully.
Oct 11 04:01:36 compute-0 podman[266089]: 2025-10-11 04:01:36.893810813 +0000 UTC m=+0.157735845 container died bb853bcd7496c2b56e0ab891724c707cff11ad59f05da2af1a4d9bccea5cea5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_perlman, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 04:01:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fb7693a2e418b6fdd83371f133d3c558cb35d16e6723efc87899c0ef2734504-merged.mount: Deactivated successfully.
Oct 11 04:01:36 compute-0 podman[266089]: 2025-10-11 04:01:36.940628429 +0000 UTC m=+0.204553431 container remove bb853bcd7496c2b56e0ab891724c707cff11ad59f05da2af1a4d9bccea5cea5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:01:36 compute-0 systemd[1]: libpod-conmon-bb853bcd7496c2b56e0ab891724c707cff11ad59f05da2af1a4d9bccea5cea5f.scope: Deactivated successfully.
Oct 11 04:01:37 compute-0 podman[266129]: 2025-10-11 04:01:37.200589487 +0000 UTC m=+0.071780779 container create 585f71f1b3dadf9072d0f75e7c5656fbf721354de8505d02fcbb70b3ad0af939 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_sinoussi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:01:37 compute-0 systemd[1]: Started libpod-conmon-585f71f1b3dadf9072d0f75e7c5656fbf721354de8505d02fcbb70b3ad0af939.scope.
Oct 11 04:01:37 compute-0 podman[266129]: 2025-10-11 04:01:37.170959764 +0000 UTC m=+0.042151096 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:01:37 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:01:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91f604a3e8fa6dc73b13b418a16b4d73c774cd634abcf395a2f5ac353d65b5c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:01:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91f604a3e8fa6dc73b13b418a16b4d73c774cd634abcf395a2f5ac353d65b5c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:01:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91f604a3e8fa6dc73b13b418a16b4d73c774cd634abcf395a2f5ac353d65b5c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:01:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91f604a3e8fa6dc73b13b418a16b4d73c774cd634abcf395a2f5ac353d65b5c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:01:37 compute-0 podman[266129]: 2025-10-11 04:01:37.317605346 +0000 UTC m=+0.188796618 container init 585f71f1b3dadf9072d0f75e7c5656fbf721354de8505d02fcbb70b3ad0af939 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 04:01:37 compute-0 podman[266129]: 2025-10-11 04:01:37.329714017 +0000 UTC m=+0.200905309 container start 585f71f1b3dadf9072d0f75e7c5656fbf721354de8505d02fcbb70b3ad0af939 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 04:01:37 compute-0 podman[266129]: 2025-10-11 04:01:37.333976387 +0000 UTC m=+0.205167639 container attach 585f71f1b3dadf9072d0f75e7c5656fbf721354de8505d02fcbb70b3ad0af939 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 04:01:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Oct 11 04:01:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Oct 11 04:01:37 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Oct 11 04:01:38 compute-0 podman[266162]: 2025-10-11 04:01:38.39846312 +0000 UTC m=+0.099157549 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:01:38 compute-0 modest_sinoussi[266146]: {
Oct 11 04:01:38 compute-0 modest_sinoussi[266146]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 04:01:38 compute-0 modest_sinoussi[266146]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:01:38 compute-0 modest_sinoussi[266146]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:01:38 compute-0 modest_sinoussi[266146]:         "osd_id": 1,
Oct 11 04:01:38 compute-0 modest_sinoussi[266146]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:01:38 compute-0 modest_sinoussi[266146]:         "type": "bluestore"
Oct 11 04:01:38 compute-0 modest_sinoussi[266146]:     },
Oct 11 04:01:38 compute-0 modest_sinoussi[266146]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 04:01:38 compute-0 modest_sinoussi[266146]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:01:38 compute-0 modest_sinoussi[266146]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:01:38 compute-0 modest_sinoussi[266146]:         "osd_id": 2,
Oct 11 04:01:38 compute-0 modest_sinoussi[266146]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:01:38 compute-0 modest_sinoussi[266146]:         "type": "bluestore"
Oct 11 04:01:38 compute-0 modest_sinoussi[266146]:     },
Oct 11 04:01:38 compute-0 modest_sinoussi[266146]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 04:01:38 compute-0 modest_sinoussi[266146]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:01:38 compute-0 modest_sinoussi[266146]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:01:38 compute-0 modest_sinoussi[266146]:         "osd_id": 0,
Oct 11 04:01:38 compute-0 modest_sinoussi[266146]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:01:38 compute-0 modest_sinoussi[266146]:         "type": "bluestore"
Oct 11 04:01:38 compute-0 modest_sinoussi[266146]:     }
Oct 11 04:01:38 compute-0 modest_sinoussi[266146]: }
Oct 11 04:01:38 compute-0 systemd[1]: libpod-585f71f1b3dadf9072d0f75e7c5656fbf721354de8505d02fcbb70b3ad0af939.scope: Deactivated successfully.
Oct 11 04:01:38 compute-0 systemd[1]: libpod-585f71f1b3dadf9072d0f75e7c5656fbf721354de8505d02fcbb70b3ad0af939.scope: Consumed 1.258s CPU time.
Oct 11 04:01:38 compute-0 podman[266129]: 2025-10-11 04:01:38.596511707 +0000 UTC m=+1.467702999 container died 585f71f1b3dadf9072d0f75e7c5656fbf721354de8505d02fcbb70b3ad0af939 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_sinoussi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:01:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-91f604a3e8fa6dc73b13b418a16b4d73c774cd634abcf395a2f5ac353d65b5c2-merged.mount: Deactivated successfully.
Oct 11 04:01:38 compute-0 podman[266129]: 2025-10-11 04:01:38.685628332 +0000 UTC m=+1.556819614 container remove 585f71f1b3dadf9072d0f75e7c5656fbf721354de8505d02fcbb70b3ad0af939 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_sinoussi, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:01:38 compute-0 systemd[1]: libpod-conmon-585f71f1b3dadf9072d0f75e7c5656fbf721354de8505d02fcbb70b3ad0af939.scope: Deactivated successfully.
Oct 11 04:01:38 compute-0 sudo[266023]: pam_unix(sudo:session): session closed for user root
Oct 11 04:01:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:01:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:01:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:01:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Oct 11 04:01:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:01:38 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev ee591ff8-3a97-41ac-aebe-79a48c2a43c9 does not exist
Oct 11 04:01:38 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 848247ad-d80c-4448-a2e3-3247d90e0d33 does not exist
Oct 11 04:01:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Oct 11 04:01:38 compute-0 ceph-mon[74273]: pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:38 compute-0 ceph-mon[74273]: osdmap e119: 3 total, 3 up, 3 in
Oct 11 04:01:38 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:01:38 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Oct 11 04:01:38 compute-0 sudo[266213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:01:38 compute-0 sudo[266213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:01:38 compute-0 sudo[266213]: pam_unix(sudo:session): session closed for user root
Oct 11 04:01:38 compute-0 sudo[266238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:01:38 compute-0 sudo[266238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:01:38 compute-0 sudo[266238]: pam_unix(sudo:session): session closed for user root
Oct 11 04:01:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 127 B/s wr, 0 op/s
Oct 11 04:01:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:01:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Oct 11 04:01:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Oct 11 04:01:39 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Oct 11 04:01:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:01:39 compute-0 ceph-mon[74273]: osdmap e120: 3 total, 3 up, 3 in
Oct 11 04:01:40 compute-0 ceph-mon[74273]: pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 127 B/s wr, 0 op/s
Oct 11 04:01:40 compute-0 ceph-mon[74273]: osdmap e121: 3 total, 3 up, 3 in
Oct 11 04:01:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 170 B/s wr, 0 op/s
Oct 11 04:01:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Oct 11 04:01:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Oct 11 04:01:41 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Oct 11 04:01:42 compute-0 ceph-mon[74273]: pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 170 B/s wr, 0 op/s
Oct 11 04:01:42 compute-0 ceph-mon[74273]: osdmap e122: 3 total, 3 up, 3 in
Oct 11 04:01:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 7.3 MiB/s wr, 68 op/s
Oct 11 04:01:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:01:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Oct 11 04:01:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Oct 11 04:01:44 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Oct 11 04:01:44 compute-0 ceph-mon[74273]: pgmap v874: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 7.3 MiB/s wr, 68 op/s
Oct 11 04:01:44 compute-0 ceph-mon[74273]: osdmap e123: 3 total, 3 up, 3 in
Oct 11 04:01:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 6.8 MiB/s wr, 63 op/s
Oct 11 04:01:46 compute-0 ceph-mon[74273]: pgmap v876: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 6.8 MiB/s wr, 63 op/s
Oct 11 04:01:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 5.4 MiB/s wr, 50 op/s
Oct 11 04:01:48 compute-0 ceph-mon[74273]: pgmap v877: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 5.4 MiB/s wr, 50 op/s
Oct 11 04:01:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 11 04:01:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:01:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:01:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1157131641' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:01:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:01:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1157131641' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:01:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:01:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:01:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:01:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:01:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:01:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:01:50 compute-0 ceph-mon[74273]: pgmap v878: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 11 04:01:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1157131641' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:01:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1157131641' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:01:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 4.3 MiB/s wr, 39 op/s
Oct 11 04:01:52 compute-0 podman[266264]: 2025-10-11 04:01:52.388764218 +0000 UTC m=+0.086086004 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:01:52 compute-0 podman[266282]: 2025-10-11 04:01:52.47553273 +0000 UTC m=+0.060235259 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 11 04:01:52 compute-0 ceph-mon[74273]: pgmap v879: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 4.3 MiB/s wr, 39 op/s
Oct 11 04:01:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:54.657211) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155314657281, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 471, "num_deletes": 250, "total_data_size": 420186, "memory_usage": 428656, "flush_reason": "Manual Compaction"}
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155314663480, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 363694, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18337, "largest_seqno": 18807, "table_properties": {"data_size": 360929, "index_size": 801, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6858, "raw_average_key_size": 19, "raw_value_size": 355333, "raw_average_value_size": 1035, "num_data_blocks": 35, "num_entries": 343, "num_filter_entries": 343, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760155294, "oldest_key_time": 1760155294, "file_creation_time": 1760155314, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 6323 microseconds, and 2556 cpu microseconds.
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:54.663537) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 363694 bytes OK
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:54.663557) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:54.665485) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:54.665507) EVENT_LOG_v1 {"time_micros": 1760155314665500, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:54.665525) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 417347, prev total WAL file size 417347, number of live WAL files 2.
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:54.666198) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373533' seq:0, type:0; will stop at (end)
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(355KB)], [41(9032KB)]
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155314666262, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 9612956, "oldest_snapshot_seqno": -1}
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4222 keys, 6363885 bytes, temperature: kUnknown
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155314709661, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6363885, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6336351, "index_size": 15905, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10565, "raw_key_size": 103048, "raw_average_key_size": 24, "raw_value_size": 6260532, "raw_average_value_size": 1482, "num_data_blocks": 670, "num_entries": 4222, "num_filter_entries": 4222, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153731, "oldest_key_time": 0, "file_creation_time": 1760155314, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:54.709989) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6363885 bytes
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:54.711515) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 220.8 rd, 146.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 8.8 +0.0 blob) out(6.1 +0.0 blob), read-write-amplify(43.9) write-amplify(17.5) OK, records in: 4733, records dropped: 511 output_compression: NoCompression
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:54.711544) EVENT_LOG_v1 {"time_micros": 1760155314711531, "job": 20, "event": "compaction_finished", "compaction_time_micros": 43538, "compaction_time_cpu_micros": 30979, "output_level": 6, "num_output_files": 1, "total_output_size": 6363885, "num_input_records": 4733, "num_output_records": 4222, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155314711797, "job": 20, "event": "table_file_deletion", "file_number": 43}
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155314714800, "job": 20, "event": "table_file_deletion", "file_number": 41}
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:54.666037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:54.714891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:54.714897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:54.714899) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:54.714902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:01:54 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:01:54.714905) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:01:54 compute-0 ceph-mon[74273]: pgmap v880: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:56 compute-0 ceph-mon[74273]: pgmap v881: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:58 compute-0 ceph-mon[74273]: pgmap v882: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:01:58.989 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:61:6f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '92:f1:b6:e4:f1:16'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:01:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:01:58.991 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:01:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:01:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:02:00 compute-0 ceph-mon[74273]: pgmap v883: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:02:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:02:02 compute-0 podman[266303]: 2025-10-11 04:02:02.398596703 +0000 UTC m=+0.101144426 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 11 04:02:02 compute-0 ceph-mon[74273]: pgmap v884: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:02:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:02:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:02:04 compute-0 ceph-mon[74273]: pgmap v885: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:02:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:02:05 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:05.993 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8a473e03-2208-47ae-afcd-05ad744a5969, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:02:06 compute-0 ceph-mon[74273]: pgmap v886: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:02:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:02:08 compute-0 ceph-mon[74273]: pgmap v887: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:02:09 compute-0 podman[266329]: 2025-10-11 04:02:09.340338217 +0000 UTC m=+0.055147807 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 11 04:02:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:02:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:02:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Oct 11 04:02:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Oct 11 04:02:09 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Oct 11 04:02:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Oct 11 04:02:10 compute-0 ceph-mon[74273]: pgmap v888: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:02:10 compute-0 ceph-mon[74273]: osdmap e124: 3 total, 3 up, 3 in
Oct 11 04:02:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Oct 11 04:02:10 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Oct 11 04:02:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:02:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Oct 11 04:02:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Oct 11 04:02:11 compute-0 ceph-mon[74273]: osdmap e125: 3 total, 3 up, 3 in
Oct 11 04:02:11 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Oct 11 04:02:12 compute-0 nova_compute[259850]: 2025-10-11 04:02:12.055 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:02:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:02:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4267437241' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:02:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:02:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4267437241' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:02:12 compute-0 ceph-mon[74273]: pgmap v891: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:02:12 compute-0 ceph-mon[74273]: osdmap e126: 3 total, 3 up, 3 in
Oct 11 04:02:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4267437241' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:02:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4267437241' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:02:13 compute-0 nova_compute[259850]: 2025-10-11 04:02:13.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:02:13 compute-0 nova_compute[259850]: 2025-10-11 04:02:13.059 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:02:13 compute-0 nova_compute[259850]: 2025-10-11 04:02:13.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:02:13 compute-0 nova_compute[259850]: 2025-10-11 04:02:13.094 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:02:13 compute-0 nova_compute[259850]: 2025-10-11 04:02:13.095 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:02:13 compute-0 nova_compute[259850]: 2025-10-11 04:02:13.095 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:02:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:02:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1272535495' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:02:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:02:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1272535495' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:02:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 3.0 KiB/s wr, 43 op/s
Oct 11 04:02:13 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1272535495' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:02:13 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1272535495' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:02:14 compute-0 nova_compute[259850]: 2025-10-11 04:02:14.058 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:02:14 compute-0 nova_compute[259850]: 2025-10-11 04:02:14.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:02:14 compute-0 nova_compute[259850]: 2025-10-11 04:02:14.059 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:02:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:02:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Oct 11 04:02:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Oct 11 04:02:14 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Oct 11 04:02:14 compute-0 ceph-mon[74273]: pgmap v893: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 3.0 KiB/s wr, 43 op/s
Oct 11 04:02:14 compute-0 ceph-mon[74273]: osdmap e127: 3 total, 3 up, 3 in
Oct 11 04:02:15 compute-0 nova_compute[259850]: 2025-10-11 04:02:15.055 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:02:15 compute-0 nova_compute[259850]: 2025-10-11 04:02:15.071 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:02:15 compute-0 nova_compute[259850]: 2025-10-11 04:02:15.101 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:02:15 compute-0 nova_compute[259850]: 2025-10-11 04:02:15.101 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:02:15 compute-0 nova_compute[259850]: 2025-10-11 04:02:15.101 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:02:15 compute-0 nova_compute[259850]: 2025-10-11 04:02:15.101 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:02:15 compute-0 nova_compute[259850]: 2025-10-11 04:02:15.101 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:02:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.3 KiB/s wr, 48 op/s
Oct 11 04:02:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:02:15 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1836049796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:02:15 compute-0 nova_compute[259850]: 2025-10-11 04:02:15.541 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:02:15 compute-0 nova_compute[259850]: 2025-10-11 04:02:15.762 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:02:15 compute-0 nova_compute[259850]: 2025-10-11 04:02:15.763 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5164MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:02:15 compute-0 nova_compute[259850]: 2025-10-11 04:02:15.763 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:02:15 compute-0 nova_compute[259850]: 2025-10-11 04:02:15.764 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:02:15 compute-0 nova_compute[259850]: 2025-10-11 04:02:15.847 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:02:15 compute-0 nova_compute[259850]: 2025-10-11 04:02:15.848 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:02:15 compute-0 nova_compute[259850]: 2025-10-11 04:02:15.872 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:02:16 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1836049796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:02:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:02:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1567327197' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:02:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:02:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1567327197' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:02:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:02:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4229173259' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:02:16 compute-0 nova_compute[259850]: 2025-10-11 04:02:16.381 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:02:16 compute-0 nova_compute[259850]: 2025-10-11 04:02:16.387 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:02:16 compute-0 nova_compute[259850]: 2025-10-11 04:02:16.404 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:02:16 compute-0 nova_compute[259850]: 2025-10-11 04:02:16.405 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:02:16 compute-0 nova_compute[259850]: 2025-10-11 04:02:16.406 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:02:17 compute-0 ceph-mon[74273]: pgmap v895: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.3 KiB/s wr, 48 op/s
Oct 11 04:02:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1567327197' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:02:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1567327197' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:02:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4229173259' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:02:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 2.8 KiB/s wr, 41 op/s
Oct 11 04:02:17 compute-0 nova_compute[259850]: 2025-10-11 04:02:17.394 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:02:17 compute-0 nova_compute[259850]: 2025-10-11 04:02:17.395 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:02:19 compute-0 ceph-mon[74273]: pgmap v896: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 2.8 KiB/s wr, 41 op/s
Oct 11 04:02:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 4.4 KiB/s wr, 69 op/s
Oct 11 04:02:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:02:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:02:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3596153880' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:02:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:02:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3596153880' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:02:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3596153880' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:02:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3596153880' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:02:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:02:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:02:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:02:20
Oct 11 04:02:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:02:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:02:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.meta', 'vms', 'backups', 'default.rgw.control', '.rgw.root', 'images', 'default.rgw.log', 'cephfs.cephfs.meta']
Oct 11 04:02:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:02:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:02:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:02:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:02:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:02:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:02:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:02:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:02:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:02:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:02:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:02:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:02:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:02:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:02:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:02:21 compute-0 ceph-mon[74273]: pgmap v897: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 4.4 KiB/s wr, 69 op/s
Oct 11 04:02:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.7 KiB/s wr, 59 op/s
Oct 11 04:02:22 compute-0 ceph-mon[74273]: pgmap v898: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.7 KiB/s wr, 59 op/s
Oct 11 04:02:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:22.950 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:02:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:22.951 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:02:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:22.951 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:02:23 compute-0 podman[266395]: 2025-10-11 04:02:23.364928548 +0000 UTC m=+0.070226659 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 11 04:02:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.3 KiB/s wr, 61 op/s
Oct 11 04:02:23 compute-0 podman[266394]: 2025-10-11 04:02:23.377951273 +0000 UTC m=+0.081044222 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:02:24 compute-0 ceph-mon[74273]: pgmap v899: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.3 KiB/s wr, 61 op/s
Oct 11 04:02:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:02:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Oct 11 04:02:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Oct 11 04:02:24 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Oct 11 04:02:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.3 KiB/s wr, 61 op/s
Oct 11 04:02:25 compute-0 ceph-mon[74273]: osdmap e128: 3 total, 3 up, 3 in
Oct 11 04:02:26 compute-0 ceph-mon[74273]: pgmap v901: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.3 KiB/s wr, 61 op/s
Oct 11 04:02:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.3 KiB/s wr, 61 op/s
Oct 11 04:02:28 compute-0 ceph-mon[74273]: pgmap v902: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.3 KiB/s wr, 61 op/s
Oct 11 04:02:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 27 KiB/s wr, 38 op/s
Oct 11 04:02:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:02:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Oct 11 04:02:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Oct 11 04:02:30 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Oct 11 04:02:30 compute-0 ceph-mon[74273]: pgmap v903: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 27 KiB/s wr, 38 op/s
Oct 11 04:02:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:02:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:02:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:02:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:02:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:02:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:02:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 4.260577423976037e-06 of space, bias 1.0, pg target 0.001278173227192811 quantized to 32 (current 32)
Oct 11 04:02:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:02:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:02:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:02:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 11 04:02:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:02:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:02:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:02:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:02:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:02:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:02:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:02:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:02:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:02:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:02:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:02:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:02:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 33 KiB/s wr, 7 op/s
Oct 11 04:02:31 compute-0 ceph-mon[74273]: osdmap e129: 3 total, 3 up, 3 in
Oct 11 04:02:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:02:32 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3732322944' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:02:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:02:32 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3732322944' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:02:32 compute-0 ceph-mon[74273]: pgmap v905: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 33 KiB/s wr, 7 op/s
Oct 11 04:02:32 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3732322944' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:02:32 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3732322944' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:02:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 33 KiB/s wr, 56 op/s
Oct 11 04:02:33 compute-0 podman[266431]: 2025-10-11 04:02:33.390805574 +0000 UTC m=+0.103803261 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 11 04:02:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:02:33 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4138745717' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:02:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:02:33 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4138745717' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:02:34 compute-0 nova_compute[259850]: 2025-10-11 04:02:34.627 2 DEBUG oslo_concurrency.lockutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Acquiring lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:02:34 compute-0 nova_compute[259850]: 2025-10-11 04:02:34.628 2 DEBUG oslo_concurrency.lockutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:02:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:02:34 compute-0 nova_compute[259850]: 2025-10-11 04:02:34.668 2 DEBUG nova.compute.manager [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:02:34.672682) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155354672738, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 658, "num_deletes": 256, "total_data_size": 678346, "memory_usage": 690712, "flush_reason": "Manual Compaction"}
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155354681905, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 671006, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18808, "largest_seqno": 19465, "table_properties": {"data_size": 667545, "index_size": 1305, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7670, "raw_average_key_size": 18, "raw_value_size": 660537, "raw_average_value_size": 1576, "num_data_blocks": 60, "num_entries": 419, "num_filter_entries": 419, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760155314, "oldest_key_time": 1760155314, "file_creation_time": 1760155354, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 9280 microseconds, and 4956 cpu microseconds.
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:02:34.681965) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 671006 bytes OK
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:02:34.681990) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:02:34.684214) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:02:34.684236) EVENT_LOG_v1 {"time_micros": 1760155354684228, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:02:34.684259) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 674804, prev total WAL file size 674804, number of live WAL files 2.
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:02:34.685188) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353031' seq:0, type:0; will stop at (end)
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(655KB)], [44(6214KB)]
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155354685249, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7034891, "oldest_snapshot_seqno": -1}
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4119 keys, 6917178 bytes, temperature: kUnknown
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155354729031, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 6917178, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6889071, "index_size": 16733, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10309, "raw_key_size": 102171, "raw_average_key_size": 24, "raw_value_size": 6813824, "raw_average_value_size": 1654, "num_data_blocks": 701, "num_entries": 4119, "num_filter_entries": 4119, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153731, "oldest_key_time": 0, "file_creation_time": 1760155354, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:02:34.729283) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 6917178 bytes
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:02:34.730460) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.5 rd, 157.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 6.1 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(20.8) write-amplify(10.3) OK, records in: 4641, records dropped: 522 output_compression: NoCompression
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:02:34.730476) EVENT_LOG_v1 {"time_micros": 1760155354730468, "job": 22, "event": "compaction_finished", "compaction_time_micros": 43840, "compaction_time_cpu_micros": 27436, "output_level": 6, "num_output_files": 1, "total_output_size": 6917178, "num_input_records": 4641, "num_output_records": 4119, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155354730678, "job": 22, "event": "table_file_deletion", "file_number": 46}
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155354731835, "job": 22, "event": "table_file_deletion", "file_number": 44}
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:02:34.685045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:02:34.731975) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:02:34.731985) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:02:34.731990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:02:34.731994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:02:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:02:34.731999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:02:34 compute-0 ceph-mon[74273]: pgmap v906: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 33 KiB/s wr, 56 op/s
Oct 11 04:02:34 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4138745717' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:02:34 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4138745717' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:02:34 compute-0 nova_compute[259850]: 2025-10-11 04:02:34.799 2 DEBUG oslo_concurrency.lockutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:02:34 compute-0 nova_compute[259850]: 2025-10-11 04:02:34.800 2 DEBUG oslo_concurrency.lockutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:02:34 compute-0 nova_compute[259850]: 2025-10-11 04:02:34.809 2 DEBUG nova.virt.hardware [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:02:34 compute-0 nova_compute[259850]: 2025-10-11 04:02:34.810 2 INFO nova.compute.claims [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:02:34 compute-0 nova_compute[259850]: 2025-10-11 04:02:34.956 2 DEBUG oslo_concurrency.processutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:02:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:02:35 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2909135800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:02:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 29 KiB/s wr, 49 op/s
Oct 11 04:02:35 compute-0 nova_compute[259850]: 2025-10-11 04:02:35.375 2 DEBUG oslo_concurrency.processutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:02:35 compute-0 nova_compute[259850]: 2025-10-11 04:02:35.382 2 DEBUG nova.compute.provider_tree [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:02:35 compute-0 nova_compute[259850]: 2025-10-11 04:02:35.396 2 DEBUG nova.scheduler.client.report [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:02:35 compute-0 nova_compute[259850]: 2025-10-11 04:02:35.418 2 DEBUG oslo_concurrency.lockutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:02:35 compute-0 nova_compute[259850]: 2025-10-11 04:02:35.419 2 DEBUG nova.compute.manager [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:02:35 compute-0 nova_compute[259850]: 2025-10-11 04:02:35.475 2 DEBUG nova.compute.manager [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:02:35 compute-0 nova_compute[259850]: 2025-10-11 04:02:35.476 2 DEBUG nova.network.neutron [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:02:35 compute-0 nova_compute[259850]: 2025-10-11 04:02:35.508 2 INFO nova.virt.libvirt.driver [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:02:35 compute-0 nova_compute[259850]: 2025-10-11 04:02:35.530 2 DEBUG nova.compute.manager [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:02:35 compute-0 nova_compute[259850]: 2025-10-11 04:02:35.636 2 DEBUG nova.compute.manager [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:02:35 compute-0 nova_compute[259850]: 2025-10-11 04:02:35.639 2 DEBUG nova.virt.libvirt.driver [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:02:35 compute-0 nova_compute[259850]: 2025-10-11 04:02:35.639 2 INFO nova.virt.libvirt.driver [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Creating image(s)
Oct 11 04:02:35 compute-0 nova_compute[259850]: 2025-10-11 04:02:35.675 2 DEBUG nova.storage.rbd_utils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] rbd image 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:02:35 compute-0 nova_compute[259850]: 2025-10-11 04:02:35.709 2 DEBUG nova.storage.rbd_utils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] rbd image 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:02:35 compute-0 nova_compute[259850]: 2025-10-11 04:02:35.742 2 DEBUG nova.storage.rbd_utils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] rbd image 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:02:35 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2909135800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:02:35 compute-0 nova_compute[259850]: 2025-10-11 04:02:35.746 2 DEBUG oslo_concurrency.lockutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Acquiring lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:02:35 compute-0 nova_compute[259850]: 2025-10-11 04:02:35.747 2 DEBUG oslo_concurrency.lockutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:02:36 compute-0 nova_compute[259850]: 2025-10-11 04:02:36.339 2 DEBUG nova.virt.libvirt.imagebackend [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Image locations are: [{'url': 'rbd://23b68101-59a9-532f-ab6b-9acf78fb2162/images/1a107e2f-1a9d-4b6f-861d-e64bee7d56be/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://23b68101-59a9-532f-ab6b-9acf78fb2162/images/1a107e2f-1a9d-4b6f-861d-e64bee7d56be/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 11 04:02:36 compute-0 ceph-mon[74273]: pgmap v907: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 29 KiB/s wr, 49 op/s
Oct 11 04:02:36 compute-0 nova_compute[259850]: 2025-10-11 04:02:36.975 2 WARNING oslo_policy.policy [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 11 04:02:36 compute-0 nova_compute[259850]: 2025-10-11 04:02:36.976 2 WARNING oslo_policy.policy [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 11 04:02:36 compute-0 nova_compute[259850]: 2025-10-11 04:02:36.980 2 DEBUG nova.policy [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6f96a3b66f9943398432732b3141745a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '54de3f5004d1488aaf5e429b0071e194', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:02:37 compute-0 nova_compute[259850]: 2025-10-11 04:02:37.222 2 DEBUG oslo_concurrency.processutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:02:37 compute-0 nova_compute[259850]: 2025-10-11 04:02:37.306 2 DEBUG oslo_concurrency.processutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac.part --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:02:37 compute-0 nova_compute[259850]: 2025-10-11 04:02:37.308 2 DEBUG nova.virt.images [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] 1a107e2f-1a9d-4b6f-861d-e64bee7d56be was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 11 04:02:37 compute-0 nova_compute[259850]: 2025-10-11 04:02:37.311 2 DEBUG nova.privsep.utils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 11 04:02:37 compute-0 nova_compute[259850]: 2025-10-11 04:02:37.311 2 DEBUG oslo_concurrency.processutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac.part /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:02:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 29 KiB/s wr, 49 op/s
Oct 11 04:02:37 compute-0 nova_compute[259850]: 2025-10-11 04:02:37.485 2 DEBUG oslo_concurrency.processutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac.part /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac.converted" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:02:37 compute-0 nova_compute[259850]: 2025-10-11 04:02:37.494 2 DEBUG oslo_concurrency.processutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:02:37 compute-0 nova_compute[259850]: 2025-10-11 04:02:37.571 2 DEBUG oslo_concurrency.processutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac.converted --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:02:37 compute-0 nova_compute[259850]: 2025-10-11 04:02:37.572 2 DEBUG oslo_concurrency.lockutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:02:37 compute-0 nova_compute[259850]: 2025-10-11 04:02:37.603 2 DEBUG nova.storage.rbd_utils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] rbd image 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:02:37 compute-0 nova_compute[259850]: 2025-10-11 04:02:37.608 2 DEBUG oslo_concurrency.processutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:02:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Oct 11 04:02:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Oct 11 04:02:37 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Oct 11 04:02:38 compute-0 nova_compute[259850]: 2025-10-11 04:02:38.061 2 DEBUG nova.network.neutron [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Successfully created port: 601ef18d-d973-476f-90d4-f8f40df267fa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:02:38 compute-0 ceph-mon[74273]: pgmap v908: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 29 KiB/s wr, 49 op/s
Oct 11 04:02:38 compute-0 ceph-mon[74273]: osdmap e130: 3 total, 3 up, 3 in
Oct 11 04:02:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Oct 11 04:02:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Oct 11 04:02:38 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Oct 11 04:02:39 compute-0 sudo[266584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:02:39 compute-0 sudo[266584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:02:39 compute-0 sudo[266584]: pam_unix(sudo:session): session closed for user root
Oct 11 04:02:39 compute-0 nova_compute[259850]: 2025-10-11 04:02:39.087 2 DEBUG oslo_concurrency.processutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:02:39 compute-0 sudo[266609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:02:39 compute-0 sudo[266609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:02:39 compute-0 sudo[266609]: pam_unix(sudo:session): session closed for user root
Oct 11 04:02:39 compute-0 nova_compute[259850]: 2025-10-11 04:02:39.171 2 DEBUG nova.storage.rbd_utils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] resizing rbd image 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:02:39 compute-0 sudo[266668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:02:39 compute-0 sudo[266668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:02:39 compute-0 sudo[266668]: pam_unix(sudo:session): session closed for user root
Oct 11 04:02:39 compute-0 nova_compute[259850]: 2025-10-11 04:02:39.302 2 DEBUG nova.objects.instance [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lazy-loading 'migration_context' on Instance uuid 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:02:39 compute-0 nova_compute[259850]: 2025-10-11 04:02:39.318 2 DEBUG nova.virt.libvirt.driver [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:02:39 compute-0 nova_compute[259850]: 2025-10-11 04:02:39.318 2 DEBUG nova.virt.libvirt.driver [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Ensure instance console log exists: /var/lib/nova/instances/02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:02:39 compute-0 nova_compute[259850]: 2025-10-11 04:02:39.318 2 DEBUG oslo_concurrency.lockutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:02:39 compute-0 nova_compute[259850]: 2025-10-11 04:02:39.318 2 DEBUG oslo_concurrency.lockutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:02:39 compute-0 nova_compute[259850]: 2025-10-11 04:02:39.319 2 DEBUG oslo_concurrency.lockutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:02:39 compute-0 sudo[266713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:02:39 compute-0 sudo[266713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:02:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.4 KiB/s wr, 75 op/s
Oct 11 04:02:39 compute-0 nova_compute[259850]: 2025-10-11 04:02:39.667 2 DEBUG nova.network.neutron [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Successfully updated port: 601ef18d-d973-476f-90d4-f8f40df267fa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:02:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:02:39 compute-0 nova_compute[259850]: 2025-10-11 04:02:39.688 2 DEBUG oslo_concurrency.lockutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Acquiring lock "refresh_cache-02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:02:39 compute-0 nova_compute[259850]: 2025-10-11 04:02:39.688 2 DEBUG oslo_concurrency.lockutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Acquired lock "refresh_cache-02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:02:39 compute-0 nova_compute[259850]: 2025-10-11 04:02:39.688 2 DEBUG nova.network.neutron [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:02:39 compute-0 ceph-mon[74273]: osdmap e131: 3 total, 3 up, 3 in
Oct 11 04:02:39 compute-0 sudo[266713]: pam_unix(sudo:session): session closed for user root
Oct 11 04:02:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:02:39 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:02:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:02:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:02:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:02:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:02:39 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 2d39c21c-a2e8-4b0e-bed6-03f5bc235aa9 does not exist
Oct 11 04:02:39 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev db7afb25-090d-4e74-9748-c336d5c52fce does not exist
Oct 11 04:02:39 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev b8665307-5376-4435-bfb3-4044f71de579 does not exist
Oct 11 04:02:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:02:39 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:02:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:02:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:02:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:02:39 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:02:39 compute-0 sudo[266786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:02:39 compute-0 sudo[266786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:02:39 compute-0 sudo[266786]: pam_unix(sudo:session): session closed for user root
Oct 11 04:02:40 compute-0 nova_compute[259850]: 2025-10-11 04:02:40.020 2 DEBUG nova.network.neutron [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:02:40 compute-0 sudo[266812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:02:40 compute-0 sudo[266812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:02:40 compute-0 sudo[266812]: pam_unix(sudo:session): session closed for user root
Oct 11 04:02:40 compute-0 podman[266810]: 2025-10-11 04:02:40.071782059 +0000 UTC m=+0.088509292 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 04:02:40 compute-0 sudo[266853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:02:40 compute-0 sudo[266853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:02:40 compute-0 sudo[266853]: pam_unix(sudo:session): session closed for user root
Oct 11 04:02:40 compute-0 sudo[266878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:02:40 compute-0 sudo[266878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:02:40 compute-0 nova_compute[259850]: 2025-10-11 04:02:40.312 2 DEBUG nova.compute.manager [req-fa02c694-5359-4f28-88bf-8359d7da78f1 req-1d73c10f-2c5f-438f-b1df-acbe7ae0535a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Received event network-changed-601ef18d-d973-476f-90d4-f8f40df267fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:02:40 compute-0 nova_compute[259850]: 2025-10-11 04:02:40.313 2 DEBUG nova.compute.manager [req-fa02c694-5359-4f28-88bf-8359d7da78f1 req-1d73c10f-2c5f-438f-b1df-acbe7ae0535a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Refreshing instance network info cache due to event network-changed-601ef18d-d973-476f-90d4-f8f40df267fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:02:40 compute-0 nova_compute[259850]: 2025-10-11 04:02:40.313 2 DEBUG oslo_concurrency.lockutils [req-fa02c694-5359-4f28-88bf-8359d7da78f1 req-1d73c10f-2c5f-438f-b1df-acbe7ae0535a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:02:40 compute-0 podman[266943]: 2025-10-11 04:02:40.5691957 +0000 UTC m=+0.046457173 container create 83b0e09fa26fae61bee3a9ff7ea5addeeaafff04db744e16e7a4a38368bf3af8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:02:40 compute-0 systemd[1]: Started libpod-conmon-83b0e09fa26fae61bee3a9ff7ea5addeeaafff04db744e16e7a4a38368bf3af8.scope.
Oct 11 04:02:40 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:02:40 compute-0 podman[266943]: 2025-10-11 04:02:40.549830267 +0000 UTC m=+0.027091770 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:02:40 compute-0 podman[266943]: 2025-10-11 04:02:40.647831684 +0000 UTC m=+0.125093167 container init 83b0e09fa26fae61bee3a9ff7ea5addeeaafff04db744e16e7a4a38368bf3af8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_keldysh, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 04:02:40 compute-0 podman[266943]: 2025-10-11 04:02:40.661196089 +0000 UTC m=+0.138457542 container start 83b0e09fa26fae61bee3a9ff7ea5addeeaafff04db744e16e7a4a38368bf3af8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_keldysh, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 04:02:40 compute-0 podman[266943]: 2025-10-11 04:02:40.664357617 +0000 UTC m=+0.141619100 container attach 83b0e09fa26fae61bee3a9ff7ea5addeeaafff04db744e16e7a4a38368bf3af8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_keldysh, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:02:40 compute-0 quirky_keldysh[266960]: 167 167
Oct 11 04:02:40 compute-0 systemd[1]: libpod-83b0e09fa26fae61bee3a9ff7ea5addeeaafff04db744e16e7a4a38368bf3af8.scope: Deactivated successfully.
Oct 11 04:02:40 compute-0 podman[266943]: 2025-10-11 04:02:40.667802854 +0000 UTC m=+0.145064347 container died 83b0e09fa26fae61bee3a9ff7ea5addeeaafff04db744e16e7a4a38368bf3af8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_keldysh, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:02:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-432d1d6b854400ebcdc9d0a1a39b52c09429288f58883103e73c36b6464da6f3-merged.mount: Deactivated successfully.
Oct 11 04:02:40 compute-0 podman[266943]: 2025-10-11 04:02:40.719499643 +0000 UTC m=+0.196761136 container remove 83b0e09fa26fae61bee3a9ff7ea5addeeaafff04db744e16e7a4a38368bf3af8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 04:02:40 compute-0 systemd[1]: libpod-conmon-83b0e09fa26fae61bee3a9ff7ea5addeeaafff04db744e16e7a4a38368bf3af8.scope: Deactivated successfully.
Oct 11 04:02:40 compute-0 ceph-mon[74273]: pgmap v911: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.4 KiB/s wr, 75 op/s
Oct 11 04:02:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:02:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:02:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:02:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:02:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:02:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:02:40 compute-0 podman[266985]: 2025-10-11 04:02:40.98949623 +0000 UTC m=+0.070437605 container create 2d36b8536c72c2bcdc7aa4cbfd6ee55d987204aa5ec5370fbddeb35920de0a0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_tesla, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct 11 04:02:41 compute-0 systemd[1]: Started libpod-conmon-2d36b8536c72c2bcdc7aa4cbfd6ee55d987204aa5ec5370fbddeb35920de0a0d.scope.
Oct 11 04:02:41 compute-0 podman[266985]: 2025-10-11 04:02:40.96164577 +0000 UTC m=+0.042587195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:02:41 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f932968bcee5b8a753c7ca15f46365f1eff1193c02dc30b92b764773c13a529/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f932968bcee5b8a753c7ca15f46365f1eff1193c02dc30b92b764773c13a529/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f932968bcee5b8a753c7ca15f46365f1eff1193c02dc30b92b764773c13a529/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f932968bcee5b8a753c7ca15f46365f1eff1193c02dc30b92b764773c13a529/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f932968bcee5b8a753c7ca15f46365f1eff1193c02dc30b92b764773c13a529/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:02:41 compute-0 podman[266985]: 2025-10-11 04:02:41.098892837 +0000 UTC m=+0.179834192 container init 2d36b8536c72c2bcdc7aa4cbfd6ee55d987204aa5ec5370fbddeb35920de0a0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_tesla, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 04:02:41 compute-0 podman[266985]: 2025-10-11 04:02:41.116709576 +0000 UTC m=+0.197650911 container start 2d36b8536c72c2bcdc7aa4cbfd6ee55d987204aa5ec5370fbddeb35920de0a0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:02:41 compute-0 podman[266985]: 2025-10-11 04:02:41.120582335 +0000 UTC m=+0.201523690 container attach 2d36b8536c72c2bcdc7aa4cbfd6ee55d987204aa5ec5370fbddeb35920de0a0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:02:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 383 B/s wr, 21 op/s
Oct 11 04:02:41 compute-0 nova_compute[259850]: 2025-10-11 04:02:41.524 2 DEBUG nova.network.neutron [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Updating instance_info_cache with network_info: [{"id": "601ef18d-d973-476f-90d4-f8f40df267fa", "address": "fa:16:3e:b3:25:de", "network": {"id": "373b2ee9-84af-407c-9e36-4d16b55fdfd0", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-757984270-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54de3f5004d1488aaf5e429b0071e194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap601ef18d-d9", "ovs_interfaceid": "601ef18d-d973-476f-90d4-f8f40df267fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:02:41 compute-0 nova_compute[259850]: 2025-10-11 04:02:41.546 2 DEBUG oslo_concurrency.lockutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Releasing lock "refresh_cache-02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:02:41 compute-0 nova_compute[259850]: 2025-10-11 04:02:41.547 2 DEBUG nova.compute.manager [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Instance network_info: |[{"id": "601ef18d-d973-476f-90d4-f8f40df267fa", "address": "fa:16:3e:b3:25:de", "network": {"id": "373b2ee9-84af-407c-9e36-4d16b55fdfd0", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-757984270-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54de3f5004d1488aaf5e429b0071e194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap601ef18d-d9", "ovs_interfaceid": "601ef18d-d973-476f-90d4-f8f40df267fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:02:41 compute-0 nova_compute[259850]: 2025-10-11 04:02:41.548 2 DEBUG oslo_concurrency.lockutils [req-fa02c694-5359-4f28-88bf-8359d7da78f1 req-1d73c10f-2c5f-438f-b1df-acbe7ae0535a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:02:41 compute-0 nova_compute[259850]: 2025-10-11 04:02:41.548 2 DEBUG nova.network.neutron [req-fa02c694-5359-4f28-88bf-8359d7da78f1 req-1d73c10f-2c5f-438f-b1df-acbe7ae0535a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Refreshing network info cache for port 601ef18d-d973-476f-90d4-f8f40df267fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:02:41 compute-0 nova_compute[259850]: 2025-10-11 04:02:41.553 2 DEBUG nova.virt.libvirt.driver [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Start _get_guest_xml network_info=[{"id": "601ef18d-d973-476f-90d4-f8f40df267fa", "address": "fa:16:3e:b3:25:de", "network": {"id": "373b2ee9-84af-407c-9e36-4d16b55fdfd0", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-757984270-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54de3f5004d1488aaf5e429b0071e194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap601ef18d-d9", "ovs_interfaceid": "601ef18d-d973-476f-90d4-f8f40df267fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'image_id': '1a107e2f-1a9d-4b6f-861d-e64bee7d56be'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:02:41 compute-0 nova_compute[259850]: 2025-10-11 04:02:41.561 2 WARNING nova.virt.libvirt.driver [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:02:41 compute-0 nova_compute[259850]: 2025-10-11 04:02:41.574 2 DEBUG nova.virt.libvirt.host [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:02:41 compute-0 nova_compute[259850]: 2025-10-11 04:02:41.576 2 DEBUG nova.virt.libvirt.host [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:02:41 compute-0 nova_compute[259850]: 2025-10-11 04:02:41.580 2 DEBUG nova.virt.libvirt.host [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:02:41 compute-0 nova_compute[259850]: 2025-10-11 04:02:41.581 2 DEBUG nova.virt.libvirt.host [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:02:41 compute-0 nova_compute[259850]: 2025-10-11 04:02:41.583 2 DEBUG nova.virt.libvirt.driver [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:02:41 compute-0 nova_compute[259850]: 2025-10-11 04:02:41.583 2 DEBUG nova.virt.hardware [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:02:41 compute-0 nova_compute[259850]: 2025-10-11 04:02:41.584 2 DEBUG nova.virt.hardware [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:02:41 compute-0 nova_compute[259850]: 2025-10-11 04:02:41.585 2 DEBUG nova.virt.hardware [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:02:41 compute-0 nova_compute[259850]: 2025-10-11 04:02:41.585 2 DEBUG nova.virt.hardware [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:02:41 compute-0 nova_compute[259850]: 2025-10-11 04:02:41.586 2 DEBUG nova.virt.hardware [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:02:41 compute-0 nova_compute[259850]: 2025-10-11 04:02:41.586 2 DEBUG nova.virt.hardware [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:02:41 compute-0 nova_compute[259850]: 2025-10-11 04:02:41.587 2 DEBUG nova.virt.hardware [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:02:41 compute-0 nova_compute[259850]: 2025-10-11 04:02:41.587 2 DEBUG nova.virt.hardware [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:02:41 compute-0 nova_compute[259850]: 2025-10-11 04:02:41.587 2 DEBUG nova.virt.hardware [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:02:41 compute-0 nova_compute[259850]: 2025-10-11 04:02:41.588 2 DEBUG nova.virt.hardware [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:02:41 compute-0 nova_compute[259850]: 2025-10-11 04:02:41.588 2 DEBUG nova.virt.hardware [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:02:41 compute-0 nova_compute[259850]: 2025-10-11 04:02:41.594 2 DEBUG nova.privsep.utils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 11 04:02:41 compute-0 nova_compute[259850]: 2025-10-11 04:02:41.595 2 DEBUG oslo_concurrency.processutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:02:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:02:42 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3313145099' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.058 2 DEBUG oslo_concurrency.processutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:02:42 compute-0 distracted_tesla[267001]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:02:42 compute-0 distracted_tesla[267001]: --> relative data size: 1.0
Oct 11 04:02:42 compute-0 distracted_tesla[267001]: --> All data devices are unavailable
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.092 2 DEBUG nova.storage.rbd_utils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] rbd image 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:02:42 compute-0 systemd[1]: libpod-2d36b8536c72c2bcdc7aa4cbfd6ee55d987204aa5ec5370fbddeb35920de0a0d.scope: Deactivated successfully.
Oct 11 04:02:42 compute-0 conmon[267001]: conmon 2d36b8536c72c2bcdc7a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2d36b8536c72c2bcdc7aa4cbfd6ee55d987204aa5ec5370fbddeb35920de0a0d.scope/container/memory.events
Oct 11 04:02:42 compute-0 podman[266985]: 2025-10-11 04:02:42.09622716 +0000 UTC m=+1.177168505 container died 2d36b8536c72c2bcdc7aa4cbfd6ee55d987204aa5ec5370fbddeb35920de0a0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_tesla, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.099 2 DEBUG oslo_concurrency.processutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:02:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f932968bcee5b8a753c7ca15f46365f1eff1193c02dc30b92b764773c13a529-merged.mount: Deactivated successfully.
Oct 11 04:02:42 compute-0 podman[266985]: 2025-10-11 04:02:42.152822886 +0000 UTC m=+1.233764221 container remove 2d36b8536c72c2bcdc7aa4cbfd6ee55d987204aa5ec5370fbddeb35920de0a0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_tesla, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:02:42 compute-0 systemd[1]: libpod-conmon-2d36b8536c72c2bcdc7aa4cbfd6ee55d987204aa5ec5370fbddeb35920de0a0d.scope: Deactivated successfully.
Oct 11 04:02:42 compute-0 sudo[266878]: pam_unix(sudo:session): session closed for user root
Oct 11 04:02:42 compute-0 sudo[267084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:02:42 compute-0 sudo[267084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:02:42 compute-0 sudo[267084]: pam_unix(sudo:session): session closed for user root
Oct 11 04:02:42 compute-0 sudo[267128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:02:42 compute-0 sudo[267128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:02:42 compute-0 sudo[267128]: pam_unix(sudo:session): session closed for user root
Oct 11 04:02:42 compute-0 sudo[267153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:02:42 compute-0 sudo[267153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:02:42 compute-0 sudo[267153]: pam_unix(sudo:session): session closed for user root
Oct 11 04:02:42 compute-0 sudo[267178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 04:02:42 compute-0 sudo[267178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:02:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:02:42 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4250066632' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.571 2 DEBUG oslo_concurrency.processutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.573 2 DEBUG nova.virt.libvirt.vif [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:02:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-EncryptedVolumesExtendAttachedTest-instance-1517859076',display_name='tempest-EncryptedVolumesExtendAttachedTest-instance-1517859076',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-encryptedvolumesextendattachedtest-instance-1517859076',id=1,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIP9WwMZHg30GEZ6pU9u1A/MMvyJS2+nS/lRgwrDD2GyS0E+SUtgIIxuMa25JYk2802r1expk7HTzdwVfDdYPfQ09QKkuenleq+s8kuEDgjh5maYKeHlqJtfNaVfPDIR9g==',key_name='tempest-keypair-583789850',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='54de3f5004d1488aaf5e429b0071e194',ramdisk_id='',reservation_id='r-kyx1qls8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-EncryptedVolumesExtendAttachedTest-1383109666',owner_user_name='tempest-EncryptedVolumesExtendAttachedTest-1383109666-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:02:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6f96a3b66f9943398432732b3141745a',uuid=02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "601ef18d-d973-476f-90d4-f8f40df267fa", "address": "fa:16:3e:b3:25:de", "network": {"id": "373b2ee9-84af-407c-9e36-4d16b55fdfd0", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-757984270-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54de3f5004d1488aaf5e429b0071e194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap601ef18d-d9", "ovs_interfaceid": "601ef18d-d973-476f-90d4-f8f40df267fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.573 2 DEBUG nova.network.os_vif_util [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Converting VIF {"id": "601ef18d-d973-476f-90d4-f8f40df267fa", "address": "fa:16:3e:b3:25:de", "network": {"id": "373b2ee9-84af-407c-9e36-4d16b55fdfd0", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-757984270-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54de3f5004d1488aaf5e429b0071e194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap601ef18d-d9", "ovs_interfaceid": "601ef18d-d973-476f-90d4-f8f40df267fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.574 2 DEBUG nova.network.os_vif_util [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:25:de,bridge_name='br-int',has_traffic_filtering=True,id=601ef18d-d973-476f-90d4-f8f40df267fa,network=Network(373b2ee9-84af-407c-9e36-4d16b55fdfd0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap601ef18d-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.577 2 DEBUG nova.objects.instance [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lazy-loading 'pci_devices' on Instance uuid 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.594 2 DEBUG nova.virt.libvirt.driver [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:02:42 compute-0 nova_compute[259850]:   <uuid>02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc</uuid>
Oct 11 04:02:42 compute-0 nova_compute[259850]:   <name>instance-00000001</name>
Oct 11 04:02:42 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:02:42 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:02:42 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <nova:name>tempest-EncryptedVolumesExtendAttachedTest-instance-1517859076</nova:name>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:02:41</nova:creationTime>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:02:42 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:02:42 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:02:42 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:02:42 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:02:42 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:02:42 compute-0 nova_compute[259850]:         <nova:user uuid="6f96a3b66f9943398432732b3141745a">tempest-EncryptedVolumesExtendAttachedTest-1383109666-project-member</nova:user>
Oct 11 04:02:42 compute-0 nova_compute[259850]:         <nova:project uuid="54de3f5004d1488aaf5e429b0071e194">tempest-EncryptedVolumesExtendAttachedTest-1383109666</nova:project>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <nova:root type="image" uuid="1a107e2f-1a9d-4b6f-861d-e64bee7d56be"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <nova:ports>
Oct 11 04:02:42 compute-0 nova_compute[259850]:         <nova:port uuid="601ef18d-d973-476f-90d4-f8f40df267fa">
Oct 11 04:02:42 compute-0 nova_compute[259850]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:         </nova:port>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       </nova:ports>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:02:42 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:02:42 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <system>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <entry name="serial">02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc</entry>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <entry name="uuid">02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc</entry>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     </system>
Oct 11 04:02:42 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:02:42 compute-0 nova_compute[259850]:   <os>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:   </os>
Oct 11 04:02:42 compute-0 nova_compute[259850]:   <features>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:   </features>
Oct 11 04:02:42 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:02:42 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:02:42 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc_disk">
Oct 11 04:02:42 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       </source>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:02:42 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc_disk.config">
Oct 11 04:02:42 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       </source>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:02:42 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <interface type="ethernet">
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <mac address="fa:16:3e:b3:25:de"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <mtu size="1442"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <target dev="tap601ef18d-d9"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     </interface>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc/console.log" append="off"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <video>
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     </video>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:02:42 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:02:42 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:02:42 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:02:42 compute-0 nova_compute[259850]: </domain>
Oct 11 04:02:42 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.596 2 DEBUG nova.compute.manager [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Preparing to wait for external event network-vif-plugged-601ef18d-d973-476f-90d4-f8f40df267fa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.596 2 DEBUG oslo_concurrency.lockutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Acquiring lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.597 2 DEBUG oslo_concurrency.lockutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.597 2 DEBUG oslo_concurrency.lockutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.598 2 DEBUG nova.virt.libvirt.vif [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:02:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-EncryptedVolumesExtendAttachedTest-instance-1517859076',display_name='tempest-EncryptedVolumesExtendAttachedTest-instance-1517859076',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-encryptedvolumesextendattachedtest-instance-1517859076',id=1,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIP9WwMZHg30GEZ6pU9u1A/MMvyJS2+nS/lRgwrDD2GyS0E+SUtgIIxuMa25JYk2802r1expk7HTzdwVfDdYPfQ09QKkuenleq+s8kuEDgjh5maYKeHlqJtfNaVfPDIR9g==',key_name='tempest-keypair-583789850',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='54de3f5004d1488aaf5e429b0071e194',ramdisk_id='',reservation_id='r-kyx1qls8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-EncryptedVolumesExtendAttachedTest-1383109666',owner_user_name='tempest-EncryptedVolumesExtendAttachedTest-1383109666-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:02:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6f96a3b66f9943398432732b3141745a',uuid=02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "601ef18d-d973-476f-90d4-f8f40df267fa", "address": "fa:16:3e:b3:25:de", "network": {"id": "373b2ee9-84af-407c-9e36-4d16b55fdfd0", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-757984270-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54de3f5004d1488aaf5e429b0071e194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap601ef18d-d9", "ovs_interfaceid": "601ef18d-d973-476f-90d4-f8f40df267fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.598 2 DEBUG nova.network.os_vif_util [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Converting VIF {"id": "601ef18d-d973-476f-90d4-f8f40df267fa", "address": "fa:16:3e:b3:25:de", "network": {"id": "373b2ee9-84af-407c-9e36-4d16b55fdfd0", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-757984270-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54de3f5004d1488aaf5e429b0071e194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap601ef18d-d9", "ovs_interfaceid": "601ef18d-d973-476f-90d4-f8f40df267fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.599 2 DEBUG nova.network.os_vif_util [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:25:de,bridge_name='br-int',has_traffic_filtering=True,id=601ef18d-d973-476f-90d4-f8f40df267fa,network=Network(373b2ee9-84af-407c-9e36-4d16b55fdfd0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap601ef18d-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.599 2 DEBUG os_vif [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:25:de,bridge_name='br-int',has_traffic_filtering=True,id=601ef18d-d973-476f-90d4-f8f40df267fa,network=Network(373b2ee9-84af-407c-9e36-4d16b55fdfd0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap601ef18d-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.634 2 DEBUG ovsdbapp.backend.ovs_idl [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.635 2 DEBUG ovsdbapp.backend.ovs_idl [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.635 2 DEBUG ovsdbapp.backend.ovs_idl [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.656 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.657 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.660 2 INFO oslo.privsep.daemon [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmptpoxpwrc/privsep.sock']
Oct 11 04:02:42 compute-0 ceph-mon[74273]: pgmap v912: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 383 B/s wr, 21 op/s
Oct 11 04:02:42 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3313145099' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:02:42 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4250066632' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:02:42 compute-0 podman[267248]: 2025-10-11 04:02:42.886615953 +0000 UTC m=+0.069093467 container create 64a12d36b153062654f20799ccee7180acf79cf30000cf8c0bc938b8c4b3f2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 04:02:42 compute-0 systemd[1]: Started libpod-conmon-64a12d36b153062654f20799ccee7180acf79cf30000cf8c0bc938b8c4b3f2c8.scope.
Oct 11 04:02:42 compute-0 podman[267248]: 2025-10-11 04:02:42.856808958 +0000 UTC m=+0.039286502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.968 2 DEBUG nova.network.neutron [req-fa02c694-5359-4f28-88bf-8359d7da78f1 req-1d73c10f-2c5f-438f-b1df-acbe7ae0535a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Updated VIF entry in instance network info cache for port 601ef18d-d973-476f-90d4-f8f40df267fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.970 2 DEBUG nova.network.neutron [req-fa02c694-5359-4f28-88bf-8359d7da78f1 req-1d73c10f-2c5f-438f-b1df-acbe7ae0535a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Updating instance_info_cache with network_info: [{"id": "601ef18d-d973-476f-90d4-f8f40df267fa", "address": "fa:16:3e:b3:25:de", "network": {"id": "373b2ee9-84af-407c-9e36-4d16b55fdfd0", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-757984270-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54de3f5004d1488aaf5e429b0071e194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap601ef18d-d9", "ovs_interfaceid": "601ef18d-d973-476f-90d4-f8f40df267fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:02:42 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:02:42 compute-0 podman[267248]: 2025-10-11 04:02:42.992532572 +0000 UTC m=+0.175010146 container init 64a12d36b153062654f20799ccee7180acf79cf30000cf8c0bc938b8c4b3f2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_raman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Oct 11 04:02:42 compute-0 nova_compute[259850]: 2025-10-11 04:02:42.996 2 DEBUG oslo_concurrency.lockutils [req-fa02c694-5359-4f28-88bf-8359d7da78f1 req-1d73c10f-2c5f-438f-b1df-acbe7ae0535a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:02:43 compute-0 podman[267248]: 2025-10-11 04:02:43.004379364 +0000 UTC m=+0.186856878 container start 64a12d36b153062654f20799ccee7180acf79cf30000cf8c0bc938b8c4b3f2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:02:43 compute-0 recursing_raman[267263]: 167 167
Oct 11 04:02:43 compute-0 podman[267248]: 2025-10-11 04:02:43.010995619 +0000 UTC m=+0.193473183 container attach 64a12d36b153062654f20799ccee7180acf79cf30000cf8c0bc938b8c4b3f2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:02:43 compute-0 systemd[1]: libpod-64a12d36b153062654f20799ccee7180acf79cf30000cf8c0bc938b8c4b3f2c8.scope: Deactivated successfully.
Oct 11 04:02:43 compute-0 podman[267248]: 2025-10-11 04:02:43.012756349 +0000 UTC m=+0.195233843 container died 64a12d36b153062654f20799ccee7180acf79cf30000cf8c0bc938b8c4b3f2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_raman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:02:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1b7541b3481b521001afcea11f83c357292194394cbd927b5e4b5a1ac1faf1c-merged.mount: Deactivated successfully.
Oct 11 04:02:43 compute-0 podman[267248]: 2025-10-11 04:02:43.066233468 +0000 UTC m=+0.248710982 container remove 64a12d36b153062654f20799ccee7180acf79cf30000cf8c0bc938b8c4b3f2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:02:43 compute-0 systemd[1]: libpod-conmon-64a12d36b153062654f20799ccee7180acf79cf30000cf8c0bc938b8c4b3f2c8.scope: Deactivated successfully.
Oct 11 04:02:43 compute-0 podman[267288]: 2025-10-11 04:02:43.295630097 +0000 UTC m=+0.064802997 container create 19c360e7df7c4331d1619ee217e2bd118c5d3e4098ae3d0a5526abac5da4faaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Oct 11 04:02:43 compute-0 nova_compute[259850]: 2025-10-11 04:02:43.308 2 INFO oslo.privsep.daemon [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Spawned new privsep daemon via rootwrap
Oct 11 04:02:43 compute-0 nova_compute[259850]: 2025-10-11 04:02:43.190 609 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 11 04:02:43 compute-0 nova_compute[259850]: 2025-10-11 04:02:43.199 609 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 11 04:02:43 compute-0 nova_compute[259850]: 2025-10-11 04:02:43.203 609 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Oct 11 04:02:43 compute-0 nova_compute[259850]: 2025-10-11 04:02:43.204 609 INFO oslo.privsep.daemon [-] privsep daemon running as pid 609
Oct 11 04:02:43 compute-0 systemd[1]: Started libpod-conmon-19c360e7df7c4331d1619ee217e2bd118c5d3e4098ae3d0a5526abac5da4faaf.scope.
Oct 11 04:02:43 compute-0 podman[267288]: 2025-10-11 04:02:43.268305101 +0000 UTC m=+0.037477981 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:02:43 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5459420424c09da91806c984282d6f32e030647bbcfa3af2b044dc0141cb4598/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5459420424c09da91806c984282d6f32e030647bbcfa3af2b044dc0141cb4598/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5459420424c09da91806c984282d6f32e030647bbcfa3af2b044dc0141cb4598/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5459420424c09da91806c984282d6f32e030647bbcfa3af2b044dc0141cb4598/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:02:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 62 op/s
Oct 11 04:02:43 compute-0 podman[267288]: 2025-10-11 04:02:43.405101894 +0000 UTC m=+0.174274844 container init 19c360e7df7c4331d1619ee217e2bd118c5d3e4098ae3d0a5526abac5da4faaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:02:43 compute-0 podman[267288]: 2025-10-11 04:02:43.415459385 +0000 UTC m=+0.184632285 container start 19c360e7df7c4331d1619ee217e2bd118c5d3e4098ae3d0a5526abac5da4faaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 04:02:43 compute-0 podman[267288]: 2025-10-11 04:02:43.419353394 +0000 UTC m=+0.188526284 container attach 19c360e7df7c4331d1619ee217e2bd118c5d3e4098ae3d0a5526abac5da4faaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:02:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:02:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4279695383' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:02:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:02:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4279695383' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:02:43 compute-0 nova_compute[259850]: 2025-10-11 04:02:43.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:02:43 compute-0 nova_compute[259850]: 2025-10-11 04:02:43.631 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap601ef18d-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:02:43 compute-0 nova_compute[259850]: 2025-10-11 04:02:43.631 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap601ef18d-d9, col_values=(('external_ids', {'iface-id': '601ef18d-d973-476f-90d4-f8f40df267fa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:25:de', 'vm-uuid': '02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:02:43 compute-0 NetworkManager[44920]: <info>  [1760155363.6350] manager: (tap601ef18d-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Oct 11 04:02:43 compute-0 nova_compute[259850]: 2025-10-11 04:02:43.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:02:43 compute-0 nova_compute[259850]: 2025-10-11 04:02:43.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:02:43 compute-0 nova_compute[259850]: 2025-10-11 04:02:43.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:02:43 compute-0 nova_compute[259850]: 2025-10-11 04:02:43.648 2 INFO os_vif [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:25:de,bridge_name='br-int',has_traffic_filtering=True,id=601ef18d-d973-476f-90d4-f8f40df267fa,network=Network(373b2ee9-84af-407c-9e36-4d16b55fdfd0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap601ef18d-d9')
Oct 11 04:02:43 compute-0 nova_compute[259850]: 2025-10-11 04:02:43.724 2 DEBUG nova.virt.libvirt.driver [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:02:43 compute-0 nova_compute[259850]: 2025-10-11 04:02:43.725 2 DEBUG nova.virt.libvirt.driver [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:02:43 compute-0 nova_compute[259850]: 2025-10-11 04:02:43.725 2 DEBUG nova.virt.libvirt.driver [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] No VIF found with MAC fa:16:3e:b3:25:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:02:43 compute-0 nova_compute[259850]: 2025-10-11 04:02:43.727 2 INFO nova.virt.libvirt.driver [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Using config drive
Oct 11 04:02:43 compute-0 nova_compute[259850]: 2025-10-11 04:02:43.759 2 DEBUG nova.storage.rbd_utils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] rbd image 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:02:43 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4279695383' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:02:43 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4279695383' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]: {
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:     "0": [
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:         {
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "devices": [
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "/dev/loop3"
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             ],
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "lv_name": "ceph_lv0",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "lv_size": "21470642176",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "name": "ceph_lv0",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "tags": {
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.cluster_name": "ceph",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.crush_device_class": "",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.encrypted": "0",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.osd_id": "0",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.type": "block",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.vdo": "0"
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             },
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "type": "block",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "vg_name": "ceph_vg0"
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:         }
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:     ],
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:     "1": [
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:         {
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "devices": [
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "/dev/loop4"
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             ],
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "lv_name": "ceph_lv1",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "lv_size": "21470642176",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "name": "ceph_lv1",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "tags": {
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.cluster_name": "ceph",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.crush_device_class": "",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.encrypted": "0",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.osd_id": "1",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.type": "block",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.vdo": "0"
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             },
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "type": "block",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "vg_name": "ceph_vg1"
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:         }
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:     ],
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:     "2": [
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:         {
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "devices": [
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "/dev/loop5"
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             ],
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "lv_name": "ceph_lv2",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "lv_size": "21470642176",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "name": "ceph_lv2",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "tags": {
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.cluster_name": "ceph",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.crush_device_class": "",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.encrypted": "0",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.osd_id": "2",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.type": "block",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:                 "ceph.vdo": "0"
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             },
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "type": "block",
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:             "vg_name": "ceph_vg2"
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:         }
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]:     ]
Oct 11 04:02:44 compute-0 wonderful_almeida[267304]: }
Oct 11 04:02:44 compute-0 systemd[1]: libpod-19c360e7df7c4331d1619ee217e2bd118c5d3e4098ae3d0a5526abac5da4faaf.scope: Deactivated successfully.
Oct 11 04:02:44 compute-0 podman[267335]: 2025-10-11 04:02:44.30010769 +0000 UTC m=+0.041996608 container died 19c360e7df7c4331d1619ee217e2bd118c5d3e4098ae3d0a5526abac5da4faaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 04:02:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-5459420424c09da91806c984282d6f32e030647bbcfa3af2b044dc0141cb4598-merged.mount: Deactivated successfully.
Oct 11 04:02:44 compute-0 podman[267335]: 2025-10-11 04:02:44.37788209 +0000 UTC m=+0.119770988 container remove 19c360e7df7c4331d1619ee217e2bd118c5d3e4098ae3d0a5526abac5da4faaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 04:02:44 compute-0 systemd[1]: libpod-conmon-19c360e7df7c4331d1619ee217e2bd118c5d3e4098ae3d0a5526abac5da4faaf.scope: Deactivated successfully.
Oct 11 04:02:44 compute-0 sudo[267178]: pam_unix(sudo:session): session closed for user root
Oct 11 04:02:44 compute-0 sudo[267350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:02:44 compute-0 sudo[267350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:02:44 compute-0 sudo[267350]: pam_unix(sudo:session): session closed for user root
Oct 11 04:02:44 compute-0 sudo[267375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:02:44 compute-0 sudo[267375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:02:44 compute-0 sudo[267375]: pam_unix(sudo:session): session closed for user root
Oct 11 04:02:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:02:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Oct 11 04:02:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Oct 11 04:02:44 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Oct 11 04:02:44 compute-0 sudo[267400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:02:44 compute-0 sudo[267400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:02:44 compute-0 sudo[267400]: pam_unix(sudo:session): session closed for user root
Oct 11 04:02:44 compute-0 nova_compute[259850]: 2025-10-11 04:02:44.738 2 INFO nova.virt.libvirt.driver [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Creating config drive at /var/lib/nova/instances/02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc/disk.config
Oct 11 04:02:44 compute-0 nova_compute[259850]: 2025-10-11 04:02:44.750 2 DEBUG oslo_concurrency.processutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7w85uckn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:02:44 compute-0 sudo[267425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 04:02:44 compute-0 sudo[267425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:02:44 compute-0 ceph-mon[74273]: pgmap v913: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 62 op/s
Oct 11 04:02:44 compute-0 ceph-mon[74273]: osdmap e132: 3 total, 3 up, 3 in
Oct 11 04:02:44 compute-0 nova_compute[259850]: 2025-10-11 04:02:44.904 2 DEBUG oslo_concurrency.processutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7w85uckn" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:02:44 compute-0 nova_compute[259850]: 2025-10-11 04:02:44.950 2 DEBUG nova.storage.rbd_utils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] rbd image 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:02:44 compute-0 nova_compute[259850]: 2025-10-11 04:02:44.956 2 DEBUG oslo_concurrency.processutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc/disk.config 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:02:45 compute-0 nova_compute[259850]: 2025-10-11 04:02:45.158 2 DEBUG oslo_concurrency.processutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc/disk.config 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:02:45 compute-0 nova_compute[259850]: 2025-10-11 04:02:45.160 2 INFO nova.virt.libvirt.driver [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Deleting local config drive /var/lib/nova/instances/02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc/disk.config because it was imported into RBD.
Oct 11 04:02:45 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 11 04:02:45 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 11 04:02:45 compute-0 podman[267528]: 2025-10-11 04:02:45.251471055 +0000 UTC m=+0.064586872 container create 65ef659f8a8fe57349e36fe2cc93bc9b49367bb7eff183b6af28d20449bfcf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:02:45 compute-0 systemd[1]: Started libpod-conmon-65ef659f8a8fe57349e36fe2cc93bc9b49367bb7eff183b6af28d20449bfcf9e.scope.
Oct 11 04:02:45 compute-0 podman[267528]: 2025-10-11 04:02:45.217388679 +0000 UTC m=+0.030504587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:02:45 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct 11 04:02:45 compute-0 kernel: tap601ef18d-d9: entered promiscuous mode
Oct 11 04:02:45 compute-0 NetworkManager[44920]: <info>  [1760155365.3307] manager: (tap601ef18d-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Oct 11 04:02:45 compute-0 nova_compute[259850]: 2025-10-11 04:02:45.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:02:45 compute-0 ovn_controller[152025]: 2025-10-11T04:02:45Z|00027|binding|INFO|Claiming lport 601ef18d-d973-476f-90d4-f8f40df267fa for this chassis.
Oct 11 04:02:45 compute-0 ovn_controller[152025]: 2025-10-11T04:02:45Z|00028|binding|INFO|601ef18d-d973-476f-90d4-f8f40df267fa: Claiming fa:16:3e:b3:25:de 10.100.0.5
Oct 11 04:02:45 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:02:45 compute-0 nova_compute[259850]: 2025-10-11 04:02:45.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:02:45 compute-0 podman[267528]: 2025-10-11 04:02:45.362741363 +0000 UTC m=+0.175857250 container init 65ef659f8a8fe57349e36fe2cc93bc9b49367bb7eff183b6af28d20449bfcf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 04:02:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:45.366 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:25:de 10.100.0.5'], port_security=['fa:16:3e:b3:25:de 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-373b2ee9-84af-407c-9e36-4d16b55fdfd0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '54de3f5004d1488aaf5e429b0071e194', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf46f1e6-a956-4884-b138-5e34f728c752', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24b759c8-c2ad-43bf-b2f5-82015d6a0f2f, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=601ef18d-d973-476f-90d4-f8f40df267fa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:02:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:45.371 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 601ef18d-d973-476f-90d4-f8f40df267fa in datapath 373b2ee9-84af-407c-9e36-4d16b55fdfd0 bound to our chassis
Oct 11 04:02:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:45.374 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 373b2ee9-84af-407c-9e36-4d16b55fdfd0
Oct 11 04:02:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:45.375 161902 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpzsf64ork/privsep.sock']
Oct 11 04:02:45 compute-0 podman[267528]: 2025-10-11 04:02:45.376565861 +0000 UTC m=+0.189681678 container start 65ef659f8a8fe57349e36fe2cc93bc9b49367bb7eff183b6af28d20449bfcf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 04:02:45 compute-0 podman[267528]: 2025-10-11 04:02:45.379942406 +0000 UTC m=+0.193058253 container attach 65ef659f8a8fe57349e36fe2cc93bc9b49367bb7eff183b6af28d20449bfcf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 04:02:45 compute-0 suspicious_hofstadter[267571]: 167 167
Oct 11 04:02:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.8 MiB/s wr, 54 op/s
Oct 11 04:02:45 compute-0 systemd[1]: libpod-65ef659f8a8fe57349e36fe2cc93bc9b49367bb7eff183b6af28d20449bfcf9e.scope: Deactivated successfully.
Oct 11 04:02:45 compute-0 podman[267528]: 2025-10-11 04:02:45.392203649 +0000 UTC m=+0.205319466 container died 65ef659f8a8fe57349e36fe2cc93bc9b49367bb7eff183b6af28d20449bfcf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:02:45 compute-0 systemd-udevd[267584]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:02:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7a85dc6125a75db9a6edadc588574916ad18f251eacb50f7e1546676972f6d0-merged.mount: Deactivated successfully.
Oct 11 04:02:45 compute-0 NetworkManager[44920]: <info>  [1760155365.4283] device (tap601ef18d-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:02:45 compute-0 NetworkManager[44920]: <info>  [1760155365.4291] device (tap601ef18d-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:02:45 compute-0 podman[267528]: 2025-10-11 04:02:45.444301429 +0000 UTC m=+0.257417246 container remove 65ef659f8a8fe57349e36fe2cc93bc9b49367bb7eff183b6af28d20449bfcf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 04:02:45 compute-0 systemd-machined[214869]: New machine qemu-1-instance-00000001.
Oct 11 04:02:45 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Oct 11 04:02:45 compute-0 nova_compute[259850]: 2025-10-11 04:02:45.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:02:45 compute-0 systemd[1]: libpod-conmon-65ef659f8a8fe57349e36fe2cc93bc9b49367bb7eff183b6af28d20449bfcf9e.scope: Deactivated successfully.
Oct 11 04:02:45 compute-0 ovn_controller[152025]: 2025-10-11T04:02:45Z|00029|binding|INFO|Setting lport 601ef18d-d973-476f-90d4-f8f40df267fa ovn-installed in OVS
Oct 11 04:02:45 compute-0 ovn_controller[152025]: 2025-10-11T04:02:45Z|00030|binding|INFO|Setting lport 601ef18d-d973-476f-90d4-f8f40df267fa up in Southbound
Oct 11 04:02:45 compute-0 nova_compute[259850]: 2025-10-11 04:02:45.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:02:45 compute-0 podman[267616]: 2025-10-11 04:02:45.65375144 +0000 UTC m=+0.066259158 container create e0933792ffdc130fd4e0803506ee36d78b63f3203313a2a5176bf6ecdb9ece53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:02:45 compute-0 podman[267616]: 2025-10-11 04:02:45.626469915 +0000 UTC m=+0.038977663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:02:45 compute-0 systemd[1]: Started libpod-conmon-e0933792ffdc130fd4e0803506ee36d78b63f3203313a2a5176bf6ecdb9ece53.scope.
Oct 11 04:02:45 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/744952f6d183857b29f8a0a97ec4b051035af290db85a3fa2b1f622bb49600cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/744952f6d183857b29f8a0a97ec4b051035af290db85a3fa2b1f622bb49600cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/744952f6d183857b29f8a0a97ec4b051035af290db85a3fa2b1f622bb49600cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/744952f6d183857b29f8a0a97ec4b051035af290db85a3fa2b1f622bb49600cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:02:45 compute-0 podman[267616]: 2025-10-11 04:02:45.828007354 +0000 UTC m=+0.240515072 container init e0933792ffdc130fd4e0803506ee36d78b63f3203313a2a5176bf6ecdb9ece53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_keller, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:02:45 compute-0 podman[267616]: 2025-10-11 04:02:45.836696338 +0000 UTC m=+0.249204056 container start e0933792ffdc130fd4e0803506ee36d78b63f3203313a2a5176bf6ecdb9ece53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_keller, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:02:45 compute-0 podman[267616]: 2025-10-11 04:02:45.844697032 +0000 UTC m=+0.257204800 container attach e0933792ffdc130fd4e0803506ee36d78b63f3203313a2a5176bf6ecdb9ece53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_keller, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 04:02:46 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:46.112 161902 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 11 04:02:46 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:46.114 161902 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpzsf64ork/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 11 04:02:46 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:45.984 267637 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 11 04:02:46 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:45.988 267637 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 11 04:02:46 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:45.989 267637 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Oct 11 04:02:46 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:45.990 267637 INFO oslo.privsep.daemon [-] privsep daemon running as pid 267637
Oct 11 04:02:46 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:46.116 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[36a9d702-c42c-4584-a915-9eabbac0d772]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:02:46 compute-0 nova_compute[259850]: 2025-10-11 04:02:46.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:02:46 compute-0 nova_compute[259850]: 2025-10-11 04:02:46.481 2 DEBUG nova.compute.manager [req-f9d6bb13-544c-4a90-b268-4dfbbe80aa78 req-12268688-fa7c-4baf-a0c1-128ad7944e7d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Received event network-vif-plugged-601ef18d-d973-476f-90d4-f8f40df267fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:02:46 compute-0 nova_compute[259850]: 2025-10-11 04:02:46.482 2 DEBUG oslo_concurrency.lockutils [req-f9d6bb13-544c-4a90-b268-4dfbbe80aa78 req-12268688-fa7c-4baf-a0c1-128ad7944e7d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:02:46 compute-0 nova_compute[259850]: 2025-10-11 04:02:46.482 2 DEBUG oslo_concurrency.lockutils [req-f9d6bb13-544c-4a90-b268-4dfbbe80aa78 req-12268688-fa7c-4baf-a0c1-128ad7944e7d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:02:46 compute-0 nova_compute[259850]: 2025-10-11 04:02:46.483 2 DEBUG oslo_concurrency.lockutils [req-f9d6bb13-544c-4a90-b268-4dfbbe80aa78 req-12268688-fa7c-4baf-a0c1-128ad7944e7d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:02:46 compute-0 nova_compute[259850]: 2025-10-11 04:02:46.483 2 DEBUG nova.compute.manager [req-f9d6bb13-544c-4a90-b268-4dfbbe80aa78 req-12268688-fa7c-4baf-a0c1-128ad7944e7d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Processing event network-vif-plugged-601ef18d-d973-476f-90d4-f8f40df267fa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:02:46 compute-0 epic_keller[267632]: {
Oct 11 04:02:46 compute-0 epic_keller[267632]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 04:02:46 compute-0 epic_keller[267632]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:02:46 compute-0 epic_keller[267632]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:02:46 compute-0 epic_keller[267632]:         "osd_id": 1,
Oct 11 04:02:46 compute-0 epic_keller[267632]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:02:46 compute-0 epic_keller[267632]:         "type": "bluestore"
Oct 11 04:02:46 compute-0 epic_keller[267632]:     },
Oct 11 04:02:46 compute-0 epic_keller[267632]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 04:02:46 compute-0 epic_keller[267632]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:02:46 compute-0 epic_keller[267632]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:02:46 compute-0 epic_keller[267632]:         "osd_id": 2,
Oct 11 04:02:46 compute-0 epic_keller[267632]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:02:46 compute-0 epic_keller[267632]:         "type": "bluestore"
Oct 11 04:02:46 compute-0 epic_keller[267632]:     },
Oct 11 04:02:46 compute-0 epic_keller[267632]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 04:02:46 compute-0 epic_keller[267632]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:02:46 compute-0 epic_keller[267632]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:02:46 compute-0 epic_keller[267632]:         "osd_id": 0,
Oct 11 04:02:46 compute-0 epic_keller[267632]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:02:46 compute-0 epic_keller[267632]:         "type": "bluestore"
Oct 11 04:02:46 compute-0 epic_keller[267632]:     }
Oct 11 04:02:46 compute-0 epic_keller[267632]: }
Oct 11 04:02:46 compute-0 systemd[1]: libpod-e0933792ffdc130fd4e0803506ee36d78b63f3203313a2a5176bf6ecdb9ece53.scope: Deactivated successfully.
Oct 11 04:02:46 compute-0 systemd[1]: libpod-e0933792ffdc130fd4e0803506ee36d78b63f3203313a2a5176bf6ecdb9ece53.scope: Consumed 1.005s CPU time.
Oct 11 04:02:46 compute-0 podman[267616]: 2025-10-11 04:02:46.845937225 +0000 UTC m=+1.258444923 container died e0933792ffdc130fd4e0803506ee36d78b63f3203313a2a5176bf6ecdb9ece53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_keller, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:02:46 compute-0 ceph-mon[74273]: pgmap v915: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.8 MiB/s wr, 54 op/s
Oct 11 04:02:46 compute-0 nova_compute[259850]: 2025-10-11 04:02:46.851 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155366.8507948, 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:02:46 compute-0 nova_compute[259850]: 2025-10-11 04:02:46.853 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] VM Started (Lifecycle Event)
Oct 11 04:02:46 compute-0 nova_compute[259850]: 2025-10-11 04:02:46.862 2 DEBUG nova.compute.manager [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:02:46 compute-0 nova_compute[259850]: 2025-10-11 04:02:46.865 2 DEBUG nova.virt.libvirt.driver [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:02:46 compute-0 nova_compute[259850]: 2025-10-11 04:02:46.878 2 INFO nova.virt.libvirt.driver [-] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Instance spawned successfully.
Oct 11 04:02:46 compute-0 nova_compute[259850]: 2025-10-11 04:02:46.879 2 DEBUG nova.virt.libvirt.driver [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:02:46 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:46.901 267637 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:02:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-744952f6d183857b29f8a0a97ec4b051035af290db85a3fa2b1f622bb49600cb-merged.mount: Deactivated successfully.
Oct 11 04:02:46 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:46.901 267637 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:02:46 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:46.901 267637 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:02:46 compute-0 nova_compute[259850]: 2025-10-11 04:02:46.928 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:02:46 compute-0 podman[267616]: 2025-10-11 04:02:46.931756409 +0000 UTC m=+1.344264087 container remove e0933792ffdc130fd4e0803506ee36d78b63f3203313a2a5176bf6ecdb9ece53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:02:46 compute-0 nova_compute[259850]: 2025-10-11 04:02:46.933 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:02:46 compute-0 systemd[1]: libpod-conmon-e0933792ffdc130fd4e0803506ee36d78b63f3203313a2a5176bf6ecdb9ece53.scope: Deactivated successfully.
Oct 11 04:02:46 compute-0 nova_compute[259850]: 2025-10-11 04:02:46.968 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:02:46 compute-0 nova_compute[259850]: 2025-10-11 04:02:46.969 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155366.8508809, 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:02:46 compute-0 nova_compute[259850]: 2025-10-11 04:02:46.969 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] VM Paused (Lifecycle Event)
Oct 11 04:02:46 compute-0 sudo[267425]: pam_unix(sudo:session): session closed for user root
Oct 11 04:02:46 compute-0 nova_compute[259850]: 2025-10-11 04:02:46.976 2 DEBUG nova.virt.libvirt.driver [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:02:46 compute-0 nova_compute[259850]: 2025-10-11 04:02:46.977 2 DEBUG nova.virt.libvirt.driver [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:02:46 compute-0 nova_compute[259850]: 2025-10-11 04:02:46.977 2 DEBUG nova.virt.libvirt.driver [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:02:46 compute-0 nova_compute[259850]: 2025-10-11 04:02:46.978 2 DEBUG nova.virt.libvirt.driver [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:02:46 compute-0 nova_compute[259850]: 2025-10-11 04:02:46.978 2 DEBUG nova.virt.libvirt.driver [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:02:46 compute-0 nova_compute[259850]: 2025-10-11 04:02:46.979 2 DEBUG nova.virt.libvirt.driver [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:02:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:02:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:02:46 compute-0 nova_compute[259850]: 2025-10-11 04:02:46.987 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:02:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:02:46 compute-0 nova_compute[259850]: 2025-10-11 04:02:46.990 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155366.86529, 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:02:46 compute-0 nova_compute[259850]: 2025-10-11 04:02:46.991 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] VM Resumed (Lifecycle Event)
Oct 11 04:02:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:02:46 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev cb85b501-7049-439c-b9ff-66277b4484b2 does not exist
Oct 11 04:02:46 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 6b8f6e74-923e-4e86-b532-3863ac33d3b5 does not exist
Oct 11 04:02:47 compute-0 nova_compute[259850]: 2025-10-11 04:02:47.008 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:02:47 compute-0 nova_compute[259850]: 2025-10-11 04:02:47.010 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:02:47 compute-0 nova_compute[259850]: 2025-10-11 04:02:47.035 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:02:47 compute-0 nova_compute[259850]: 2025-10-11 04:02:47.046 2 INFO nova.compute.manager [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Took 11.41 seconds to spawn the instance on the hypervisor.
Oct 11 04:02:47 compute-0 nova_compute[259850]: 2025-10-11 04:02:47.047 2 DEBUG nova.compute.manager [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:02:47 compute-0 sudo[267726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:02:47 compute-0 sudo[267726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:02:47 compute-0 sudo[267726]: pam_unix(sudo:session): session closed for user root
Oct 11 04:02:47 compute-0 nova_compute[259850]: 2025-10-11 04:02:47.115 2 INFO nova.compute.manager [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Took 12.37 seconds to build instance.
Oct 11 04:02:47 compute-0 nova_compute[259850]: 2025-10-11 04:02:47.133 2 DEBUG oslo_concurrency.lockutils [None req-78a57090-801a-40e7-b05e-441e28e301e0 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:02:47 compute-0 sudo[267751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:02:47 compute-0 sudo[267751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:02:47 compute-0 sudo[267751]: pam_unix(sudo:session): session closed for user root
Oct 11 04:02:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.5 MiB/s wr, 37 op/s
Oct 11 04:02:47 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:47.825 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[d041b9dd-3234-4520-bfe9-b4b3d07ea599]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:02:47 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:47.826 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap373b2ee9-81 in ovnmeta-373b2ee9-84af-407c-9e36-4d16b55fdfd0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 04:02:47 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:47.828 267637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap373b2ee9-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 04:02:47 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:47.828 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[b87a0a75-7552-40bc-866a-cda618d260ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:02:47 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:47.833 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[ed184328-4de4-4522-81fb-df5cb8690456]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:02:47 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:47.855 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[80e4f1e2-d07e-467d-a4e8-3765b06bb328]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:02:47 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:47.888 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[ef922f60-5003-4d56-8d9e-9c283937a346]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:02:47 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:47.892 161902 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp2hp7w0cx/privsep.sock']
Oct 11 04:02:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:02:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:02:48 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:48.583 161902 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 11 04:02:48 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:48.585 161902 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp2hp7w0cx/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 11 04:02:48 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:48.454 267785 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 11 04:02:48 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:48.462 267785 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 11 04:02:48 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:48.466 267785 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 11 04:02:48 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:48.467 267785 INFO oslo.privsep.daemon [-] privsep daemon running as pid 267785
Oct 11 04:02:48 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:48.590 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[567eaa68-ace7-46ce-8839-c31819f1ea04]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:02:48 compute-0 nova_compute[259850]: 2025-10-11 04:02:48.675 2 DEBUG nova.compute.manager [req-c20ab8ce-c064-4d60-840d-c0bdce972ccd req-c1fc7016-1f6e-4593-8c3d-9d36ed934cb0 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Received event network-vif-plugged-601ef18d-d973-476f-90d4-f8f40df267fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:02:48 compute-0 nova_compute[259850]: 2025-10-11 04:02:48.675 2 DEBUG oslo_concurrency.lockutils [req-c20ab8ce-c064-4d60-840d-c0bdce972ccd req-c1fc7016-1f6e-4593-8c3d-9d36ed934cb0 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:02:48 compute-0 nova_compute[259850]: 2025-10-11 04:02:48.675 2 DEBUG oslo_concurrency.lockutils [req-c20ab8ce-c064-4d60-840d-c0bdce972ccd req-c1fc7016-1f6e-4593-8c3d-9d36ed934cb0 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:02:48 compute-0 nova_compute[259850]: 2025-10-11 04:02:48.676 2 DEBUG oslo_concurrency.lockutils [req-c20ab8ce-c064-4d60-840d-c0bdce972ccd req-c1fc7016-1f6e-4593-8c3d-9d36ed934cb0 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:02:48 compute-0 nova_compute[259850]: 2025-10-11 04:02:48.676 2 DEBUG nova.compute.manager [req-c20ab8ce-c064-4d60-840d-c0bdce972ccd req-c1fc7016-1f6e-4593-8c3d-9d36ed934cb0 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] No waiting events found dispatching network-vif-plugged-601ef18d-d973-476f-90d4-f8f40df267fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:02:48 compute-0 nova_compute[259850]: 2025-10-11 04:02:48.676 2 WARNING nova.compute.manager [req-c20ab8ce-c064-4d60-840d-c0bdce972ccd req-c1fc7016-1f6e-4593-8c3d-9d36ed934cb0 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Received unexpected event network-vif-plugged-601ef18d-d973-476f-90d4-f8f40df267fa for instance with vm_state active and task_state None.
Oct 11 04:02:48 compute-0 nova_compute[259850]: 2025-10-11 04:02:48.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:02:49 compute-0 ceph-mon[74273]: pgmap v916: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.5 MiB/s wr, 37 op/s
Oct 11 04:02:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:49.108 267785 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:02:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:49.109 267785 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:02:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:49.109 267785 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:02:49 compute-0 nova_compute[259850]: 2025-10-11 04:02:49.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:02:49 compute-0 NetworkManager[44920]: <info>  [1760155369.2975] manager: (patch-provnet-86cd831a-6a58-4ba8-a51c-57fa1a3acacc-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/25)
Oct 11 04:02:49 compute-0 NetworkManager[44920]: <info>  [1760155369.2984] device (patch-provnet-86cd831a-6a58-4ba8-a51c-57fa1a3acacc-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:02:49 compute-0 NetworkManager[44920]: <info>  [1760155369.3006] manager: (patch-br-int-to-provnet-86cd831a-6a58-4ba8-a51c-57fa1a3acacc): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/26)
Oct 11 04:02:49 compute-0 NetworkManager[44920]: <info>  [1760155369.3011] device (patch-br-int-to-provnet-86cd831a-6a58-4ba8-a51c-57fa1a3acacc)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:02:49 compute-0 NetworkManager[44920]: <info>  [1760155369.3024] manager: (patch-br-int-to-provnet-86cd831a-6a58-4ba8-a51c-57fa1a3acacc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Oct 11 04:02:49 compute-0 NetworkManager[44920]: <info>  [1760155369.3033] manager: (patch-provnet-86cd831a-6a58-4ba8-a51c-57fa1a3acacc-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Oct 11 04:02:49 compute-0 NetworkManager[44920]: <info>  [1760155369.3039] device (patch-provnet-86cd831a-6a58-4ba8-a51c-57fa1a3acacc-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 11 04:02:49 compute-0 NetworkManager[44920]: <info>  [1760155369.3045] device (patch-br-int-to-provnet-86cd831a-6a58-4ba8-a51c-57fa1a3acacc)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 11 04:02:49 compute-0 nova_compute[259850]: 2025-10-11 04:02:49.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:02:49 compute-0 nova_compute[259850]: 2025-10-11 04:02:49.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:02:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 110 op/s
Oct 11 04:02:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:02:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:49.699 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[df4c9609-54c9-40e1-8715-4585726b0cff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:02:49 compute-0 NetworkManager[44920]: <info>  [1760155369.7131] manager: (tap373b2ee9-80): new Veth device (/org/freedesktop/NetworkManager/Devices/29)
Oct 11 04:02:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:49.707 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[f109ee46-ea27-416c-9905-c69794f00339]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:02:49 compute-0 systemd-udevd[267796]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:02:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:49.744 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[812737d0-1c40-4059-a453-b6d024eea664]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:02:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:49.748 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[8227c15f-b509-4fcb-94b3-3b5f33aaf9b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:02:49 compute-0 NetworkManager[44920]: <info>  [1760155369.7807] device (tap373b2ee9-80): carrier: link connected
Oct 11 04:02:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:49.787 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[5ab61956-e417-442d-aa5d-b1857241f772]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:02:49 compute-0 nova_compute[259850]: 2025-10-11 04:02:49.789 2 DEBUG nova.compute.manager [req-66a4ab3f-fa48-4302-909e-d03f20f1f0d2 req-eb5427b1-6be4-4977-937c-82c17cc11f43 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Received event network-changed-601ef18d-d973-476f-90d4-f8f40df267fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:02:49 compute-0 nova_compute[259850]: 2025-10-11 04:02:49.790 2 DEBUG nova.compute.manager [req-66a4ab3f-fa48-4302-909e-d03f20f1f0d2 req-eb5427b1-6be4-4977-937c-82c17cc11f43 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Refreshing instance network info cache due to event network-changed-601ef18d-d973-476f-90d4-f8f40df267fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:02:49 compute-0 nova_compute[259850]: 2025-10-11 04:02:49.790 2 DEBUG oslo_concurrency.lockutils [req-66a4ab3f-fa48-4302-909e-d03f20f1f0d2 req-eb5427b1-6be4-4977-937c-82c17cc11f43 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:02:49 compute-0 nova_compute[259850]: 2025-10-11 04:02:49.790 2 DEBUG oslo_concurrency.lockutils [req-66a4ab3f-fa48-4302-909e-d03f20f1f0d2 req-eb5427b1-6be4-4977-937c-82c17cc11f43 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:02:49 compute-0 nova_compute[259850]: 2025-10-11 04:02:49.791 2 DEBUG nova.network.neutron [req-66a4ab3f-fa48-4302-909e-d03f20f1f0d2 req-eb5427b1-6be4-4977-937c-82c17cc11f43 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Refreshing network info cache for port 601ef18d-d973-476f-90d4-f8f40df267fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:02:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:49.808 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[e44c8100-9f77-4097-b184-5b49eee0dc0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap373b2ee9-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:d5:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 375481, 'reachable_time': 23538, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267815, 'error': None, 'target': 'ovnmeta-373b2ee9-84af-407c-9e36-4d16b55fdfd0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:02:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:49.828 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[2d02c489-250e-447d-85c8-f4727347404e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe94:d5c1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 375481, 'tstamp': 375481}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267816, 'error': None, 'target': 'ovnmeta-373b2ee9-84af-407c-9e36-4d16b55fdfd0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:02:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:49.842 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[33b7e932-045c-4b11-9d1b-d0381c4c19eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap373b2ee9-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:d5:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 375481, 'reachable_time': 23538, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267817, 'error': None, 'target': 'ovnmeta-373b2ee9-84af-407c-9e36-4d16b55fdfd0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:02:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:49.875 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[3e3a179c-80ca-4b54-95be-d93e8d885650]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:02:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:49.958 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[61a4f749-4682-45aa-88b9-3e1370767d2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:02:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:49.960 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap373b2ee9-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:02:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:49.960 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:02:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:49.961 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap373b2ee9-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:02:50 compute-0 nova_compute[259850]: 2025-10-11 04:02:50.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:02:50 compute-0 NetworkManager[44920]: <info>  [1760155370.0150] manager: (tap373b2ee9-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Oct 11 04:02:50 compute-0 kernel: tap373b2ee9-80: entered promiscuous mode
Oct 11 04:02:50 compute-0 nova_compute[259850]: 2025-10-11 04:02:50.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:50.020 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap373b2ee9-80, col_values=(('external_ids', {'iface-id': '8f6342ee-cbcc-4ef4-b4d9-682e2511a096'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:02:50 compute-0 nova_compute[259850]: 2025-10-11 04:02:50.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:02:50 compute-0 ovn_controller[152025]: 2025-10-11T04:02:50Z|00031|binding|INFO|Releasing lport 8f6342ee-cbcc-4ef4-b4d9-682e2511a096 from this chassis (sb_readonly=0)
Oct 11 04:02:50 compute-0 nova_compute[259850]: 2025-10-11 04:02:50.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:50.058 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/373b2ee9-84af-407c-9e36-4d16b55fdfd0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/373b2ee9-84af-407c-9e36-4d16b55fdfd0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:50.059 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[fe193083-79a0-4ea1-a827-efe8009c14ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:50.060 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]: global
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]:     log         /dev/log local0 debug
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]:     log-tag     haproxy-metadata-proxy-373b2ee9-84af-407c-9e36-4d16b55fdfd0
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]:     user        root
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]:     group       root
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]:     maxconn     1024
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]:     pidfile     /var/lib/neutron/external/pids/373b2ee9-84af-407c-9e36-4d16b55fdfd0.pid.haproxy
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]:     daemon
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]: defaults
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]:     log global
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]:     mode http
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]:     option httplog
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]:     option dontlognull
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]:     option http-server-close
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]:     option forwardfor
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]:     retries                 3
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]:     timeout http-request    30s
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]:     timeout connect         30s
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]:     timeout client          32s
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]:     timeout server          32s
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]:     timeout http-keep-alive 30s
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]: listen listener
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]:     bind 169.254.169.254:80
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]:     http-request add-header X-OVN-Network-ID 373b2ee9-84af-407c-9e36-4d16b55fdfd0
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 04:02:50 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:02:50.061 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-373b2ee9-84af-407c-9e36-4d16b55fdfd0', 'env', 'PROCESS_TAG=haproxy-373b2ee9-84af-407c-9e36-4d16b55fdfd0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/373b2ee9-84af-407c-9e36-4d16b55fdfd0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 04:02:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:02:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/633383363' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:02:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:02:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/633383363' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:02:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:02:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3133244546' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:02:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:02:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3133244546' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:02:50 compute-0 podman[267849]: 2025-10-11 04:02:50.478600371 +0000 UTC m=+0.080390585 container create c5b8f26246d83a8fca5641e14fbb7e75a781f0f4a4c441f194ec34d196e9ce59 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-373b2ee9-84af-407c-9e36-4d16b55fdfd0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 04:02:50 compute-0 podman[267849]: 2025-10-11 04:02:50.443777915 +0000 UTC m=+0.045568209 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 04:02:50 compute-0 systemd[1]: Started libpod-conmon-c5b8f26246d83a8fca5641e14fbb7e75a781f0f4a4c441f194ec34d196e9ce59.scope.
Oct 11 04:02:50 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:02:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46bdd741cf814c1bca60ebe5318642f1f8bb3ceb5d7975b452fbdc43fb0b49d3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:02:50 compute-0 podman[267849]: 2025-10-11 04:02:50.582953734 +0000 UTC m=+0.184743968 container init c5b8f26246d83a8fca5641e14fbb7e75a781f0f4a4c441f194ec34d196e9ce59 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-373b2ee9-84af-407c-9e36-4d16b55fdfd0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 04:02:50 compute-0 podman[267849]: 2025-10-11 04:02:50.594616891 +0000 UTC m=+0.196407115 container start c5b8f26246d83a8fca5641e14fbb7e75a781f0f4a4c441f194ec34d196e9ce59 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-373b2ee9-84af-407c-9e36-4d16b55fdfd0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 04:02:50 compute-0 neutron-haproxy-ovnmeta-373b2ee9-84af-407c-9e36-4d16b55fdfd0[267864]: [NOTICE]   (267868) : New worker (267870) forked
Oct 11 04:02:50 compute-0 neutron-haproxy-ovnmeta-373b2ee9-84af-407c-9e36-4d16b55fdfd0[267864]: [NOTICE]   (267868) : Loading success.
Oct 11 04:02:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:02:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:02:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:02:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:02:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:02:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:02:51 compute-0 ceph-mon[74273]: pgmap v917: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 110 op/s
Oct 11 04:02:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/633383363' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:02:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/633383363' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:02:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3133244546' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:02:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3133244546' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:02:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:02:51 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1327838346' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:02:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:02:51 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1327838346' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:02:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 110 op/s
Oct 11 04:02:51 compute-0 nova_compute[259850]: 2025-10-11 04:02:51.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:02:51 compute-0 nova_compute[259850]: 2025-10-11 04:02:51.651 2 DEBUG nova.network.neutron [req-66a4ab3f-fa48-4302-909e-d03f20f1f0d2 req-eb5427b1-6be4-4977-937c-82c17cc11f43 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Updated VIF entry in instance network info cache for port 601ef18d-d973-476f-90d4-f8f40df267fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:02:51 compute-0 nova_compute[259850]: 2025-10-11 04:02:51.652 2 DEBUG nova.network.neutron [req-66a4ab3f-fa48-4302-909e-d03f20f1f0d2 req-eb5427b1-6be4-4977-937c-82c17cc11f43 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Updating instance_info_cache with network_info: [{"id": "601ef18d-d973-476f-90d4-f8f40df267fa", "address": "fa:16:3e:b3:25:de", "network": {"id": "373b2ee9-84af-407c-9e36-4d16b55fdfd0", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-757984270-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54de3f5004d1488aaf5e429b0071e194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap601ef18d-d9", "ovs_interfaceid": "601ef18d-d973-476f-90d4-f8f40df267fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:02:51 compute-0 nova_compute[259850]: 2025-10-11 04:02:51.674 2 DEBUG oslo_concurrency.lockutils [req-66a4ab3f-fa48-4302-909e-d03f20f1f0d2 req-eb5427b1-6be4-4977-937c-82c17cc11f43 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:02:52 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1327838346' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:02:52 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1327838346' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:02:53 compute-0 ceph-mon[74273]: pgmap v918: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 110 op/s
Oct 11 04:02:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 18 KiB/s wr, 105 op/s
Oct 11 04:02:53 compute-0 nova_compute[259850]: 2025-10-11 04:02:53.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:02:54 compute-0 ceph-mon[74273]: pgmap v919: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 18 KiB/s wr, 105 op/s
Oct 11 04:02:54 compute-0 podman[267879]: 2025-10-11 04:02:54.367118167 +0000 UTC m=+0.070313362 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 11 04:02:54 compute-0 podman[267880]: 2025-10-11 04:02:54.380980995 +0000 UTC m=+0.076532236 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:02:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:02:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 17 KiB/s wr, 98 op/s
Oct 11 04:02:56 compute-0 ceph-mon[74273]: pgmap v920: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 17 KiB/s wr, 98 op/s
Oct 11 04:02:56 compute-0 nova_compute[259850]: 2025-10-11 04:02:56.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:02:57 compute-0 ceph-osd[89722]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 11 04:02:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 88 op/s
Oct 11 04:02:58 compute-0 ceph-mon[74273]: pgmap v921: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 88 op/s
Oct 11 04:02:58 compute-0 nova_compute[259850]: 2025-10-11 04:02:58.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:02:58 compute-0 ovn_controller[152025]: 2025-10-11T04:02:58Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b3:25:de 10.100.0.5
Oct 11 04:02:58 compute-0 ovn_controller[152025]: 2025-10-11T04:02:58Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b3:25:de 10.100.0.5
Oct 11 04:02:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 113 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Oct 11 04:02:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:02:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:02:59 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/365360430' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:02:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:02:59 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/365360430' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:00 compute-0 ceph-mon[74273]: pgmap v922: 305 pgs: 305 active+clean; 113 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Oct 11 04:03:00 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/365360430' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:00 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/365360430' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 113 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 480 KiB/s rd, 2.0 MiB/s wr, 66 op/s
Oct 11 04:03:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:03:01 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/523243' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:03:01 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/523243' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:01 compute-0 nova_compute[259850]: 2025-10-11 04:03:01.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:01 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/523243' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:01 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/523243' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:02 compute-0 ceph-mon[74273]: pgmap v923: 305 pgs: 305 active+clean; 113 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 480 KiB/s rd, 2.0 MiB/s wr, 66 op/s
Oct 11 04:03:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 121 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 606 KiB/s rd, 2.1 MiB/s wr, 115 op/s
Oct 11 04:03:03 compute-0 nova_compute[259850]: 2025-10-11 04:03:03.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:03:03 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/101716239' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:03:03 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/101716239' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:04 compute-0 nova_compute[259850]: 2025-10-11 04:03:04.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:04 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:03:04.071 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:61:6f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '92:f1:b6:e4:f1:16'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:03:04 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:03:04.073 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:03:04 compute-0 podman[267922]: 2025-10-11 04:03:04.413107996 +0000 UTC m=+0.113458661 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct 11 04:03:04 compute-0 ceph-mon[74273]: pgmap v924: 305 pgs: 305 active+clean; 121 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 606 KiB/s rd, 2.1 MiB/s wr, 115 op/s
Oct 11 04:03:04 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/101716239' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:04 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/101716239' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:03:05 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:03:05.076 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8a473e03-2208-47ae-afcd-05ad744a5969, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:03:05 compute-0 nova_compute[259850]: 2025-10-11 04:03:05.367 2 DEBUG oslo_concurrency.lockutils [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Acquiring lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:03:05 compute-0 nova_compute[259850]: 2025-10-11 04:03:05.367 2 DEBUG oslo_concurrency.lockutils [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:03:05 compute-0 nova_compute[259850]: 2025-10-11 04:03:05.383 2 DEBUG nova.objects.instance [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lazy-loading 'flavor' on Instance uuid 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:03:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 121 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 11 04:03:05 compute-0 nova_compute[259850]: 2025-10-11 04:03:05.434 2 INFO nova.virt.libvirt.driver [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Ignoring supplied device name: /dev/vdb
Oct 11 04:03:05 compute-0 nova_compute[259850]: 2025-10-11 04:03:05.451 2 DEBUG oslo_concurrency.lockutils [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:03:05 compute-0 nova_compute[259850]: 2025-10-11 04:03:05.917 2 DEBUG oslo_concurrency.lockutils [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Acquiring lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:03:05 compute-0 nova_compute[259850]: 2025-10-11 04:03:05.918 2 DEBUG oslo_concurrency.lockutils [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:03:05 compute-0 nova_compute[259850]: 2025-10-11 04:03:05.918 2 INFO nova.compute.manager [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Attaching volume 124d81aa-c1b6-4933-a0f2-4582c93cb200 to /dev/vdb
Oct 11 04:03:06 compute-0 nova_compute[259850]: 2025-10-11 04:03:06.182 2 DEBUG os_brick.utils [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:03:06 compute-0 nova_compute[259850]: 2025-10-11 04:03:06.183 2 INFO oslo.privsep.daemon [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpx7m3suqq/privsep.sock']
Oct 11 04:03:06 compute-0 nova_compute[259850]: 2025-10-11 04:03:06.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:06 compute-0 ceph-mon[74273]: pgmap v925: 305 pgs: 305 active+clean; 121 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 11 04:03:06 compute-0 nova_compute[259850]: 2025-10-11 04:03:06.886 2 INFO oslo.privsep.daemon [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Spawned new privsep daemon via rootwrap
Oct 11 04:03:06 compute-0 nova_compute[259850]: 2025-10-11 04:03:06.753 675 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 11 04:03:06 compute-0 nova_compute[259850]: 2025-10-11 04:03:06.757 675 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 11 04:03:06 compute-0 nova_compute[259850]: 2025-10-11 04:03:06.760 675 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct 11 04:03:06 compute-0 nova_compute[259850]: 2025-10-11 04:03:06.760 675 INFO oslo.privsep.daemon [-] privsep daemon running as pid 675
Oct 11 04:03:06 compute-0 nova_compute[259850]: 2025-10-11 04:03:06.890 675 DEBUG oslo.privsep.daemon [-] privsep: reply[97283474-98d6-4a3b-b9c6-1adc19bac558]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:03:07 compute-0 nova_compute[259850]: 2025-10-11 04:03:07.033 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:03:07 compute-0 nova_compute[259850]: 2025-10-11 04:03:07.044 675 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:03:07 compute-0 nova_compute[259850]: 2025-10-11 04:03:07.045 675 DEBUG oslo.privsep.daemon [-] privsep: reply[26089adb-696a-4ded-b5a5-c7a4c2775766]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:03:07 compute-0 nova_compute[259850]: 2025-10-11 04:03:07.046 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:03:07 compute-0 nova_compute[259850]: 2025-10-11 04:03:07.053 675 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:03:07 compute-0 nova_compute[259850]: 2025-10-11 04:03:07.053 675 DEBUG oslo.privsep.daemon [-] privsep: reply[3934c449-eb80-4901-bcfc-e823df57bee9]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e727c2bd432c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:03:07 compute-0 nova_compute[259850]: 2025-10-11 04:03:07.055 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:03:07 compute-0 nova_compute[259850]: 2025-10-11 04:03:07.066 675 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:03:07 compute-0 nova_compute[259850]: 2025-10-11 04:03:07.066 675 DEBUG oslo.privsep.daemon [-] privsep: reply[61b35f80-ecf5-431e-9e5d-6b153fc6a446]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:03:07 compute-0 nova_compute[259850]: 2025-10-11 04:03:07.068 675 DEBUG oslo.privsep.daemon [-] privsep: reply[b2819523-988a-43eb-a5b4-c748b62f54c1]: (4, 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:03:07 compute-0 nova_compute[259850]: 2025-10-11 04:03:07.068 2 DEBUG oslo_concurrency.processutils [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:03:07 compute-0 nova_compute[259850]: 2025-10-11 04:03:07.086 2 DEBUG oslo_concurrency.processutils [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] CMD "nvme version" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:03:07 compute-0 nova_compute[259850]: 2025-10-11 04:03:07.088 2 DEBUG os_brick.initiator.connectors.lightos [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:03:07 compute-0 nova_compute[259850]: 2025-10-11 04:03:07.089 2 DEBUG os_brick.initiator.connectors.lightos [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:03:07 compute-0 nova_compute[259850]: 2025-10-11 04:03:07.089 2 DEBUG os_brick.initiator.connectors.lightos [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:03:07 compute-0 nova_compute[259850]: 2025-10-11 04:03:07.090 2 DEBUG os_brick.utils [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] <== get_connector_properties: return (907ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e727c2bd432c', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:03:07 compute-0 nova_compute[259850]: 2025-10-11 04:03:07.090 2 DEBUG nova.virt.block_device [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Updating existing volume attachment record: 58cd35e1-0711-4b48-b4a8-0b0bdbf50255 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:03:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 121 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 11 04:03:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:03:07 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2879861894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:03:07 compute-0 nova_compute[259850]: 2025-10-11 04:03:07.957 2 DEBUG os_brick.encryptors [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Using volume encryption metadata '{'encryption_key_id': '797788f5-9826-4caa-81a4-b0aab0930665', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-124d81aa-c1b6-4933-a0f2-4582c93cb200', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '124d81aa-c1b6-4933-a0f2-4582c93cb200', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc', 'attached_at': '', 'detached_at': '', 'volume_id': '124d81aa-c1b6-4933-a0f2-4582c93cb200', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Oct 11 04:03:07 compute-0 nova_compute[259850]: 2025-10-11 04:03:07.960 2 DEBUG oslo_concurrency.lockutils [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:03:07 compute-0 nova_compute[259850]: 2025-10-11 04:03:07.960 2 DEBUG oslo_concurrency.lockutils [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:03:07 compute-0 nova_compute[259850]: 2025-10-11 04:03:07.962 2 DEBUG oslo_concurrency.lockutils [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:03:07 compute-0 nova_compute[259850]: 2025-10-11 04:03:07.973 2 DEBUG barbicanclient.client [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.003 2 DEBUG barbicanclient.v1.secrets [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/797788f5-9826-4caa-81a4-b0aab0930665 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.005 2 INFO barbicanclient.base [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Calculated Secrets uuid ref: secrets/797788f5-9826-4caa-81a4-b0aab0930665
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.031 2 DEBUG barbicanclient.client [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.032 2 INFO barbicanclient.base [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Calculated Secrets uuid ref: secrets/797788f5-9826-4caa-81a4-b0aab0930665
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.057 2 DEBUG barbicanclient.client [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.058 2 INFO barbicanclient.base [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Calculated Secrets uuid ref: secrets/797788f5-9826-4caa-81a4-b0aab0930665
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.090 2 DEBUG barbicanclient.client [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.091 2 INFO barbicanclient.base [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Calculated Secrets uuid ref: secrets/797788f5-9826-4caa-81a4-b0aab0930665
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.118 2 DEBUG barbicanclient.client [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.119 2 INFO barbicanclient.base [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Calculated Secrets uuid ref: secrets/797788f5-9826-4caa-81a4-b0aab0930665
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.145 2 DEBUG barbicanclient.client [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.146 2 INFO barbicanclient.base [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Calculated Secrets uuid ref: secrets/797788f5-9826-4caa-81a4-b0aab0930665
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.179 2 DEBUG barbicanclient.client [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.180 2 INFO barbicanclient.base [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Calculated Secrets uuid ref: secrets/797788f5-9826-4caa-81a4-b0aab0930665
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.208 2 DEBUG barbicanclient.client [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.209 2 INFO barbicanclient.base [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Calculated Secrets uuid ref: secrets/797788f5-9826-4caa-81a4-b0aab0930665
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.240 2 DEBUG barbicanclient.client [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.241 2 INFO barbicanclient.base [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Calculated Secrets uuid ref: secrets/797788f5-9826-4caa-81a4-b0aab0930665
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.265 2 DEBUG barbicanclient.client [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.266 2 INFO barbicanclient.base [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Calculated Secrets uuid ref: secrets/797788f5-9826-4caa-81a4-b0aab0930665
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.305 2 DEBUG barbicanclient.client [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.306 2 INFO barbicanclient.base [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Calculated Secrets uuid ref: secrets/797788f5-9826-4caa-81a4-b0aab0930665
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.332 2 DEBUG barbicanclient.client [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.333 2 INFO barbicanclient.base [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Calculated Secrets uuid ref: secrets/797788f5-9826-4caa-81a4-b0aab0930665
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.354 2 DEBUG barbicanclient.client [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.355 2 INFO barbicanclient.base [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Calculated Secrets uuid ref: secrets/797788f5-9826-4caa-81a4-b0aab0930665
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.375 2 DEBUG barbicanclient.client [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.375 2 INFO barbicanclient.base [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Calculated Secrets uuid ref: secrets/797788f5-9826-4caa-81a4-b0aab0930665
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.407 2 DEBUG barbicanclient.client [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.408 2 INFO barbicanclient.base [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Calculated Secrets uuid ref: secrets/797788f5-9826-4caa-81a4-b0aab0930665
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.428 2 DEBUG barbicanclient.client [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.430 2 DEBUG nova.virt.libvirt.host [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct 11 04:03:08 compute-0 nova_compute[259850]:   <usage type="volume">
Oct 11 04:03:08 compute-0 nova_compute[259850]:     <volume>124d81aa-c1b6-4933-a0f2-4582c93cb200</volume>
Oct 11 04:03:08 compute-0 nova_compute[259850]:   </usage>
Oct 11 04:03:08 compute-0 nova_compute[259850]: </secret>
Oct 11 04:03:08 compute-0 nova_compute[259850]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.447 2 DEBUG nova.objects.instance [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lazy-loading 'flavor' on Instance uuid 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.467 2 DEBUG nova.virt.libvirt.driver [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Attempting to attach volume 124d81aa-c1b6-4933-a0f2-4582c93cb200 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.472 2 DEBUG nova.virt.libvirt.guest [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] attach device xml: <disk type="network" device="disk">
Oct 11 04:03:08 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:03:08 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-124d81aa-c1b6-4933-a0f2-4582c93cb200">
Oct 11 04:03:08 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:03:08 compute-0 nova_compute[259850]:   </source>
Oct 11 04:03:08 compute-0 nova_compute[259850]:   <auth username="openstack">
Oct 11 04:03:08 compute-0 nova_compute[259850]:     <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:03:08 compute-0 nova_compute[259850]:   </auth>
Oct 11 04:03:08 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:03:08 compute-0 nova_compute[259850]:   <serial>124d81aa-c1b6-4933-a0f2-4582c93cb200</serial>
Oct 11 04:03:08 compute-0 nova_compute[259850]:   <encryption format="luks">
Oct 11 04:03:08 compute-0 nova_compute[259850]:     <secret type="passphrase" uuid="c4372935-6c00-474d-b496-1b9cea82c0bb"/>
Oct 11 04:03:08 compute-0 nova_compute[259850]:   </encryption>
Oct 11 04:03:08 compute-0 nova_compute[259850]: </disk>
Oct 11 04:03:08 compute-0 nova_compute[259850]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 11 04:03:08 compute-0 ceph-mon[74273]: pgmap v926: 305 pgs: 305 active+clean; 121 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 11 04:03:08 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2879861894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:03:08 compute-0 nova_compute[259850]: 2025-10-11 04:03:08.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 121 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 337 KiB/s rd, 2.1 MiB/s wr, 106 op/s
Oct 11 04:03:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:03:10 compute-0 podman[267982]: 2025-10-11 04:03:10.379184002 +0000 UTC m=+0.077541454 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 11 04:03:10 compute-0 ceph-mon[74273]: pgmap v927: 305 pgs: 305 active+clean; 121 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 337 KiB/s rd, 2.1 MiB/s wr, 106 op/s
Oct 11 04:03:10 compute-0 nova_compute[259850]: 2025-10-11 04:03:10.938 2 DEBUG nova.virt.libvirt.driver [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:03:10 compute-0 nova_compute[259850]: 2025-10-11 04:03:10.938 2 DEBUG nova.virt.libvirt.driver [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:03:10 compute-0 nova_compute[259850]: 2025-10-11 04:03:10.939 2 DEBUG nova.virt.libvirt.driver [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:03:10 compute-0 nova_compute[259850]: 2025-10-11 04:03:10.939 2 DEBUG nova.virt.libvirt.driver [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] No VIF found with MAC fa:16:3e:b3:25:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:03:11 compute-0 nova_compute[259850]: 2025-10-11 04:03:11.305 2 DEBUG oslo_concurrency.lockutils [None req-969e4c03-4f8e-4939-a6c2-3b644534cf25 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 5.387s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:03:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 121 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 107 KiB/s wr, 63 op/s
Oct 11 04:03:11 compute-0 nova_compute[259850]: 2025-10-11 04:03:11.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:12 compute-0 ceph-mon[74273]: pgmap v928: 305 pgs: 305 active+clean; 121 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 107 KiB/s wr, 63 op/s
Oct 11 04:03:13 compute-0 nova_compute[259850]: 2025-10-11 04:03:13.056 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:03:13 compute-0 nova_compute[259850]: 2025-10-11 04:03:13.058 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:03:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:03:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2824714180' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:03:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2824714180' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:13 compute-0 nova_compute[259850]: 2025-10-11 04:03:13.318 2 DEBUG nova.compute.manager [req-e71e364b-4f8d-4bf2-9128-17700a14e90d req-fa42254e-2b20-4405-8657-9f93fe5c97a5 407a16c34d6f4e07bd2919006b3d8fef 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Received event volume-extended-124d81aa-c1b6-4933-a0f2-4582c93cb200 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:03:13 compute-0 nova_compute[259850]: 2025-10-11 04:03:13.336 2 DEBUG nova.compute.manager [req-e71e364b-4f8d-4bf2-9128-17700a14e90d req-fa42254e-2b20-4405-8657-9f93fe5c97a5 407a16c34d6f4e07bd2919006b3d8fef 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Handling volume-extended event for volume 124d81aa-c1b6-4933-a0f2-4582c93cb200 extend_volume /usr/lib/python3.9/site-packages/nova/compute/manager.py:10896
Oct 11 04:03:13 compute-0 nova_compute[259850]: 2025-10-11 04:03:13.357 2 INFO nova.compute.manager [req-e71e364b-4f8d-4bf2-9128-17700a14e90d req-fa42254e-2b20-4405-8657-9f93fe5c97a5 407a16c34d6f4e07bd2919006b3d8fef 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Cinder extended volume 124d81aa-c1b6-4933-a0f2-4582c93cb200; extending it to detect new size
Oct 11 04:03:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 121 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 163 KiB/s rd, 112 KiB/s wr, 70 op/s
Oct 11 04:03:13 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2824714180' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:13 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2824714180' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:13 compute-0 nova_compute[259850]: 2025-10-11 04:03:13.612 2 DEBUG os_brick.encryptors [req-e71e364b-4f8d-4bf2-9128-17700a14e90d req-fa42254e-2b20-4405-8657-9f93fe5c97a5 407a16c34d6f4e07bd2919006b3d8fef 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Using volume encryption metadata '{'encryption_key_id': '797788f5-9826-4caa-81a4-b0aab0930665', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-124d81aa-c1b6-4933-a0f2-4582c93cb200', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '124d81aa-c1b6-4933-a0f2-4582c93cb200', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc', 'attached_at': '', 'detached_at': '', 'volume_id': '124d81aa-c1b6-4933-a0f2-4582c93cb200', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Oct 11 04:03:13 compute-0 nova_compute[259850]: 2025-10-11 04:03:13.613 2 INFO oslo.privsep.daemon [req-e71e364b-4f8d-4bf2-9128-17700a14e90d req-fa42254e-2b20-4405-8657-9f93fe5c97a5 407a16c34d6f4e07bd2919006b3d8fef 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpyfjyfqnc/privsep.sock']
Oct 11 04:03:13 compute-0 nova_compute[259850]: 2025-10-11 04:03:13.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:14 compute-0 nova_compute[259850]: 2025-10-11 04:03:14.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:03:14 compute-0 nova_compute[259850]: 2025-10-11 04:03:14.352 2 INFO oslo.privsep.daemon [req-e71e364b-4f8d-4bf2-9128-17700a14e90d req-fa42254e-2b20-4405-8657-9f93fe5c97a5 407a16c34d6f4e07bd2919006b3d8fef 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Spawned new privsep daemon via rootwrap
Oct 11 04:03:14 compute-0 nova_compute[259850]: 2025-10-11 04:03:14.235 688 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 11 04:03:14 compute-0 nova_compute[259850]: 2025-10-11 04:03:14.242 688 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 11 04:03:14 compute-0 nova_compute[259850]: 2025-10-11 04:03:14.246 688 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 11 04:03:14 compute-0 nova_compute[259850]: 2025-10-11 04:03:14.246 688 INFO oslo.privsep.daemon [-] privsep daemon running as pid 688
Oct 11 04:03:14 compute-0 ceph-mon[74273]: pgmap v929: 305 pgs: 305 active+clean; 121 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 163 KiB/s rd, 112 KiB/s wr, 70 op/s
Oct 11 04:03:14 compute-0 systemd[1]: Created slice Slice /system/systemd-coredump.
Oct 11 04:03:14 compute-0 systemd[1]: Started Process Core Dump (PID 268027/UID 0).
Oct 11 04:03:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:03:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:03:15 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2741025551' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:03:15 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2741025551' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:15 compute-0 nova_compute[259850]: 2025-10-11 04:03:15.058 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:03:15 compute-0 nova_compute[259850]: 2025-10-11 04:03:15.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:03:15 compute-0 nova_compute[259850]: 2025-10-11 04:03:15.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:03:15 compute-0 nova_compute[259850]: 2025-10-11 04:03:15.337 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "refresh_cache-02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:03:15 compute-0 nova_compute[259850]: 2025-10-11 04:03:15.338 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquired lock "refresh_cache-02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:03:15 compute-0 nova_compute[259850]: 2025-10-11 04:03:15.338 2 DEBUG nova.network.neutron [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 04:03:15 compute-0 nova_compute[259850]: 2025-10-11 04:03:15.339 2 DEBUG nova.objects.instance [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:03:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 121 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 17 KiB/s wr, 21 op/s
Oct 11 04:03:15 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2741025551' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:15 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2741025551' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:03:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/169746591' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:03:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/169746591' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:16 compute-0 systemd-coredump[268028]: Process 268008 (qemu-img) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 700:
                                                    #0  0x00007f29ba54303c __pthread_kill_implementation (libc.so.6 + 0x8d03c)
                                                    #1  0x00007f29ba4f5b86 raise (libc.so.6 + 0x3fb86)
                                                    #2  0x00007f29ba4df873 abort (libc.so.6 + 0x29873)
                                                    #3  0x000055dceea1c56f ___interceptor_pthread_create (qemu-img + 0x4e56f)
                                                    #4  0x00007f29b3700ff4 _ZN6Thread10try_createEm (libceph-common.so.2 + 0x258ff4)
                                                    #5  0x00007f29b37036ae _ZN6Thread6createEPKcm (libceph-common.so.2 + 0x25b6ae)
                                                    #6  0x00007f29b862326b _ZNSt8_Rb_treeISt4pairINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESt10type_indexES0_IKS8_N4ceph12immobile_anyILm576EEEESt10_Select1stISD_ENSA_6common11CephContext19associated_objs_cmpESaISD_EE22_M_emplace_hint_uniqueIJRKSt21piecewise_construct_tSt5tupleIJRSt17basic_string_viewIcS4_ERS7_EESP_IJRKSt15in_place_type_tIN6librbd21TaskFinisherSingletonEERPSH_EEEEESt17_Rb_tree_iteratorISD_ESt23_Rb_tree_const_iteratorISD_EDpOT_.constprop.0 (librbd.so.1 + 0x51126b)
                                                    #7  0x00007f29b82507a6 _ZN6librbd8ImageCtx4initEv (librbd.so.1 + 0x13e7a6)
                                                    #8  0x00007f29b832a2d3 _ZN6librbd5image11OpenRequestINS_8ImageCtxEE12send_refreshEv (librbd.so.1 + 0x2182d3)
                                                    #9  0x00007f29b832af46 _ZN6librbd5image11OpenRequestINS_8ImageCtxEE23handle_v2_get_data_poolEPi (librbd.so.1 + 0x218f46)
                                                    #10 0x00007f29b832b2a7 _ZN6librbd4util6detail20rados_state_callbackINS_5image11OpenRequestINS_8ImageCtxEEEXadL_ZNS6_23handle_v2_get_data_poolEPiEELb1EEEvPvS8_ (librbd.so.1 + 0x2192a7)
                                                    #11 0x00007f29b3f170ac _ZN5boost4asio6detail18completion_handlerINS1_7binder0IN8librados14CB_AioCompleteEEENS0_10io_context19basic_executor_typeISaIvELm0EEEE11do_completeEPvPNS1_19scheduler_operationERKNS_6system10error_codeEm (librados.so.2 + 0xad0ac)
                                                    #12 0x00007f29b3f16585 _ZN5boost4asio6detail14strand_service11do_completeEPvPNS1_19scheduler_operationERKNS_6system10error_codeEm (librados.so.2 + 0xac585)
                                                    #13 0x00007f29b3f91498 _ZN5boost4asio6detail9scheduler3runERNS_6system10error_codeE.constprop.0.isra.0 (librados.so.2 + 0x127498)
                                                    #14 0x00007f29b3f304e4 _ZNSt6thread11_State_implINS_8_InvokerISt5tupleIJZ17make_named_threadIZN4ceph5async15io_context_pool5startEsEUlvE_JEES_St17basic_string_viewIcSt11char_traitsIcEEOT_DpOT0_EUlSD_SG_E_S7_EEEEE6_M_runEv (librados.so.2 + 0xc64e4)
                                                    #15 0x00007f29b2d97ae4 execute_native_thread_routine (libstdc++.so.6 + 0xdbae4)
                                                    #16 0x00007f29ba5412fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #17 0x00007f29ba5c6540 __clone3 (libc.so.6 + 0x110540)
                                                    
                                                    Stack trace of thread 693:
                                                    #0  0x00007f29ba5c5b7e epoll_wait (libc.so.6 + 0x10fb7e)
                                                    #1  0x00007f29b38e8618 _ZN11EpollDriver10event_waitERSt6vectorI14FiredFileEventSaIS1_EEP7timeval (libceph-common.so.2 + 0x440618)
                                                    #2  0x00007f29b38e6702 _ZN11EventCenter14process_eventsEjPNSt6chrono8durationImSt5ratioILl1ELl1000000000EEEE (libceph-common.so.2 + 0x43e702)
                                                    #3  0x00007f29b38e72c6 _ZNSt17_Function_handlerIFvvEZN12NetworkStack10add_threadEP6WorkerEUlvE_E9_M_invokeERKSt9_Any_data (libceph-common.so.2 + 0x43f2c6)
                                                    #4  0x00007f29b2d97ae4 execute_native_thread_routine (libstdc++.so.6 + 0xdbae4)
                                                    #5  0x00007f29ba5412fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #6  0x00007f29ba5c6540 __clone3 (libc.so.6 + 0x110540)
                                                    
                                                    Stack trace of thread 692:
                                                    #0  0x00007f29ba53e38a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f29ba5408e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)
                                                    #2  0x00007f29b2d916c0 _ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE (libstdc++.so.6 + 0xd56c0)
                                                    #3  0x00007f29b39130a2 _ZN4ceph7logging3Log5entryEv (libceph-common.so.2 + 0x46b0a2)
                                                    #4  0x00007f29ba5412fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #5  0x00007f29ba5c6540 __clone3 (libc.so.6 + 0x110540)
                                                    
                                                    Stack trace of thread 691:
                                                    #0  0x00007f29ba5be96d syscall (libc.so.6 + 0x10896d)
                                                    #1  0x000055dceeb9bf73 qemu_event_wait (qemu-img + 0x1cdf73)
                                                    #2  0x000055dceeba8f87 call_rcu_thread (qemu-img + 0x1daf87)
                                                    #3  0x000055dceeb9c2ba qemu_thread_start.llvm.7701297430486814853 (qemu-img + 0x1ce2ba)
                                                    #4  0x00007f29ba5412fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #5  0x00007f29ba5c6540 __clone3 (libc.so.6 + 0x110540)
                                                    
                                                    Stack trace of thread 704:
                                                    #0  0x00007f29ba53e38a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f29ba5408e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)
                                                    #2  0x00007f29b2d916c0 _ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE (libstdc++.so.6 + 0xd56c0)
                                                    #3  0x00007f29b380f0b9 _ZN13DispatchQueue18run_local_deliveryEv (libceph-common.so.2 + 0x3670b9)
                                                    #4  0x00007f29b38a0431 _ZN13DispatchQueue19LocalDeliveryThread5entryEv (libceph-common.so.2 + 0x3f8431)
                                                    #5  0x00007f29ba5412fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #6  0x00007f29ba5c6540 __clone3 (libc.so.6 + 0x110540)
                                                    
                                                    Stack trace of thread 701:
                                                    #0  0x00007f29ba53e38a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f29ba5408e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)
                                                    #2  0x00007f29b3f91266 _ZN5boost4asio6detail9scheduler3runERNS_6system10error_codeE.constprop.0.isra.0 (librados.so.2 + 0x127266)
                                                    #3  0x00007f29b3f304e4 _ZNSt6thread11_State_implINS_8_InvokerISt5tupleIJZ17make_named_threadIZN4ceph5async15io_context_pool5startEsEUlvE_JEES_St17basic_string_viewIcSt11char_traitsIcEEOT_DpOT0_EUlSD_SG_E_S7_EEEEE6_M_runEv (librados.so.2 + 0xc64e4)
                                                    #4  0x00007f29b2d97ae4 execute_native_thread_routine (libstdc++.so.6 + 0xdbae4)
                                                    #5  0x00007f29ba5412fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #6  0x00007f29ba5c6540 __clone3 (libc.so.6 + 0x110540)
                                                    
                                                    Stack trace of thread 695:
                                                    #0  0x00007f29ba5c5b7e epoll_wait (libc.so.6 + 0x10fb7e)
                                                    #1  0x00007f29b38e8618 _ZN11EpollDriver10event_waitERSt6vectorI14FiredFileEventSaIS1_EEP7timeval (libceph-common.so.2 + 0x440618)
                                                    #2  0x00007f29b38e6702 _ZN11EventCenter14process_eventsEjPNSt6chrono8durationImSt5ratioILl1ELl1000000000EEEE (libceph-common.so.2 + 0x43e702)
                                                    #3  0x00007f29b38e72c6 _ZNSt17_Function_handlerIFvvEZN12NetworkStack10add_threadEP6WorkerEUlvE_E9_M_invokeERKSt9_Any_data (libceph-common.so.2 + 0x43f2c6)
                                                    #4  0x00007f29b2d97ae4 execute_native_thread_routine (libstdc++.so.6 + 0xdbae4)
                                                    #5  0x00007f29ba5412fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #6  0x00007f29ba5c6540 __clone3 (libc.so.6 + 0x110540)
                                                    
                                                    Stack trace of thread 703:
                                                    #0  0x00007f29ba53e38a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f29ba5408e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)
                                                    #2  0x00007f29b2d916c0 _ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE (libstdc++.so.6 + 0xd56c0)
                                                    #3  0x00007f29b380f49f _ZN13DispatchQueue5entryEv (libceph-common.so.2 + 0x36749f)
                                                    #4  0x00007f29b38a0411 _ZN13DispatchQueue14DispatchThread5entryEv (libceph-common.so.2 + 0x3f8411)
                                                    #5  0x00007f29ba5412fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #6  0x00007f29ba5c6540 __clone3 (libc.so.6 + 0x110540)
                                                    
                                                    Stack trace of thread 705:
                                                    #0  0x00007f29ba53e38a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f29ba540cc0 pthread_cond_clockwait@GLIBC_2.30 (libc.so.6 + 0x8acc0)
                                                    #2  0x00007f29b3706b23 _ZN15CommonSafeTimerISt5mutexE12timer_threadEv (libceph-common.so.2 + 0x25eb23)
                                                    #3  0x00007f29b3706f81 _ZN21CommonSafeTimerThreadISt5mutexE5entryEv (libceph-common.so.2 + 0x25ef81)
                                                    #4  0x00007f29ba5412fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #5  0x00007f29ba5c6540 __clone3 (libc.so.6 + 0x110540)
                                                    
                                                    Stack trace of thread 706:
                                                    #0  0x00007f29ba53e38a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f29ba5408e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)
                                                    #2  0x00007f29b2d916c0 _ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE (libstdc++.so.6 + 0xd56c0)
                                                    #3  0x00007f29b37067f8 _ZN15CommonSafeTimerISt5mutexE12timer_threadEv (libceph-common.so.2 + 0x25e7f8)
                                                    #4  0x00007f29b3706f81 _ZN21CommonSafeTimerThreadISt5mutexE5entryEv (libceph-common.so.2 + 0x25ef81)
                                                    #5  0x00007f29ba5412fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #6  0x00007f29ba5c6540 __clone3 (libc.so.6 + 0x110540)
                                                    
                                                    Stack trace of thread 702:
                                                    #0  0x00007f29ba53e38a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f29ba540cc0 pthread_cond_clockwait@GLIBC_2.30 (libc.so.6 + 0x8acc0)
                                                    #2  0x00007f29b3f69364 _ZN4ceph5timerINS_17coarse_mono_clockEE12timer_threadEv (librados.so.2 + 0xff364)
                                                    #3  0x00007f29b2d97ae4 execute_native_thread_routine (libstdc++.so.6 + 0xdbae4)
                                                    #4  0x00007f29ba5412fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #5  0x00007f29ba5c6540 __clone3 (libc.so.6 + 0x110540)
                                                    
                                                    Stack trace of thread 694:
                                                    #0  0x00007f29ba5c5b7e epoll_wait (libc.so.6 + 0x10fb7e)
                                                    #1  0x00007f29b38e8618 _ZN11EpollDriver10event_waitERSt6vectorI14FiredFileEventSaIS1_EEP7timeval (libceph-common.so.2 + 0x440618)
                                                    #2  0x00007f29b38e6702 _ZN11EventCenter14process_eventsEjPNSt6chrono8durationImSt5ratioILl1ELl1000000000EEEE (libceph-common.so.2 + 0x43e702)
                                                    #3  0x00007f29b38e72c6 _ZNSt17_Function_handlerIFvvEZN12NetworkStack10add_threadEP6WorkerEUlvE_E9_M_invokeERKSt9_Any_data (libceph-common.so.2 + 0x43f2c6)
                                                    #4  0x00007f29b2d97ae4 execute_native_thread_routine (libstdc++.so.6 + 0xdbae4)
                                                    #5  0x00007f29ba5412fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #6  0x00007f29ba5c6540 __clone3 (libc.so.6 + 0x110540)
                                                    
                                                    Stack trace of thread 708:
                                                    #0  0x00007f29ba54119e start_thread (libc.so.6 + 0x8b19e)
                                                    #1  0x00007f29ba5c6540 __clone3 (libc.so.6 + 0x110540)
                                                    
                                                    Stack trace of thread 707:
                                                    #0  0x00007f29ba53e38a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f29ba5408e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)
                                                    #2  0x00007f29b2d916c0 _ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE (libstdc++.so.6 + 0xd56c0)
                                                    #3  0x00007f29b37067f8 _ZN15CommonSafeTimerISt5mutexE12timer_threadEv (libceph-common.so.2 + 0x25e7f8)
                                                    #4  0x00007f29b3706f81 _ZN21CommonSafeTimerThreadISt5mutexE5entryEv (libceph-common.so.2 + 0x25ef81)
                                                    #5  0x00007f29ba5412fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #6  0x00007f29ba5c6540 __clone3 (libc.so.6 + 0x110540)
                                                    
                                                    Stack trace of thread 699:
                                                    #0  0x00007f29ba53e38a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f29ba540cc0 pthread_cond_clockwait@GLIBC_2.30 (libc.so.6 + 0x8acc0)
                                                    #2  0x00007f29b3721150 _ZN4ceph6common24CephContextServiceThread5entryEv (libceph-common.so.2 + 0x279150)
                                                    #3  0x00007f29ba5412fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #4  0x00007f29ba5c6540 __clone3 (libc.so.6 + 0x110540)
                                                    
                                                    Stack trace of thread 690:
                                                    #0  0x00007f29ba53e38a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f29ba5408e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)
                                                    #2  0x00007f29b2d916c0 _ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE (libstdc++.so.6 + 0xd56c0)
                                                    #3  0x00007f29b8257eb3 _ZN6librbd10ImageStateINS_8ImageCtxEE4openEm (librbd.so.1 + 0x145eb3)
                                                    #4  0x00007f29b8227fcb rbd_open (librbd.so.1 + 0x115fcb)
                                                    #5  0x00007f29b87d289d qemu_rbd_open (block-rbd.so + 0x489d)
                                                    #6  0x000055dceea2ce4c bdrv_open_driver.llvm.6332234179151191066 (qemu-img + 0x5ee4c)
                                                    #7  0x000055dceea31b6b bdrv_open_inherit.llvm.6332234179151191066 (qemu-img + 0x63b6b)
                                                    #8  0x000055dceea3e5ce bdrv_open_child_bs.llvm.6332234179151191066 (qemu-img + 0x705ce)
                                                    #9  0x000055dceea31396 bdrv_open_inherit.llvm.6332234179151191066 (qemu-img + 0x63396)
                                                    #10 0x000055dceea5f1f5 blk_new_open (qemu-img + 0x911f5)
                                                    #11 0x000055dceeb1ae16 img_open_file (qemu-img + 0x14ce16)
                                                    #12 0x000055dceeb1a9e0 img_open (qemu-img + 0x14c9e0)
                                                    #13 0x000055dceeb16c1d img_info (qemu-img + 0x148c1d)
                                                    #14 0x000055dceeb10638 main (qemu-img + 0x142638)
                                                    #15 0x00007f29ba4e0610 __libc_start_call_main (libc.so.6 + 0x2a610)
                                                    #16 0x00007f29ba4e06c0 __libc_start_main@@GLIBC_2.34 (libc.so.6 + 0x2a6c0)
                                                    #17 0x000055dceea1c215 _start (qemu-img + 0x4e215)
                                                    ELF object binary architecture: AMD x86-64
Oct 11 04:03:16 compute-0 systemd[1]: systemd-coredump@0-268027-0.service: Deactivated successfully.
Oct 11 04:03:16 compute-0 systemd[1]: systemd-coredump@0-268027-0.service: Consumed 1.837s CPU time.
Oct 11 04:03:16 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.549 2 ERROR nova.virt.libvirt.driver [req-e71e364b-4f8d-4bf2-9128-17700a14e90d req-fa42254e-2b20-4405-8657-9f93fe5c97a5 407a16c34d6f4e07bd2919006b3d8fef 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Unknown error when attempting to find the payload_offset for LUKSv1 encrypted disk rbd:volumes/volume-124d81aa-c1b6-4933-a0f2-4582c93cb200:id=openstack.: nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-124d81aa-c1b6-4933-a0f2-4582c93cb200:id=openstack : Unexpected error while running command.
Oct 11 04:03:16 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:03:16 compute-0 nova_compute[259850]: Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-124d81aa-c1b6-4933-a0f2-4582c93cb200:id=openstack --force-share --output=json
Oct 11 04:03:16 compute-0 nova_compute[259850]: Exit code: -6
Oct 11 04:03:16 compute-0 nova_compute[259850]: Stdout: ''
Oct 11 04:03:16 compute-0 nova_compute[259850]: Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-20.1.8.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n'
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.549 2 ERROR nova.virt.libvirt.driver [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Traceback (most recent call last):
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.549 2 ERROR nova.virt.libvirt.driver [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2788, in _resize_attached_encrypted_volume
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.549 2 ERROR nova.virt.libvirt.driver [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc]     info = images.privileged_qemu_img_info(path)
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.549 2 ERROR nova.virt.libvirt.driver [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc]   File "/usr/lib/python3.9/site-packages/nova/virt/images.py", line 57, in privileged_qemu_img_info
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.549 2 ERROR nova.virt.libvirt.driver [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc]     info = nova.privsep.qemu.privileged_qemu_img_info(path, format=format)
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.549 2 ERROR nova.virt.libvirt.driver [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc]   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.549 2 ERROR nova.virt.libvirt.driver [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc]     return self.channel.remote_call(name, args, kwargs,
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.549 2 ERROR nova.virt.libvirt.driver [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc]   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.549 2 ERROR nova.virt.libvirt.driver [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc]     raise exc_type(*result[2])
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.549 2 ERROR nova.virt.libvirt.driver [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-124d81aa-c1b6-4933-a0f2-4582c93cb200:id=openstack : Unexpected error while running command.
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.549 2 ERROR nova.virt.libvirt.driver [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-124d81aa-c1b6-4933-a0f2-4582c93cb200:id=openstack --force-share --output=json
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.549 2 ERROR nova.virt.libvirt.driver [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Exit code: -6
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.549 2 ERROR nova.virt.libvirt.driver [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Stdout: ''
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.549 2 ERROR nova.virt.libvirt.driver [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-20.1.8.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n'
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.549 2 ERROR nova.virt.libvirt.driver [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] 
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.553 2 WARNING nova.compute.manager [req-e71e364b-4f8d-4bf2-9128-17700a14e90d req-fa42254e-2b20-4405-8657-9f93fe5c97a5 407a16c34d6f4e07bd2919006b3d8fef 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Extend volume failed, volume_id=124d81aa-c1b6-4933-a0f2-4582c93cb200, reason: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-124d81aa-c1b6-4933-a0f2-4582c93cb200:id=openstack : Unexpected error while running command.
Oct 11 04:03:16 compute-0 nova_compute[259850]: Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-124d81aa-c1b6-4933-a0f2-4582c93cb200:id=openstack --force-share --output=json
Oct 11 04:03:16 compute-0 nova_compute[259850]: Exit code: -6
Oct 11 04:03:16 compute-0 nova_compute[259850]: Stdout: ''
Oct 11 04:03:16 compute-0 nova_compute[259850]: Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-20.1.8.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n': nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-124d81aa-c1b6-4933-a0f2-4582c93cb200:id=openstack : Unexpected error while running command.
Oct 11 04:03:16 compute-0 ceph-mon[74273]: pgmap v930: 305 pgs: 305 active+clean; 121 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 17 KiB/s wr, 21 op/s
Oct 11 04:03:16 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/169746591' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:16 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/169746591' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.583 2 DEBUG nova.network.neutron [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Updating instance_info_cache with network_info: [{"id": "601ef18d-d973-476f-90d4-f8f40df267fa", "address": "fa:16:3e:b3:25:de", "network": {"id": "373b2ee9-84af-407c-9e36-4d16b55fdfd0", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-757984270-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54de3f5004d1488aaf5e429b0071e194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap601ef18d-d9", "ovs_interfaceid": "601ef18d-d973-476f-90d4-f8f40df267fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server [req-e71e364b-4f8d-4bf2-9128-17700a14e90d req-fa42254e-2b20-4405-8657-9f93fe5c97a5 407a16c34d6f4e07bd2919006b3d8fef 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Exception during message handling: nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-124d81aa-c1b6-4933-a0f2-4582c93cb200:id=openstack : Unexpected error while running command.
Oct 11 04:03:16 compute-0 nova_compute[259850]: Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-124d81aa-c1b6-4933-a0f2-4582c93cb200:id=openstack --force-share --output=json
Oct 11 04:03:16 compute-0 nova_compute[259850]: Exit code: -6
Oct 11 04:03:16 compute-0 nova_compute[259850]: Stdout: ''
Oct 11 04:03:16 compute-0 nova_compute[259850]: Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-20.1.8.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n'
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server     self.force_reraise()
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server     raise self.value
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 11073, in external_instance_event
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server     self.extend_volume(context, instance, event.tag)
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/utils.py", line 1439, in decorated_function
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 214, in decorated_function
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server     compute_utils.add_instance_fault_from_exc(context,
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server     self.force_reraise()
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server     raise self.value
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 203, in decorated_function
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10930, in extend_volume
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server     self.driver.extend_volume(context, connection_info, instance,
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2865, in extend_volume
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server     self._resize_attached_encrypted_volume(
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2804, in _resize_attached_encrypted_volume
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server     LOG.exception('Unknown error when attempting to find the '
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server     self.force_reraise()
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server     raise self.value
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2788, in _resize_attached_encrypted_volume
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server     info = images.privileged_qemu_img_info(path)
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/images.py", line 57, in privileged_qemu_img_info
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server     info = nova.privsep.qemu.privileged_qemu_img_info(path, format=format)
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server     return self.channel.remote_call(name, args, kwargs,
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server     raise exc_type(*result[2])
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-124d81aa-c1b6-4933-a0f2-4582c93cb200:id=openstack : Unexpected error while running command.
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-124d81aa-c1b6-4933-a0f2-4582c93cb200:id=openstack --force-share --output=json
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server Exit code: -6
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server Stdout: ''
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-20.1.8.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n'
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.619 2 ERROR oslo_messaging.rpc.server 
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.647 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Releasing lock "refresh_cache-02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.647 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.648 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.649 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.649 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.650 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.675 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.676 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.676 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.676 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:03:16 compute-0 nova_compute[259850]: 2025-10-11 04:03:16.677 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:03:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:03:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1003816956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:03:17 compute-0 nova_compute[259850]: 2025-10-11 04:03:17.151 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:03:17 compute-0 nova_compute[259850]: 2025-10-11 04:03:17.231 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:03:17 compute-0 nova_compute[259850]: 2025-10-11 04:03:17.232 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:03:17 compute-0 nova_compute[259850]: 2025-10-11 04:03:17.232 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:03:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 121 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 17 KiB/s wr, 21 op/s
Oct 11 04:03:17 compute-0 nova_compute[259850]: 2025-10-11 04:03:17.484 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:03:17 compute-0 nova_compute[259850]: 2025-10-11 04:03:17.486 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4548MB free_disk=59.94268035888672GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:03:17 compute-0 nova_compute[259850]: 2025-10-11 04:03:17.486 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:03:17 compute-0 nova_compute[259850]: 2025-10-11 04:03:17.487 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:03:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1003816956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:03:17 compute-0 nova_compute[259850]: 2025-10-11 04:03:17.566 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Instance 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 04:03:17 compute-0 nova_compute[259850]: 2025-10-11 04:03:17.567 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:03:17 compute-0 nova_compute[259850]: 2025-10-11 04:03:17.567 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:03:17 compute-0 nova_compute[259850]: 2025-10-11 04:03:17.597 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:03:17 compute-0 nova_compute[259850]: 2025-10-11 04:03:17.752 2 DEBUG oslo_concurrency.lockutils [None req-a4cbc5ff-3802-46a8-b3b2-b96dd482d98e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Acquiring lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:03:17 compute-0 nova_compute[259850]: 2025-10-11 04:03:17.753 2 DEBUG oslo_concurrency.lockutils [None req-a4cbc5ff-3802-46a8-b3b2-b96dd482d98e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:03:17 compute-0 nova_compute[259850]: 2025-10-11 04:03:17.764 2 INFO nova.compute.manager [None req-a4cbc5ff-3802-46a8-b3b2-b96dd482d98e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Detaching volume 124d81aa-c1b6-4933-a0f2-4582c93cb200
Oct 11 04:03:17 compute-0 nova_compute[259850]: 2025-10-11 04:03:17.913 2 INFO nova.virt.block_device [None req-a4cbc5ff-3802-46a8-b3b2-b96dd482d98e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Attempting to driver detach volume 124d81aa-c1b6-4933-a0f2-4582c93cb200 from mountpoint /dev/vdb
Oct 11 04:03:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:03:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2424203058' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:03:18 compute-0 nova_compute[259850]: 2025-10-11 04:03:18.036 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:03:18 compute-0 nova_compute[259850]: 2025-10-11 04:03:18.045 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Updating inventory in ProviderTree for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 04:03:18 compute-0 nova_compute[259850]: 2025-10-11 04:03:18.056 2 DEBUG os_brick.encryptors [None req-a4cbc5ff-3802-46a8-b3b2-b96dd482d98e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Using volume encryption metadata '{'encryption_key_id': '797788f5-9826-4caa-81a4-b0aab0930665', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-124d81aa-c1b6-4933-a0f2-4582c93cb200', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '124d81aa-c1b6-4933-a0f2-4582c93cb200', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc', 'attached_at': '', 'detached_at': '', 'volume_id': '124d81aa-c1b6-4933-a0f2-4582c93cb200', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Oct 11 04:03:18 compute-0 nova_compute[259850]: 2025-10-11 04:03:18.068 2 DEBUG nova.virt.libvirt.driver [None req-a4cbc5ff-3802-46a8-b3b2-b96dd482d98e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Attempting to detach device vdb from instance 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 11 04:03:18 compute-0 nova_compute[259850]: 2025-10-11 04:03:18.069 2 DEBUG nova.virt.libvirt.guest [None req-a4cbc5ff-3802-46a8-b3b2-b96dd482d98e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 04:03:18 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:03:18 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-124d81aa-c1b6-4933-a0f2-4582c93cb200">
Oct 11 04:03:18 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:03:18 compute-0 nova_compute[259850]:   </source>
Oct 11 04:03:18 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:03:18 compute-0 nova_compute[259850]:   <serial>124d81aa-c1b6-4933-a0f2-4582c93cb200</serial>
Oct 11 04:03:18 compute-0 nova_compute[259850]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 04:03:18 compute-0 nova_compute[259850]:   <encryption format="luks">
Oct 11 04:03:18 compute-0 nova_compute[259850]:     <secret type="passphrase" uuid="c4372935-6c00-474d-b496-1b9cea82c0bb"/>
Oct 11 04:03:18 compute-0 nova_compute[259850]:   </encryption>
Oct 11 04:03:18 compute-0 nova_compute[259850]: </disk>
Oct 11 04:03:18 compute-0 nova_compute[259850]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:03:18 compute-0 nova_compute[259850]: 2025-10-11 04:03:18.079 2 INFO nova.virt.libvirt.driver [None req-a4cbc5ff-3802-46a8-b3b2-b96dd482d98e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Successfully detached device vdb from instance 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc from the persistent domain config.
Oct 11 04:03:18 compute-0 nova_compute[259850]: 2025-10-11 04:03:18.080 2 DEBUG nova.virt.libvirt.driver [None req-a4cbc5ff-3802-46a8-b3b2-b96dd482d98e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 11 04:03:18 compute-0 nova_compute[259850]: 2025-10-11 04:03:18.081 2 DEBUG nova.virt.libvirt.guest [None req-a4cbc5ff-3802-46a8-b3b2-b96dd482d98e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 04:03:18 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:03:18 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-124d81aa-c1b6-4933-a0f2-4582c93cb200">
Oct 11 04:03:18 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:03:18 compute-0 nova_compute[259850]:   </source>
Oct 11 04:03:18 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:03:18 compute-0 nova_compute[259850]:   <serial>124d81aa-c1b6-4933-a0f2-4582c93cb200</serial>
Oct 11 04:03:18 compute-0 nova_compute[259850]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 04:03:18 compute-0 nova_compute[259850]:   <encryption format="luks">
Oct 11 04:03:18 compute-0 nova_compute[259850]:     <secret type="passphrase" uuid="c4372935-6c00-474d-b496-1b9cea82c0bb"/>
Oct 11 04:03:18 compute-0 nova_compute[259850]:   </encryption>
Oct 11 04:03:18 compute-0 nova_compute[259850]: </disk>
Oct 11 04:03:18 compute-0 nova_compute[259850]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:03:18 compute-0 nova_compute[259850]: 2025-10-11 04:03:18.093 2 ERROR nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [req-081884f4-5606-481b-849c-339cd9357fb0] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 108a560b-89c0-4926-a2fc-cb749a6f8386.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-081884f4-5606-481b-849c-339cd9357fb0"}]}
Oct 11 04:03:18 compute-0 nova_compute[259850]: 2025-10-11 04:03:18.114 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Refreshing inventories for resource provider 108a560b-89c0-4926-a2fc-cb749a6f8386 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 04:03:18 compute-0 nova_compute[259850]: 2025-10-11 04:03:18.135 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Updating ProviderTree inventory for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 04:03:18 compute-0 nova_compute[259850]: 2025-10-11 04:03:18.136 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Updating inventory in ProviderTree for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 04:03:18 compute-0 nova_compute[259850]: 2025-10-11 04:03:18.159 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Refreshing aggregate associations for resource provider 108a560b-89c0-4926-a2fc-cb749a6f8386, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 04:03:18 compute-0 nova_compute[259850]: 2025-10-11 04:03:18.183 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Refreshing trait associations for resource provider 108a560b-89c0-4926-a2fc-cb749a6f8386, traits: COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE41,COMPUTE_NODE,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_BMI2,HW_CPU_X86_MMX,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_ABM,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 04:03:18 compute-0 nova_compute[259850]: 2025-10-11 04:03:18.210 2 DEBUG nova.virt.libvirt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Received event <DeviceRemovedEvent: 1760155398.2101889, 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 11 04:03:18 compute-0 nova_compute[259850]: 2025-10-11 04:03:18.213 2 DEBUG nova.virt.libvirt.driver [None req-a4cbc5ff-3802-46a8-b3b2-b96dd482d98e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 11 04:03:18 compute-0 nova_compute[259850]: 2025-10-11 04:03:18.217 2 INFO nova.virt.libvirt.driver [None req-a4cbc5ff-3802-46a8-b3b2-b96dd482d98e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Successfully detached device vdb from instance 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc from the live domain config.
Oct 11 04:03:18 compute-0 nova_compute[259850]: 2025-10-11 04:03:18.222 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:03:18 compute-0 ceph-mon[74273]: pgmap v931: 305 pgs: 305 active+clean; 121 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 17 KiB/s wr, 21 op/s
Oct 11 04:03:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2424203058' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:03:18 compute-0 nova_compute[259850]: 2025-10-11 04:03:18.597 2 DEBUG nova.objects.instance [None req-a4cbc5ff-3802-46a8-b3b2-b96dd482d98e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lazy-loading 'flavor' on Instance uuid 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:03:18 compute-0 nova_compute[259850]: 2025-10-11 04:03:18.646 2 DEBUG oslo_concurrency.lockutils [None req-a4cbc5ff-3802-46a8-b3b2-b96dd482d98e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.893s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:03:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:03:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3027105413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:03:18 compute-0 nova_compute[259850]: 2025-10-11 04:03:18.701 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:03:18 compute-0 nova_compute[259850]: 2025-10-11 04:03:18.708 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Updating inventory in ProviderTree for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 04:03:18 compute-0 nova_compute[259850]: 2025-10-11 04:03:18.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:18 compute-0 nova_compute[259850]: 2025-10-11 04:03:18.747 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Updated inventory for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct 11 04:03:18 compute-0 nova_compute[259850]: 2025-10-11 04:03:18.748 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Updating resource provider 108a560b-89c0-4926-a2fc-cb749a6f8386 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 11 04:03:18 compute-0 nova_compute[259850]: 2025-10-11 04:03:18.748 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Updating inventory in ProviderTree for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 04:03:18 compute-0 nova_compute[259850]: 2025-10-11 04:03:18.776 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:03:18 compute-0 nova_compute[259850]: 2025-10-11 04:03:18.777 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.290s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.094 2 DEBUG oslo_concurrency.lockutils [None req-47d49500-8fa6-41f1-9940-3c429973796e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Acquiring lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.095 2 DEBUG oslo_concurrency.lockutils [None req-47d49500-8fa6-41f1-9940-3c429973796e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.095 2 DEBUG oslo_concurrency.lockutils [None req-47d49500-8fa6-41f1-9940-3c429973796e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Acquiring lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.096 2 DEBUG oslo_concurrency.lockutils [None req-47d49500-8fa6-41f1-9940-3c429973796e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.096 2 DEBUG oslo_concurrency.lockutils [None req-47d49500-8fa6-41f1-9940-3c429973796e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.097 2 INFO nova.compute.manager [None req-47d49500-8fa6-41f1-9940-3c429973796e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Terminating instance
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.098 2 DEBUG nova.compute.manager [None req-47d49500-8fa6-41f1-9940-3c429973796e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:03:19 compute-0 kernel: tap601ef18d-d9 (unregistering): left promiscuous mode
Oct 11 04:03:19 compute-0 NetworkManager[44920]: <info>  [1760155399.1682] device (tap601ef18d-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:03:19 compute-0 ovn_controller[152025]: 2025-10-11T04:03:19Z|00032|binding|INFO|Releasing lport 601ef18d-d973-476f-90d4-f8f40df267fa from this chassis (sb_readonly=0)
Oct 11 04:03:19 compute-0 ovn_controller[152025]: 2025-10-11T04:03:19Z|00033|binding|INFO|Setting lport 601ef18d-d973-476f-90d4-f8f40df267fa down in Southbound
Oct 11 04:03:19 compute-0 ovn_controller[152025]: 2025-10-11T04:03:19Z|00034|binding|INFO|Removing iface tap601ef18d-d9 ovn-installed in OVS
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.188 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.189 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:03:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:03:19.190 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:25:de 10.100.0.5'], port_security=['fa:16:3e:b3:25:de 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-373b2ee9-84af-407c-9e36-4d16b55fdfd0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '54de3f5004d1488aaf5e429b0071e194', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf46f1e6-a956-4884-b138-5e34f728c752', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24b759c8-c2ad-43bf-b2f5-82015d6a0f2f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=601ef18d-d973-476f-90d4-f8f40df267fa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:03:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:03:19.193 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 601ef18d-d973-476f-90d4-f8f40df267fa in datapath 373b2ee9-84af-407c-9e36-4d16b55fdfd0 unbound from our chassis
Oct 11 04:03:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:03:19.196 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 373b2ee9-84af-407c-9e36-4d16b55fdfd0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:03:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:03:19.198 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[72133101-c517-46d2-a2a3-a126f47f09cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:03:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:03:19.199 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-373b2ee9-84af-407c-9e36-4d16b55fdfd0 namespace which is not needed anymore
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:19 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Oct 11 04:03:19 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 16.093s CPU time.
Oct 11 04:03:19 compute-0 systemd-machined[214869]: Machine qemu-1-instance-00000001 terminated.
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.336 2 INFO nova.virt.libvirt.driver [-] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Instance destroyed successfully.
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.337 2 DEBUG nova.objects.instance [None req-47d49500-8fa6-41f1-9940-3c429973796e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lazy-loading 'resources' on Instance uuid 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.352 2 DEBUG nova.virt.libvirt.vif [None req-47d49500-8fa6-41f1-9940-3c429973796e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:02:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-EncryptedVolumesExtendAttachedTest-instance-1517859076',display_name='tempest-EncryptedVolumesExtendAttachedTest-instance-1517859076',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-encryptedvolumesextendattachedtest-instance-1517859076',id=1,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIP9WwMZHg30GEZ6pU9u1A/MMvyJS2+nS/lRgwrDD2GyS0E+SUtgIIxuMa25JYk2802r1expk7HTzdwVfDdYPfQ09QKkuenleq+s8kuEDgjh5maYKeHlqJtfNaVfPDIR9g==',key_name='tempest-keypair-583789850',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:02:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='54de3f5004d1488aaf5e429b0071e194',ramdisk_id='',reservation_id='r-kyx1qls8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-EncryptedVolumesExtendAttachedTest-1383109666',owner_user_name='tempest-EncryptedVolumesExtendAttachedTest-1383109666-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:02:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6f96a3b66f9943398432732b3141745a',uuid=02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "601ef18d-d973-476f-90d4-f8f40df267fa", "address": "fa:16:3e:b3:25:de", "network": {"id": "373b2ee9-84af-407c-9e36-4d16b55fdfd0", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-757984270-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54de3f5004d1488aaf5e429b0071e194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap601ef18d-d9", "ovs_interfaceid": "601ef18d-d973-476f-90d4-f8f40df267fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.356 2 DEBUG nova.network.os_vif_util [None req-47d49500-8fa6-41f1-9940-3c429973796e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Converting VIF {"id": "601ef18d-d973-476f-90d4-f8f40df267fa", "address": "fa:16:3e:b3:25:de", "network": {"id": "373b2ee9-84af-407c-9e36-4d16b55fdfd0", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-757984270-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54de3f5004d1488aaf5e429b0071e194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap601ef18d-d9", "ovs_interfaceid": "601ef18d-d973-476f-90d4-f8f40df267fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.357 2 DEBUG nova.network.os_vif_util [None req-47d49500-8fa6-41f1-9940-3c429973796e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b3:25:de,bridge_name='br-int',has_traffic_filtering=True,id=601ef18d-d973-476f-90d4-f8f40df267fa,network=Network(373b2ee9-84af-407c-9e36-4d16b55fdfd0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap601ef18d-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.357 2 DEBUG os_vif [None req-47d49500-8fa6-41f1-9940-3c429973796e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:25:de,bridge_name='br-int',has_traffic_filtering=True,id=601ef18d-d973-476f-90d4-f8f40df267fa,network=Network(373b2ee9-84af-407c-9e36-4d16b55fdfd0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap601ef18d-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.361 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap601ef18d-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.365 2 INFO os_vif [None req-47d49500-8fa6-41f1-9940-3c429973796e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:25:de,bridge_name='br-int',has_traffic_filtering=True,id=601ef18d-d973-476f-90d4-f8f40df267fa,network=Network(373b2ee9-84af-407c-9e36-4d16b55fdfd0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap601ef18d-d9')
Oct 11 04:03:19 compute-0 neutron-haproxy-ovnmeta-373b2ee9-84af-407c-9e36-4d16b55fdfd0[267864]: [NOTICE]   (267868) : haproxy version is 2.8.14-c23fe91
Oct 11 04:03:19 compute-0 neutron-haproxy-ovnmeta-373b2ee9-84af-407c-9e36-4d16b55fdfd0[267864]: [NOTICE]   (267868) : path to executable is /usr/sbin/haproxy
Oct 11 04:03:19 compute-0 neutron-haproxy-ovnmeta-373b2ee9-84af-407c-9e36-4d16b55fdfd0[267864]: [WARNING]  (267868) : Exiting Master process...
Oct 11 04:03:19 compute-0 neutron-haproxy-ovnmeta-373b2ee9-84af-407c-9e36-4d16b55fdfd0[267864]: [WARNING]  (267868) : Exiting Master process...
Oct 11 04:03:19 compute-0 neutron-haproxy-ovnmeta-373b2ee9-84af-407c-9e36-4d16b55fdfd0[267864]: [ALERT]    (267868) : Current worker (267870) exited with code 143 (Terminated)
Oct 11 04:03:19 compute-0 neutron-haproxy-ovnmeta-373b2ee9-84af-407c-9e36-4d16b55fdfd0[267864]: [WARNING]  (267868) : All workers exited. Exiting... (0)
Oct 11 04:03:19 compute-0 systemd[1]: libpod-c5b8f26246d83a8fca5641e14fbb7e75a781f0f4a4c441f194ec34d196e9ce59.scope: Deactivated successfully.
Oct 11 04:03:19 compute-0 podman[268128]: 2025-10-11 04:03:19.386631394 +0000 UTC m=+0.064097358 container died c5b8f26246d83a8fca5641e14fbb7e75a781f0f4a4c441f194ec34d196e9ce59 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-373b2ee9-84af-407c-9e36-4d16b55fdfd0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 11 04:03:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 121 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 19 KiB/s wr, 50 op/s
Oct 11 04:03:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c5b8f26246d83a8fca5641e14fbb7e75a781f0f4a4c441f194ec34d196e9ce59-userdata-shm.mount: Deactivated successfully.
Oct 11 04:03:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-46bdd741cf814c1bca60ebe5318642f1f8bb3ceb5d7975b452fbdc43fb0b49d3-merged.mount: Deactivated successfully.
Oct 11 04:03:19 compute-0 podman[268128]: 2025-10-11 04:03:19.432036506 +0000 UTC m=+0.109502470 container cleanup c5b8f26246d83a8fca5641e14fbb7e75a781f0f4a4c441f194ec34d196e9ce59 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-373b2ee9-84af-407c-9e36-4d16b55fdfd0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 04:03:19 compute-0 systemd[1]: libpod-conmon-c5b8f26246d83a8fca5641e14fbb7e75a781f0f4a4c441f194ec34d196e9ce59.scope: Deactivated successfully.
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.447 2 DEBUG nova.compute.manager [req-21d13ea8-c71a-4b3a-b600-78d6d416fd73 req-8c34050a-0e1a-41d6-8e2d-a0ae5a26c7dd f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Received event network-vif-unplugged-601ef18d-d973-476f-90d4-f8f40df267fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.447 2 DEBUG oslo_concurrency.lockutils [req-21d13ea8-c71a-4b3a-b600-78d6d416fd73 req-8c34050a-0e1a-41d6-8e2d-a0ae5a26c7dd f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.448 2 DEBUG oslo_concurrency.lockutils [req-21d13ea8-c71a-4b3a-b600-78d6d416fd73 req-8c34050a-0e1a-41d6-8e2d-a0ae5a26c7dd f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.448 2 DEBUG oslo_concurrency.lockutils [req-21d13ea8-c71a-4b3a-b600-78d6d416fd73 req-8c34050a-0e1a-41d6-8e2d-a0ae5a26c7dd f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.448 2 DEBUG nova.compute.manager [req-21d13ea8-c71a-4b3a-b600-78d6d416fd73 req-8c34050a-0e1a-41d6-8e2d-a0ae5a26c7dd f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] No waiting events found dispatching network-vif-unplugged-601ef18d-d973-476f-90d4-f8f40df267fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.448 2 DEBUG nova.compute.manager [req-21d13ea8-c71a-4b3a-b600-78d6d416fd73 req-8c34050a-0e1a-41d6-8e2d-a0ae5a26c7dd f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Received event network-vif-unplugged-601ef18d-d973-476f-90d4-f8f40df267fa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 04:03:19 compute-0 podman[268182]: 2025-10-11 04:03:19.50601856 +0000 UTC m=+0.050241499 container remove c5b8f26246d83a8fca5641e14fbb7e75a781f0f4a4c441f194ec34d196e9ce59 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-373b2ee9-84af-407c-9e36-4d16b55fdfd0, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 11 04:03:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:03:19.512 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[bdfbcfa3-675a-41e8-bbaa-6e86ef8d4cd6]: (4, ('Sat Oct 11 04:03:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-373b2ee9-84af-407c-9e36-4d16b55fdfd0 (c5b8f26246d83a8fca5641e14fbb7e75a781f0f4a4c441f194ec34d196e9ce59)\nc5b8f26246d83a8fca5641e14fbb7e75a781f0f4a4c441f194ec34d196e9ce59\nSat Oct 11 04:03:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-373b2ee9-84af-407c-9e36-4d16b55fdfd0 (c5b8f26246d83a8fca5641e14fbb7e75a781f0f4a4c441f194ec34d196e9ce59)\nc5b8f26246d83a8fca5641e14fbb7e75a781f0f4a4c441f194ec34d196e9ce59\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:03:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:03:19.514 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[a5662137-eb75-4fff-83be-140c3518cb5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:03:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:03:19.514 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap373b2ee9-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:19 compute-0 kernel: tap373b2ee9-80: left promiscuous mode
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:03:19.521 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[4cebc091-8afc-4009-9ec7-e7ef172f5d03]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:03:19.565 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[bf8e3693-e812-47a9-aead-4a4b3bf751d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:03:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:03:19.567 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[110b535e-7da1-48db-9408-c7267e1d4d31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:03:19 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3027105413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:03:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:03:19.596 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[ed242889-a0e6-4cda-b6ad-681f09c98b5d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 375472, 'reachable_time': 38834, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268197, 'error': None, 'target': 'ovnmeta-373b2ee9-84af-407c-9e36-4d16b55fdfd0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:03:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:03:19.613 162015 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-373b2ee9-84af-407c-9e36-4d16b55fdfd0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 04:03:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d373b2ee9\x2d84af\x2d407c\x2d9e36\x2d4d16b55fdfd0.mount: Deactivated successfully.
Oct 11 04:03:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:03:19.613 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[7c1f7448-f6cb-40b4-923c-7946576319cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:03:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.729 2 INFO nova.virt.libvirt.driver [None req-47d49500-8fa6-41f1-9940-3c429973796e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Deleting instance files /var/lib/nova/instances/02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc_del
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.731 2 INFO nova.virt.libvirt.driver [None req-47d49500-8fa6-41f1-9940-3c429973796e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Deletion of /var/lib/nova/instances/02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc_del complete
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.813 2 DEBUG nova.virt.libvirt.host [None req-47d49500-8fa6-41f1-9940-3c429973796e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.814 2 INFO nova.virt.libvirt.host [None req-47d49500-8fa6-41f1-9940-3c429973796e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] UEFI support detected
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.816 2 INFO nova.compute.manager [None req-47d49500-8fa6-41f1-9940-3c429973796e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Took 0.72 seconds to destroy the instance on the hypervisor.
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.817 2 DEBUG oslo.service.loopingcall [None req-47d49500-8fa6-41f1-9940-3c429973796e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.818 2 DEBUG nova.compute.manager [-] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:03:19 compute-0 nova_compute[259850]: 2025-10-11 04:03:19.818 2 DEBUG nova.network.neutron [-] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:03:20 compute-0 ceph-mon[74273]: pgmap v932: 305 pgs: 305 active+clean; 121 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 19 KiB/s wr, 50 op/s
Oct 11 04:03:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:03:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:03:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:03:20
Oct 11 04:03:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:03:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:03:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'images', 'volumes', 'cephfs.cephfs.data', 'backups']
Oct 11 04:03:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:03:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:03:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:03:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:03:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:03:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:03:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:03:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:03:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:03:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:03:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:03:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:03:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:03:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:03:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:03:20 compute-0 nova_compute[259850]: 2025-10-11 04:03:20.978 2 DEBUG nova.network.neutron [-] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:03:21 compute-0 nova_compute[259850]: 2025-10-11 04:03:21.004 2 INFO nova.compute.manager [-] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Took 1.19 seconds to deallocate network for instance.
Oct 11 04:03:21 compute-0 nova_compute[259850]: 2025-10-11 04:03:21.054 2 DEBUG oslo_concurrency.lockutils [None req-47d49500-8fa6-41f1-9940-3c429973796e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:03:21 compute-0 nova_compute[259850]: 2025-10-11 04:03:21.055 2 DEBUG oslo_concurrency.lockutils [None req-47d49500-8fa6-41f1-9940-3c429973796e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:03:21 compute-0 nova_compute[259850]: 2025-10-11 04:03:21.118 2 DEBUG oslo_concurrency.processutils [None req-47d49500-8fa6-41f1-9940-3c429973796e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:03:21 compute-0 nova_compute[259850]: 2025-10-11 04:03:21.326 2 DEBUG nova.compute.manager [req-4f295793-d618-458a-9312-104898ed795e req-b9b26bb5-5640-43e0-8b90-1560da37e4d7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Received event network-vif-deleted-601ef18d-d973-476f-90d4-f8f40df267fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:03:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 121 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 6.7 KiB/s wr, 36 op/s
Oct 11 04:03:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:03:21 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1540058438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:03:21 compute-0 nova_compute[259850]: 2025-10-11 04:03:21.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:21 compute-0 nova_compute[259850]: 2025-10-11 04:03:21.527 2 DEBUG oslo_concurrency.processutils [None req-47d49500-8fa6-41f1-9940-3c429973796e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:03:21 compute-0 nova_compute[259850]: 2025-10-11 04:03:21.535 2 DEBUG nova.compute.provider_tree [None req-47d49500-8fa6-41f1-9940-3c429973796e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:03:21 compute-0 nova_compute[259850]: 2025-10-11 04:03:21.554 2 DEBUG nova.scheduler.client.report [None req-47d49500-8fa6-41f1-9940-3c429973796e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:03:21 compute-0 nova_compute[259850]: 2025-10-11 04:03:21.578 2 DEBUG oslo_concurrency.lockutils [None req-47d49500-8fa6-41f1-9940-3c429973796e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.523s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:03:21 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1540058438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:03:21 compute-0 nova_compute[259850]: 2025-10-11 04:03:21.613 2 INFO nova.scheduler.client.report [None req-47d49500-8fa6-41f1-9940-3c429973796e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Deleted allocations for instance 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc
Oct 11 04:03:21 compute-0 nova_compute[259850]: 2025-10-11 04:03:21.723 2 DEBUG nova.compute.manager [req-6ab178ce-f214-4003-a015-f1b744a4328f req-53c12a9c-93cb-474a-83e5-842c9e31b224 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Received event network-vif-plugged-601ef18d-d973-476f-90d4-f8f40df267fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:03:21 compute-0 nova_compute[259850]: 2025-10-11 04:03:21.723 2 DEBUG oslo_concurrency.lockutils [req-6ab178ce-f214-4003-a015-f1b744a4328f req-53c12a9c-93cb-474a-83e5-842c9e31b224 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:03:21 compute-0 nova_compute[259850]: 2025-10-11 04:03:21.724 2 DEBUG oslo_concurrency.lockutils [req-6ab178ce-f214-4003-a015-f1b744a4328f req-53c12a9c-93cb-474a-83e5-842c9e31b224 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:03:21 compute-0 nova_compute[259850]: 2025-10-11 04:03:21.724 2 DEBUG oslo_concurrency.lockutils [req-6ab178ce-f214-4003-a015-f1b744a4328f req-53c12a9c-93cb-474a-83e5-842c9e31b224 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:03:21 compute-0 nova_compute[259850]: 2025-10-11 04:03:21.724 2 DEBUG nova.compute.manager [req-6ab178ce-f214-4003-a015-f1b744a4328f req-53c12a9c-93cb-474a-83e5-842c9e31b224 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] No waiting events found dispatching network-vif-plugged-601ef18d-d973-476f-90d4-f8f40df267fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:03:21 compute-0 nova_compute[259850]: 2025-10-11 04:03:21.724 2 WARNING nova.compute.manager [req-6ab178ce-f214-4003-a015-f1b744a4328f req-53c12a9c-93cb-474a-83e5-842c9e31b224 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Received unexpected event network-vif-plugged-601ef18d-d973-476f-90d4-f8f40df267fa for instance with vm_state deleted and task_state None.
Oct 11 04:03:21 compute-0 nova_compute[259850]: 2025-10-11 04:03:21.790 2 DEBUG oslo_concurrency.lockutils [None req-47d49500-8fa6-41f1-9940-3c429973796e 6f96a3b66f9943398432732b3141745a 54de3f5004d1488aaf5e429b0071e194 - - default default] Lock "02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:03:22 compute-0 ceph-mon[74273]: pgmap v933: 305 pgs: 305 active+clean; 121 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 6.7 KiB/s wr, 36 op/s
Oct 11 04:03:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:03:22.951 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:03:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:03:22.952 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:03:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:03:22.952 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:03:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 42 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 8.2 KiB/s wr, 65 op/s
Oct 11 04:03:24 compute-0 nova_compute[259850]: 2025-10-11 04:03:24.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Oct 11 04:03:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Oct 11 04:03:24 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Oct 11 04:03:24 compute-0 ceph-mon[74273]: pgmap v934: 305 pgs: 305 active+clean; 42 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 8.2 KiB/s wr, 65 op/s
Oct 11 04:03:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:03:24 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1750666837' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:03:24 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1750666837' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:03:25 compute-0 podman[268222]: 2025-10-11 04:03:25.364666277 +0000 UTC m=+0.075350762 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251009)
Oct 11 04:03:25 compute-0 podman[268223]: 2025-10-11 04:03:25.392290682 +0000 UTC m=+0.093746719 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:03:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 42 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 4.0 KiB/s wr, 69 op/s
Oct 11 04:03:25 compute-0 ceph-mon[74273]: osdmap e133: 3 total, 3 up, 3 in
Oct 11 04:03:25 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1750666837' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:25 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1750666837' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:26 compute-0 nova_compute[259850]: 2025-10-11 04:03:26.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Oct 11 04:03:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Oct 11 04:03:26 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Oct 11 04:03:26 compute-0 ceph-mon[74273]: pgmap v936: 305 pgs: 305 active+clean; 42 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 4.0 KiB/s wr, 69 op/s
Oct 11 04:03:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 42 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 2.2 KiB/s wr, 42 op/s
Oct 11 04:03:27 compute-0 ceph-mon[74273]: osdmap e134: 3 total, 3 up, 3 in
Oct 11 04:03:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:03:27 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3854355021' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:03:27 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3854355021' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:28 compute-0 nova_compute[259850]: 2025-10-11 04:03:28.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:28 compute-0 nova_compute[259850]: 2025-10-11 04:03:28.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:28 compute-0 ceph-mon[74273]: pgmap v938: 305 pgs: 305 active+clean; 42 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 2.2 KiB/s wr, 42 op/s
Oct 11 04:03:28 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3854355021' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:28 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3854355021' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:29 compute-0 nova_compute[259850]: 2025-10-11 04:03:29.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 41 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 4.7 KiB/s wr, 109 op/s
Oct 11 04:03:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:03:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Oct 11 04:03:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Oct 11 04:03:29 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Oct 11 04:03:30 compute-0 ceph-mon[74273]: pgmap v939: 305 pgs: 305 active+clean; 41 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 4.7 KiB/s wr, 109 op/s
Oct 11 04:03:30 compute-0 ceph-mon[74273]: osdmap e135: 3 total, 3 up, 3 in
Oct 11 04:03:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:03:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:03:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:03:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:03:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:03:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:03:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:03:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:03:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:03:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:03:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 11 04:03:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:03:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:03:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:03:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:03:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:03:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:03:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:03:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:03:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:03:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:03:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:03:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:03:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 41 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 2.9 KiB/s wr, 77 op/s
Oct 11 04:03:31 compute-0 nova_compute[259850]: 2025-10-11 04:03:31.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:32 compute-0 ceph-mon[74273]: pgmap v941: 305 pgs: 305 active+clean; 41 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 2.9 KiB/s wr, 77 op/s
Oct 11 04:03:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 41 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 2.5 KiB/s wr, 66 op/s
Oct 11 04:03:34 compute-0 nova_compute[259850]: 2025-10-11 04:03:34.335 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760155399.331941, 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:03:34 compute-0 nova_compute[259850]: 2025-10-11 04:03:34.335 2 INFO nova.compute.manager [-] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] VM Stopped (Lifecycle Event)
Oct 11 04:03:34 compute-0 nova_compute[259850]: 2025-10-11 04:03:34.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:34 compute-0 nova_compute[259850]: 2025-10-11 04:03:34.370 2 DEBUG nova.compute.manager [None req-3427b103-d9df-4c47-8b5d-2ae86cdcbef5 - - - - - -] [instance: 02d4314d-9e34-4961-a2ae-1c9b0a3c1dfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:03:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:03:34 compute-0 ceph-mon[74273]: pgmap v942: 305 pgs: 305 active+clean; 41 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 2.5 KiB/s wr, 66 op/s
Oct 11 04:03:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 41 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.3 KiB/s wr, 60 op/s
Oct 11 04:03:35 compute-0 podman[268264]: 2025-10-11 04:03:35.414391331 +0000 UTC m=+0.123532303 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 04:03:36 compute-0 nova_compute[259850]: 2025-10-11 04:03:36.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:36 compute-0 ceph-mon[74273]: pgmap v943: 305 pgs: 305 active+clean; 41 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.3 KiB/s wr, 60 op/s
Oct 11 04:03:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 41 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.0 KiB/s wr, 52 op/s
Oct 11 04:03:38 compute-0 ceph-mon[74273]: pgmap v944: 305 pgs: 305 active+clean; 41 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.0 KiB/s wr, 52 op/s
Oct 11 04:03:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:03:38 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1471921609' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:03:39 compute-0 nova_compute[259850]: 2025-10-11 04:03:39.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 41 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 818 B/s wr, 2 op/s
Oct 11 04:03:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:03:39 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1925070878' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:03:39 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1925070878' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:03:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Oct 11 04:03:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Oct 11 04:03:39 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1471921609' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:03:39 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1925070878' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:39 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1925070878' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:39 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Oct 11 04:03:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Oct 11 04:03:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Oct 11 04:03:40 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Oct 11 04:03:40 compute-0 ceph-mon[74273]: pgmap v945: 305 pgs: 305 active+clean; 41 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 818 B/s wr, 2 op/s
Oct 11 04:03:40 compute-0 ceph-mon[74273]: osdmap e136: 3 total, 3 up, 3 in
Oct 11 04:03:41 compute-0 podman[268292]: 2025-10-11 04:03:41.384443681 +0000 UTC m=+0.081478905 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:03:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 41 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s
Oct 11 04:03:41 compute-0 nova_compute[259850]: 2025-10-11 04:03:41.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Oct 11 04:03:41 compute-0 ceph-mon[74273]: osdmap e137: 3 total, 3 up, 3 in
Oct 11 04:03:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Oct 11 04:03:41 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Oct 11 04:03:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Oct 11 04:03:42 compute-0 ceph-mon[74273]: pgmap v948: 305 pgs: 305 active+clean; 41 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s
Oct 11 04:03:42 compute-0 ceph-mon[74273]: osdmap e138: 3 total, 3 up, 3 in
Oct 11 04:03:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Oct 11 04:03:42 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Oct 11 04:03:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 41 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 5.0 KiB/s wr, 103 op/s
Oct 11 04:03:43 compute-0 ceph-mon[74273]: osdmap e139: 3 total, 3 up, 3 in
Oct 11 04:03:44 compute-0 nova_compute[259850]: 2025-10-11 04:03:44.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:03:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Oct 11 04:03:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Oct 11 04:03:44 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Oct 11 04:03:44 compute-0 ceph-mon[74273]: pgmap v951: 305 pgs: 305 active+clean; 41 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 5.0 KiB/s wr, 103 op/s
Oct 11 04:03:44 compute-0 ceph-mon[74273]: osdmap e140: 3 total, 3 up, 3 in
Oct 11 04:03:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:03:45 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3178053332' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:03:45 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3178053332' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 41 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 4.3 KiB/s wr, 88 op/s
Oct 11 04:03:45 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3178053332' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:45 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3178053332' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:03:46 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4174037436' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:03:46 compute-0 nova_compute[259850]: 2025-10-11 04:03:46.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Oct 11 04:03:46 compute-0 ceph-mon[74273]: pgmap v953: 305 pgs: 305 active+clean; 41 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 4.3 KiB/s wr, 88 op/s
Oct 11 04:03:46 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4174037436' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:03:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Oct 11 04:03:46 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Oct 11 04:03:46 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 11 04:03:47 compute-0 sudo[268312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:03:47 compute-0 sudo[268312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:03:47 compute-0 sudo[268312]: pam_unix(sudo:session): session closed for user root
Oct 11 04:03:47 compute-0 sudo[268337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:03:47 compute-0 sudo[268337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:03:47 compute-0 sudo[268337]: pam_unix(sudo:session): session closed for user root
Oct 11 04:03:47 compute-0 sudo[268362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:03:47 compute-0 sudo[268362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:03:47 compute-0 sudo[268362]: pam_unix(sudo:session): session closed for user root
Oct 11 04:03:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 41 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.5 KiB/s wr, 73 op/s
Oct 11 04:03:47 compute-0 sudo[268387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:03:47 compute-0 sudo[268387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:03:47 compute-0 ceph-mon[74273]: osdmap e141: 3 total, 3 up, 3 in
Oct 11 04:03:47 compute-0 sudo[268387]: pam_unix(sudo:session): session closed for user root
Oct 11 04:03:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:03:48 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:03:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:03:48 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:03:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:03:48 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:03:48 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev c1eb6c64-99b7-4df2-96db-84d8a1207585 does not exist
Oct 11 04:03:48 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 5bca15b0-d292-47e3-85ef-b76c575ee3e5 does not exist
Oct 11 04:03:48 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 2269c95d-046b-40e2-a1ce-fb77d35e10b6 does not exist
Oct 11 04:03:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:03:48 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:03:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:03:48 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:03:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:03:48 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:03:48 compute-0 sudo[268444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:03:48 compute-0 sudo[268444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:03:48 compute-0 sudo[268444]: pam_unix(sudo:session): session closed for user root
Oct 11 04:03:48 compute-0 sudo[268469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:03:48 compute-0 sudo[268469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:03:48 compute-0 sudo[268469]: pam_unix(sudo:session): session closed for user root
Oct 11 04:03:48 compute-0 sudo[268494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:03:48 compute-0 sudo[268494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:03:48 compute-0 sudo[268494]: pam_unix(sudo:session): session closed for user root
Oct 11 04:03:48 compute-0 sudo[268519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:03:48 compute-0 sudo[268519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:03:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:03:48 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1491298257' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:03:48 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1491298257' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:03:48 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3788064545' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:03:48 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3788064545' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:48 compute-0 ceph-mon[74273]: pgmap v955: 305 pgs: 305 active+clean; 41 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.5 KiB/s wr, 73 op/s
Oct 11 04:03:48 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:03:48 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:03:48 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:03:48 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:03:48 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:03:48 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:03:48 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1491298257' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:48 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1491298257' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:48 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3788064545' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:48 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3788064545' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:48 compute-0 podman[268584]: 2025-10-11 04:03:48.848647277 +0000 UTC m=+0.066554026 container create 7b275524cf7bb2df5319204cd2859c88224601f8306be847330d3e4cd7a81d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_almeida, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 04:03:48 compute-0 systemd[1]: Started libpod-conmon-7b275524cf7bb2df5319204cd2859c88224601f8306be847330d3e4cd7a81d47.scope.
Oct 11 04:03:48 compute-0 podman[268584]: 2025-10-11 04:03:48.821180538 +0000 UTC m=+0.039087337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:03:48 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:03:48 compute-0 podman[268584]: 2025-10-11 04:03:48.949854394 +0000 UTC m=+0.167761173 container init 7b275524cf7bb2df5319204cd2859c88224601f8306be847330d3e4cd7a81d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_almeida, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 04:03:48 compute-0 podman[268584]: 2025-10-11 04:03:48.960871563 +0000 UTC m=+0.178778322 container start 7b275524cf7bb2df5319204cd2859c88224601f8306be847330d3e4cd7a81d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:03:48 compute-0 podman[268584]: 2025-10-11 04:03:48.964655369 +0000 UTC m=+0.182562118 container attach 7b275524cf7bb2df5319204cd2859c88224601f8306be847330d3e4cd7a81d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:03:48 compute-0 pedantic_almeida[268601]: 167 167
Oct 11 04:03:48 compute-0 systemd[1]: libpod-7b275524cf7bb2df5319204cd2859c88224601f8306be847330d3e4cd7a81d47.scope: Deactivated successfully.
Oct 11 04:03:48 compute-0 podman[268584]: 2025-10-11 04:03:48.971687876 +0000 UTC m=+0.189594625 container died 7b275524cf7bb2df5319204cd2859c88224601f8306be847330d3e4cd7a81d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:03:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-584f443e3f84c01d417276bc0a42d9df6740f187fd4a4384d8c788f7639ca898-merged.mount: Deactivated successfully.
Oct 11 04:03:49 compute-0 podman[268584]: 2025-10-11 04:03:49.02319843 +0000 UTC m=+0.241105179 container remove 7b275524cf7bb2df5319204cd2859c88224601f8306be847330d3e4cd7a81d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_almeida, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 04:03:49 compute-0 systemd[1]: libpod-conmon-7b275524cf7bb2df5319204cd2859c88224601f8306be847330d3e4cd7a81d47.scope: Deactivated successfully.
Oct 11 04:03:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Oct 11 04:03:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Oct 11 04:03:49 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Oct 11 04:03:49 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 11 04:03:49 compute-0 podman[268626]: 2025-10-11 04:03:49.235299445 +0000 UTC m=+0.061971978 container create efd586d5234168550d1537f9649463dcc70c3686f49bfcc8902a0c67d00f34bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chatterjee, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 04:03:49 compute-0 systemd[1]: Started libpod-conmon-efd586d5234168550d1537f9649463dcc70c3686f49bfcc8902a0c67d00f34bd.scope.
Oct 11 04:03:49 compute-0 podman[268626]: 2025-10-11 04:03:49.201876568 +0000 UTC m=+0.028549181 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:03:49 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:03:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b27373e5d1ac08f0ec58e13f06d3e481274c8d6bc1c45630ee3c75906f52233/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:03:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b27373e5d1ac08f0ec58e13f06d3e481274c8d6bc1c45630ee3c75906f52233/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:03:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b27373e5d1ac08f0ec58e13f06d3e481274c8d6bc1c45630ee3c75906f52233/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:03:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b27373e5d1ac08f0ec58e13f06d3e481274c8d6bc1c45630ee3c75906f52233/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:03:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b27373e5d1ac08f0ec58e13f06d3e481274c8d6bc1c45630ee3c75906f52233/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:03:49 compute-0 podman[268626]: 2025-10-11 04:03:49.347898911 +0000 UTC m=+0.174571454 container init efd586d5234168550d1537f9649463dcc70c3686f49bfcc8902a0c67d00f34bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chatterjee, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:03:49 compute-0 podman[268626]: 2025-10-11 04:03:49.365383581 +0000 UTC m=+0.192056124 container start efd586d5234168550d1537f9649463dcc70c3686f49bfcc8902a0c67d00f34bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 04:03:49 compute-0 podman[268626]: 2025-10-11 04:03:49.370498054 +0000 UTC m=+0.197170657 container attach efd586d5234168550d1537f9649463dcc70c3686f49bfcc8902a0c67d00f34bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:03:49 compute-0 nova_compute[259850]: 2025-10-11 04:03:49.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 41 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 6.3 KiB/s wr, 128 op/s
Oct 11 04:03:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:03:49 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2140202642' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:03:49 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2140202642' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:03:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Oct 11 04:03:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Oct 11 04:03:50 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Oct 11 04:03:50 compute-0 ceph-mon[74273]: osdmap e142: 3 total, 3 up, 3 in
Oct 11 04:03:50 compute-0 ceph-mon[74273]: pgmap v957: 305 pgs: 305 active+clean; 41 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 6.3 KiB/s wr, 128 op/s
Oct 11 04:03:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2140202642' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2140202642' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:03:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2734083483' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:03:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2734083483' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:50 compute-0 gifted_chatterjee[268643]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:03:50 compute-0 gifted_chatterjee[268643]: --> relative data size: 1.0
Oct 11 04:03:50 compute-0 gifted_chatterjee[268643]: --> All data devices are unavailable
Oct 11 04:03:50 compute-0 systemd[1]: libpod-efd586d5234168550d1537f9649463dcc70c3686f49bfcc8902a0c67d00f34bd.scope: Deactivated successfully.
Oct 11 04:03:50 compute-0 systemd[1]: libpod-efd586d5234168550d1537f9649463dcc70c3686f49bfcc8902a0c67d00f34bd.scope: Consumed 1.114s CPU time.
Oct 11 04:03:50 compute-0 podman[268626]: 2025-10-11 04:03:50.548751258 +0000 UTC m=+1.375423821 container died efd586d5234168550d1537f9649463dcc70c3686f49bfcc8902a0c67d00f34bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chatterjee, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:03:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b27373e5d1ac08f0ec58e13f06d3e481274c8d6bc1c45630ee3c75906f52233-merged.mount: Deactivated successfully.
Oct 11 04:03:50 compute-0 podman[268626]: 2025-10-11 04:03:50.613111802 +0000 UTC m=+1.439784345 container remove efd586d5234168550d1537f9649463dcc70c3686f49bfcc8902a0c67d00f34bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chatterjee, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:03:50 compute-0 systemd[1]: libpod-conmon-efd586d5234168550d1537f9649463dcc70c3686f49bfcc8902a0c67d00f34bd.scope: Deactivated successfully.
Oct 11 04:03:50 compute-0 sudo[268519]: pam_unix(sudo:session): session closed for user root
Oct 11 04:03:50 compute-0 sudo[268686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:03:50 compute-0 sudo[268686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:03:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:03:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:03:50 compute-0 sudo[268686]: pam_unix(sudo:session): session closed for user root
Oct 11 04:03:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:03:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:03:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:03:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:03:50 compute-0 sudo[268711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:03:50 compute-0 sudo[268711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:03:50 compute-0 sudo[268711]: pam_unix(sudo:session): session closed for user root
Oct 11 04:03:50 compute-0 sudo[268736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:03:50 compute-0 sudo[268736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:03:50 compute-0 sudo[268736]: pam_unix(sudo:session): session closed for user root
Oct 11 04:03:51 compute-0 sudo[268761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 04:03:51 compute-0 sudo[268761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:03:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Oct 11 04:03:51 compute-0 ceph-mon[74273]: osdmap e143: 3 total, 3 up, 3 in
Oct 11 04:03:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2734083483' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2734083483' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Oct 11 04:03:51 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Oct 11 04:03:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 41 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 8.3 KiB/s wr, 168 op/s
Oct 11 04:03:51 compute-0 podman[268827]: 2025-10-11 04:03:51.449760181 +0000 UTC m=+0.041883585 container create d4eb1e9ec6206a48877f3f1e6026edce9c49df68ab286180141dd487619faa16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ramanujan, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:03:51 compute-0 systemd[1]: Started libpod-conmon-d4eb1e9ec6206a48877f3f1e6026edce9c49df68ab286180141dd487619faa16.scope.
Oct 11 04:03:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:03:51 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1509036618' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:03:51 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1509036618' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:51 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:03:51 compute-0 podman[268827]: 2025-10-11 04:03:51.430242464 +0000 UTC m=+0.022365908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:03:51 compute-0 podman[268827]: 2025-10-11 04:03:51.533079987 +0000 UTC m=+0.125203411 container init d4eb1e9ec6206a48877f3f1e6026edce9c49df68ab286180141dd487619faa16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:03:51 compute-0 podman[268827]: 2025-10-11 04:03:51.541222545 +0000 UTC m=+0.133345949 container start d4eb1e9ec6206a48877f3f1e6026edce9c49df68ab286180141dd487619faa16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ramanujan, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:03:51 compute-0 podman[268827]: 2025-10-11 04:03:51.544276831 +0000 UTC m=+0.136400335 container attach d4eb1e9ec6206a48877f3f1e6026edce9c49df68ab286180141dd487619faa16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ramanujan, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 04:03:51 compute-0 great_ramanujan[268843]: 167 167
Oct 11 04:03:51 compute-0 systemd[1]: libpod-d4eb1e9ec6206a48877f3f1e6026edce9c49df68ab286180141dd487619faa16.scope: Deactivated successfully.
Oct 11 04:03:51 compute-0 podman[268827]: 2025-10-11 04:03:51.54819677 +0000 UTC m=+0.140320204 container died d4eb1e9ec6206a48877f3f1e6026edce9c49df68ab286180141dd487619faa16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 04:03:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bc7a69fc449762ad66455a178f1250d5d15e2a06e55f4b899ef513f12b4601c-merged.mount: Deactivated successfully.
Oct 11 04:03:51 compute-0 nova_compute[259850]: 2025-10-11 04:03:51.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:51 compute-0 podman[268827]: 2025-10-11 04:03:51.616643389 +0000 UTC m=+0.208766793 container remove d4eb1e9ec6206a48877f3f1e6026edce9c49df68ab286180141dd487619faa16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 04:03:51 compute-0 systemd[1]: libpod-conmon-d4eb1e9ec6206a48877f3f1e6026edce9c49df68ab286180141dd487619faa16.scope: Deactivated successfully.
Oct 11 04:03:51 compute-0 podman[268868]: 2025-10-11 04:03:51.838668002 +0000 UTC m=+0.071219757 container create a0a64bc2982b7d7d5d88040431d433b78432929c4e8977dc46cf1c59949e0814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:03:51 compute-0 systemd[1]: Started libpod-conmon-a0a64bc2982b7d7d5d88040431d433b78432929c4e8977dc46cf1c59949e0814.scope.
Oct 11 04:03:51 compute-0 podman[268868]: 2025-10-11 04:03:51.812360325 +0000 UTC m=+0.044912120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:03:51 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1da5017e1c0d659231e084d7b88e7ee3a3a1aa9c5901383e99c59c6ce7f733c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1da5017e1c0d659231e084d7b88e7ee3a3a1aa9c5901383e99c59c6ce7f733c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1da5017e1c0d659231e084d7b88e7ee3a3a1aa9c5901383e99c59c6ce7f733c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1da5017e1c0d659231e084d7b88e7ee3a3a1aa9c5901383e99c59c6ce7f733c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:03:51 compute-0 podman[268868]: 2025-10-11 04:03:51.93886256 +0000 UTC m=+0.171414305 container init a0a64bc2982b7d7d5d88040431d433b78432929c4e8977dc46cf1c59949e0814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hodgkin, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct 11 04:03:51 compute-0 podman[268868]: 2025-10-11 04:03:51.949439087 +0000 UTC m=+0.181990802 container start a0a64bc2982b7d7d5d88040431d433b78432929c4e8977dc46cf1c59949e0814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 04:03:51 compute-0 podman[268868]: 2025-10-11 04:03:51.952215425 +0000 UTC m=+0.184767160 container attach a0a64bc2982b7d7d5d88040431d433b78432929c4e8977dc46cf1c59949e0814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hodgkin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:03:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:03:52 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1101459243' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:03:52 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1101459243' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:52 compute-0 ceph-mon[74273]: osdmap e144: 3 total, 3 up, 3 in
Oct 11 04:03:52 compute-0 ceph-mon[74273]: pgmap v960: 305 pgs: 305 active+clean; 41 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 8.3 KiB/s wr, 168 op/s
Oct 11 04:03:52 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1509036618' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:52 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1509036618' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:52 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1101459243' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:03:52 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1101459243' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]: {
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:     "0": [
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:         {
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "devices": [
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "/dev/loop3"
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             ],
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "lv_name": "ceph_lv0",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "lv_size": "21470642176",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "name": "ceph_lv0",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "tags": {
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.cluster_name": "ceph",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.crush_device_class": "",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.encrypted": "0",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.osd_id": "0",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.type": "block",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.vdo": "0"
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             },
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "type": "block",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "vg_name": "ceph_vg0"
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:         }
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:     ],
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:     "1": [
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:         {
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "devices": [
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "/dev/loop4"
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             ],
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "lv_name": "ceph_lv1",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "lv_size": "21470642176",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "name": "ceph_lv1",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "tags": {
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.cluster_name": "ceph",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.crush_device_class": "",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.encrypted": "0",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.osd_id": "1",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.type": "block",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.vdo": "0"
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             },
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "type": "block",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "vg_name": "ceph_vg1"
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:         }
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:     ],
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:     "2": [
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:         {
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "devices": [
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "/dev/loop5"
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             ],
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "lv_name": "ceph_lv2",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "lv_size": "21470642176",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "name": "ceph_lv2",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "tags": {
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.cluster_name": "ceph",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.crush_device_class": "",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.encrypted": "0",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.osd_id": "2",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.type": "block",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:                 "ceph.vdo": "0"
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             },
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "type": "block",
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:             "vg_name": "ceph_vg2"
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:         }
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]:     ]
Oct 11 04:03:52 compute-0 jovial_hodgkin[268885]: }
Oct 11 04:03:52 compute-0 systemd[1]: libpod-a0a64bc2982b7d7d5d88040431d433b78432929c4e8977dc46cf1c59949e0814.scope: Deactivated successfully.
Oct 11 04:03:52 compute-0 podman[268868]: 2025-10-11 04:03:52.770739346 +0000 UTC m=+1.003291111 container died a0a64bc2982b7d7d5d88040431d433b78432929c4e8977dc46cf1c59949e0814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hodgkin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 04:03:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-1da5017e1c0d659231e084d7b88e7ee3a3a1aa9c5901383e99c59c6ce7f733c4-merged.mount: Deactivated successfully.
Oct 11 04:03:52 compute-0 podman[268868]: 2025-10-11 04:03:52.844453442 +0000 UTC m=+1.077005147 container remove a0a64bc2982b7d7d5d88040431d433b78432929c4e8977dc46cf1c59949e0814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hodgkin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 04:03:52 compute-0 systemd[1]: libpod-conmon-a0a64bc2982b7d7d5d88040431d433b78432929c4e8977dc46cf1c59949e0814.scope: Deactivated successfully.
Oct 11 04:03:52 compute-0 sudo[268761]: pam_unix(sudo:session): session closed for user root
Oct 11 04:03:52 compute-0 sudo[268908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:03:52 compute-0 sudo[268908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:03:52 compute-0 sudo[268908]: pam_unix(sudo:session): session closed for user root
Oct 11 04:03:53 compute-0 sudo[268933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:03:53 compute-0 sudo[268933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:03:53 compute-0 sudo[268933]: pam_unix(sudo:session): session closed for user root
Oct 11 04:03:53 compute-0 sudo[268958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:03:53 compute-0 sudo[268958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:03:53 compute-0 sudo[268958]: pam_unix(sudo:session): session closed for user root
Oct 11 04:03:53 compute-0 sudo[268983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 04:03:53 compute-0 sudo[268983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:03:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 248 KiB/s rd, 14 KiB/s wr, 333 op/s
Oct 11 04:03:53 compute-0 podman[269050]: 2025-10-11 04:03:53.69635277 +0000 UTC m=+0.069365405 container create 17558c145ba9de0c5eda209398c26fced4118142bb995c10870b3ad7c59ba9c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 04:03:53 compute-0 systemd[1]: Started libpod-conmon-17558c145ba9de0c5eda209398c26fced4118142bb995c10870b3ad7c59ba9c7.scope.
Oct 11 04:03:53 compute-0 podman[269050]: 2025-10-11 04:03:53.669469246 +0000 UTC m=+0.042481921 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:03:53 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:03:53 compute-0 podman[269050]: 2025-10-11 04:03:53.812497155 +0000 UTC m=+0.185509830 container init 17558c145ba9de0c5eda209398c26fced4118142bb995c10870b3ad7c59ba9c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 04:03:53 compute-0 podman[269050]: 2025-10-11 04:03:53.825212432 +0000 UTC m=+0.198225037 container start 17558c145ba9de0c5eda209398c26fced4118142bb995c10870b3ad7c59ba9c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:03:53 compute-0 podman[269050]: 2025-10-11 04:03:53.82906702 +0000 UTC m=+0.202079695 container attach 17558c145ba9de0c5eda209398c26fced4118142bb995c10870b3ad7c59ba9c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 04:03:53 compute-0 frosty_germain[269066]: 167 167
Oct 11 04:03:53 compute-0 systemd[1]: libpod-17558c145ba9de0c5eda209398c26fced4118142bb995c10870b3ad7c59ba9c7.scope: Deactivated successfully.
Oct 11 04:03:53 compute-0 podman[269050]: 2025-10-11 04:03:53.834867032 +0000 UTC m=+0.207879667 container died 17558c145ba9de0c5eda209398c26fced4118142bb995c10870b3ad7c59ba9c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_germain, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:03:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-2021241821a2a6d526a23109555d8879f58e7fed3aa00f5a8aa13687b0de8c59-merged.mount: Deactivated successfully.
Oct 11 04:03:53 compute-0 podman[269050]: 2025-10-11 04:03:53.884654938 +0000 UTC m=+0.257667573 container remove 17558c145ba9de0c5eda209398c26fced4118142bb995c10870b3ad7c59ba9c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 04:03:53 compute-0 systemd[1]: libpod-conmon-17558c145ba9de0c5eda209398c26fced4118142bb995c10870b3ad7c59ba9c7.scope: Deactivated successfully.
Oct 11 04:03:54 compute-0 podman[269090]: 2025-10-11 04:03:54.17407413 +0000 UTC m=+0.096604189 container create 4b03a975d9f4874add50c261b5502b341ba7271edbdbe93d1e267130e78192bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:03:54 compute-0 podman[269090]: 2025-10-11 04:03:54.144688956 +0000 UTC m=+0.067219055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:03:54 compute-0 systemd[1]: Started libpod-conmon-4b03a975d9f4874add50c261b5502b341ba7271edbdbe93d1e267130e78192bf.scope.
Oct 11 04:03:54 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/266d44057451dcbf5c1b9d5fc405fcc5ae52c1ebf449e7bbbf46e3c565cff570/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/266d44057451dcbf5c1b9d5fc405fcc5ae52c1ebf449e7bbbf46e3c565cff570/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/266d44057451dcbf5c1b9d5fc405fcc5ae52c1ebf449e7bbbf46e3c565cff570/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/266d44057451dcbf5c1b9d5fc405fcc5ae52c1ebf449e7bbbf46e3c565cff570/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:03:54 compute-0 podman[269090]: 2025-10-11 04:03:54.292432717 +0000 UTC m=+0.214962846 container init 4b03a975d9f4874add50c261b5502b341ba7271edbdbe93d1e267130e78192bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_volhard, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 04:03:54 compute-0 podman[269090]: 2025-10-11 04:03:54.308122777 +0000 UTC m=+0.230652826 container start 4b03a975d9f4874add50c261b5502b341ba7271edbdbe93d1e267130e78192bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:03:54 compute-0 podman[269090]: 2025-10-11 04:03:54.311942824 +0000 UTC m=+0.234472883 container attach 4b03a975d9f4874add50c261b5502b341ba7271edbdbe93d1e267130e78192bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:03:54 compute-0 nova_compute[259850]: 2025-10-11 04:03:54.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:54 compute-0 ceph-mon[74273]: pgmap v961: 305 pgs: 305 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 248 KiB/s rd, 14 KiB/s wr, 333 op/s
Oct 11 04:03:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:03:54 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3068804735' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:03:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:03:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Oct 11 04:03:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Oct 11 04:03:54 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Oct 11 04:03:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 151 KiB/s rd, 7.7 KiB/s wr, 204 op/s
Oct 11 04:03:55 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3068804735' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:03:55 compute-0 ceph-mon[74273]: osdmap e145: 3 total, 3 up, 3 in
Oct 11 04:03:55 compute-0 hopeful_volhard[269106]: {
Oct 11 04:03:55 compute-0 hopeful_volhard[269106]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 04:03:55 compute-0 hopeful_volhard[269106]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:03:55 compute-0 hopeful_volhard[269106]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:03:55 compute-0 hopeful_volhard[269106]:         "osd_id": 1,
Oct 11 04:03:55 compute-0 hopeful_volhard[269106]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:03:55 compute-0 hopeful_volhard[269106]:         "type": "bluestore"
Oct 11 04:03:55 compute-0 hopeful_volhard[269106]:     },
Oct 11 04:03:55 compute-0 hopeful_volhard[269106]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 04:03:55 compute-0 hopeful_volhard[269106]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:03:55 compute-0 hopeful_volhard[269106]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:03:55 compute-0 hopeful_volhard[269106]:         "osd_id": 2,
Oct 11 04:03:55 compute-0 hopeful_volhard[269106]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:03:55 compute-0 hopeful_volhard[269106]:         "type": "bluestore"
Oct 11 04:03:55 compute-0 hopeful_volhard[269106]:     },
Oct 11 04:03:55 compute-0 hopeful_volhard[269106]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 04:03:55 compute-0 hopeful_volhard[269106]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:03:55 compute-0 hopeful_volhard[269106]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:03:55 compute-0 hopeful_volhard[269106]:         "osd_id": 0,
Oct 11 04:03:55 compute-0 hopeful_volhard[269106]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:03:55 compute-0 hopeful_volhard[269106]:         "type": "bluestore"
Oct 11 04:03:55 compute-0 hopeful_volhard[269106]:     }
Oct 11 04:03:55 compute-0 hopeful_volhard[269106]: }
Oct 11 04:03:55 compute-0 systemd[1]: libpod-4b03a975d9f4874add50c261b5502b341ba7271edbdbe93d1e267130e78192bf.scope: Deactivated successfully.
Oct 11 04:03:55 compute-0 systemd[1]: libpod-4b03a975d9f4874add50c261b5502b341ba7271edbdbe93d1e267130e78192bf.scope: Consumed 1.223s CPU time.
Oct 11 04:03:55 compute-0 podman[269090]: 2025-10-11 04:03:55.536859955 +0000 UTC m=+1.459390044 container died 4b03a975d9f4874add50c261b5502b341ba7271edbdbe93d1e267130e78192bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_volhard, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 04:03:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-266d44057451dcbf5c1b9d5fc405fcc5ae52c1ebf449e7bbbf46e3c565cff570-merged.mount: Deactivated successfully.
Oct 11 04:03:55 compute-0 podman[269090]: 2025-10-11 04:03:55.626344943 +0000 UTC m=+1.548874972 container remove 4b03a975d9f4874add50c261b5502b341ba7271edbdbe93d1e267130e78192bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_volhard, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:03:55 compute-0 systemd[1]: libpod-conmon-4b03a975d9f4874add50c261b5502b341ba7271edbdbe93d1e267130e78192bf.scope: Deactivated successfully.
Oct 11 04:03:55 compute-0 sudo[268983]: pam_unix(sudo:session): session closed for user root
Oct 11 04:03:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:03:55 compute-0 podman[269143]: 2025-10-11 04:03:55.670305655 +0000 UTC m=+0.091782323 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:03:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:03:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:03:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:03:55 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 8c5d67ce-1653-474f-8e71-12f2daabd7c1 does not exist
Oct 11 04:03:55 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev c9bde45e-11eb-4f97-b80f-0c9b03f0f67b does not exist
Oct 11 04:03:55 compute-0 podman[269151]: 2025-10-11 04:03:55.691229052 +0000 UTC m=+0.108321637 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 11 04:03:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Oct 11 04:03:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Oct 11 04:03:55 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Oct 11 04:03:55 compute-0 sudo[269193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:03:55 compute-0 sudo[269193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:03:55 compute-0 sudo[269193]: pam_unix(sudo:session): session closed for user root
Oct 11 04:03:55 compute-0 sudo[269219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:03:55 compute-0 sudo[269219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:03:55 compute-0 sudo[269219]: pam_unix(sudo:session): session closed for user root
Oct 11 04:03:56 compute-0 nova_compute[259850]: 2025-10-11 04:03:56.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:56 compute-0 ceph-mon[74273]: pgmap v963: 305 pgs: 305 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 151 KiB/s rd, 7.7 KiB/s wr, 204 op/s
Oct 11 04:03:56 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:03:56 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:03:56 compute-0 ceph-mon[74273]: osdmap e146: 3 total, 3 up, 3 in
Oct 11 04:03:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Oct 11 04:03:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Oct 11 04:03:56 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Oct 11 04:03:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 151 KiB/s rd, 7.7 KiB/s wr, 204 op/s
Oct 11 04:03:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Oct 11 04:03:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Oct 11 04:03:57 compute-0 ceph-mon[74273]: osdmap e147: 3 total, 3 up, 3 in
Oct 11 04:03:57 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Oct 11 04:03:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Oct 11 04:03:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Oct 11 04:03:58 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Oct 11 04:03:58 compute-0 ceph-mon[74273]: pgmap v966: 305 pgs: 305 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 151 KiB/s rd, 7.7 KiB/s wr, 204 op/s
Oct 11 04:03:58 compute-0 ceph-mon[74273]: osdmap e148: 3 total, 3 up, 3 in
Oct 11 04:03:59 compute-0 nova_compute[259850]: 2025-10-11 04:03:59.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:03:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 7.2 KiB/s wr, 128 op/s
Oct 11 04:03:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:03:59 compute-0 ceph-mon[74273]: osdmap e149: 3 total, 3 up, 3 in
Oct 11 04:04:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:04:00 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2359812139' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:04:00 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2359812139' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Oct 11 04:04:00 compute-0 ceph-mon[74273]: pgmap v969: 305 pgs: 305 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 7.2 KiB/s wr, 128 op/s
Oct 11 04:04:00 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2359812139' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:00 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2359812139' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Oct 11 04:04:00 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Oct 11 04:04:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:04:01 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1869979439' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:04:01 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1869979439' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:04:01 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3580253047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:04:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 6.2 KiB/s wr, 109 op/s
Oct 11 04:04:01 compute-0 nova_compute[259850]: 2025-10-11 04:04:01.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:04:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Oct 11 04:04:01 compute-0 ceph-mon[74273]: osdmap e150: 3 total, 3 up, 3 in
Oct 11 04:04:01 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1869979439' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:01 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1869979439' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:01 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3580253047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:04:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Oct 11 04:04:01 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Oct 11 04:04:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Oct 11 04:04:02 compute-0 ceph-mon[74273]: pgmap v971: 305 pgs: 305 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 6.2 KiB/s wr, 109 op/s
Oct 11 04:04:02 compute-0 ceph-mon[74273]: osdmap e151: 3 total, 3 up, 3 in
Oct 11 04:04:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Oct 11 04:04:02 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Oct 11 04:04:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 155 KiB/s rd, 9.2 KiB/s wr, 209 op/s
Oct 11 04:04:03 compute-0 ovn_controller[152025]: 2025-10-11T04:04:03Z|00035|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 11 04:04:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Oct 11 04:04:03 compute-0 ceph-mon[74273]: osdmap e152: 3 total, 3 up, 3 in
Oct 11 04:04:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Oct 11 04:04:03 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Oct 11 04:04:04 compute-0 nova_compute[259850]: 2025-10-11 04:04:04.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:04:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:04:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Oct 11 04:04:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Oct 11 04:04:04 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Oct 11 04:04:04 compute-0 ceph-mon[74273]: pgmap v974: 305 pgs: 305 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 155 KiB/s rd, 9.2 KiB/s wr, 209 op/s
Oct 11 04:04:04 compute-0 ceph-mon[74273]: osdmap e153: 3 total, 3 up, 3 in
Oct 11 04:04:04 compute-0 ceph-mon[74273]: osdmap e154: 3 total, 3 up, 3 in
Oct 11 04:04:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 182 KiB/s rd, 11 KiB/s wr, 245 op/s
Oct 11 04:04:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Oct 11 04:04:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Oct 11 04:04:05 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Oct 11 04:04:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:04:06 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2986774149' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:04:06 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2986774149' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:06 compute-0 podman[269245]: 2025-10-11 04:04:06.428371038 +0000 UTC m=+0.136357318 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Oct 11 04:04:06 compute-0 nova_compute[259850]: 2025-10-11 04:04:06.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:04:06 compute-0 ceph-mon[74273]: pgmap v977: 305 pgs: 305 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 182 KiB/s rd, 11 KiB/s wr, 245 op/s
Oct 11 04:04:06 compute-0 ceph-mon[74273]: osdmap e155: 3 total, 3 up, 3 in
Oct 11 04:04:06 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2986774149' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:06 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2986774149' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 4.1 KiB/s wr, 135 op/s
Oct 11 04:04:07 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:04:07.524 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:61:6f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '92:f1:b6:e4:f1:16'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:04:07 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:04:07.526 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:04:07 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:04:07.527 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8a473e03-2208-47ae-afcd-05ad744a5969, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:04:07 compute-0 nova_compute[259850]: 2025-10-11 04:04:07.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:04:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Oct 11 04:04:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Oct 11 04:04:07 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Oct 11 04:04:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:04:07 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3495988191' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:04:07 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3495988191' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:08 compute-0 ceph-mon[74273]: pgmap v979: 305 pgs: 305 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 4.1 KiB/s wr, 135 op/s
Oct 11 04:04:08 compute-0 ceph-mon[74273]: osdmap e156: 3 total, 3 up, 3 in
Oct 11 04:04:08 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3495988191' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:08 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3495988191' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:09 compute-0 nova_compute[259850]: 2025-10-11 04:04:09.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:04:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 119 KiB/s rd, 6.2 KiB/s wr, 160 op/s
Oct 11 04:04:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:04:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Oct 11 04:04:10 compute-0 ceph-mon[74273]: pgmap v981: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 119 KiB/s rd, 6.2 KiB/s wr, 160 op/s
Oct 11 04:04:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Oct 11 04:04:10 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Oct 11 04:04:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 112 KiB/s rd, 5.8 KiB/s wr, 150 op/s
Oct 11 04:04:11 compute-0 nova_compute[259850]: 2025-10-11 04:04:11.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:04:11 compute-0 ceph-mon[74273]: osdmap e157: 3 total, 3 up, 3 in
Oct 11 04:04:12 compute-0 podman[269271]: 2025-10-11 04:04:12.371817784 +0000 UTC m=+0.077785015 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 04:04:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:04:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1027095223' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:04:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1027095223' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:12 compute-0 ceph-mon[74273]: pgmap v983: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 112 KiB/s rd, 5.8 KiB/s wr, 150 op/s
Oct 11 04:04:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1027095223' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1027095223' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 117 KiB/s rd, 6.8 KiB/s wr, 158 op/s
Oct 11 04:04:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:04:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2718987080' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:14 compute-0 nova_compute[259850]: 2025-10-11 04:04:14.055 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:04:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:04:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2718987080' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:14 compute-0 nova_compute[259850]: 2025-10-11 04:04:14.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:04:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:04:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Oct 11 04:04:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Oct 11 04:04:14 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Oct 11 04:04:14 compute-0 ceph-mon[74273]: pgmap v984: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 117 KiB/s rd, 6.8 KiB/s wr, 158 op/s
Oct 11 04:04:14 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2718987080' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:14 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2718987080' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:14 compute-0 ceph-mon[74273]: osdmap e158: 3 total, 3 up, 3 in
Oct 11 04:04:15 compute-0 nova_compute[259850]: 2025-10-11 04:04:15.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:04:15 compute-0 nova_compute[259850]: 2025-10-11 04:04:15.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:04:15 compute-0 nova_compute[259850]: 2025-10-11 04:04:15.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:04:15 compute-0 nova_compute[259850]: 2025-10-11 04:04:15.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:04:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 4.9 KiB/s wr, 121 op/s
Oct 11 04:04:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:04:15 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3691209489' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:04:15 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3691209489' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:15 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3691209489' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:15 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3691209489' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:16 compute-0 nova_compute[259850]: 2025-10-11 04:04:16.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:04:16 compute-0 nova_compute[259850]: 2025-10-11 04:04:16.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:04:16 compute-0 nova_compute[259850]: 2025-10-11 04:04:16.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:04:16 compute-0 nova_compute[259850]: 2025-10-11 04:04:16.081 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:04:16 compute-0 nova_compute[259850]: 2025-10-11 04:04:16.082 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:04:16 compute-0 nova_compute[259850]: 2025-10-11 04:04:16.116 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:04:16 compute-0 nova_compute[259850]: 2025-10-11 04:04:16.117 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:04:16 compute-0 nova_compute[259850]: 2025-10-11 04:04:16.117 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:04:16 compute-0 nova_compute[259850]: 2025-10-11 04:04:16.118 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:04:16 compute-0 nova_compute[259850]: 2025-10-11 04:04:16.118 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:04:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:04:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1468570970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:04:16 compute-0 nova_compute[259850]: 2025-10-11 04:04:16.565 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:04:16 compute-0 nova_compute[259850]: 2025-10-11 04:04:16.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:04:16 compute-0 nova_compute[259850]: 2025-10-11 04:04:16.782 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:04:16 compute-0 nova_compute[259850]: 2025-10-11 04:04:16.783 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4735MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:04:16 compute-0 nova_compute[259850]: 2025-10-11 04:04:16.784 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:04:16 compute-0 nova_compute[259850]: 2025-10-11 04:04:16.784 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:04:16 compute-0 nova_compute[259850]: 2025-10-11 04:04:16.869 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:04:16 compute-0 nova_compute[259850]: 2025-10-11 04:04:16.870 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:04:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Oct 11 04:04:16 compute-0 ceph-mon[74273]: pgmap v986: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 4.9 KiB/s wr, 121 op/s
Oct 11 04:04:16 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1468570970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:04:16 compute-0 nova_compute[259850]: 2025-10-11 04:04:16.889 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:04:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Oct 11 04:04:16 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Oct 11 04:04:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:04:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2486296098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:04:17 compute-0 nova_compute[259850]: 2025-10-11 04:04:17.351 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:04:17 compute-0 nova_compute[259850]: 2025-10-11 04:04:17.357 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:04:17 compute-0 nova_compute[259850]: 2025-10-11 04:04:17.374 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:04:17 compute-0 nova_compute[259850]: 2025-10-11 04:04:17.392 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:04:17 compute-0 nova_compute[259850]: 2025-10-11 04:04:17.393 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:04:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.6 KiB/s wr, 45 op/s
Oct 11 04:04:17 compute-0 ceph-mon[74273]: osdmap e159: 3 total, 3 up, 3 in
Oct 11 04:04:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2486296098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:04:18 compute-0 nova_compute[259850]: 2025-10-11 04:04:18.370 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:04:18 compute-0 nova_compute[259850]: 2025-10-11 04:04:18.371 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:04:18 compute-0 nova_compute[259850]: 2025-10-11 04:04:18.371 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:04:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Oct 11 04:04:18 compute-0 ceph-mon[74273]: pgmap v988: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.6 KiB/s wr, 45 op/s
Oct 11 04:04:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Oct 11 04:04:18 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Oct 11 04:04:19 compute-0 nova_compute[259850]: 2025-10-11 04:04:19.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:04:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.0 KiB/s wr, 70 op/s
Oct 11 04:04:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:04:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Oct 11 04:04:19 compute-0 ceph-mon[74273]: osdmap e160: 3 total, 3 up, 3 in
Oct 11 04:04:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Oct 11 04:04:19 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Oct 11 04:04:20 compute-0 nova_compute[259850]: 2025-10-11 04:04:20.055 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:04:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:04:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1605487828' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:04:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1605487828' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:04:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:04:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:04:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:04:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:04:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:04:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:04:20
Oct 11 04:04:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:04:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:04:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'backups', 'vms', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', '.mgr', 'default.rgw.control']
Oct 11 04:04:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:04:20 compute-0 ceph-mon[74273]: pgmap v990: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.0 KiB/s wr, 70 op/s
Oct 11 04:04:20 compute-0 ceph-mon[74273]: osdmap e161: 3 total, 3 up, 3 in
Oct 11 04:04:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1605487828' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1605487828' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:04:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:04:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:04:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:04:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:04:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:04:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:04:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:04:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:04:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:04:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.0 KiB/s wr, 70 op/s
Oct 11 04:04:21 compute-0 nova_compute[259850]: 2025-10-11 04:04:21.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:04:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Oct 11 04:04:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Oct 11 04:04:21 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Oct 11 04:04:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:04:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3025980514' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:04:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3025980514' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:22 compute-0 ceph-mon[74273]: pgmap v992: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.0 KiB/s wr, 70 op/s
Oct 11 04:04:22 compute-0 ceph-mon[74273]: osdmap e162: 3 total, 3 up, 3 in
Oct 11 04:04:22 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3025980514' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:22 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3025980514' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:04:22.952 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:04:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:04:22.953 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:04:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:04:22.953 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:04:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 146 KiB/s rd, 9.5 KiB/s wr, 200 op/s
Oct 11 04:04:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Oct 11 04:04:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Oct 11 04:04:23 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Oct 11 04:04:23 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Oct 11 04:04:23 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:04:23.977221) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:04:23 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Oct 11 04:04:23 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155463977346, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1719, "num_deletes": 262, "total_data_size": 2289114, "memory_usage": 2331264, "flush_reason": "Manual Compaction"}
Oct 11 04:04:23 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Oct 11 04:04:23 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155463992987, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2247664, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19466, "largest_seqno": 21184, "table_properties": {"data_size": 2239591, "index_size": 4823, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 18053, "raw_average_key_size": 21, "raw_value_size": 2223031, "raw_average_value_size": 2606, "num_data_blocks": 213, "num_entries": 853, "num_filter_entries": 853, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760155354, "oldest_key_time": 1760155354, "file_creation_time": 1760155463, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:04:23 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 15761 microseconds, and 8203 cpu microseconds.
Oct 11 04:04:23 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:04:23 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:04:23.993023) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2247664 bytes OK
Oct 11 04:04:23 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:04:23.993040) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Oct 11 04:04:23 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:04:23.994103) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Oct 11 04:04:23 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:04:23.994115) EVENT_LOG_v1 {"time_micros": 1760155463994111, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:04:23 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:04:23.994128) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:04:23 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2281355, prev total WAL file size 2281355, number of live WAL files 2.
Oct 11 04:04:23 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:04:23 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:04:23.994738) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct 11 04:04:23 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:04:23 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2194KB)], [47(6755KB)]
Oct 11 04:04:23 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155463994785, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9164842, "oldest_snapshot_seqno": -1}
Oct 11 04:04:24 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4443 keys, 7433121 bytes, temperature: kUnknown
Oct 11 04:04:24 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155464046913, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7433121, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7402062, "index_size": 18853, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 110172, "raw_average_key_size": 24, "raw_value_size": 7320327, "raw_average_value_size": 1647, "num_data_blocks": 784, "num_entries": 4443, "num_filter_entries": 4443, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153731, "oldest_key_time": 0, "file_creation_time": 1760155463, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:04:24 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:04:24 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:04:24.047168) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7433121 bytes
Oct 11 04:04:24 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:04:24.048182) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.6 rd, 142.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 6.6 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(7.4) write-amplify(3.3) OK, records in: 4972, records dropped: 529 output_compression: NoCompression
Oct 11 04:04:24 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:04:24.048200) EVENT_LOG_v1 {"time_micros": 1760155464048191, "job": 24, "event": "compaction_finished", "compaction_time_micros": 52191, "compaction_time_cpu_micros": 33031, "output_level": 6, "num_output_files": 1, "total_output_size": 7433121, "num_input_records": 4972, "num_output_records": 4443, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:04:24 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:04:24 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155464048904, "job": 24, "event": "table_file_deletion", "file_number": 49}
Oct 11 04:04:24 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:04:24 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155464050611, "job": 24, "event": "table_file_deletion", "file_number": 47}
Oct 11 04:04:24 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:04:23.994677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:04:24 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:04:24.050717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:04:24 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:04:24.050725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:04:24 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:04:24.050729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:04:24 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:04:24.050732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:04:24 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:04:24.050735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:04:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:04:24 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2851901828' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:04:24 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2851901828' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:24 compute-0 nova_compute[259850]: 2025-10-11 04:04:24.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:04:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:04:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Oct 11 04:04:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Oct 11 04:04:24 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Oct 11 04:04:24 compute-0 ceph-mon[74273]: pgmap v994: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 146 KiB/s rd, 9.5 KiB/s wr, 200 op/s
Oct 11 04:04:24 compute-0 ceph-mon[74273]: osdmap e163: 3 total, 3 up, 3 in
Oct 11 04:04:24 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2851901828' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:24 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2851901828' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:24 compute-0 ceph-mon[74273]: osdmap e164: 3 total, 3 up, 3 in
Oct 11 04:04:25 compute-0 sshd-session[269335]: Connection reset by 198.235.24.121 port 58090 [preauth]
Oct 11 04:04:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:04:25 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/787125927' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:04:25 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/787125927' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 7.1 KiB/s wr, 141 op/s
Oct 11 04:04:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Oct 11 04:04:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Oct 11 04:04:25 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Oct 11 04:04:25 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/787125927' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:25 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/787125927' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:26 compute-0 podman[269337]: 2025-10-11 04:04:26.402773978 +0000 UTC m=+0.099694129 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, io.buildah.version=1.41.3)
Oct 11 04:04:26 compute-0 podman[269338]: 2025-10-11 04:04:26.403865339 +0000 UTC m=+0.098796374 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 11 04:04:26 compute-0 nova_compute[259850]: 2025-10-11 04:04:26.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:04:26 compute-0 ceph-mon[74273]: pgmap v997: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 7.1 KiB/s wr, 141 op/s
Oct 11 04:04:26 compute-0 ceph-mon[74273]: osdmap e165: 3 total, 3 up, 3 in
Oct 11 04:04:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 7.1 KiB/s wr, 141 op/s
Oct 11 04:04:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Oct 11 04:04:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Oct 11 04:04:27 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Oct 11 04:04:28 compute-0 ceph-mon[74273]: pgmap v999: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 7.1 KiB/s wr, 141 op/s
Oct 11 04:04:28 compute-0 ceph-mon[74273]: osdmap e166: 3 total, 3 up, 3 in
Oct 11 04:04:29 compute-0 nova_compute[259850]: 2025-10-11 04:04:29.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:04:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 3.7 KiB/s wr, 111 op/s
Oct 11 04:04:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:04:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Oct 11 04:04:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Oct 11 04:04:30 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Oct 11 04:04:31 compute-0 ceph-mon[74273]: pgmap v1001: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 3.7 KiB/s wr, 111 op/s
Oct 11 04:04:31 compute-0 ceph-mon[74273]: osdmap e167: 3 total, 3 up, 3 in
Oct 11 04:04:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Oct 11 04:04:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Oct 11 04:04:31 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Oct 11 04:04:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:04:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:04:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:04:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:04:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:04:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:04:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053787e-07 of space, bias 1.0, pg target 0.0001907721234616136 quantized to 32 (current 32)
Oct 11 04:04:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:04:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:04:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:04:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 11 04:04:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:04:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:04:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:04:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:04:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:04:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:04:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:04:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:04:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:04:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:04:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:04:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:04:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:04:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3439192736' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:04:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3439192736' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 3.7 KiB/s wr, 111 op/s
Oct 11 04:04:31 compute-0 nova_compute[259850]: 2025-10-11 04:04:31.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:04:32 compute-0 ceph-mon[74273]: osdmap e168: 3 total, 3 up, 3 in
Oct 11 04:04:32 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3439192736' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:32 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3439192736' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Oct 11 04:04:33 compute-0 ceph-mon[74273]: pgmap v1004: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 3.7 KiB/s wr, 111 op/s
Oct 11 04:04:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Oct 11 04:04:33 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Oct 11 04:04:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 8.6 KiB/s wr, 215 op/s
Oct 11 04:04:34 compute-0 ceph-mon[74273]: osdmap e169: 3 total, 3 up, 3 in
Oct 11 04:04:34 compute-0 nova_compute[259850]: 2025-10-11 04:04:34.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:04:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:04:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Oct 11 04:04:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Oct 11 04:04:34 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Oct 11 04:04:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:04:34 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3688636814' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:04:34 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3688636814' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:35 compute-0 ceph-mon[74273]: pgmap v1006: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 8.6 KiB/s wr, 215 op/s
Oct 11 04:04:35 compute-0 ceph-mon[74273]: osdmap e170: 3 total, 3 up, 3 in
Oct 11 04:04:35 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3688636814' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:35 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3688636814' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 6.1 KiB/s wr, 114 op/s
Oct 11 04:04:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:04:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1699970095' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:04:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1699970095' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:36 compute-0 ceph-mon[74273]: pgmap v1008: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 6.1 KiB/s wr, 114 op/s
Oct 11 04:04:36 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1699970095' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:36 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1699970095' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:36 compute-0 nova_compute[259850]: 2025-10-11 04:04:36.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:04:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 5.1 KiB/s wr, 97 op/s
Oct 11 04:04:37 compute-0 podman[269379]: 2025-10-11 04:04:37.438741075 +0000 UTC m=+0.145281769 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:04:38 compute-0 ceph-mon[74273]: pgmap v1009: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 5.1 KiB/s wr, 97 op/s
Oct 11 04:04:39 compute-0 nova_compute[259850]: 2025-10-11 04:04:39.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:04:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 183 op/s
Oct 11 04:04:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:04:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Oct 11 04:04:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Oct 11 04:04:39 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Oct 11 04:04:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:04:39 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2603245597' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:04:39 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2603245597' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:40 compute-0 ceph-mon[74273]: pgmap v1010: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 183 op/s
Oct 11 04:04:40 compute-0 ceph-mon[74273]: osdmap e171: 3 total, 3 up, 3 in
Oct 11 04:04:40 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2603245597' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:40 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2603245597' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:04:41 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2672408185' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:04:41 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2672408185' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 2.7 MiB/s wr, 105 op/s
Oct 11 04:04:41 compute-0 nova_compute[259850]: 2025-10-11 04:04:41.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:04:41 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2672408185' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:41 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2672408185' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:42 compute-0 ceph-mon[74273]: pgmap v1012: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 2.7 MiB/s wr, 105 op/s
Oct 11 04:04:43 compute-0 podman[269405]: 2025-10-11 04:04:43.360833349 +0000 UTC m=+0.066294592 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:04:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 2.4 MiB/s wr, 137 op/s
Oct 11 04:04:44 compute-0 nova_compute[259850]: 2025-10-11 04:04:44.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:04:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:04:44 compute-0 ceph-mon[74273]: pgmap v1013: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 2.4 MiB/s wr, 137 op/s
Oct 11 04:04:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Oct 11 04:04:46 compute-0 nova_compute[259850]: 2025-10-11 04:04:46.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:04:46 compute-0 ceph-mon[74273]: pgmap v1014: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Oct 11 04:04:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Oct 11 04:04:48 compute-0 ceph-mon[74273]: pgmap v1015: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Oct 11 04:04:49 compute-0 nova_compute[259850]: 2025-10-11 04:04:49.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:04:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 KiB/s wr, 37 op/s
Oct 11 04:04:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:04:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:04:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2763077875' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:04:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2763077875' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:50 compute-0 ceph-mon[74273]: pgmap v1016: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 KiB/s wr, 37 op/s
Oct 11 04:04:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2763077875' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2763077875' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:04:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:04:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:04:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:04:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:04:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:04:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.5 KiB/s wr, 32 op/s
Oct 11 04:04:51 compute-0 nova_compute[259850]: 2025-10-11 04:04:51.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:04:52 compute-0 ceph-mon[74273]: pgmap v1017: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.5 KiB/s wr, 32 op/s
Oct 11 04:04:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.2 KiB/s wr, 33 op/s
Oct 11 04:04:54 compute-0 nova_compute[259850]: 2025-10-11 04:04:54.305 2 DEBUG oslo_concurrency.lockutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Acquiring lock "e607828c-0677-46ba-a7a0-b9d21be4149e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:04:54 compute-0 nova_compute[259850]: 2025-10-11 04:04:54.306 2 DEBUG oslo_concurrency.lockutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "e607828c-0677-46ba-a7a0-b9d21be4149e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:04:54 compute-0 nova_compute[259850]: 2025-10-11 04:04:54.361 2 DEBUG nova.compute.manager [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:04:54 compute-0 nova_compute[259850]: 2025-10-11 04:04:54.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:04:54 compute-0 nova_compute[259850]: 2025-10-11 04:04:54.661 2 DEBUG oslo_concurrency.lockutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:04:54 compute-0 nova_compute[259850]: 2025-10-11 04:04:54.662 2 DEBUG oslo_concurrency.lockutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:04:54 compute-0 nova_compute[259850]: 2025-10-11 04:04:54.671 2 DEBUG nova.virt.hardware [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:04:54 compute-0 nova_compute[259850]: 2025-10-11 04:04:54.672 2 INFO nova.compute.claims [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:04:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:04:54 compute-0 nova_compute[259850]: 2025-10-11 04:04:54.762 2 DEBUG oslo_concurrency.processutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:04:54 compute-0 ceph-mon[74273]: pgmap v1018: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.2 KiB/s wr, 33 op/s
Oct 11 04:04:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:04:55 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1345766497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:04:55 compute-0 nova_compute[259850]: 2025-10-11 04:04:55.209 2 DEBUG oslo_concurrency.processutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:04:55 compute-0 nova_compute[259850]: 2025-10-11 04:04:55.217 2 DEBUG nova.compute.provider_tree [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:04:55 compute-0 nova_compute[259850]: 2025-10-11 04:04:55.250 2 DEBUG nova.scheduler.client.report [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:04:55 compute-0 nova_compute[259850]: 2025-10-11 04:04:55.287 2 DEBUG oslo_concurrency.lockutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:04:55 compute-0 nova_compute[259850]: 2025-10-11 04:04:55.289 2 DEBUG nova.compute.manager [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:04:55 compute-0 nova_compute[259850]: 2025-10-11 04:04:55.364 2 DEBUG nova.compute.manager [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:04:55 compute-0 nova_compute[259850]: 2025-10-11 04:04:55.364 2 DEBUG nova.network.neutron [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:04:55 compute-0 nova_compute[259850]: 2025-10-11 04:04:55.392 2 INFO nova.virt.libvirt.driver [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:04:55 compute-0 nova_compute[259850]: 2025-10-11 04:04:55.408 2 DEBUG nova.compute.manager [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:04:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1.3 KiB/s wr, 3 op/s
Oct 11 04:04:55 compute-0 nova_compute[259850]: 2025-10-11 04:04:55.558 2 DEBUG nova.compute.manager [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:04:55 compute-0 nova_compute[259850]: 2025-10-11 04:04:55.559 2 DEBUG nova.virt.libvirt.driver [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:04:55 compute-0 nova_compute[259850]: 2025-10-11 04:04:55.559 2 INFO nova.virt.libvirt.driver [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Creating image(s)
Oct 11 04:04:55 compute-0 nova_compute[259850]: 2025-10-11 04:04:55.586 2 DEBUG nova.storage.rbd_utils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] rbd image e607828c-0677-46ba-a7a0-b9d21be4149e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:04:55 compute-0 nova_compute[259850]: 2025-10-11 04:04:55.610 2 DEBUG nova.storage.rbd_utils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] rbd image e607828c-0677-46ba-a7a0-b9d21be4149e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:04:55 compute-0 nova_compute[259850]: 2025-10-11 04:04:55.632 2 DEBUG nova.storage.rbd_utils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] rbd image e607828c-0677-46ba-a7a0-b9d21be4149e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:04:55 compute-0 nova_compute[259850]: 2025-10-11 04:04:55.635 2 DEBUG oslo_concurrency.processutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:04:55 compute-0 nova_compute[259850]: 2025-10-11 04:04:55.727 2 DEBUG oslo_concurrency.processutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:04:55 compute-0 nova_compute[259850]: 2025-10-11 04:04:55.728 2 DEBUG oslo_concurrency.lockutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Acquiring lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:04:55 compute-0 nova_compute[259850]: 2025-10-11 04:04:55.730 2 DEBUG oslo_concurrency.lockutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:04:55 compute-0 nova_compute[259850]: 2025-10-11 04:04:55.730 2 DEBUG oslo_concurrency.lockutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:04:55 compute-0 nova_compute[259850]: 2025-10-11 04:04:55.762 2 DEBUG nova.storage.rbd_utils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] rbd image e607828c-0677-46ba-a7a0-b9d21be4149e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:04:55 compute-0 nova_compute[259850]: 2025-10-11 04:04:55.766 2 DEBUG oslo_concurrency.processutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac e607828c-0677-46ba-a7a0-b9d21be4149e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:04:55 compute-0 nova_compute[259850]: 2025-10-11 04:04:55.795 2 DEBUG nova.policy [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '715d3ecfd40048a08fd0c9f8dc437cd6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a56f57f119b24e77bd165887162ef538', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:04:55 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1345766497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:04:55 compute-0 sudo[269538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:04:55 compute-0 sudo[269538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:04:55 compute-0 sudo[269538]: pam_unix(sudo:session): session closed for user root
Oct 11 04:04:56 compute-0 nova_compute[259850]: 2025-10-11 04:04:56.036 2 DEBUG oslo_concurrency.processutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac e607828c-0677-46ba-a7a0-b9d21be4149e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:04:56 compute-0 sudo[269565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:04:56 compute-0 sudo[269565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:04:56 compute-0 sudo[269565]: pam_unix(sudo:session): session closed for user root
Oct 11 04:04:56 compute-0 nova_compute[259850]: 2025-10-11 04:04:56.096 2 DEBUG nova.storage.rbd_utils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] resizing rbd image e607828c-0677-46ba-a7a0-b9d21be4149e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:04:56 compute-0 sudo[269608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:04:56 compute-0 sudo[269608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:04:56 compute-0 sudo[269608]: pam_unix(sudo:session): session closed for user root
Oct 11 04:04:56 compute-0 nova_compute[259850]: 2025-10-11 04:04:56.199 2 DEBUG nova.objects.instance [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lazy-loading 'migration_context' on Instance uuid e607828c-0677-46ba-a7a0-b9d21be4149e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:04:56 compute-0 nova_compute[259850]: 2025-10-11 04:04:56.215 2 DEBUG nova.virt.libvirt.driver [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:04:56 compute-0 nova_compute[259850]: 2025-10-11 04:04:56.216 2 DEBUG nova.virt.libvirt.driver [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Ensure instance console log exists: /var/lib/nova/instances/e607828c-0677-46ba-a7a0-b9d21be4149e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:04:56 compute-0 nova_compute[259850]: 2025-10-11 04:04:56.216 2 DEBUG oslo_concurrency.lockutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:04:56 compute-0 nova_compute[259850]: 2025-10-11 04:04:56.216 2 DEBUG oslo_concurrency.lockutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:04:56 compute-0 nova_compute[259850]: 2025-10-11 04:04:56.217 2 DEBUG oslo_concurrency.lockutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:04:56 compute-0 sudo[269669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:04:56 compute-0 sudo[269669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:04:56 compute-0 nova_compute[259850]: 2025-10-11 04:04:56.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:04:56 compute-0 nova_compute[259850]: 2025-10-11 04:04:56.729 2 DEBUG nova.network.neutron [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Successfully created port: 7f528a6a-5bee-4ea6-ba46-7b56d53b170b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:04:56 compute-0 ceph-mon[74273]: pgmap v1019: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1.3 KiB/s wr, 3 op/s
Oct 11 04:04:56 compute-0 sudo[269669]: pam_unix(sudo:session): session closed for user root
Oct 11 04:04:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:04:56 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:04:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:04:56 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:04:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:04:56 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:04:56 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 59bd7a20-925f-4784-bf7f-c2c4fdaed696 does not exist
Oct 11 04:04:56 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev e713bda7-ad30-4c0b-a6a7-dce6ff0db187 does not exist
Oct 11 04:04:56 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 5f6e1219-fba0-433d-bdd0-9994e5db56b0 does not exist
Oct 11 04:04:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:04:56 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:04:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:04:56 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:04:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:04:56 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:04:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:04:56 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/497647001' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:04:56 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/497647001' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:57 compute-0 sudo[269744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:04:57 compute-0 sudo[269744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:04:57 compute-0 sudo[269744]: pam_unix(sudo:session): session closed for user root
Oct 11 04:04:57 compute-0 sudo[269781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:04:57 compute-0 sudo[269781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:04:57 compute-0 sudo[269781]: pam_unix(sudo:session): session closed for user root
Oct 11 04:04:57 compute-0 podman[269768]: 2025-10-11 04:04:57.081254805 +0000 UTC m=+0.057636199 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 11 04:04:57 compute-0 podman[269769]: 2025-10-11 04:04:57.106933416 +0000 UTC m=+0.081967132 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, container_name=iscsid)
Oct 11 04:04:57 compute-0 sudo[269835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:04:57 compute-0 sudo[269835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:04:57 compute-0 sudo[269835]: pam_unix(sudo:session): session closed for user root
Oct 11 04:04:57 compute-0 sudo[269860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:04:57 compute-0 sudo[269860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:04:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1.3 KiB/s wr, 3 op/s
Oct 11 04:04:57 compute-0 podman[269925]: 2025-10-11 04:04:57.617876699 +0000 UTC m=+0.067050433 container create e37feb2df88a35492773227e63d624791a6917c9d64322ce41df3cbadd13a20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cartwright, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:04:57 compute-0 systemd[1]: Started libpod-conmon-e37feb2df88a35492773227e63d624791a6917c9d64322ce41df3cbadd13a20a.scope.
Oct 11 04:04:57 compute-0 podman[269925]: 2025-10-11 04:04:57.588922516 +0000 UTC m=+0.038096300 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:04:57 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:04:57 compute-0 podman[269925]: 2025-10-11 04:04:57.72301138 +0000 UTC m=+0.172185144 container init e37feb2df88a35492773227e63d624791a6917c9d64322ce41df3cbadd13a20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cartwright, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:04:57 compute-0 podman[269925]: 2025-10-11 04:04:57.737514087 +0000 UTC m=+0.186687821 container start e37feb2df88a35492773227e63d624791a6917c9d64322ce41df3cbadd13a20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cartwright, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:04:57 compute-0 podman[269925]: 2025-10-11 04:04:57.742479296 +0000 UTC m=+0.191653040 container attach e37feb2df88a35492773227e63d624791a6917c9d64322ce41df3cbadd13a20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cartwright, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:04:57 compute-0 mystifying_cartwright[269942]: 167 167
Oct 11 04:04:57 compute-0 systemd[1]: libpod-e37feb2df88a35492773227e63d624791a6917c9d64322ce41df3cbadd13a20a.scope: Deactivated successfully.
Oct 11 04:04:57 compute-0 podman[269925]: 2025-10-11 04:04:57.749241766 +0000 UTC m=+0.198415500 container died e37feb2df88a35492773227e63d624791a6917c9d64322ce41df3cbadd13a20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Oct 11 04:04:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-604e255728af495c44f33146b443efc67e5c7623e2f1342898fdeb8425559bb9-merged.mount: Deactivated successfully.
Oct 11 04:04:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:04:57 compute-0 podman[269925]: 2025-10-11 04:04:57.807267685 +0000 UTC m=+0.256441429 container remove e37feb2df88a35492773227e63d624791a6917c9d64322ce41df3cbadd13a20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 04:04:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:04:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:04:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:04:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:04:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:04:57 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/497647001' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:57 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/497647001' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:57 compute-0 systemd[1]: libpod-conmon-e37feb2df88a35492773227e63d624791a6917c9d64322ce41df3cbadd13a20a.scope: Deactivated successfully.
Oct 11 04:04:58 compute-0 podman[269965]: 2025-10-11 04:04:58.075032341 +0000 UTC m=+0.075159950 container create d9c0aadc070fe8d67b8a32862431678be3d4e2431b8e9c0bcaea630d1b6d14d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_roentgen, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:04:58 compute-0 nova_compute[259850]: 2025-10-11 04:04:58.123 2 DEBUG nova.network.neutron [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Successfully updated port: 7f528a6a-5bee-4ea6-ba46-7b56d53b170b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:04:58 compute-0 systemd[1]: Started libpod-conmon-d9c0aadc070fe8d67b8a32862431678be3d4e2431b8e9c0bcaea630d1b6d14d7.scope.
Oct 11 04:04:58 compute-0 podman[269965]: 2025-10-11 04:04:58.042995782 +0000 UTC m=+0.043123441 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:04:58 compute-0 nova_compute[259850]: 2025-10-11 04:04:58.159 2 DEBUG oslo_concurrency.lockutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Acquiring lock "refresh_cache-e607828c-0677-46ba-a7a0-b9d21be4149e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:04:58 compute-0 nova_compute[259850]: 2025-10-11 04:04:58.160 2 DEBUG oslo_concurrency.lockutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Acquired lock "refresh_cache-e607828c-0677-46ba-a7a0-b9d21be4149e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:04:58 compute-0 nova_compute[259850]: 2025-10-11 04:04:58.160 2 DEBUG nova.network.neutron [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:04:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69cbf546e69d3c0d73b25c8281e75a1f707ca9be3189a09a165cb5d12492291/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69cbf546e69d3c0d73b25c8281e75a1f707ca9be3189a09a165cb5d12492291/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69cbf546e69d3c0d73b25c8281e75a1f707ca9be3189a09a165cb5d12492291/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69cbf546e69d3c0d73b25c8281e75a1f707ca9be3189a09a165cb5d12492291/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69cbf546e69d3c0d73b25c8281e75a1f707ca9be3189a09a165cb5d12492291/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:04:58 compute-0 podman[269965]: 2025-10-11 04:04:58.190717239 +0000 UTC m=+0.190844858 container init d9c0aadc070fe8d67b8a32862431678be3d4e2431b8e9c0bcaea630d1b6d14d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_roentgen, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 04:04:58 compute-0 podman[269965]: 2025-10-11 04:04:58.206677187 +0000 UTC m=+0.206804766 container start d9c0aadc070fe8d67b8a32862431678be3d4e2431b8e9c0bcaea630d1b6d14d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_roentgen, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:04:58 compute-0 podman[269965]: 2025-10-11 04:04:58.210579866 +0000 UTC m=+0.210707445 container attach d9c0aadc070fe8d67b8a32862431678be3d4e2431b8e9c0bcaea630d1b6d14d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 04:04:58 compute-0 nova_compute[259850]: 2025-10-11 04:04:58.411 2 DEBUG nova.compute.manager [req-f92e77f5-130c-44bc-ae72-e79e7d08733e req-a53d0710-7903-4504-bc34-5c63bcdaaf98 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Received event network-changed-7f528a6a-5bee-4ea6-ba46-7b56d53b170b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:04:58 compute-0 nova_compute[259850]: 2025-10-11 04:04:58.411 2 DEBUG nova.compute.manager [req-f92e77f5-130c-44bc-ae72-e79e7d08733e req-a53d0710-7903-4504-bc34-5c63bcdaaf98 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Refreshing instance network info cache due to event network-changed-7f528a6a-5bee-4ea6-ba46-7b56d53b170b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:04:58 compute-0 nova_compute[259850]: 2025-10-11 04:04:58.412 2 DEBUG oslo_concurrency.lockutils [req-f92e77f5-130c-44bc-ae72-e79e7d08733e req-a53d0710-7903-4504-bc34-5c63bcdaaf98 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-e607828c-0677-46ba-a7a0-b9d21be4149e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:04:58 compute-0 nova_compute[259850]: 2025-10-11 04:04:58.485 2 DEBUG nova.network.neutron [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:04:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:04:58 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3765773090' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:04:58 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3765773090' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:58 compute-0 ceph-mon[74273]: pgmap v1020: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1.3 KiB/s wr, 3 op/s
Oct 11 04:04:58 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3765773090' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:04:58 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3765773090' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:04:59 compute-0 amazing_roentgen[269982]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:04:59 compute-0 amazing_roentgen[269982]: --> relative data size: 1.0
Oct 11 04:04:59 compute-0 amazing_roentgen[269982]: --> All data devices are unavailable
Oct 11 04:04:59 compute-0 systemd[1]: libpod-d9c0aadc070fe8d67b8a32862431678be3d4e2431b8e9c0bcaea630d1b6d14d7.scope: Deactivated successfully.
Oct 11 04:04:59 compute-0 podman[269965]: 2025-10-11 04:04:59.339961909 +0000 UTC m=+1.340089498 container died d9c0aadc070fe8d67b8a32862431678be3d4e2431b8e9c0bcaea630d1b6d14d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_roentgen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 04:04:59 compute-0 systemd[1]: libpod-d9c0aadc070fe8d67b8a32862431678be3d4e2431b8e9c0bcaea630d1b6d14d7.scope: Consumed 1.081s CPU time.
Oct 11 04:04:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-b69cbf546e69d3c0d73b25c8281e75a1f707ca9be3189a09a165cb5d12492291-merged.mount: Deactivated successfully.
Oct 11 04:04:59 compute-0 nova_compute[259850]: 2025-10-11 04:04:59.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:04:59 compute-0 podman[269965]: 2025-10-11 04:04:59.434304656 +0000 UTC m=+1.434432235 container remove d9c0aadc070fe8d67b8a32862431678be3d4e2431b8e9c0bcaea630d1b6d14d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 11 04:04:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Oct 11 04:04:59 compute-0 systemd[1]: libpod-conmon-d9c0aadc070fe8d67b8a32862431678be3d4e2431b8e9c0bcaea630d1b6d14d7.scope: Deactivated successfully.
Oct 11 04:04:59 compute-0 sudo[269860]: pam_unix(sudo:session): session closed for user root
Oct 11 04:04:59 compute-0 sudo[270023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:04:59 compute-0 sudo[270023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:04:59 compute-0 sudo[270023]: pam_unix(sudo:session): session closed for user root
Oct 11 04:04:59 compute-0 sudo[270048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:04:59 compute-0 sudo[270048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:04:59 compute-0 sudo[270048]: pam_unix(sudo:session): session closed for user root
Oct 11 04:04:59 compute-0 sudo[270073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:04:59 compute-0 sudo[270073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:04:59 compute-0 sudo[270073]: pam_unix(sudo:session): session closed for user root
Oct 11 04:04:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:04:59 compute-0 sudo[270098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 04:04:59 compute-0 sudo[270098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:04:59 compute-0 nova_compute[259850]: 2025-10-11 04:04:59.935 2 DEBUG nova.network.neutron [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Updating instance_info_cache with network_info: [{"id": "7f528a6a-5bee-4ea6-ba46-7b56d53b170b", "address": "fa:16:3e:33:ea:29", "network": {"id": "01ca7d7a-ab7e-4753-9e65-58d83786bdc8", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1985263256-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a56f57f119b24e77bd165887162ef538", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f528a6a-5b", "ovs_interfaceid": "7f528a6a-5bee-4ea6-ba46-7b56d53b170b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:04:59 compute-0 nova_compute[259850]: 2025-10-11 04:04:59.958 2 DEBUG oslo_concurrency.lockutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Releasing lock "refresh_cache-e607828c-0677-46ba-a7a0-b9d21be4149e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:04:59 compute-0 nova_compute[259850]: 2025-10-11 04:04:59.958 2 DEBUG nova.compute.manager [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Instance network_info: |[{"id": "7f528a6a-5bee-4ea6-ba46-7b56d53b170b", "address": "fa:16:3e:33:ea:29", "network": {"id": "01ca7d7a-ab7e-4753-9e65-58d83786bdc8", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1985263256-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a56f57f119b24e77bd165887162ef538", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f528a6a-5b", "ovs_interfaceid": "7f528a6a-5bee-4ea6-ba46-7b56d53b170b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:04:59 compute-0 nova_compute[259850]: 2025-10-11 04:04:59.958 2 DEBUG oslo_concurrency.lockutils [req-f92e77f5-130c-44bc-ae72-e79e7d08733e req-a53d0710-7903-4504-bc34-5c63bcdaaf98 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-e607828c-0677-46ba-a7a0-b9d21be4149e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:04:59 compute-0 nova_compute[259850]: 2025-10-11 04:04:59.959 2 DEBUG nova.network.neutron [req-f92e77f5-130c-44bc-ae72-e79e7d08733e req-a53d0710-7903-4504-bc34-5c63bcdaaf98 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Refreshing network info cache for port 7f528a6a-5bee-4ea6-ba46-7b56d53b170b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:04:59 compute-0 nova_compute[259850]: 2025-10-11 04:04:59.962 2 DEBUG nova.virt.libvirt.driver [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Start _get_guest_xml network_info=[{"id": "7f528a6a-5bee-4ea6-ba46-7b56d53b170b", "address": "fa:16:3e:33:ea:29", "network": {"id": "01ca7d7a-ab7e-4753-9e65-58d83786bdc8", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1985263256-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a56f57f119b24e77bd165887162ef538", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f528a6a-5b", "ovs_interfaceid": "7f528a6a-5bee-4ea6-ba46-7b56d53b170b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'image_id': '1a107e2f-1a9d-4b6f-861d-e64bee7d56be'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:04:59 compute-0 nova_compute[259850]: 2025-10-11 04:04:59.969 2 WARNING nova.virt.libvirt.driver [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:04:59 compute-0 nova_compute[259850]: 2025-10-11 04:04:59.975 2 DEBUG nova.virt.libvirt.host [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:04:59 compute-0 nova_compute[259850]: 2025-10-11 04:04:59.976 2 DEBUG nova.virt.libvirt.host [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:04:59 compute-0 nova_compute[259850]: 2025-10-11 04:04:59.981 2 DEBUG nova.virt.libvirt.host [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:04:59 compute-0 nova_compute[259850]: 2025-10-11 04:04:59.982 2 DEBUG nova.virt.libvirt.host [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:04:59 compute-0 nova_compute[259850]: 2025-10-11 04:04:59.982 2 DEBUG nova.virt.libvirt.driver [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:04:59 compute-0 nova_compute[259850]: 2025-10-11 04:04:59.982 2 DEBUG nova.virt.hardware [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:04:59 compute-0 nova_compute[259850]: 2025-10-11 04:04:59.983 2 DEBUG nova.virt.hardware [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:04:59 compute-0 nova_compute[259850]: 2025-10-11 04:04:59.983 2 DEBUG nova.virt.hardware [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:04:59 compute-0 nova_compute[259850]: 2025-10-11 04:04:59.983 2 DEBUG nova.virt.hardware [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:04:59 compute-0 nova_compute[259850]: 2025-10-11 04:04:59.983 2 DEBUG nova.virt.hardware [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:04:59 compute-0 nova_compute[259850]: 2025-10-11 04:04:59.984 2 DEBUG nova.virt.hardware [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:04:59 compute-0 nova_compute[259850]: 2025-10-11 04:04:59.984 2 DEBUG nova.virt.hardware [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:04:59 compute-0 nova_compute[259850]: 2025-10-11 04:04:59.984 2 DEBUG nova.virt.hardware [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:04:59 compute-0 nova_compute[259850]: 2025-10-11 04:04:59.985 2 DEBUG nova.virt.hardware [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:04:59 compute-0 nova_compute[259850]: 2025-10-11 04:04:59.985 2 DEBUG nova.virt.hardware [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:04:59 compute-0 nova_compute[259850]: 2025-10-11 04:04:59.985 2 DEBUG nova.virt.hardware [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:04:59 compute-0 nova_compute[259850]: 2025-10-11 04:04:59.988 2 DEBUG oslo_concurrency.processutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:00 compute-0 podman[270165]: 2025-10-11 04:05:00.10775412 +0000 UTC m=+0.047702630 container create b3f1e43eef1342af487f8279b7eed522fb029ffec71fd9afea4da8558c25abf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:05:00 compute-0 systemd[1]: Started libpod-conmon-b3f1e43eef1342af487f8279b7eed522fb029ffec71fd9afea4da8558c25abf2.scope.
Oct 11 04:05:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:05:00 compute-0 podman[270165]: 2025-10-11 04:05:00.089786806 +0000 UTC m=+0.029735346 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:05:00 compute-0 podman[270165]: 2025-10-11 04:05:00.198895628 +0000 UTC m=+0.138844188 container init b3f1e43eef1342af487f8279b7eed522fb029ffec71fd9afea4da8558c25abf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_carson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:05:00 compute-0 podman[270165]: 2025-10-11 04:05:00.207832669 +0000 UTC m=+0.147781209 container start b3f1e43eef1342af487f8279b7eed522fb029ffec71fd9afea4da8558c25abf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_carson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:05:00 compute-0 podman[270165]: 2025-10-11 04:05:00.211835682 +0000 UTC m=+0.151784192 container attach b3f1e43eef1342af487f8279b7eed522fb029ffec71fd9afea4da8558c25abf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_carson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:05:00 compute-0 nervous_carson[270200]: 167 167
Oct 11 04:05:00 compute-0 systemd[1]: libpod-b3f1e43eef1342af487f8279b7eed522fb029ffec71fd9afea4da8558c25abf2.scope: Deactivated successfully.
Oct 11 04:05:00 compute-0 podman[270165]: 2025-10-11 04:05:00.217382407 +0000 UTC m=+0.157330927 container died b3f1e43eef1342af487f8279b7eed522fb029ffec71fd9afea4da8558c25abf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_carson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 04:05:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3be8604bf74d67fe33c412ad258d673d72feb65170b9863acd7fe6d5c69d565-merged.mount: Deactivated successfully.
Oct 11 04:05:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:05:00 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3112692717' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:05:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:05:00 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3112692717' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:05:00 compute-0 podman[270165]: 2025-10-11 04:05:00.257318338 +0000 UTC m=+0.197266848 container remove b3f1e43eef1342af487f8279b7eed522fb029ffec71fd9afea4da8558c25abf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:05:00 compute-0 systemd[1]: libpod-conmon-b3f1e43eef1342af487f8279b7eed522fb029ffec71fd9afea4da8558c25abf2.scope: Deactivated successfully.
Oct 11 04:05:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:05:00 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1350504531' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:05:00 compute-0 podman[270224]: 2025-10-11 04:05:00.435941632 +0000 UTC m=+0.044969304 container create 7ff4b87206de97124cf5b4ac85ba426a25df4408a45c6bd3a77bd22fdbaa5d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bassi, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:05:00 compute-0 nova_compute[259850]: 2025-10-11 04:05:00.436 2 DEBUG oslo_concurrency.processutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:00 compute-0 nova_compute[259850]: 2025-10-11 04:05:00.467 2 DEBUG nova.storage.rbd_utils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] rbd image e607828c-0677-46ba-a7a0-b9d21be4149e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:05:00 compute-0 systemd[1]: Started libpod-conmon-7ff4b87206de97124cf5b4ac85ba426a25df4408a45c6bd3a77bd22fdbaa5d12.scope.
Oct 11 04:05:00 compute-0 nova_compute[259850]: 2025-10-11 04:05:00.473 2 DEBUG oslo_concurrency.processutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:05:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64b3da72e0708707082979faadc82f7d2df387707c3a330715f5b95fac7335e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:05:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64b3da72e0708707082979faadc82f7d2df387707c3a330715f5b95fac7335e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:05:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64b3da72e0708707082979faadc82f7d2df387707c3a330715f5b95fac7335e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:05:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64b3da72e0708707082979faadc82f7d2df387707c3a330715f5b95fac7335e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:05:00 compute-0 podman[270224]: 2025-10-11 04:05:00.509508268 +0000 UTC m=+0.118535939 container init 7ff4b87206de97124cf5b4ac85ba426a25df4408a45c6bd3a77bd22fdbaa5d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bassi, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:05:00 compute-0 podman[270224]: 2025-10-11 04:05:00.418439091 +0000 UTC m=+0.027466782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:05:00 compute-0 podman[270224]: 2025-10-11 04:05:00.516457403 +0000 UTC m=+0.125485074 container start 7ff4b87206de97124cf5b4ac85ba426a25df4408a45c6bd3a77bd22fdbaa5d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bassi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:05:00 compute-0 podman[270224]: 2025-10-11 04:05:00.519256001 +0000 UTC m=+0.128283672 container attach 7ff4b87206de97124cf5b4ac85ba426a25df4408a45c6bd3a77bd22fdbaa5d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bassi, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 04:05:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:05:00 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/347245407' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:05:00 compute-0 nova_compute[259850]: 2025-10-11 04:05:00.899 2 DEBUG oslo_concurrency.processutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:00 compute-0 nova_compute[259850]: 2025-10-11 04:05:00.900 2 DEBUG nova.virt.libvirt.vif [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:04:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1301380731',display_name='tempest-VolumesActionsTest-instance-1301380731',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1301380731',id=2,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a56f57f119b24e77bd165887162ef538',ramdisk_id='',reservation_id='r-qzrhq9bt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-27294957',owner_user_name='tempest-VolumesActionsTest-27294957-project-
member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:04:55Z,user_data=None,user_id='715d3ecfd40048a08fd0c9f8dc437cd6',uuid=e607828c-0677-46ba-a7a0-b9d21be4149e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f528a6a-5bee-4ea6-ba46-7b56d53b170b", "address": "fa:16:3e:33:ea:29", "network": {"id": "01ca7d7a-ab7e-4753-9e65-58d83786bdc8", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1985263256-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a56f57f119b24e77bd165887162ef538", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f528a6a-5b", "ovs_interfaceid": "7f528a6a-5bee-4ea6-ba46-7b56d53b170b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:05:00 compute-0 nova_compute[259850]: 2025-10-11 04:05:00.901 2 DEBUG nova.network.os_vif_util [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Converting VIF {"id": "7f528a6a-5bee-4ea6-ba46-7b56d53b170b", "address": "fa:16:3e:33:ea:29", "network": {"id": "01ca7d7a-ab7e-4753-9e65-58d83786bdc8", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1985263256-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a56f57f119b24e77bd165887162ef538", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f528a6a-5b", "ovs_interfaceid": "7f528a6a-5bee-4ea6-ba46-7b56d53b170b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:05:00 compute-0 nova_compute[259850]: 2025-10-11 04:05:00.902 2 DEBUG nova.network.os_vif_util [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:ea:29,bridge_name='br-int',has_traffic_filtering=True,id=7f528a6a-5bee-4ea6-ba46-7b56d53b170b,network=Network(01ca7d7a-ab7e-4753-9e65-58d83786bdc8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f528a6a-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:05:00 compute-0 nova_compute[259850]: 2025-10-11 04:05:00.903 2 DEBUG nova.objects.instance [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lazy-loading 'pci_devices' on Instance uuid e607828c-0677-46ba-a7a0-b9d21be4149e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:05:00 compute-0 nova_compute[259850]: 2025-10-11 04:05:00.921 2 DEBUG nova.virt.libvirt.driver [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:05:00 compute-0 nova_compute[259850]:   <uuid>e607828c-0677-46ba-a7a0-b9d21be4149e</uuid>
Oct 11 04:05:00 compute-0 nova_compute[259850]:   <name>instance-00000002</name>
Oct 11 04:05:00 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:05:00 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:05:00 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <nova:name>tempest-VolumesActionsTest-instance-1301380731</nova:name>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:04:59</nova:creationTime>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:05:00 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:05:00 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:05:00 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:05:00 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:05:00 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:05:00 compute-0 nova_compute[259850]:         <nova:user uuid="715d3ecfd40048a08fd0c9f8dc437cd6">tempest-VolumesActionsTest-27294957-project-member</nova:user>
Oct 11 04:05:00 compute-0 nova_compute[259850]:         <nova:project uuid="a56f57f119b24e77bd165887162ef538">tempest-VolumesActionsTest-27294957</nova:project>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <nova:root type="image" uuid="1a107e2f-1a9d-4b6f-861d-e64bee7d56be"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <nova:ports>
Oct 11 04:05:00 compute-0 nova_compute[259850]:         <nova:port uuid="7f528a6a-5bee-4ea6-ba46-7b56d53b170b">
Oct 11 04:05:00 compute-0 nova_compute[259850]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:         </nova:port>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       </nova:ports>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:05:00 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:05:00 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <system>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <entry name="serial">e607828c-0677-46ba-a7a0-b9d21be4149e</entry>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <entry name="uuid">e607828c-0677-46ba-a7a0-b9d21be4149e</entry>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     </system>
Oct 11 04:05:00 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:05:00 compute-0 nova_compute[259850]:   <os>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:   </os>
Oct 11 04:05:00 compute-0 nova_compute[259850]:   <features>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:   </features>
Oct 11 04:05:00 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:05:00 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:05:00 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/e607828c-0677-46ba-a7a0-b9d21be4149e_disk">
Oct 11 04:05:00 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       </source>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:05:00 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/e607828c-0677-46ba-a7a0-b9d21be4149e_disk.config">
Oct 11 04:05:00 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       </source>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:05:00 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <interface type="ethernet">
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <mac address="fa:16:3e:33:ea:29"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <mtu size="1442"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <target dev="tap7f528a6a-5b"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     </interface>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/e607828c-0677-46ba-a7a0-b9d21be4149e/console.log" append="off"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <video>
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     </video>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:05:00 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:05:00 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:05:00 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:05:00 compute-0 nova_compute[259850]: </domain>
Oct 11 04:05:00 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:05:00 compute-0 nova_compute[259850]: 2025-10-11 04:05:00.921 2 DEBUG nova.compute.manager [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Preparing to wait for external event network-vif-plugged-7f528a6a-5bee-4ea6-ba46-7b56d53b170b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:05:00 compute-0 nova_compute[259850]: 2025-10-11 04:05:00.921 2 DEBUG oslo_concurrency.lockutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Acquiring lock "e607828c-0677-46ba-a7a0-b9d21be4149e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:00 compute-0 nova_compute[259850]: 2025-10-11 04:05:00.922 2 DEBUG oslo_concurrency.lockutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "e607828c-0677-46ba-a7a0-b9d21be4149e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:00 compute-0 nova_compute[259850]: 2025-10-11 04:05:00.922 2 DEBUG oslo_concurrency.lockutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "e607828c-0677-46ba-a7a0-b9d21be4149e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:00 compute-0 nova_compute[259850]: 2025-10-11 04:05:00.923 2 DEBUG nova.virt.libvirt.vif [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:04:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1301380731',display_name='tempest-VolumesActionsTest-instance-1301380731',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1301380731',id=2,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a56f57f119b24e77bd165887162ef538',ramdisk_id='',reservation_id='r-qzrhq9bt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-27294957',owner_user_name='tempest-VolumesActionsTest-27294957-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:04:55Z,user_data=None,user_id='715d3ecfd40048a08fd0c9f8dc437cd6',uuid=e607828c-0677-46ba-a7a0-b9d21be4149e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f528a6a-5bee-4ea6-ba46-7b56d53b170b", "address": "fa:16:3e:33:ea:29", "network": {"id": "01ca7d7a-ab7e-4753-9e65-58d83786bdc8", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1985263256-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a56f57f119b24e77bd165887162ef538", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f528a6a-5b", "ovs_interfaceid": "7f528a6a-5bee-4ea6-ba46-7b56d53b170b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:05:00 compute-0 nova_compute[259850]: 2025-10-11 04:05:00.923 2 DEBUG nova.network.os_vif_util [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Converting VIF {"id": "7f528a6a-5bee-4ea6-ba46-7b56d53b170b", "address": "fa:16:3e:33:ea:29", "network": {"id": "01ca7d7a-ab7e-4753-9e65-58d83786bdc8", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1985263256-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a56f57f119b24e77bd165887162ef538", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f528a6a-5b", "ovs_interfaceid": "7f528a6a-5bee-4ea6-ba46-7b56d53b170b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:05:00 compute-0 nova_compute[259850]: 2025-10-11 04:05:00.924 2 DEBUG nova.network.os_vif_util [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:ea:29,bridge_name='br-int',has_traffic_filtering=True,id=7f528a6a-5bee-4ea6-ba46-7b56d53b170b,network=Network(01ca7d7a-ab7e-4753-9e65-58d83786bdc8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f528a6a-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:05:00 compute-0 nova_compute[259850]: 2025-10-11 04:05:00.924 2 DEBUG os_vif [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:ea:29,bridge_name='br-int',has_traffic_filtering=True,id=7f528a6a-5bee-4ea6-ba46-7b56d53b170b,network=Network(01ca7d7a-ab7e-4753-9e65-58d83786bdc8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f528a6a-5b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:05:00 compute-0 nova_compute[259850]: 2025-10-11 04:05:00.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:00 compute-0 nova_compute[259850]: 2025-10-11 04:05:00.925 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:05:00 compute-0 nova_compute[259850]: 2025-10-11 04:05:00.926 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:05:00 compute-0 nova_compute[259850]: 2025-10-11 04:05:00.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:00 compute-0 nova_compute[259850]: 2025-10-11 04:05:00.929 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7f528a6a-5b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:05:00 compute-0 nova_compute[259850]: 2025-10-11 04:05:00.930 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7f528a6a-5b, col_values=(('external_ids', {'iface-id': '7f528a6a-5bee-4ea6-ba46-7b56d53b170b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:ea:29', 'vm-uuid': 'e607828c-0677-46ba-a7a0-b9d21be4149e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:05:00 compute-0 nova_compute[259850]: 2025-10-11 04:05:00.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:00 compute-0 NetworkManager[44920]: <info>  [1760155500.9325] manager: (tap7f528a6a-5b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Oct 11 04:05:00 compute-0 nova_compute[259850]: 2025-10-11 04:05:00.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:05:00 compute-0 nova_compute[259850]: 2025-10-11 04:05:00.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:00 compute-0 nova_compute[259850]: 2025-10-11 04:05:00.939 2 INFO os_vif [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:ea:29,bridge_name='br-int',has_traffic_filtering=True,id=7f528a6a-5bee-4ea6-ba46-7b56d53b170b,network=Network(01ca7d7a-ab7e-4753-9e65-58d83786bdc8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f528a6a-5b')
Oct 11 04:05:00 compute-0 ceph-mon[74273]: pgmap v1021: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Oct 11 04:05:00 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3112692717' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:05:00 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3112692717' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:05:00 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1350504531' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:05:00 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/347245407' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:05:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:05:00 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3661991307' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:05:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:05:00 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3661991307' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:05:01 compute-0 nova_compute[259850]: 2025-10-11 04:05:01.004 2 DEBUG nova.virt.libvirt.driver [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:05:01 compute-0 nova_compute[259850]: 2025-10-11 04:05:01.004 2 DEBUG nova.virt.libvirt.driver [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:05:01 compute-0 nova_compute[259850]: 2025-10-11 04:05:01.004 2 DEBUG nova.virt.libvirt.driver [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] No VIF found with MAC fa:16:3e:33:ea:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:05:01 compute-0 nova_compute[259850]: 2025-10-11 04:05:01.005 2 INFO nova.virt.libvirt.driver [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Using config drive
Oct 11 04:05:01 compute-0 nova_compute[259850]: 2025-10-11 04:05:01.027 2 DEBUG nova.storage.rbd_utils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] rbd image e607828c-0677-46ba-a7a0-b9d21be4149e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]: {
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:     "0": [
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:         {
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "devices": [
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "/dev/loop3"
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             ],
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "lv_name": "ceph_lv0",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "lv_size": "21470642176",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "name": "ceph_lv0",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "tags": {
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.cluster_name": "ceph",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.crush_device_class": "",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.encrypted": "0",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.osd_id": "0",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.type": "block",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.vdo": "0"
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             },
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "type": "block",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "vg_name": "ceph_vg0"
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:         }
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:     ],
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:     "1": [
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:         {
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "devices": [
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "/dev/loop4"
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             ],
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "lv_name": "ceph_lv1",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "lv_size": "21470642176",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "name": "ceph_lv1",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "tags": {
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.cluster_name": "ceph",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.crush_device_class": "",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.encrypted": "0",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.osd_id": "1",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.type": "block",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.vdo": "0"
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             },
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "type": "block",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "vg_name": "ceph_vg1"
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:         }
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:     ],
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:     "2": [
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:         {
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "devices": [
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "/dev/loop5"
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             ],
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "lv_name": "ceph_lv2",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "lv_size": "21470642176",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "name": "ceph_lv2",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "tags": {
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.cluster_name": "ceph",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.crush_device_class": "",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.encrypted": "0",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.osd_id": "2",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.type": "block",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:                 "ceph.vdo": "0"
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             },
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "type": "block",
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:             "vg_name": "ceph_vg2"
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:         }
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]:     ]
Oct 11 04:05:01 compute-0 peaceful_bassi[270260]: }
Oct 11 04:05:01 compute-0 systemd[1]: libpod-7ff4b87206de97124cf5b4ac85ba426a25df4408a45c6bd3a77bd22fdbaa5d12.scope: Deactivated successfully.
Oct 11 04:05:01 compute-0 podman[270224]: 2025-10-11 04:05:01.316962413 +0000 UTC m=+0.925990104 container died 7ff4b87206de97124cf5b4ac85ba426a25df4408a45c6bd3a77bd22fdbaa5d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bassi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:05:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-64b3da72e0708707082979faadc82f7d2df387707c3a330715f5b95fac7335e4-merged.mount: Deactivated successfully.
Oct 11 04:05:01 compute-0 podman[270224]: 2025-10-11 04:05:01.369829767 +0000 UTC m=+0.978857428 container remove 7ff4b87206de97124cf5b4ac85ba426a25df4408a45c6bd3a77bd22fdbaa5d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bassi, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 04:05:01 compute-0 systemd[1]: libpod-conmon-7ff4b87206de97124cf5b4ac85ba426a25df4408a45c6bd3a77bd22fdbaa5d12.scope: Deactivated successfully.
Oct 11 04:05:01 compute-0 sudo[270098]: pam_unix(sudo:session): session closed for user root
Oct 11 04:05:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Oct 11 04:05:01 compute-0 sudo[270322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:05:01 compute-0 sudo[270322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:05:01 compute-0 sudo[270322]: pam_unix(sudo:session): session closed for user root
Oct 11 04:05:01 compute-0 nova_compute[259850]: 2025-10-11 04:05:01.510 2 DEBUG nova.network.neutron [req-f92e77f5-130c-44bc-ae72-e79e7d08733e req-a53d0710-7903-4504-bc34-5c63bcdaaf98 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Updated VIF entry in instance network info cache for port 7f528a6a-5bee-4ea6-ba46-7b56d53b170b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:05:01 compute-0 nova_compute[259850]: 2025-10-11 04:05:01.511 2 DEBUG nova.network.neutron [req-f92e77f5-130c-44bc-ae72-e79e7d08733e req-a53d0710-7903-4504-bc34-5c63bcdaaf98 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Updating instance_info_cache with network_info: [{"id": "7f528a6a-5bee-4ea6-ba46-7b56d53b170b", "address": "fa:16:3e:33:ea:29", "network": {"id": "01ca7d7a-ab7e-4753-9e65-58d83786bdc8", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1985263256-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a56f57f119b24e77bd165887162ef538", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f528a6a-5b", "ovs_interfaceid": "7f528a6a-5bee-4ea6-ba46-7b56d53b170b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:05:01 compute-0 nova_compute[259850]: 2025-10-11 04:05:01.530 2 DEBUG oslo_concurrency.lockutils [req-f92e77f5-130c-44bc-ae72-e79e7d08733e req-a53d0710-7903-4504-bc34-5c63bcdaaf98 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-e607828c-0677-46ba-a7a0-b9d21be4149e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:05:01 compute-0 nova_compute[259850]: 2025-10-11 04:05:01.563 2 INFO nova.virt.libvirt.driver [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Creating config drive at /var/lib/nova/instances/e607828c-0677-46ba-a7a0-b9d21be4149e/disk.config
Oct 11 04:05:01 compute-0 sudo[270347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:05:01 compute-0 nova_compute[259850]: 2025-10-11 04:05:01.572 2 DEBUG oslo_concurrency.processutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e607828c-0677-46ba-a7a0-b9d21be4149e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0a_lyqvd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:01 compute-0 sudo[270347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:05:01 compute-0 sudo[270347]: pam_unix(sudo:session): session closed for user root
Oct 11 04:05:01 compute-0 sudo[270373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:05:01 compute-0 sudo[270373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:05:01 compute-0 sudo[270373]: pam_unix(sudo:session): session closed for user root
Oct 11 04:05:01 compute-0 nova_compute[259850]: 2025-10-11 04:05:01.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:01 compute-0 nova_compute[259850]: 2025-10-11 04:05:01.700 2 DEBUG oslo_concurrency.processutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e607828c-0677-46ba-a7a0-b9d21be4149e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0a_lyqvd" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:01 compute-0 sudo[270400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 04:05:01 compute-0 sudo[270400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:05:01 compute-0 nova_compute[259850]: 2025-10-11 04:05:01.743 2 DEBUG nova.storage.rbd_utils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] rbd image e607828c-0677-46ba-a7a0-b9d21be4149e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:05:01 compute-0 nova_compute[259850]: 2025-10-11 04:05:01.748 2 DEBUG oslo_concurrency.processutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e607828c-0677-46ba-a7a0-b9d21be4149e/disk.config e607828c-0677-46ba-a7a0-b9d21be4149e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:01 compute-0 nova_compute[259850]: 2025-10-11 04:05:01.934 2 DEBUG oslo_concurrency.processutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e607828c-0677-46ba-a7a0-b9d21be4149e/disk.config e607828c-0677-46ba-a7a0-b9d21be4149e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:01 compute-0 nova_compute[259850]: 2025-10-11 04:05:01.935 2 INFO nova.virt.libvirt.driver [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Deleting local config drive /var/lib/nova/instances/e607828c-0677-46ba-a7a0-b9d21be4149e/disk.config because it was imported into RBD.
Oct 11 04:05:01 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3661991307' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:05:01 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3661991307' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:05:02 compute-0 kernel: tap7f528a6a-5b: entered promiscuous mode
Oct 11 04:05:02 compute-0 NetworkManager[44920]: <info>  [1760155502.0061] manager: (tap7f528a6a-5b): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Oct 11 04:05:02 compute-0 ovn_controller[152025]: 2025-10-11T04:05:02Z|00036|binding|INFO|Claiming lport 7f528a6a-5bee-4ea6-ba46-7b56d53b170b for this chassis.
Oct 11 04:05:02 compute-0 ovn_controller[152025]: 2025-10-11T04:05:02Z|00037|binding|INFO|7f528a6a-5bee-4ea6-ba46-7b56d53b170b: Claiming fa:16:3e:33:ea:29 10.100.0.5
Oct 11 04:05:02 compute-0 nova_compute[259850]: 2025-10-11 04:05:02.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:02 compute-0 nova_compute[259850]: 2025-10-11 04:05:02.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:02 compute-0 nova_compute[259850]: 2025-10-11 04:05:02.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:02.030 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:ea:29 10.100.0.5'], port_security=['fa:16:3e:33:ea:29 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e607828c-0677-46ba-a7a0-b9d21be4149e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01ca7d7a-ab7e-4753-9e65-58d83786bdc8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a56f57f119b24e77bd165887162ef538', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1d55b596-dded-4eab-874b-8812dbd6943d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3a8a1b08-6831-48aa-9bdb-0e38b6956a06, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=7f528a6a-5bee-4ea6-ba46-7b56d53b170b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:02.032 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 7f528a6a-5bee-4ea6-ba46-7b56d53b170b in datapath 01ca7d7a-ab7e-4753-9e65-58d83786bdc8 bound to our chassis
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:02.034 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01ca7d7a-ab7e-4753-9e65-58d83786bdc8
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:02.051 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[2fccdb02-1d31-4aca-97d0-86ccc071b106]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:02.052 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap01ca7d7a-a1 in ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:02.054 267637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap01ca7d7a-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:02.055 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[1898f761-dc12-415b-b607-deb9213a06a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:02.056 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[322bbf8e-2f13-419f-bc35-d24baed77096]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:02 compute-0 systemd-machined[214869]: New machine qemu-2-instance-00000002.
Oct 11 04:05:02 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:02.074 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[0f5520ef-0464-4043-a709-5299940760f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:02 compute-0 systemd-udevd[270512]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:02.103 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[92cdf825-64d1-4f33-897e-7ee02a30e0d2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:02 compute-0 NetworkManager[44920]: <info>  [1760155502.1070] device (tap7f528a6a-5b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:05:02 compute-0 NetworkManager[44920]: <info>  [1760155502.1080] device (tap7f528a6a-5b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:02.134 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[7e400d69-7b47-4efd-95cf-aa95fd9d74e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:02 compute-0 ovn_controller[152025]: 2025-10-11T04:05:02Z|00038|binding|INFO|Setting lport 7f528a6a-5bee-4ea6-ba46-7b56d53b170b ovn-installed in OVS
Oct 11 04:05:02 compute-0 ovn_controller[152025]: 2025-10-11T04:05:02Z|00039|binding|INFO|Setting lport 7f528a6a-5bee-4ea6-ba46-7b56d53b170b up in Southbound
Oct 11 04:05:02 compute-0 nova_compute[259850]: 2025-10-11 04:05:02.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:02 compute-0 systemd-udevd[270516]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:05:02 compute-0 NetworkManager[44920]: <info>  [1760155502.1414] manager: (tap01ca7d7a-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:02.140 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[dba63d03-b324-438d-be9a-66a623956f85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:02.171 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[a06893ba-a99a-4c12-985e-b8b0221661a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:02.176 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[946ab69d-ec60-4269-a429-02262edcd5e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:02 compute-0 podman[270515]: 2025-10-11 04:05:02.180438182 +0000 UTC m=+0.054991515 container create 495b868691187ed0fbf3aa6e35891b06457dc2d53c58fd8693615d2754f49631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 04:05:02 compute-0 NetworkManager[44920]: <info>  [1760155502.2040] device (tap01ca7d7a-a0): carrier: link connected
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:02.209 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[d91dabd6-aeb5-4519-80d9-abc4caff2ec3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:02 compute-0 systemd[1]: Started libpod-conmon-495b868691187ed0fbf3aa6e35891b06457dc2d53c58fd8693615d2754f49631.scope.
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:02.227 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[46081b3a-06c2-4b75-a4a8-243b49adfed1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01ca7d7a-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:75:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 388724, 'reachable_time': 33241, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270556, 'error': None, 'target': 'ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:02.241 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[07a96679-bf62-4b62-b0ab-31dd81e1e0c0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6c:75af'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 388724, 'tstamp': 388724}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270560, 'error': None, 'target': 'ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:02 compute-0 podman[270515]: 2025-10-11 04:05:02.156780858 +0000 UTC m=+0.031334231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:05:02 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:02.258 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[68898ce5-53b8-43c4-a4c2-73f6540e3d5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01ca7d7a-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:75:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 388724, 'reachable_time': 33241, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270561, 'error': None, 'target': 'ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:02 compute-0 podman[270515]: 2025-10-11 04:05:02.274999046 +0000 UTC m=+0.149552439 container init 495b868691187ed0fbf3aa6e35891b06457dc2d53c58fd8693615d2754f49631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhaskara, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 04:05:02 compute-0 podman[270515]: 2025-10-11 04:05:02.287442705 +0000 UTC m=+0.161996068 container start 495b868691187ed0fbf3aa6e35891b06457dc2d53c58fd8693615d2754f49631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhaskara, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:05:02 compute-0 podman[270515]: 2025-10-11 04:05:02.290548763 +0000 UTC m=+0.165102126 container attach 495b868691187ed0fbf3aa6e35891b06457dc2d53c58fd8693615d2754f49631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhaskara, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:02.294 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[1259a2f6-e50a-4d86-9165-f79db71440a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:02 compute-0 agitated_bhaskara[270557]: 167 167
Oct 11 04:05:02 compute-0 systemd[1]: libpod-495b868691187ed0fbf3aa6e35891b06457dc2d53c58fd8693615d2754f49631.scope: Deactivated successfully.
Oct 11 04:05:02 compute-0 podman[270515]: 2025-10-11 04:05:02.297010974 +0000 UTC m=+0.171564327 container died 495b868691187ed0fbf3aa6e35891b06457dc2d53c58fd8693615d2754f49631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhaskara, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 04:05:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c325c004ee65811ab2a0972295712a571733cf548b34360e9757b939f97ca17-merged.mount: Deactivated successfully.
Oct 11 04:05:02 compute-0 podman[270515]: 2025-10-11 04:05:02.342278525 +0000 UTC m=+0.216831908 container remove 495b868691187ed0fbf3aa6e35891b06457dc2d53c58fd8693615d2754f49631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:02.352 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[3387207d-c401-4965-afb6-0d29463b2f2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:02.354 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01ca7d7a-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:02.354 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:02.358 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01ca7d7a-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:05:02 compute-0 nova_compute[259850]: 2025-10-11 04:05:02.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:02 compute-0 NetworkManager[44920]: <info>  [1760155502.3612] manager: (tap01ca7d7a-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Oct 11 04:05:02 compute-0 kernel: tap01ca7d7a-a0: entered promiscuous mode
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:02.364 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01ca7d7a-a0, col_values=(('external_ids', {'iface-id': 'bf00a62f-9880-4f65-9ef5-7c57c9ac1996'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:05:02 compute-0 nova_compute[259850]: 2025-10-11 04:05:02.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:02 compute-0 ovn_controller[152025]: 2025-10-11T04:05:02Z|00040|binding|INFO|Releasing lport bf00a62f-9880-4f65-9ef5-7c57c9ac1996 from this chassis (sb_readonly=0)
Oct 11 04:05:02 compute-0 nova_compute[259850]: 2025-10-11 04:05:02.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:02.367 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/01ca7d7a-ab7e-4753-9e65-58d83786bdc8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/01ca7d7a-ab7e-4753-9e65-58d83786bdc8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:02.367 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[5581accc-3362-496b-b591-f05abb6e7407]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:02.368 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: global
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]:     log         /dev/log local0 debug
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]:     log-tag     haproxy-metadata-proxy-01ca7d7a-ab7e-4753-9e65-58d83786bdc8
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]:     user        root
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]:     group       root
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]:     maxconn     1024
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]:     pidfile     /var/lib/neutron/external/pids/01ca7d7a-ab7e-4753-9e65-58d83786bdc8.pid.haproxy
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]:     daemon
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: defaults
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]:     log global
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]:     mode http
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]:     option httplog
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]:     option dontlognull
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]:     option http-server-close
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]:     option forwardfor
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]:     retries                 3
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]:     timeout http-request    30s
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]:     timeout connect         30s
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]:     timeout client          32s
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]:     timeout server          32s
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]:     timeout http-keep-alive 30s
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: listen listener
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]:     bind 169.254.169.254:80
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]:     http-request add-header X-OVN-Network-ID 01ca7d7a-ab7e-4753-9e65-58d83786bdc8
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 04:05:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:02.369 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8', 'env', 'PROCESS_TAG=haproxy-01ca7d7a-ab7e-4753-9e65-58d83786bdc8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/01ca7d7a-ab7e-4753-9e65-58d83786bdc8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 04:05:02 compute-0 systemd[1]: libpod-conmon-495b868691187ed0fbf3aa6e35891b06457dc2d53c58fd8693615d2754f49631.scope: Deactivated successfully.
Oct 11 04:05:02 compute-0 nova_compute[259850]: 2025-10-11 04:05:02.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:02 compute-0 podman[270592]: 2025-10-11 04:05:02.573393902 +0000 UTC m=+0.066882078 container create 9615e7da1057d8537193663674f7884b73684952f6db656217e75293607669e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kalam, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 04:05:02 compute-0 nova_compute[259850]: 2025-10-11 04:05:02.602 2 DEBUG nova.compute.manager [req-a7eb9668-bb84-4a64-b83c-943348760ece req-1ad5d3ea-f7b0-47c6-882b-ee9880ac7e6e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Received event network-vif-plugged-7f528a6a-5bee-4ea6-ba46-7b56d53b170b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:05:02 compute-0 nova_compute[259850]: 2025-10-11 04:05:02.603 2 DEBUG oslo_concurrency.lockutils [req-a7eb9668-bb84-4a64-b83c-943348760ece req-1ad5d3ea-f7b0-47c6-882b-ee9880ac7e6e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "e607828c-0677-46ba-a7a0-b9d21be4149e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:02 compute-0 nova_compute[259850]: 2025-10-11 04:05:02.603 2 DEBUG oslo_concurrency.lockutils [req-a7eb9668-bb84-4a64-b83c-943348760ece req-1ad5d3ea-f7b0-47c6-882b-ee9880ac7e6e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e607828c-0677-46ba-a7a0-b9d21be4149e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:02 compute-0 nova_compute[259850]: 2025-10-11 04:05:02.603 2 DEBUG oslo_concurrency.lockutils [req-a7eb9668-bb84-4a64-b83c-943348760ece req-1ad5d3ea-f7b0-47c6-882b-ee9880ac7e6e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e607828c-0677-46ba-a7a0-b9d21be4149e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:02 compute-0 nova_compute[259850]: 2025-10-11 04:05:02.603 2 DEBUG nova.compute.manager [req-a7eb9668-bb84-4a64-b83c-943348760ece req-1ad5d3ea-f7b0-47c6-882b-ee9880ac7e6e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Processing event network-vif-plugged-7f528a6a-5bee-4ea6-ba46-7b56d53b170b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:05:02 compute-0 systemd[1]: Started libpod-conmon-9615e7da1057d8537193663674f7884b73684952f6db656217e75293607669e0.scope.
Oct 11 04:05:02 compute-0 podman[270592]: 2025-10-11 04:05:02.551995152 +0000 UTC m=+0.045483358 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:05:02 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebe780dd0f1e31492dd06be9291efdb02aa7304e096d0089e064af2151c528e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebe780dd0f1e31492dd06be9291efdb02aa7304e096d0089e064af2151c528e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebe780dd0f1e31492dd06be9291efdb02aa7304e096d0089e064af2151c528e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebe780dd0f1e31492dd06be9291efdb02aa7304e096d0089e064af2151c528e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:05:02 compute-0 podman[270592]: 2025-10-11 04:05:02.669603063 +0000 UTC m=+0.163091249 container init 9615e7da1057d8537193663674f7884b73684952f6db656217e75293607669e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 04:05:02 compute-0 podman[270592]: 2025-10-11 04:05:02.675815487 +0000 UTC m=+0.169303653 container start 9615e7da1057d8537193663674f7884b73684952f6db656217e75293607669e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:05:02 compute-0 podman[270592]: 2025-10-11 04:05:02.678506933 +0000 UTC m=+0.171995099 container attach 9615e7da1057d8537193663674f7884b73684952f6db656217e75293607669e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kalam, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:05:02 compute-0 podman[270674]: 2025-10-11 04:05:02.749734782 +0000 UTC m=+0.054987224 container create 9ea074ed2185419cb53bde1cb9a425e75f88c044b0a01121d85ac1a1b9d29ca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:05:02 compute-0 systemd[1]: Started libpod-conmon-9ea074ed2185419cb53bde1cb9a425e75f88c044b0a01121d85ac1a1b9d29ca6.scope.
Oct 11 04:05:02 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:05:02 compute-0 podman[270674]: 2025-10-11 04:05:02.717867558 +0000 UTC m=+0.023119980 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 04:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b46723a12915356f07ff4515760858413f4f675aa03c62d88bc3a9e5949303a6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:05:02 compute-0 podman[270674]: 2025-10-11 04:05:02.836046595 +0000 UTC m=+0.141299077 container init 9ea074ed2185419cb53bde1cb9a425e75f88c044b0a01121d85ac1a1b9d29ca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:05:02 compute-0 podman[270674]: 2025-10-11 04:05:02.849452572 +0000 UTC m=+0.154704984 container start 9ea074ed2185419cb53bde1cb9a425e75f88c044b0a01121d85ac1a1b9d29ca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 11 04:05:02 compute-0 neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8[270691]: [NOTICE]   (270695) : New worker (270697) forked
Oct 11 04:05:02 compute-0 neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8[270691]: [NOTICE]   (270695) : Loading success.
Oct 11 04:05:02 compute-0 ceph-mon[74273]: pgmap v1022: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Oct 11 04:05:03 compute-0 nova_compute[259850]: 2025-10-11 04:05:03.146 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155503.1457624, e607828c-0677-46ba-a7a0-b9d21be4149e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:05:03 compute-0 nova_compute[259850]: 2025-10-11 04:05:03.147 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] VM Started (Lifecycle Event)
Oct 11 04:05:03 compute-0 nova_compute[259850]: 2025-10-11 04:05:03.151 2 DEBUG nova.compute.manager [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:05:03 compute-0 nova_compute[259850]: 2025-10-11 04:05:03.160 2 DEBUG nova.virt.libvirt.driver [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:05:03 compute-0 nova_compute[259850]: 2025-10-11 04:05:03.171 2 INFO nova.virt.libvirt.driver [-] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Instance spawned successfully.
Oct 11 04:05:03 compute-0 nova_compute[259850]: 2025-10-11 04:05:03.172 2 DEBUG nova.virt.libvirt.driver [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:05:03 compute-0 nova_compute[259850]: 2025-10-11 04:05:03.176 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:05:03 compute-0 nova_compute[259850]: 2025-10-11 04:05:03.181 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:05:03 compute-0 nova_compute[259850]: 2025-10-11 04:05:03.201 2 DEBUG nova.virt.libvirt.driver [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:05:03 compute-0 nova_compute[259850]: 2025-10-11 04:05:03.202 2 DEBUG nova.virt.libvirt.driver [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:05:03 compute-0 nova_compute[259850]: 2025-10-11 04:05:03.203 2 DEBUG nova.virt.libvirt.driver [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:05:03 compute-0 nova_compute[259850]: 2025-10-11 04:05:03.204 2 DEBUG nova.virt.libvirt.driver [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:05:03 compute-0 nova_compute[259850]: 2025-10-11 04:05:03.205 2 DEBUG nova.virt.libvirt.driver [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:05:03 compute-0 nova_compute[259850]: 2025-10-11 04:05:03.206 2 DEBUG nova.virt.libvirt.driver [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:05:03 compute-0 nova_compute[259850]: 2025-10-11 04:05:03.214 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:05:03 compute-0 nova_compute[259850]: 2025-10-11 04:05:03.215 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155503.1459496, e607828c-0677-46ba-a7a0-b9d21be4149e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:05:03 compute-0 nova_compute[259850]: 2025-10-11 04:05:03.215 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] VM Paused (Lifecycle Event)
Oct 11 04:05:03 compute-0 nova_compute[259850]: 2025-10-11 04:05:03.250 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:05:03 compute-0 nova_compute[259850]: 2025-10-11 04:05:03.255 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155503.1593513, e607828c-0677-46ba-a7a0-b9d21be4149e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:05:03 compute-0 nova_compute[259850]: 2025-10-11 04:05:03.255 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] VM Resumed (Lifecycle Event)
Oct 11 04:05:03 compute-0 nova_compute[259850]: 2025-10-11 04:05:03.280 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:05:03 compute-0 nova_compute[259850]: 2025-10-11 04:05:03.283 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:05:03 compute-0 nova_compute[259850]: 2025-10-11 04:05:03.289 2 INFO nova.compute.manager [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Took 7.73 seconds to spawn the instance on the hypervisor.
Oct 11 04:05:03 compute-0 nova_compute[259850]: 2025-10-11 04:05:03.290 2 DEBUG nova.compute.manager [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:05:03 compute-0 nova_compute[259850]: 2025-10-11 04:05:03.302 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:05:03 compute-0 nova_compute[259850]: 2025-10-11 04:05:03.351 2 INFO nova.compute.manager [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Took 8.73 seconds to build instance.
Oct 11 04:05:03 compute-0 nova_compute[259850]: 2025-10-11 04:05:03.378 2 DEBUG oslo_concurrency.lockutils [None req-2c752ecf-f41a-4b75-87bd-b628c3d4db01 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "e607828c-0677-46ba-a7a0-b9d21be4149e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 1.8 MiB/s wr, 93 op/s
Oct 11 04:05:03 compute-0 strange_kalam[270649]: {
Oct 11 04:05:03 compute-0 strange_kalam[270649]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 04:05:03 compute-0 strange_kalam[270649]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:05:03 compute-0 strange_kalam[270649]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:05:03 compute-0 strange_kalam[270649]:         "osd_id": 1,
Oct 11 04:05:03 compute-0 strange_kalam[270649]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:05:03 compute-0 strange_kalam[270649]:         "type": "bluestore"
Oct 11 04:05:03 compute-0 strange_kalam[270649]:     },
Oct 11 04:05:03 compute-0 strange_kalam[270649]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 04:05:03 compute-0 strange_kalam[270649]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:05:03 compute-0 strange_kalam[270649]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:05:03 compute-0 strange_kalam[270649]:         "osd_id": 2,
Oct 11 04:05:03 compute-0 strange_kalam[270649]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:05:03 compute-0 strange_kalam[270649]:         "type": "bluestore"
Oct 11 04:05:03 compute-0 strange_kalam[270649]:     },
Oct 11 04:05:03 compute-0 strange_kalam[270649]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 04:05:03 compute-0 strange_kalam[270649]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:05:03 compute-0 strange_kalam[270649]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:05:03 compute-0 strange_kalam[270649]:         "osd_id": 0,
Oct 11 04:05:03 compute-0 strange_kalam[270649]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:05:03 compute-0 strange_kalam[270649]:         "type": "bluestore"
Oct 11 04:05:03 compute-0 strange_kalam[270649]:     }
Oct 11 04:05:03 compute-0 strange_kalam[270649]: }
Oct 11 04:05:03 compute-0 systemd[1]: libpod-9615e7da1057d8537193663674f7884b73684952f6db656217e75293607669e0.scope: Deactivated successfully.
Oct 11 04:05:03 compute-0 systemd[1]: libpod-9615e7da1057d8537193663674f7884b73684952f6db656217e75293607669e0.scope: Consumed 1.026s CPU time.
Oct 11 04:05:03 compute-0 podman[270734]: 2025-10-11 04:05:03.764689622 +0000 UTC m=+0.035250561 container died 9615e7da1057d8537193663674f7884b73684952f6db656217e75293607669e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kalam, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 04:05:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebe780dd0f1e31492dd06be9291efdb02aa7304e096d0089e064af2151c528e2-merged.mount: Deactivated successfully.
Oct 11 04:05:03 compute-0 podman[270734]: 2025-10-11 04:05:03.849346998 +0000 UTC m=+0.119907887 container remove 9615e7da1057d8537193663674f7884b73684952f6db656217e75293607669e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:05:03 compute-0 systemd[1]: libpod-conmon-9615e7da1057d8537193663674f7884b73684952f6db656217e75293607669e0.scope: Deactivated successfully.
Oct 11 04:05:03 compute-0 sudo[270400]: pam_unix(sudo:session): session closed for user root
Oct 11 04:05:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:05:03 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:05:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:05:03 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:05:03 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 83742cb3-c19c-4869-9204-5fb746e0eb11 does not exist
Oct 11 04:05:03 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev e08dd9ee-999d-4d0f-a716-034eac4d3173 does not exist
Oct 11 04:05:04 compute-0 sudo[270749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:05:04 compute-0 sudo[270749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:05:04 compute-0 sudo[270749]: pam_unix(sudo:session): session closed for user root
Oct 11 04:05:04 compute-0 sudo[270774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:05:04 compute-0 sudo[270774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:05:04 compute-0 sudo[270774]: pam_unix(sudo:session): session closed for user root
Oct 11 04:05:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:05:04 compute-0 nova_compute[259850]: 2025-10-11 04:05:04.789 2 DEBUG nova.compute.manager [req-6235f1f3-0056-4a0a-8ec5-60d96ad5b9f4 req-09a56e7d-efc9-4b64-8883-6b461675ccae f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Received event network-vif-plugged-7f528a6a-5bee-4ea6-ba46-7b56d53b170b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:05:04 compute-0 nova_compute[259850]: 2025-10-11 04:05:04.790 2 DEBUG oslo_concurrency.lockutils [req-6235f1f3-0056-4a0a-8ec5-60d96ad5b9f4 req-09a56e7d-efc9-4b64-8883-6b461675ccae f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "e607828c-0677-46ba-a7a0-b9d21be4149e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:04 compute-0 nova_compute[259850]: 2025-10-11 04:05:04.790 2 DEBUG oslo_concurrency.lockutils [req-6235f1f3-0056-4a0a-8ec5-60d96ad5b9f4 req-09a56e7d-efc9-4b64-8883-6b461675ccae f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e607828c-0677-46ba-a7a0-b9d21be4149e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:04 compute-0 nova_compute[259850]: 2025-10-11 04:05:04.791 2 DEBUG oslo_concurrency.lockutils [req-6235f1f3-0056-4a0a-8ec5-60d96ad5b9f4 req-09a56e7d-efc9-4b64-8883-6b461675ccae f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e607828c-0677-46ba-a7a0-b9d21be4149e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:04 compute-0 nova_compute[259850]: 2025-10-11 04:05:04.791 2 DEBUG nova.compute.manager [req-6235f1f3-0056-4a0a-8ec5-60d96ad5b9f4 req-09a56e7d-efc9-4b64-8883-6b461675ccae f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] No waiting events found dispatching network-vif-plugged-7f528a6a-5bee-4ea6-ba46-7b56d53b170b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:05:04 compute-0 nova_compute[259850]: 2025-10-11 04:05:04.792 2 WARNING nova.compute.manager [req-6235f1f3-0056-4a0a-8ec5-60d96ad5b9f4 req-09a56e7d-efc9-4b64-8883-6b461675ccae f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Received unexpected event network-vif-plugged-7f528a6a-5bee-4ea6-ba46-7b56d53b170b for instance with vm_state active and task_state None.
Oct 11 04:05:04 compute-0 ceph-mon[74273]: pgmap v1023: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 1.8 MiB/s wr, 93 op/s
Oct 11 04:05:04 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:05:04 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:05:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 11 04:05:05 compute-0 nova_compute[259850]: 2025-10-11 04:05:05.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:06 compute-0 nova_compute[259850]: 2025-10-11 04:05:06.282 2 DEBUG oslo_concurrency.lockutils [None req-4fef83d3-7f07-4629-afa1-f091f4018513 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Acquiring lock "e607828c-0677-46ba-a7a0-b9d21be4149e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:06 compute-0 nova_compute[259850]: 2025-10-11 04:05:06.282 2 DEBUG oslo_concurrency.lockutils [None req-4fef83d3-7f07-4629-afa1-f091f4018513 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "e607828c-0677-46ba-a7a0-b9d21be4149e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:06 compute-0 nova_compute[259850]: 2025-10-11 04:05:06.282 2 DEBUG oslo_concurrency.lockutils [None req-4fef83d3-7f07-4629-afa1-f091f4018513 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Acquiring lock "e607828c-0677-46ba-a7a0-b9d21be4149e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:06 compute-0 nova_compute[259850]: 2025-10-11 04:05:06.282 2 DEBUG oslo_concurrency.lockutils [None req-4fef83d3-7f07-4629-afa1-f091f4018513 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "e607828c-0677-46ba-a7a0-b9d21be4149e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:06 compute-0 nova_compute[259850]: 2025-10-11 04:05:06.283 2 DEBUG oslo_concurrency.lockutils [None req-4fef83d3-7f07-4629-afa1-f091f4018513 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "e607828c-0677-46ba-a7a0-b9d21be4149e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:06 compute-0 nova_compute[259850]: 2025-10-11 04:05:06.284 2 INFO nova.compute.manager [None req-4fef83d3-7f07-4629-afa1-f091f4018513 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Terminating instance
Oct 11 04:05:06 compute-0 nova_compute[259850]: 2025-10-11 04:05:06.284 2 DEBUG nova.compute.manager [None req-4fef83d3-7f07-4629-afa1-f091f4018513 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:05:06 compute-0 kernel: tap7f528a6a-5b (unregistering): left promiscuous mode
Oct 11 04:05:06 compute-0 NetworkManager[44920]: <info>  [1760155506.3220] device (tap7f528a6a-5b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:05:06 compute-0 ovn_controller[152025]: 2025-10-11T04:05:06Z|00041|binding|INFO|Releasing lport 7f528a6a-5bee-4ea6-ba46-7b56d53b170b from this chassis (sb_readonly=0)
Oct 11 04:05:06 compute-0 ovn_controller[152025]: 2025-10-11T04:05:06Z|00042|binding|INFO|Setting lport 7f528a6a-5bee-4ea6-ba46-7b56d53b170b down in Southbound
Oct 11 04:05:06 compute-0 ovn_controller[152025]: 2025-10-11T04:05:06Z|00043|binding|INFO|Removing iface tap7f528a6a-5b ovn-installed in OVS
Oct 11 04:05:06 compute-0 nova_compute[259850]: 2025-10-11 04:05:06.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:06.396 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:ea:29 10.100.0.5'], port_security=['fa:16:3e:33:ea:29 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e607828c-0677-46ba-a7a0-b9d21be4149e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01ca7d7a-ab7e-4753-9e65-58d83786bdc8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a56f57f119b24e77bd165887162ef538', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1d55b596-dded-4eab-874b-8812dbd6943d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3a8a1b08-6831-48aa-9bdb-0e38b6956a06, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=7f528a6a-5bee-4ea6-ba46-7b56d53b170b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:05:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:06.397 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 7f528a6a-5bee-4ea6-ba46-7b56d53b170b in datapath 01ca7d7a-ab7e-4753-9e65-58d83786bdc8 unbound from our chassis
Oct 11 04:05:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:06.398 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 01ca7d7a-ab7e-4753-9e65-58d83786bdc8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:05:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:06.399 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[38883555-b192-4f3d-867b-0f72c315fd42]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:06.399 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8 namespace which is not needed anymore
Oct 11 04:05:06 compute-0 nova_compute[259850]: 2025-10-11 04:05:06.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:06 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Oct 11 04:05:06 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 4.191s CPU time.
Oct 11 04:05:06 compute-0 systemd-machined[214869]: Machine qemu-2-instance-00000002 terminated.
Oct 11 04:05:06 compute-0 nova_compute[259850]: 2025-10-11 04:05:06.518 2 INFO nova.virt.libvirt.driver [-] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Instance destroyed successfully.
Oct 11 04:05:06 compute-0 nova_compute[259850]: 2025-10-11 04:05:06.519 2 DEBUG nova.objects.instance [None req-4fef83d3-7f07-4629-afa1-f091f4018513 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lazy-loading 'resources' on Instance uuid e607828c-0677-46ba-a7a0-b9d21be4149e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:05:06 compute-0 nova_compute[259850]: 2025-10-11 04:05:06.535 2 DEBUG nova.virt.libvirt.vif [None req-4fef83d3-7f07-4629-afa1-f091f4018513 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:04:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1301380731',display_name='tempest-VolumesActionsTest-instance-1301380731',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1301380731',id=2,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:05:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a56f57f119b24e77bd165887162ef538',ramdisk_id='',reservation_id='r-qzrhq9bt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-27294957',owner_user_name='tempest-VolumesActionsTest-27294957-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:05:03Z,user_data=None,user_id='715d3ecfd40048a08fd0c9f8dc437cd6',uuid=e607828c-0677-46ba-a7a0-b9d21be4149e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7f528a6a-5bee-4ea6-ba46-7b56d53b170b", "address": "fa:16:3e:33:ea:29", "network": {"id": "01ca7d7a-ab7e-4753-9e65-58d83786bdc8", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1985263256-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a56f57f119b24e77bd165887162ef538", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f528a6a-5b", "ovs_interfaceid": "7f528a6a-5bee-4ea6-ba46-7b56d53b170b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:05:06 compute-0 nova_compute[259850]: 2025-10-11 04:05:06.535 2 DEBUG nova.network.os_vif_util [None req-4fef83d3-7f07-4629-afa1-f091f4018513 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Converting VIF {"id": "7f528a6a-5bee-4ea6-ba46-7b56d53b170b", "address": "fa:16:3e:33:ea:29", "network": {"id": "01ca7d7a-ab7e-4753-9e65-58d83786bdc8", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1985263256-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a56f57f119b24e77bd165887162ef538", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f528a6a-5b", "ovs_interfaceid": "7f528a6a-5bee-4ea6-ba46-7b56d53b170b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:05:06 compute-0 nova_compute[259850]: 2025-10-11 04:05:06.536 2 DEBUG nova.network.os_vif_util [None req-4fef83d3-7f07-4629-afa1-f091f4018513 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:ea:29,bridge_name='br-int',has_traffic_filtering=True,id=7f528a6a-5bee-4ea6-ba46-7b56d53b170b,network=Network(01ca7d7a-ab7e-4753-9e65-58d83786bdc8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f528a6a-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:05:06 compute-0 nova_compute[259850]: 2025-10-11 04:05:06.537 2 DEBUG os_vif [None req-4fef83d3-7f07-4629-afa1-f091f4018513 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:ea:29,bridge_name='br-int',has_traffic_filtering=True,id=7f528a6a-5bee-4ea6-ba46-7b56d53b170b,network=Network(01ca7d7a-ab7e-4753-9e65-58d83786bdc8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f528a6a-5b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:05:06 compute-0 nova_compute[259850]: 2025-10-11 04:05:06.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:06 compute-0 neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8[270691]: [NOTICE]   (270695) : haproxy version is 2.8.14-c23fe91
Oct 11 04:05:06 compute-0 nova_compute[259850]: 2025-10-11 04:05:06.540 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f528a6a-5b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:05:06 compute-0 neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8[270691]: [NOTICE]   (270695) : path to executable is /usr/sbin/haproxy
Oct 11 04:05:06 compute-0 neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8[270691]: [WARNING]  (270695) : Exiting Master process...
Oct 11 04:05:06 compute-0 neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8[270691]: [ALERT]    (270695) : Current worker (270697) exited with code 143 (Terminated)
Oct 11 04:05:06 compute-0 neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8[270691]: [WARNING]  (270695) : All workers exited. Exiting... (0)
Oct 11 04:05:06 compute-0 nova_compute[259850]: 2025-10-11 04:05:06.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:05:06 compute-0 systemd[1]: libpod-9ea074ed2185419cb53bde1cb9a425e75f88c044b0a01121d85ac1a1b9d29ca6.scope: Deactivated successfully.
Oct 11 04:05:06 compute-0 nova_compute[259850]: 2025-10-11 04:05:06.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:06 compute-0 conmon[270691]: conmon 9ea074ed2185419cb53b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9ea074ed2185419cb53bde1cb9a425e75f88c044b0a01121d85ac1a1b9d29ca6.scope/container/memory.events
Oct 11 04:05:06 compute-0 nova_compute[259850]: 2025-10-11 04:05:06.549 2 INFO os_vif [None req-4fef83d3-7f07-4629-afa1-f091f4018513 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:ea:29,bridge_name='br-int',has_traffic_filtering=True,id=7f528a6a-5bee-4ea6-ba46-7b56d53b170b,network=Network(01ca7d7a-ab7e-4753-9e65-58d83786bdc8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f528a6a-5b')
Oct 11 04:05:06 compute-0 podman[270823]: 2025-10-11 04:05:06.549759099 +0000 UTC m=+0.053307476 container died 9ea074ed2185419cb53bde1cb9a425e75f88c044b0a01121d85ac1a1b9d29ca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009)
Oct 11 04:05:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9ea074ed2185419cb53bde1cb9a425e75f88c044b0a01121d85ac1a1b9d29ca6-userdata-shm.mount: Deactivated successfully.
Oct 11 04:05:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-b46723a12915356f07ff4515760858413f4f675aa03c62d88bc3a9e5949303a6-merged.mount: Deactivated successfully.
Oct 11 04:05:06 compute-0 podman[270823]: 2025-10-11 04:05:06.585641206 +0000 UTC m=+0.089189583 container cleanup 9ea074ed2185419cb53bde1cb9a425e75f88c044b0a01121d85ac1a1b9d29ca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 11 04:05:06 compute-0 systemd[1]: libpod-conmon-9ea074ed2185419cb53bde1cb9a425e75f88c044b0a01121d85ac1a1b9d29ca6.scope: Deactivated successfully.
Oct 11 04:05:06 compute-0 podman[270874]: 2025-10-11 04:05:06.662397171 +0000 UTC m=+0.052426103 container remove 9ea074ed2185419cb53bde1cb9a425e75f88c044b0a01121d85ac1a1b9d29ca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 11 04:05:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:06.669 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[bc10d62e-dfe4-4ab1-8f78-747c11e6ca9d]: (4, ('Sat Oct 11 04:05:06 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8 (9ea074ed2185419cb53bde1cb9a425e75f88c044b0a01121d85ac1a1b9d29ca6)\n9ea074ed2185419cb53bde1cb9a425e75f88c044b0a01121d85ac1a1b9d29ca6\nSat Oct 11 04:05:06 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8 (9ea074ed2185419cb53bde1cb9a425e75f88c044b0a01121d85ac1a1b9d29ca6)\n9ea074ed2185419cb53bde1cb9a425e75f88c044b0a01121d85ac1a1b9d29ca6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:06.671 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[606164d7-8a5c-4157-95f6-5340d7b5a526]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:06.672 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01ca7d7a-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:05:06 compute-0 kernel: tap01ca7d7a-a0: left promiscuous mode
Oct 11 04:05:06 compute-0 nova_compute[259850]: 2025-10-11 04:05:06.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:06 compute-0 nova_compute[259850]: 2025-10-11 04:05:06.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:06.680 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[653d1ec3-f623-44f8-92a8-ec71c2f28553]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:06 compute-0 nova_compute[259850]: 2025-10-11 04:05:06.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:06.710 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[ad5843e4-8035-41cf-9bd9-fef4895bb5e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:06.712 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[f370831e-e160-4bb7-b425-fac45920bc37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:06.727 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[3dd4dbe5-1a9e-4a11-a4de-ea92aef94044]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 388716, 'reachable_time': 41506, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270896, 'error': None, 'target': 'ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:06 compute-0 systemd[1]: run-netns-ovnmeta\x2d01ca7d7a\x2dab7e\x2d4753\x2d9e65\x2d58d83786bdc8.mount: Deactivated successfully.
Oct 11 04:05:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:06.731 162015 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 04:05:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:06.731 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[c4440c97-5925-4923-b442-ecae2d863868]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:06 compute-0 nova_compute[259850]: 2025-10-11 04:05:06.913 2 INFO nova.virt.libvirt.driver [None req-4fef83d3-7f07-4629-afa1-f091f4018513 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Deleting instance files /var/lib/nova/instances/e607828c-0677-46ba-a7a0-b9d21be4149e_del
Oct 11 04:05:06 compute-0 nova_compute[259850]: 2025-10-11 04:05:06.914 2 INFO nova.virt.libvirt.driver [None req-4fef83d3-7f07-4629-afa1-f091f4018513 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Deletion of /var/lib/nova/instances/e607828c-0677-46ba-a7a0-b9d21be4149e_del complete
Oct 11 04:05:06 compute-0 ceph-mon[74273]: pgmap v1024: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 11 04:05:06 compute-0 nova_compute[259850]: 2025-10-11 04:05:06.976 2 INFO nova.compute.manager [None req-4fef83d3-7f07-4629-afa1-f091f4018513 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Took 0.69 seconds to destroy the instance on the hypervisor.
Oct 11 04:05:06 compute-0 nova_compute[259850]: 2025-10-11 04:05:06.977 2 DEBUG oslo.service.loopingcall [None req-4fef83d3-7f07-4629-afa1-f091f4018513 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:05:06 compute-0 nova_compute[259850]: 2025-10-11 04:05:06.977 2 DEBUG nova.compute.manager [-] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:05:06 compute-0 nova_compute[259850]: 2025-10-11 04:05:06.977 2 DEBUG nova.network.neutron [-] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:05:07 compute-0 nova_compute[259850]: 2025-10-11 04:05:07.126 2 DEBUG nova.compute.manager [req-7ffbae84-90f2-4d1c-b900-34e91edba637 req-7d304986-f5fb-4bbc-ab8e-65c904386798 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Received event network-vif-unplugged-7f528a6a-5bee-4ea6-ba46-7b56d53b170b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:05:07 compute-0 nova_compute[259850]: 2025-10-11 04:05:07.127 2 DEBUG oslo_concurrency.lockutils [req-7ffbae84-90f2-4d1c-b900-34e91edba637 req-7d304986-f5fb-4bbc-ab8e-65c904386798 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "e607828c-0677-46ba-a7a0-b9d21be4149e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:07 compute-0 nova_compute[259850]: 2025-10-11 04:05:07.128 2 DEBUG oslo_concurrency.lockutils [req-7ffbae84-90f2-4d1c-b900-34e91edba637 req-7d304986-f5fb-4bbc-ab8e-65c904386798 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e607828c-0677-46ba-a7a0-b9d21be4149e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:07 compute-0 nova_compute[259850]: 2025-10-11 04:05:07.128 2 DEBUG oslo_concurrency.lockutils [req-7ffbae84-90f2-4d1c-b900-34e91edba637 req-7d304986-f5fb-4bbc-ab8e-65c904386798 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e607828c-0677-46ba-a7a0-b9d21be4149e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:07 compute-0 nova_compute[259850]: 2025-10-11 04:05:07.129 2 DEBUG nova.compute.manager [req-7ffbae84-90f2-4d1c-b900-34e91edba637 req-7d304986-f5fb-4bbc-ab8e-65c904386798 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] No waiting events found dispatching network-vif-unplugged-7f528a6a-5bee-4ea6-ba46-7b56d53b170b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:05:07 compute-0 nova_compute[259850]: 2025-10-11 04:05:07.129 2 DEBUG nova.compute.manager [req-7ffbae84-90f2-4d1c-b900-34e91edba637 req-7d304986-f5fb-4bbc-ab8e-65c904386798 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Received event network-vif-unplugged-7f528a6a-5bee-4ea6-ba46-7b56d53b170b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 04:05:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 11 04:05:08 compute-0 nova_compute[259850]: 2025-10-11 04:05:08.104 2 DEBUG nova.network.neutron [-] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:05:08 compute-0 nova_compute[259850]: 2025-10-11 04:05:08.127 2 INFO nova.compute.manager [-] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Took 1.15 seconds to deallocate network for instance.
Oct 11 04:05:08 compute-0 nova_compute[259850]: 2025-10-11 04:05:08.193 2 DEBUG oslo_concurrency.lockutils [None req-4fef83d3-7f07-4629-afa1-f091f4018513 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:08 compute-0 nova_compute[259850]: 2025-10-11 04:05:08.194 2 DEBUG oslo_concurrency.lockutils [None req-4fef83d3-7f07-4629-afa1-f091f4018513 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:08 compute-0 nova_compute[259850]: 2025-10-11 04:05:08.262 2 DEBUG oslo_concurrency.processutils [None req-4fef83d3-7f07-4629-afa1-f091f4018513 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:08 compute-0 podman[270898]: 2025-10-11 04:05:08.404032779 +0000 UTC m=+0.111415488 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:05:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:05:08 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3179916079' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:05:08 compute-0 nova_compute[259850]: 2025-10-11 04:05:08.714 2 DEBUG oslo_concurrency.processutils [None req-4fef83d3-7f07-4629-afa1-f091f4018513 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:08 compute-0 nova_compute[259850]: 2025-10-11 04:05:08.726 2 DEBUG nova.compute.provider_tree [None req-4fef83d3-7f07-4629-afa1-f091f4018513 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:05:08 compute-0 nova_compute[259850]: 2025-10-11 04:05:08.746 2 DEBUG nova.scheduler.client.report [None req-4fef83d3-7f07-4629-afa1-f091f4018513 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:05:08 compute-0 nova_compute[259850]: 2025-10-11 04:05:08.783 2 DEBUG oslo_concurrency.lockutils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Acquiring lock "f362654b-5459-4295-a15a-50dce3bd4232" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:08 compute-0 nova_compute[259850]: 2025-10-11 04:05:08.784 2 DEBUG oslo_concurrency.lockutils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Lock "f362654b-5459-4295-a15a-50dce3bd4232" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:08 compute-0 nova_compute[259850]: 2025-10-11 04:05:08.787 2 DEBUG oslo_concurrency.lockutils [None req-4fef83d3-7f07-4629-afa1-f091f4018513 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:08 compute-0 nova_compute[259850]: 2025-10-11 04:05:08.816 2 DEBUG nova.compute.manager [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:05:08 compute-0 nova_compute[259850]: 2025-10-11 04:05:08.840 2 INFO nova.scheduler.client.report [None req-4fef83d3-7f07-4629-afa1-f091f4018513 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Deleted allocations for instance e607828c-0677-46ba-a7a0-b9d21be4149e
Oct 11 04:05:08 compute-0 nova_compute[259850]: 2025-10-11 04:05:08.899 2 DEBUG oslo_concurrency.lockutils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:08 compute-0 nova_compute[259850]: 2025-10-11 04:05:08.899 2 DEBUG oslo_concurrency.lockutils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:08 compute-0 nova_compute[259850]: 2025-10-11 04:05:08.908 2 DEBUG nova.virt.hardware [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:05:08 compute-0 nova_compute[259850]: 2025-10-11 04:05:08.909 2 INFO nova.compute.claims [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:05:08 compute-0 nova_compute[259850]: 2025-10-11 04:05:08.916 2 DEBUG oslo_concurrency.lockutils [None req-4fef83d3-7f07-4629-afa1-f091f4018513 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "e607828c-0677-46ba-a7a0-b9d21be4149e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:08 compute-0 ceph-mon[74273]: pgmap v1025: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 11 04:05:08 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3179916079' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.018 2 DEBUG oslo_concurrency.processutils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.394 2 DEBUG nova.compute.manager [req-a56fb92f-181e-4914-9a78-b746170a50b5 req-3fe958c2-6b26-4089-9bca-49eab0219465 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Received event network-vif-plugged-7f528a6a-5bee-4ea6-ba46-7b56d53b170b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.395 2 DEBUG oslo_concurrency.lockutils [req-a56fb92f-181e-4914-9a78-b746170a50b5 req-3fe958c2-6b26-4089-9bca-49eab0219465 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "e607828c-0677-46ba-a7a0-b9d21be4149e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.396 2 DEBUG oslo_concurrency.lockutils [req-a56fb92f-181e-4914-9a78-b746170a50b5 req-3fe958c2-6b26-4089-9bca-49eab0219465 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e607828c-0677-46ba-a7a0-b9d21be4149e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.397 2 DEBUG oslo_concurrency.lockutils [req-a56fb92f-181e-4914-9a78-b746170a50b5 req-3fe958c2-6b26-4089-9bca-49eab0219465 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e607828c-0677-46ba-a7a0-b9d21be4149e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.397 2 DEBUG nova.compute.manager [req-a56fb92f-181e-4914-9a78-b746170a50b5 req-3fe958c2-6b26-4089-9bca-49eab0219465 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] No waiting events found dispatching network-vif-plugged-7f528a6a-5bee-4ea6-ba46-7b56d53b170b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.398 2 WARNING nova.compute.manager [req-a56fb92f-181e-4914-9a78-b746170a50b5 req-3fe958c2-6b26-4089-9bca-49eab0219465 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Received unexpected event network-vif-plugged-7f528a6a-5bee-4ea6-ba46-7b56d53b170b for instance with vm_state deleted and task_state None.
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.398 2 DEBUG nova.compute.manager [req-a56fb92f-181e-4914-9a78-b746170a50b5 req-3fe958c2-6b26-4089-9bca-49eab0219465 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Received event network-vif-deleted-7f528a6a-5bee-4ea6-ba46-7b56d53b170b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:05:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 182 op/s
Oct 11 04:05:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:05:09 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1268171943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.499 2 DEBUG oslo_concurrency.processutils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.507 2 DEBUG nova.compute.provider_tree [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.536 2 DEBUG nova.scheduler.client.report [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.567 2 DEBUG oslo_concurrency.lockutils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.568 2 DEBUG nova.compute.manager [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.623 2 DEBUG nova.compute.manager [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.624 2 DEBUG nova.network.neutron [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.653 2 INFO nova.virt.libvirt.driver [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.681 2 DEBUG nova.compute.manager [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:05:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.780 2 DEBUG nova.compute.manager [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.781 2 DEBUG nova.virt.libvirt.driver [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.782 2 INFO nova.virt.libvirt.driver [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Creating image(s)
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.810 2 DEBUG nova.storage.rbd_utils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] rbd image f362654b-5459-4295-a15a-50dce3bd4232_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.843 2 DEBUG nova.storage.rbd_utils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] rbd image f362654b-5459-4295-a15a-50dce3bd4232_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.870 2 DEBUG nova.storage.rbd_utils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] rbd image f362654b-5459-4295-a15a-50dce3bd4232_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.874 2 DEBUG oslo_concurrency.processutils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:09 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1268171943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.941 2 DEBUG oslo_concurrency.processutils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.942 2 DEBUG oslo_concurrency.lockutils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Acquiring lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.942 2 DEBUG oslo_concurrency.lockutils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.942 2 DEBUG oslo_concurrency.lockutils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.967 2 DEBUG nova.storage.rbd_utils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] rbd image f362654b-5459-4295-a15a-50dce3bd4232_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:05:09 compute-0 nova_compute[259850]: 2025-10-11 04:05:09.971 2 DEBUG oslo_concurrency.processutils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac f362654b-5459-4295-a15a-50dce3bd4232_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.222 2 DEBUG oslo_concurrency.processutils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac f362654b-5459-4295-a15a-50dce3bd4232_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.252s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.275 2 DEBUG nova.network.neutron [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.275 2 DEBUG nova.compute.manager [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.279 2 DEBUG nova.storage.rbd_utils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] resizing rbd image f362654b-5459-4295-a15a-50dce3bd4232_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.356 2 DEBUG nova.objects.instance [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Lazy-loading 'migration_context' on Instance uuid f362654b-5459-4295-a15a-50dce3bd4232 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.380 2 DEBUG nova.virt.libvirt.driver [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.380 2 DEBUG nova.virt.libvirt.driver [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Ensure instance console log exists: /var/lib/nova/instances/f362654b-5459-4295-a15a-50dce3bd4232/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.380 2 DEBUG oslo_concurrency.lockutils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.381 2 DEBUG oslo_concurrency.lockutils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.381 2 DEBUG oslo_concurrency.lockutils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.382 2 DEBUG nova.virt.libvirt.driver [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'image_id': '1a107e2f-1a9d-4b6f-861d-e64bee7d56be'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.386 2 WARNING nova.virt.libvirt.driver [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.391 2 DEBUG nova.virt.libvirt.host [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.391 2 DEBUG nova.virt.libvirt.host [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.393 2 DEBUG nova.virt.libvirt.host [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.394 2 DEBUG nova.virt.libvirt.host [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.394 2 DEBUG nova.virt.libvirt.driver [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.394 2 DEBUG nova.virt.hardware [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.394 2 DEBUG nova.virt.hardware [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.395 2 DEBUG nova.virt.hardware [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.395 2 DEBUG nova.virt.hardware [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.395 2 DEBUG nova.virt.hardware [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.395 2 DEBUG nova.virt.hardware [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.395 2 DEBUG nova.virt.hardware [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.395 2 DEBUG nova.virt.hardware [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.395 2 DEBUG nova.virt.hardware [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.396 2 DEBUG nova.virt.hardware [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.396 2 DEBUG nova.virt.hardware [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.398 2 DEBUG oslo_concurrency.processutils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:10 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:10.525 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:61:6f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '92:f1:b6:e4:f1:16'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:10 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:10.526 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:05:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:05:10 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2321722147' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.810 2 DEBUG oslo_concurrency.processutils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.835 2 DEBUG nova.storage.rbd_utils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] rbd image f362654b-5459-4295-a15a-50dce3bd4232_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:05:10 compute-0 nova_compute[259850]: 2025-10-11 04:05:10.839 2 DEBUG oslo_concurrency.processutils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:10 compute-0 ceph-mon[74273]: pgmap v1026: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 182 op/s
Oct 11 04:05:10 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2321722147' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:05:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:05:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/443597739' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:05:11 compute-0 nova_compute[259850]: 2025-10-11 04:05:11.274 2 DEBUG oslo_concurrency.processutils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:11 compute-0 nova_compute[259850]: 2025-10-11 04:05:11.278 2 DEBUG nova.objects.instance [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Lazy-loading 'pci_devices' on Instance uuid f362654b-5459-4295-a15a-50dce3bd4232 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:05:11 compute-0 nova_compute[259850]: 2025-10-11 04:05:11.299 2 DEBUG nova.virt.libvirt.driver [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:05:11 compute-0 nova_compute[259850]:   <uuid>f362654b-5459-4295-a15a-50dce3bd4232</uuid>
Oct 11 04:05:11 compute-0 nova_compute[259850]:   <name>instance-00000003</name>
Oct 11 04:05:11 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:05:11 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:05:11 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:05:11 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:       <nova:name>tempest-VolumesNegativeTest-instance-779269603</nova:name>
Oct 11 04:05:11 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:05:10</nova:creationTime>
Oct 11 04:05:11 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:05:11 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:05:11 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:05:11 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:05:11 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:05:11 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:05:11 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:05:11 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:05:11 compute-0 nova_compute[259850]:         <nova:user uuid="5b55f8af0b6741c58fd7d7756dc5b302">tempest-VolumesNegativeTest-810895638-project-member</nova:user>
Oct 11 04:05:11 compute-0 nova_compute[259850]:         <nova:project uuid="dcd539919ebc4a97ab7c54b2325dfcd1">tempest-VolumesNegativeTest-810895638</nova:project>
Oct 11 04:05:11 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:05:11 compute-0 nova_compute[259850]:       <nova:root type="image" uuid="1a107e2f-1a9d-4b6f-861d-e64bee7d56be"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:       <nova:ports/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:05:11 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:05:11 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <system>
Oct 11 04:05:11 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:05:11 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:05:11 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:05:11 compute-0 nova_compute[259850]:       <entry name="serial">f362654b-5459-4295-a15a-50dce3bd4232</entry>
Oct 11 04:05:11 compute-0 nova_compute[259850]:       <entry name="uuid">f362654b-5459-4295-a15a-50dce3bd4232</entry>
Oct 11 04:05:11 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     </system>
Oct 11 04:05:11 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:05:11 compute-0 nova_compute[259850]:   <os>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:   </os>
Oct 11 04:05:11 compute-0 nova_compute[259850]:   <features>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:   </features>
Oct 11 04:05:11 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:05:11 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:05:11 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:05:11 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/f362654b-5459-4295-a15a-50dce3bd4232_disk">
Oct 11 04:05:11 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:       </source>
Oct 11 04:05:11 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:05:11 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:05:11 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:05:11 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/f362654b-5459-4295-a15a-50dce3bd4232_disk.config">
Oct 11 04:05:11 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:       </source>
Oct 11 04:05:11 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:05:11 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:05:11 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:05:11 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/f362654b-5459-4295-a15a-50dce3bd4232/console.log" append="off"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <video>
Oct 11 04:05:11 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     </video>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:05:11 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:05:11 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:05:11 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:05:11 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:05:11 compute-0 nova_compute[259850]: </domain>
Oct 11 04:05:11 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:05:11 compute-0 nova_compute[259850]: 2025-10-11 04:05:11.372 2 DEBUG nova.virt.libvirt.driver [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:05:11 compute-0 nova_compute[259850]: 2025-10-11 04:05:11.373 2 DEBUG nova.virt.libvirt.driver [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:05:11 compute-0 nova_compute[259850]: 2025-10-11 04:05:11.374 2 INFO nova.virt.libvirt.driver [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Using config drive
Oct 11 04:05:11 compute-0 nova_compute[259850]: 2025-10-11 04:05:11.404 2 DEBUG nova.storage.rbd_utils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] rbd image f362654b-5459-4295-a15a-50dce3bd4232_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:05:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 142 op/s
Oct 11 04:05:11 compute-0 nova_compute[259850]: 2025-10-11 04:05:11.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:11 compute-0 nova_compute[259850]: 2025-10-11 04:05:11.612 2 INFO nova.virt.libvirt.driver [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Creating config drive at /var/lib/nova/instances/f362654b-5459-4295-a15a-50dce3bd4232/disk.config
Oct 11 04:05:11 compute-0 nova_compute[259850]: 2025-10-11 04:05:11.622 2 DEBUG oslo_concurrency.processutils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f362654b-5459-4295-a15a-50dce3bd4232/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4zfwrouk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:11 compute-0 nova_compute[259850]: 2025-10-11 04:05:11.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:11 compute-0 nova_compute[259850]: 2025-10-11 04:05:11.766 2 DEBUG oslo_concurrency.processutils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f362654b-5459-4295-a15a-50dce3bd4232/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4zfwrouk" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:11 compute-0 nova_compute[259850]: 2025-10-11 04:05:11.804 2 DEBUG nova.storage.rbd_utils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] rbd image f362654b-5459-4295-a15a-50dce3bd4232_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:05:11 compute-0 nova_compute[259850]: 2025-10-11 04:05:11.808 2 DEBUG oslo_concurrency.processutils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f362654b-5459-4295-a15a-50dce3bd4232/disk.config f362654b-5459-4295-a15a-50dce3bd4232_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:11 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/443597739' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:05:11 compute-0 nova_compute[259850]: 2025-10-11 04:05:11.970 2 DEBUG oslo_concurrency.processutils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f362654b-5459-4295-a15a-50dce3bd4232/disk.config f362654b-5459-4295-a15a-50dce3bd4232_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:11 compute-0 nova_compute[259850]: 2025-10-11 04:05:11.972 2 INFO nova.virt.libvirt.driver [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Deleting local config drive /var/lib/nova/instances/f362654b-5459-4295-a15a-50dce3bd4232/disk.config because it was imported into RBD.
Oct 11 04:05:12 compute-0 systemd-machined[214869]: New machine qemu-3-instance-00000003.
Oct 11 04:05:12 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Oct 11 04:05:12 compute-0 nova_compute[259850]: 2025-10-11 04:05:12.734 2 DEBUG oslo_concurrency.lockutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Acquiring lock "e7add65d-7f64-44b0-960b-62ab3f67e50e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:12 compute-0 nova_compute[259850]: 2025-10-11 04:05:12.735 2 DEBUG oslo_concurrency.lockutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "e7add65d-7f64-44b0-960b-62ab3f67e50e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:12 compute-0 nova_compute[259850]: 2025-10-11 04:05:12.755 2 DEBUG nova.compute.manager [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:05:12 compute-0 nova_compute[259850]: 2025-10-11 04:05:12.829 2 DEBUG oslo_concurrency.lockutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:12 compute-0 nova_compute[259850]: 2025-10-11 04:05:12.830 2 DEBUG oslo_concurrency.lockutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:12 compute-0 nova_compute[259850]: 2025-10-11 04:05:12.838 2 DEBUG nova.virt.hardware [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:05:12 compute-0 nova_compute[259850]: 2025-10-11 04:05:12.839 2 INFO nova.compute.claims [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:05:12 compute-0 nova_compute[259850]: 2025-10-11 04:05:12.922 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155512.9195347, f362654b-5459-4295-a15a-50dce3bd4232 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:05:12 compute-0 nova_compute[259850]: 2025-10-11 04:05:12.923 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: f362654b-5459-4295-a15a-50dce3bd4232] VM Resumed (Lifecycle Event)
Oct 11 04:05:12 compute-0 nova_compute[259850]: 2025-10-11 04:05:12.929 2 DEBUG nova.compute.manager [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:05:12 compute-0 nova_compute[259850]: 2025-10-11 04:05:12.930 2 DEBUG nova.virt.libvirt.driver [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:05:12 compute-0 nova_compute[259850]: 2025-10-11 04:05:12.934 2 INFO nova.virt.libvirt.driver [-] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Instance spawned successfully.
Oct 11 04:05:12 compute-0 nova_compute[259850]: 2025-10-11 04:05:12.934 2 DEBUG nova.virt.libvirt.driver [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:05:12 compute-0 nova_compute[259850]: 2025-10-11 04:05:12.958 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:05:12 compute-0 nova_compute[259850]: 2025-10-11 04:05:12.963 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:05:12 compute-0 ceph-mon[74273]: pgmap v1027: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 142 op/s
Oct 11 04:05:12 compute-0 nova_compute[259850]: 2025-10-11 04:05:12.967 2 DEBUG oslo_concurrency.processutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.017 2 DEBUG nova.virt.libvirt.driver [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.018 2 DEBUG nova.virt.libvirt.driver [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.019 2 DEBUG nova.virt.libvirt.driver [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.020 2 DEBUG nova.virt.libvirt.driver [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.020 2 DEBUG nova.virt.libvirt.driver [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.021 2 DEBUG nova.virt.libvirt.driver [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.027 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: f362654b-5459-4295-a15a-50dce3bd4232] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.028 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155512.9252775, f362654b-5459-4295-a15a-50dce3bd4232 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.028 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: f362654b-5459-4295-a15a-50dce3bd4232] VM Started (Lifecycle Event)
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.067 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.070 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.098 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: f362654b-5459-4295-a15a-50dce3bd4232] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.118 2 INFO nova.compute.manager [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Took 3.34 seconds to spawn the instance on the hypervisor.
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.118 2 DEBUG nova.compute.manager [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.185 2 INFO nova.compute.manager [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Took 4.32 seconds to build instance.
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.210 2 DEBUG oslo_concurrency.lockutils [None req-b592fa62-f727-4367-bf9b-b90887b2e6d4 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Lock "f362654b-5459-4295-a15a-50dce3bd4232" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.425s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:05:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1029850385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.430 2 DEBUG oslo_concurrency.processutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.434 2 DEBUG nova.compute.provider_tree [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:05:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 179 op/s
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.451 2 DEBUG nova.scheduler.client.report [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.476 2 DEBUG oslo_concurrency.lockutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.476 2 DEBUG nova.compute.manager [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.528 2 DEBUG nova.compute.manager [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.528 2 DEBUG nova.network.neutron [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.548 2 INFO nova.virt.libvirt.driver [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.567 2 DEBUG nova.compute.manager [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.663 2 DEBUG nova.compute.manager [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.664 2 DEBUG nova.virt.libvirt.driver [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.665 2 INFO nova.virt.libvirt.driver [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Creating image(s)
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.686 2 DEBUG nova.storage.rbd_utils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] rbd image e7add65d-7f64-44b0-960b-62ab3f67e50e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.707 2 DEBUG nova.storage.rbd_utils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] rbd image e7add65d-7f64-44b0-960b-62ab3f67e50e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.728 2 DEBUG nova.storage.rbd_utils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] rbd image e7add65d-7f64-44b0-960b-62ab3f67e50e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.731 2 DEBUG oslo_concurrency.processutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.750 2 DEBUG nova.policy [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '715d3ecfd40048a08fd0c9f8dc437cd6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a56f57f119b24e77bd165887162ef538', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.784 2 DEBUG oslo_concurrency.processutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.785 2 DEBUG oslo_concurrency.lockutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Acquiring lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.786 2 DEBUG oslo_concurrency.lockutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.787 2 DEBUG oslo_concurrency.lockutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.812 2 DEBUG nova.storage.rbd_utils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] rbd image e7add65d-7f64-44b0-960b-62ab3f67e50e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:05:13 compute-0 nova_compute[259850]: 2025-10-11 04:05:13.817 2 DEBUG oslo_concurrency.processutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac e7add65d-7f64-44b0-960b-62ab3f67e50e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:13 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1029850385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:05:14 compute-0 nova_compute[259850]: 2025-10-11 04:05:14.087 2 DEBUG oslo_concurrency.processutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac e7add65d-7f64-44b0-960b-62ab3f67e50e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:14 compute-0 nova_compute[259850]: 2025-10-11 04:05:14.134 2 DEBUG oslo_concurrency.lockutils [None req-114c24db-8160-4ac2-879f-fadfc9f7eab9 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Acquiring lock "f362654b-5459-4295-a15a-50dce3bd4232" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:14 compute-0 nova_compute[259850]: 2025-10-11 04:05:14.134 2 DEBUG oslo_concurrency.lockutils [None req-114c24db-8160-4ac2-879f-fadfc9f7eab9 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Lock "f362654b-5459-4295-a15a-50dce3bd4232" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:14 compute-0 nova_compute[259850]: 2025-10-11 04:05:14.135 2 DEBUG oslo_concurrency.lockutils [None req-114c24db-8160-4ac2-879f-fadfc9f7eab9 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Acquiring lock "f362654b-5459-4295-a15a-50dce3bd4232-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:14 compute-0 nova_compute[259850]: 2025-10-11 04:05:14.135 2 DEBUG oslo_concurrency.lockutils [None req-114c24db-8160-4ac2-879f-fadfc9f7eab9 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Lock "f362654b-5459-4295-a15a-50dce3bd4232-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:14 compute-0 nova_compute[259850]: 2025-10-11 04:05:14.135 2 DEBUG oslo_concurrency.lockutils [None req-114c24db-8160-4ac2-879f-fadfc9f7eab9 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Lock "f362654b-5459-4295-a15a-50dce3bd4232-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:14 compute-0 nova_compute[259850]: 2025-10-11 04:05:14.138 2 INFO nova.compute.manager [None req-114c24db-8160-4ac2-879f-fadfc9f7eab9 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Terminating instance
Oct 11 04:05:14 compute-0 nova_compute[259850]: 2025-10-11 04:05:14.139 2 DEBUG oslo_concurrency.lockutils [None req-114c24db-8160-4ac2-879f-fadfc9f7eab9 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Acquiring lock "refresh_cache-f362654b-5459-4295-a15a-50dce3bd4232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:05:14 compute-0 nova_compute[259850]: 2025-10-11 04:05:14.139 2 DEBUG oslo_concurrency.lockutils [None req-114c24db-8160-4ac2-879f-fadfc9f7eab9 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Acquired lock "refresh_cache-f362654b-5459-4295-a15a-50dce3bd4232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:05:14 compute-0 nova_compute[259850]: 2025-10-11 04:05:14.140 2 DEBUG nova.network.neutron [None req-114c24db-8160-4ac2-879f-fadfc9f7eab9 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:05:14 compute-0 nova_compute[259850]: 2025-10-11 04:05:14.192 2 DEBUG nova.storage.rbd_utils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] resizing rbd image e7add65d-7f64-44b0-960b-62ab3f67e50e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:05:14 compute-0 nova_compute[259850]: 2025-10-11 04:05:14.269 2 DEBUG nova.objects.instance [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lazy-loading 'migration_context' on Instance uuid e7add65d-7f64-44b0-960b-62ab3f67e50e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:05:14 compute-0 nova_compute[259850]: 2025-10-11 04:05:14.290 2 DEBUG nova.virt.libvirt.driver [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:05:14 compute-0 nova_compute[259850]: 2025-10-11 04:05:14.290 2 DEBUG nova.virt.libvirt.driver [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Ensure instance console log exists: /var/lib/nova/instances/e7add65d-7f64-44b0-960b-62ab3f67e50e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:05:14 compute-0 nova_compute[259850]: 2025-10-11 04:05:14.291 2 DEBUG oslo_concurrency.lockutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:14 compute-0 nova_compute[259850]: 2025-10-11 04:05:14.291 2 DEBUG oslo_concurrency.lockutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:14 compute-0 nova_compute[259850]: 2025-10-11 04:05:14.291 2 DEBUG oslo_concurrency.lockutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:14 compute-0 podman[271501]: 2025-10-11 04:05:14.396872719 +0000 UTC m=+0.095146682 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 11 04:05:14 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:14.527 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8a473e03-2208-47ae-afcd-05ad744a5969, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:05:14 compute-0 nova_compute[259850]: 2025-10-11 04:05:14.596 2 DEBUG nova.network.neutron [None req-114c24db-8160-4ac2-879f-fadfc9f7eab9 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:05:14 compute-0 nova_compute[259850]: 2025-10-11 04:05:14.639 2 DEBUG nova.network.neutron [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Successfully created port: f42ee9e2-a84a-41b6-ba15-7baeab44cb80 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:05:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:05:14 compute-0 nova_compute[259850]: 2025-10-11 04:05:14.859 2 DEBUG nova.network.neutron [None req-114c24db-8160-4ac2-879f-fadfc9f7eab9 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:05:14 compute-0 nova_compute[259850]: 2025-10-11 04:05:14.883 2 DEBUG oslo_concurrency.lockutils [None req-114c24db-8160-4ac2-879f-fadfc9f7eab9 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Releasing lock "refresh_cache-f362654b-5459-4295-a15a-50dce3bd4232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:05:14 compute-0 nova_compute[259850]: 2025-10-11 04:05:14.884 2 DEBUG nova.compute.manager [None req-114c24db-8160-4ac2-879f-fadfc9f7eab9 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:05:14 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Oct 11 04:05:14 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 2.834s CPU time.
Oct 11 04:05:14 compute-0 systemd-machined[214869]: Machine qemu-3-instance-00000003 terminated.
Oct 11 04:05:15 compute-0 ceph-mon[74273]: pgmap v1028: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 179 op/s
Oct 11 04:05:15 compute-0 nova_compute[259850]: 2025-10-11 04:05:15.058 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:05:15 compute-0 nova_compute[259850]: 2025-10-11 04:05:15.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:05:15 compute-0 nova_compute[259850]: 2025-10-11 04:05:15.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:05:15 compute-0 nova_compute[259850]: 2025-10-11 04:05:15.059 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:05:15 compute-0 nova_compute[259850]: 2025-10-11 04:05:15.105 2 INFO nova.virt.libvirt.driver [-] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Instance destroyed successfully.
Oct 11 04:05:15 compute-0 nova_compute[259850]: 2025-10-11 04:05:15.105 2 DEBUG nova.objects.instance [None req-114c24db-8160-4ac2-879f-fadfc9f7eab9 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Lazy-loading 'resources' on Instance uuid f362654b-5459-4295-a15a-50dce3bd4232 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:05:15 compute-0 nova_compute[259850]: 2025-10-11 04:05:15.410 2 INFO nova.virt.libvirt.driver [None req-114c24db-8160-4ac2-879f-fadfc9f7eab9 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Deleting instance files /var/lib/nova/instances/f362654b-5459-4295-a15a-50dce3bd4232_del
Oct 11 04:05:15 compute-0 nova_compute[259850]: 2025-10-11 04:05:15.411 2 INFO nova.virt.libvirt.driver [None req-114c24db-8160-4ac2-879f-fadfc9f7eab9 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Deletion of /var/lib/nova/instances/f362654b-5459-4295-a15a-50dce3bd4232_del complete
Oct 11 04:05:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 11 04:05:15 compute-0 nova_compute[259850]: 2025-10-11 04:05:15.473 2 INFO nova.compute.manager [None req-114c24db-8160-4ac2-879f-fadfc9f7eab9 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Took 0.59 seconds to destroy the instance on the hypervisor.
Oct 11 04:05:15 compute-0 nova_compute[259850]: 2025-10-11 04:05:15.473 2 DEBUG oslo.service.loopingcall [None req-114c24db-8160-4ac2-879f-fadfc9f7eab9 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:05:15 compute-0 nova_compute[259850]: 2025-10-11 04:05:15.474 2 DEBUG nova.compute.manager [-] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:05:15 compute-0 nova_compute[259850]: 2025-10-11 04:05:15.474 2 DEBUG nova.network.neutron [-] [instance: f362654b-5459-4295-a15a-50dce3bd4232] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:05:15 compute-0 nova_compute[259850]: 2025-10-11 04:05:15.740 2 DEBUG nova.network.neutron [-] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:05:15 compute-0 nova_compute[259850]: 2025-10-11 04:05:15.869 2 DEBUG nova.network.neutron [-] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:05:15 compute-0 nova_compute[259850]: 2025-10-11 04:05:15.925 2 INFO nova.compute.manager [-] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Took 0.45 seconds to deallocate network for instance.
Oct 11 04:05:16 compute-0 nova_compute[259850]: 2025-10-11 04:05:16.008 2 DEBUG oslo_concurrency.lockutils [None req-114c24db-8160-4ac2-879f-fadfc9f7eab9 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:16 compute-0 nova_compute[259850]: 2025-10-11 04:05:16.009 2 DEBUG oslo_concurrency.lockutils [None req-114c24db-8160-4ac2-879f-fadfc9f7eab9 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:16 compute-0 nova_compute[259850]: 2025-10-11 04:05:16.023 2 DEBUG nova.network.neutron [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Successfully updated port: f42ee9e2-a84a-41b6-ba15-7baeab44cb80 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:05:16 compute-0 nova_compute[259850]: 2025-10-11 04:05:16.045 2 DEBUG oslo_concurrency.lockutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Acquiring lock "refresh_cache-e7add65d-7f64-44b0-960b-62ab3f67e50e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:05:16 compute-0 nova_compute[259850]: 2025-10-11 04:05:16.045 2 DEBUG oslo_concurrency.lockutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Acquired lock "refresh_cache-e7add65d-7f64-44b0-960b-62ab3f67e50e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:05:16 compute-0 nova_compute[259850]: 2025-10-11 04:05:16.046 2 DEBUG nova.network.neutron [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:05:16 compute-0 nova_compute[259850]: 2025-10-11 04:05:16.055 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:05:16 compute-0 nova_compute[259850]: 2025-10-11 04:05:16.058 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:05:16 compute-0 nova_compute[259850]: 2025-10-11 04:05:16.088 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:16 compute-0 nova_compute[259850]: 2025-10-11 04:05:16.102 2 DEBUG oslo_concurrency.processutils [None req-114c24db-8160-4ac2-879f-fadfc9f7eab9 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:16 compute-0 nova_compute[259850]: 2025-10-11 04:05:16.218 2 DEBUG nova.compute.manager [req-9f116b35-8a89-4951-9c1e-4f4c668e1342 req-9d71827d-d3a8-42b7-ab14-caccba607c30 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Received event network-changed-f42ee9e2-a84a-41b6-ba15-7baeab44cb80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:05:16 compute-0 nova_compute[259850]: 2025-10-11 04:05:16.219 2 DEBUG nova.compute.manager [req-9f116b35-8a89-4951-9c1e-4f4c668e1342 req-9d71827d-d3a8-42b7-ab14-caccba607c30 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Refreshing instance network info cache due to event network-changed-f42ee9e2-a84a-41b6-ba15-7baeab44cb80. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:05:16 compute-0 nova_compute[259850]: 2025-10-11 04:05:16.219 2 DEBUG oslo_concurrency.lockutils [req-9f116b35-8a89-4951-9c1e-4f4c668e1342 req-9d71827d-d3a8-42b7-ab14-caccba607c30 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-e7add65d-7f64-44b0-960b-62ab3f67e50e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:05:16 compute-0 nova_compute[259850]: 2025-10-11 04:05:16.249 2 DEBUG nova.network.neutron [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:05:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:05:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/538782461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:05:16 compute-0 nova_compute[259850]: 2025-10-11 04:05:16.527 2 DEBUG oslo_concurrency.processutils [None req-114c24db-8160-4ac2-879f-fadfc9f7eab9 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:16 compute-0 nova_compute[259850]: 2025-10-11 04:05:16.535 2 DEBUG nova.compute.provider_tree [None req-114c24db-8160-4ac2-879f-fadfc9f7eab9 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:05:16 compute-0 nova_compute[259850]: 2025-10-11 04:05:16.553 2 DEBUG nova.scheduler.client.report [None req-114c24db-8160-4ac2-879f-fadfc9f7eab9 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:05:16 compute-0 nova_compute[259850]: 2025-10-11 04:05:16.571 2 DEBUG oslo_concurrency.lockutils [None req-114c24db-8160-4ac2-879f-fadfc9f7eab9 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:16 compute-0 nova_compute[259850]: 2025-10-11 04:05:16.574 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.486s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:16 compute-0 nova_compute[259850]: 2025-10-11 04:05:16.574 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:16 compute-0 nova_compute[259850]: 2025-10-11 04:05:16.575 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:05:16 compute-0 nova_compute[259850]: 2025-10-11 04:05:16.575 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:16 compute-0 nova_compute[259850]: 2025-10-11 04:05:16.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:16 compute-0 nova_compute[259850]: 2025-10-11 04:05:16.623 2 INFO nova.scheduler.client.report [None req-114c24db-8160-4ac2-879f-fadfc9f7eab9 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Deleted allocations for instance f362654b-5459-4295-a15a-50dce3bd4232
Oct 11 04:05:16 compute-0 nova_compute[259850]: 2025-10-11 04:05:16.703 2 DEBUG oslo_concurrency.lockutils [None req-114c24db-8160-4ac2-879f-fadfc9f7eab9 5b55f8af0b6741c58fd7d7756dc5b302 dcd539919ebc4a97ab7c54b2325dfcd1 - - default default] Lock "f362654b-5459-4295-a15a-50dce3bd4232" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:16 compute-0 nova_compute[259850]: 2025-10-11 04:05:16.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:05:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2868787296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:05:17 compute-0 ceph-mon[74273]: pgmap v1029: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 11 04:05:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/538782461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:05:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2868787296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.022 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.200 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.201 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4661MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.201 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.201 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.264 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Instance e7add65d-7f64-44b0-960b-62ab3f67e50e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.265 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.265 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.340 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.456 2 DEBUG nova.network.neutron [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Updating instance_info_cache with network_info: [{"id": "f42ee9e2-a84a-41b6-ba15-7baeab44cb80", "address": "fa:16:3e:00:3c:29", "network": {"id": "01ca7d7a-ab7e-4753-9e65-58d83786bdc8", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1985263256-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a56f57f119b24e77bd165887162ef538", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf42ee9e2-a8", "ovs_interfaceid": "f42ee9e2-a84a-41b6-ba15-7baeab44cb80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.488 2 DEBUG oslo_concurrency.lockutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Releasing lock "refresh_cache-e7add65d-7f64-44b0-960b-62ab3f67e50e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.489 2 DEBUG nova.compute.manager [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Instance network_info: |[{"id": "f42ee9e2-a84a-41b6-ba15-7baeab44cb80", "address": "fa:16:3e:00:3c:29", "network": {"id": "01ca7d7a-ab7e-4753-9e65-58d83786bdc8", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1985263256-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a56f57f119b24e77bd165887162ef538", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf42ee9e2-a8", "ovs_interfaceid": "f42ee9e2-a84a-41b6-ba15-7baeab44cb80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.490 2 DEBUG oslo_concurrency.lockutils [req-9f116b35-8a89-4951-9c1e-4f4c668e1342 req-9d71827d-d3a8-42b7-ab14-caccba607c30 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-e7add65d-7f64-44b0-960b-62ab3f67e50e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.491 2 DEBUG nova.network.neutron [req-9f116b35-8a89-4951-9c1e-4f4c668e1342 req-9d71827d-d3a8-42b7-ab14-caccba607c30 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Refreshing network info cache for port f42ee9e2-a84a-41b6-ba15-7baeab44cb80 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.498 2 DEBUG nova.virt.libvirt.driver [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Start _get_guest_xml network_info=[{"id": "f42ee9e2-a84a-41b6-ba15-7baeab44cb80", "address": "fa:16:3e:00:3c:29", "network": {"id": "01ca7d7a-ab7e-4753-9e65-58d83786bdc8", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1985263256-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a56f57f119b24e77bd165887162ef538", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf42ee9e2-a8", "ovs_interfaceid": "f42ee9e2-a84a-41b6-ba15-7baeab44cb80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'image_id': '1a107e2f-1a9d-4b6f-861d-e64bee7d56be'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.504 2 WARNING nova.virt.libvirt.driver [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.509 2 DEBUG nova.virt.libvirt.host [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.510 2 DEBUG nova.virt.libvirt.host [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.513 2 DEBUG nova.virt.libvirt.host [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.514 2 DEBUG nova.virt.libvirt.host [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.515 2 DEBUG nova.virt.libvirt.driver [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.515 2 DEBUG nova.virt.hardware [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.516 2 DEBUG nova.virt.hardware [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.516 2 DEBUG nova.virt.hardware [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.517 2 DEBUG nova.virt.hardware [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.517 2 DEBUG nova.virt.hardware [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.518 2 DEBUG nova.virt.hardware [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.518 2 DEBUG nova.virt.hardware [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.519 2 DEBUG nova.virt.hardware [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.519 2 DEBUG nova.virt.hardware [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.520 2 DEBUG nova.virt.hardware [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.520 2 DEBUG nova.virt.hardware [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.526 2 DEBUG oslo_concurrency.processutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:05:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/752079066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.834 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.842 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.862 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.886 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.888 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:05:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/156567156' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:05:17 compute-0 nova_compute[259850]: 2025-10-11 04:05:17.978 2 DEBUG oslo_concurrency.processutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Oct 11 04:05:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/752079066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:05:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/156567156' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.010 2 DEBUG nova.storage.rbd_utils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] rbd image e7add65d-7f64-44b0-960b-62ab3f67e50e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.015 2 DEBUG oslo_concurrency.processutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Oct 11 04:05:18 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Oct 11 04:05:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:05:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2172836810' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.505 2 DEBUG oslo_concurrency.processutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.508 2 DEBUG nova.virt.libvirt.vif [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:05:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1033832338',display_name='tempest-VolumesActionsTest-instance-1033832338',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1033832338',id=4,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a56f57f119b24e77bd165887162ef538',ramdisk_id='',reservation_id='r-81tx0alw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-27294957',owner_user_name='tempest-VolumesActionsTest-27294957-project-
member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:05:13Z,user_data=None,user_id='715d3ecfd40048a08fd0c9f8dc437cd6',uuid=e7add65d-7f64-44b0-960b-62ab3f67e50e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f42ee9e2-a84a-41b6-ba15-7baeab44cb80", "address": "fa:16:3e:00:3c:29", "network": {"id": "01ca7d7a-ab7e-4753-9e65-58d83786bdc8", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1985263256-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a56f57f119b24e77bd165887162ef538", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf42ee9e2-a8", "ovs_interfaceid": "f42ee9e2-a84a-41b6-ba15-7baeab44cb80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.509 2 DEBUG nova.network.os_vif_util [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Converting VIF {"id": "f42ee9e2-a84a-41b6-ba15-7baeab44cb80", "address": "fa:16:3e:00:3c:29", "network": {"id": "01ca7d7a-ab7e-4753-9e65-58d83786bdc8", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1985263256-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a56f57f119b24e77bd165887162ef538", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf42ee9e2-a8", "ovs_interfaceid": "f42ee9e2-a84a-41b6-ba15-7baeab44cb80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.510 2 DEBUG nova.network.os_vif_util [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:3c:29,bridge_name='br-int',has_traffic_filtering=True,id=f42ee9e2-a84a-41b6-ba15-7baeab44cb80,network=Network(01ca7d7a-ab7e-4753-9e65-58d83786bdc8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf42ee9e2-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.512 2 DEBUG nova.objects.instance [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lazy-loading 'pci_devices' on Instance uuid e7add65d-7f64-44b0-960b-62ab3f67e50e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.528 2 DEBUG nova.virt.libvirt.driver [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:05:18 compute-0 nova_compute[259850]:   <uuid>e7add65d-7f64-44b0-960b-62ab3f67e50e</uuid>
Oct 11 04:05:18 compute-0 nova_compute[259850]:   <name>instance-00000004</name>
Oct 11 04:05:18 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:05:18 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:05:18 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <nova:name>tempest-VolumesActionsTest-instance-1033832338</nova:name>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:05:17</nova:creationTime>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:05:18 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:05:18 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:05:18 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:05:18 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:05:18 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:05:18 compute-0 nova_compute[259850]:         <nova:user uuid="715d3ecfd40048a08fd0c9f8dc437cd6">tempest-VolumesActionsTest-27294957-project-member</nova:user>
Oct 11 04:05:18 compute-0 nova_compute[259850]:         <nova:project uuid="a56f57f119b24e77bd165887162ef538">tempest-VolumesActionsTest-27294957</nova:project>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <nova:root type="image" uuid="1a107e2f-1a9d-4b6f-861d-e64bee7d56be"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <nova:ports>
Oct 11 04:05:18 compute-0 nova_compute[259850]:         <nova:port uuid="f42ee9e2-a84a-41b6-ba15-7baeab44cb80">
Oct 11 04:05:18 compute-0 nova_compute[259850]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:         </nova:port>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       </nova:ports>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:05:18 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:05:18 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <system>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <entry name="serial">e7add65d-7f64-44b0-960b-62ab3f67e50e</entry>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <entry name="uuid">e7add65d-7f64-44b0-960b-62ab3f67e50e</entry>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     </system>
Oct 11 04:05:18 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:05:18 compute-0 nova_compute[259850]:   <os>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:   </os>
Oct 11 04:05:18 compute-0 nova_compute[259850]:   <features>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:   </features>
Oct 11 04:05:18 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:05:18 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:05:18 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/e7add65d-7f64-44b0-960b-62ab3f67e50e_disk">
Oct 11 04:05:18 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       </source>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:05:18 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/e7add65d-7f64-44b0-960b-62ab3f67e50e_disk.config">
Oct 11 04:05:18 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       </source>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:05:18 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <interface type="ethernet">
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <mac address="fa:16:3e:00:3c:29"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <mtu size="1442"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <target dev="tapf42ee9e2-a8"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     </interface>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/e7add65d-7f64-44b0-960b-62ab3f67e50e/console.log" append="off"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <video>
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     </video>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:05:18 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:05:18 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:05:18 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:05:18 compute-0 nova_compute[259850]: </domain>
Oct 11 04:05:18 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.530 2 DEBUG nova.compute.manager [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Preparing to wait for external event network-vif-plugged-f42ee9e2-a84a-41b6-ba15-7baeab44cb80 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.531 2 DEBUG oslo_concurrency.lockutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Acquiring lock "e7add65d-7f64-44b0-960b-62ab3f67e50e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.531 2 DEBUG oslo_concurrency.lockutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "e7add65d-7f64-44b0-960b-62ab3f67e50e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.532 2 DEBUG oslo_concurrency.lockutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "e7add65d-7f64-44b0-960b-62ab3f67e50e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.533 2 DEBUG nova.virt.libvirt.vif [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:05:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1033832338',display_name='tempest-VolumesActionsTest-instance-1033832338',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1033832338',id=4,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a56f57f119b24e77bd165887162ef538',ramdisk_id='',reservation_id='r-81tx0alw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-27294957',owner_user_name='tempest-VolumesActionsTest-27294957-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:05:13Z,user_data=None,user_id='715d3ecfd40048a08fd0c9f8dc437cd6',uuid=e7add65d-7f64-44b0-960b-62ab3f67e50e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f42ee9e2-a84a-41b6-ba15-7baeab44cb80", "address": "fa:16:3e:00:3c:29", "network": {"id": "01ca7d7a-ab7e-4753-9e65-58d83786bdc8", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1985263256-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a56f57f119b24e77bd165887162ef538", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf42ee9e2-a8", "ovs_interfaceid": "f42ee9e2-a84a-41b6-ba15-7baeab44cb80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.534 2 DEBUG nova.network.os_vif_util [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Converting VIF {"id": "f42ee9e2-a84a-41b6-ba15-7baeab44cb80", "address": "fa:16:3e:00:3c:29", "network": {"id": "01ca7d7a-ab7e-4753-9e65-58d83786bdc8", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1985263256-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a56f57f119b24e77bd165887162ef538", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf42ee9e2-a8", "ovs_interfaceid": "f42ee9e2-a84a-41b6-ba15-7baeab44cb80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.535 2 DEBUG nova.network.os_vif_util [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:3c:29,bridge_name='br-int',has_traffic_filtering=True,id=f42ee9e2-a84a-41b6-ba15-7baeab44cb80,network=Network(01ca7d7a-ab7e-4753-9e65-58d83786bdc8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf42ee9e2-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.535 2 DEBUG os_vif [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:3c:29,bridge_name='br-int',has_traffic_filtering=True,id=f42ee9e2-a84a-41b6-ba15-7baeab44cb80,network=Network(01ca7d7a-ab7e-4753-9e65-58d83786bdc8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf42ee9e2-a8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.537 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.538 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.541 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf42ee9e2-a8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.542 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf42ee9e2-a8, col_values=(('external_ids', {'iface-id': 'f42ee9e2-a84a-41b6-ba15-7baeab44cb80', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:3c:29', 'vm-uuid': 'e7add65d-7f64-44b0-960b-62ab3f67e50e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:18 compute-0 NetworkManager[44920]: <info>  [1760155518.5449] manager: (tapf42ee9e2-a8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.554 2 INFO os_vif [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:3c:29,bridge_name='br-int',has_traffic_filtering=True,id=f42ee9e2-a84a-41b6-ba15-7baeab44cb80,network=Network(01ca7d7a-ab7e-4753-9e65-58d83786bdc8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf42ee9e2-a8')
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.605 2 DEBUG nova.virt.libvirt.driver [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.605 2 DEBUG nova.virt.libvirt.driver [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.605 2 DEBUG nova.virt.libvirt.driver [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] No VIF found with MAC fa:16:3e:00:3c:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.606 2 INFO nova.virt.libvirt.driver [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Using config drive
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.630 2 DEBUG nova.storage.rbd_utils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] rbd image e7add65d-7f64-44b0-960b-62ab3f67e50e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.889 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.890 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.890 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.910 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.910 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.911 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:05:18 compute-0 nova_compute[259850]: 2025-10-11 04:05:18.912 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:05:19 compute-0 ceph-mon[74273]: pgmap v1030: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 11 04:05:19 compute-0 ceph-mon[74273]: osdmap e172: 3 total, 3 up, 3 in
Oct 11 04:05:19 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2172836810' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:05:19 compute-0 nova_compute[259850]: 2025-10-11 04:05:19.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:05:19 compute-0 nova_compute[259850]: 2025-10-11 04:05:19.120 2 INFO nova.virt.libvirt.driver [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Creating config drive at /var/lib/nova/instances/e7add65d-7f64-44b0-960b-62ab3f67e50e/disk.config
Oct 11 04:05:19 compute-0 nova_compute[259850]: 2025-10-11 04:05:19.129 2 DEBUG oslo_concurrency.processutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e7add65d-7f64-44b0-960b-62ab3f67e50e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnvutp9uc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:19 compute-0 nova_compute[259850]: 2025-10-11 04:05:19.261 2 DEBUG oslo_concurrency.processutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e7add65d-7f64-44b0-960b-62ab3f67e50e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnvutp9uc" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:19 compute-0 nova_compute[259850]: 2025-10-11 04:05:19.290 2 DEBUG nova.storage.rbd_utils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] rbd image e7add65d-7f64-44b0-960b-62ab3f67e50e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:05:19 compute-0 nova_compute[259850]: 2025-10-11 04:05:19.296 2 DEBUG oslo_concurrency.processutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e7add65d-7f64-44b0-960b-62ab3f67e50e/disk.config e7add65d-7f64-44b0-960b-62ab3f67e50e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:19 compute-0 nova_compute[259850]: 2025-10-11 04:05:19.449 2 DEBUG nova.network.neutron [req-9f116b35-8a89-4951-9c1e-4f4c668e1342 req-9d71827d-d3a8-42b7-ab14-caccba607c30 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Updated VIF entry in instance network info cache for port f42ee9e2-a84a-41b6-ba15-7baeab44cb80. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:05:19 compute-0 nova_compute[259850]: 2025-10-11 04:05:19.450 2 DEBUG nova.network.neutron [req-9f116b35-8a89-4951-9c1e-4f4c668e1342 req-9d71827d-d3a8-42b7-ab14-caccba607c30 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Updating instance_info_cache with network_info: [{"id": "f42ee9e2-a84a-41b6-ba15-7baeab44cb80", "address": "fa:16:3e:00:3c:29", "network": {"id": "01ca7d7a-ab7e-4753-9e65-58d83786bdc8", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1985263256-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a56f57f119b24e77bd165887162ef538", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf42ee9e2-a8", "ovs_interfaceid": "f42ee9e2-a84a-41b6-ba15-7baeab44cb80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:05:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 88 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 195 op/s
Oct 11 04:05:19 compute-0 nova_compute[259850]: 2025-10-11 04:05:19.473 2 DEBUG oslo_concurrency.lockutils [req-9f116b35-8a89-4951-9c1e-4f4c668e1342 req-9d71827d-d3a8-42b7-ab14-caccba607c30 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-e7add65d-7f64-44b0-960b-62ab3f67e50e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:05:19 compute-0 nova_compute[259850]: 2025-10-11 04:05:19.483 2 DEBUG oslo_concurrency.processutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e7add65d-7f64-44b0-960b-62ab3f67e50e/disk.config e7add65d-7f64-44b0-960b-62ab3f67e50e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:19 compute-0 nova_compute[259850]: 2025-10-11 04:05:19.484 2 INFO nova.virt.libvirt.driver [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Deleting local config drive /var/lib/nova/instances/e7add65d-7f64-44b0-960b-62ab3f67e50e/disk.config because it was imported into RBD.
Oct 11 04:05:19 compute-0 kernel: tapf42ee9e2-a8: entered promiscuous mode
Oct 11 04:05:19 compute-0 ovn_controller[152025]: 2025-10-11T04:05:19Z|00044|binding|INFO|Claiming lport f42ee9e2-a84a-41b6-ba15-7baeab44cb80 for this chassis.
Oct 11 04:05:19 compute-0 ovn_controller[152025]: 2025-10-11T04:05:19Z|00045|binding|INFO|f42ee9e2-a84a-41b6-ba15-7baeab44cb80: Claiming fa:16:3e:00:3c:29 10.100.0.13
Oct 11 04:05:19 compute-0 nova_compute[259850]: 2025-10-11 04:05:19.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:19 compute-0 NetworkManager[44920]: <info>  [1760155519.5542] manager: (tapf42ee9e2-a8): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:19.566 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:3c:29 10.100.0.13'], port_security=['fa:16:3e:00:3c:29 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e7add65d-7f64-44b0-960b-62ab3f67e50e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01ca7d7a-ab7e-4753-9e65-58d83786bdc8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a56f57f119b24e77bd165887162ef538', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1d55b596-dded-4eab-874b-8812dbd6943d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3a8a1b08-6831-48aa-9bdb-0e38b6956a06, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=f42ee9e2-a84a-41b6-ba15-7baeab44cb80) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:19.568 161902 INFO neutron.agent.ovn.metadata.agent [-] Port f42ee9e2-a84a-41b6-ba15-7baeab44cb80 in datapath 01ca7d7a-ab7e-4753-9e65-58d83786bdc8 bound to our chassis
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:19.570 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01ca7d7a-ab7e-4753-9e65-58d83786bdc8
Oct 11 04:05:19 compute-0 ovn_controller[152025]: 2025-10-11T04:05:19Z|00046|binding|INFO|Setting lport f42ee9e2-a84a-41b6-ba15-7baeab44cb80 ovn-installed in OVS
Oct 11 04:05:19 compute-0 ovn_controller[152025]: 2025-10-11T04:05:19Z|00047|binding|INFO|Setting lport f42ee9e2-a84a-41b6-ba15-7baeab44cb80 up in Southbound
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:19.589 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[36718219-da3a-4311-b3a9-68236918a668]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:19.590 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap01ca7d7a-a1 in ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 04:05:19 compute-0 nova_compute[259850]: 2025-10-11 04:05:19.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:19.595 267637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap01ca7d7a-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:19.595 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[19806118-30ae-4632-97fc-49fbe120beeb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:19 compute-0 nova_compute[259850]: 2025-10-11 04:05:19.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:19.598 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[662765ad-c63c-4135-88eb-6db4bdbd0094]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:19 compute-0 systemd-machined[214869]: New machine qemu-4-instance-00000004.
Oct 11 04:05:19 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:19.616 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[532b9f5f-fd1f-4371-8004-53154bd4e2f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:19 compute-0 systemd-udevd[271746]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:19.639 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[7a641ca3-45f4-4e79-8cc2-3f77cf8aae62]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:19 compute-0 NetworkManager[44920]: <info>  [1760155519.6516] device (tapf42ee9e2-a8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:05:19 compute-0 NetworkManager[44920]: <info>  [1760155519.6527] device (tapf42ee9e2-a8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:19.678 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[5dd1d522-b8e8-4921-ac6b-8b2f0202d3f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:19 compute-0 NetworkManager[44920]: <info>  [1760155519.6836] manager: (tap01ca7d7a-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/37)
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:19.683 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[c01ec04f-7633-4f19-b847-1612f875bd5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:19.714 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[18100b75-96a9-4125-9a2a-870515252c1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:19.716 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[fa7d105f-ecbe-46bf-a79e-cb0577a2252e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:05:19 compute-0 NetworkManager[44920]: <info>  [1760155519.7365] device (tap01ca7d7a-a0): carrier: link connected
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:19.745 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[6b263857-052e-495b-92ed-7327115ff466]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:19.761 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[306babef-d49d-4986-8158-2d6aa71dca12]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01ca7d7a-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:75:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 390477, 'reachable_time': 38293, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271776, 'error': None, 'target': 'ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:19.774 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[2721e406-9c19-437a-ab5b-e436192b8bd8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6c:75af'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 390477, 'tstamp': 390477}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271777, 'error': None, 'target': 'ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:19.790 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[5866581a-fe54-4ef8-b6b1-eabacc1f0c74]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01ca7d7a-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:75:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 390477, 'reachable_time': 38293, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271778, 'error': None, 'target': 'ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:19.821 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[d05f53f4-6a31-4556-af7b-36a9178cf851]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:19 compute-0 nova_compute[259850]: 2025-10-11 04:05:19.864 2 DEBUG nova.compute.manager [req-c03fd41b-42c2-4d38-a50d-85f955a57a4f req-1d50be4b-e506-4046-84ad-cad4d5405a94 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Received event network-vif-plugged-f42ee9e2-a84a-41b6-ba15-7baeab44cb80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:05:19 compute-0 nova_compute[259850]: 2025-10-11 04:05:19.864 2 DEBUG oslo_concurrency.lockutils [req-c03fd41b-42c2-4d38-a50d-85f955a57a4f req-1d50be4b-e506-4046-84ad-cad4d5405a94 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "e7add65d-7f64-44b0-960b-62ab3f67e50e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:19 compute-0 nova_compute[259850]: 2025-10-11 04:05:19.864 2 DEBUG oslo_concurrency.lockutils [req-c03fd41b-42c2-4d38-a50d-85f955a57a4f req-1d50be4b-e506-4046-84ad-cad4d5405a94 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e7add65d-7f64-44b0-960b-62ab3f67e50e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:19 compute-0 nova_compute[259850]: 2025-10-11 04:05:19.864 2 DEBUG oslo_concurrency.lockutils [req-c03fd41b-42c2-4d38-a50d-85f955a57a4f req-1d50be4b-e506-4046-84ad-cad4d5405a94 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e7add65d-7f64-44b0-960b-62ab3f67e50e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:19 compute-0 nova_compute[259850]: 2025-10-11 04:05:19.865 2 DEBUG nova.compute.manager [req-c03fd41b-42c2-4d38-a50d-85f955a57a4f req-1d50be4b-e506-4046-84ad-cad4d5405a94 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Processing event network-vif-plugged-f42ee9e2-a84a-41b6-ba15-7baeab44cb80 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:19.872 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[ec9e3ff1-5205-4479-beca-f2f79a7c9e22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:19.873 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01ca7d7a-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:19.873 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:19.874 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01ca7d7a-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:05:19 compute-0 NetworkManager[44920]: <info>  [1760155519.8769] manager: (tap01ca7d7a-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Oct 11 04:05:19 compute-0 kernel: tap01ca7d7a-a0: entered promiscuous mode
Oct 11 04:05:19 compute-0 nova_compute[259850]: 2025-10-11 04:05:19.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:19.880 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01ca7d7a-a0, col_values=(('external_ids', {'iface-id': 'bf00a62f-9880-4f65-9ef5-7c57c9ac1996'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:05:19 compute-0 ovn_controller[152025]: 2025-10-11T04:05:19Z|00048|binding|INFO|Releasing lport bf00a62f-9880-4f65-9ef5-7c57c9ac1996 from this chassis (sb_readonly=0)
Oct 11 04:05:19 compute-0 nova_compute[259850]: 2025-10-11 04:05:19.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:19.882 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/01ca7d7a-ab7e-4753-9e65-58d83786bdc8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/01ca7d7a-ab7e-4753-9e65-58d83786bdc8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:19.883 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[f3708c2b-8421-474a-b93f-705200d6521f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:19.884 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: global
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]:     log         /dev/log local0 debug
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]:     log-tag     haproxy-metadata-proxy-01ca7d7a-ab7e-4753-9e65-58d83786bdc8
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]:     user        root
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]:     group       root
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]:     maxconn     1024
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]:     pidfile     /var/lib/neutron/external/pids/01ca7d7a-ab7e-4753-9e65-58d83786bdc8.pid.haproxy
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]:     daemon
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: defaults
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]:     log global
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]:     mode http
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]:     option httplog
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]:     option dontlognull
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]:     option http-server-close
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]:     option forwardfor
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]:     retries                 3
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]:     timeout http-request    30s
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]:     timeout connect         30s
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]:     timeout client          32s
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]:     timeout server          32s
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]:     timeout http-keep-alive 30s
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: listen listener
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]:     bind 169.254.169.254:80
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]:     http-request add-header X-OVN-Network-ID 01ca7d7a-ab7e-4753-9e65-58d83786bdc8
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 04:05:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:19.884 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8', 'env', 'PROCESS_TAG=haproxy-01ca7d7a-ab7e-4753-9e65-58d83786bdc8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/01ca7d7a-ab7e-4753-9e65-58d83786bdc8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 04:05:19 compute-0 nova_compute[259850]: 2025-10-11 04:05:19.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:20 compute-0 podman[271853]: 2025-10-11 04:05:20.30240502 +0000 UTC m=+0.062902266 container create 36769bec9ff06cedb43d8bedc068920c4d57831293faad756ca4450df1133439 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:05:20 compute-0 systemd[1]: Started libpod-conmon-36769bec9ff06cedb43d8bedc068920c4d57831293faad756ca4450df1133439.scope.
Oct 11 04:05:20 compute-0 podman[271853]: 2025-10-11 04:05:20.268441377 +0000 UTC m=+0.028938683 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 04:05:20 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:05:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d79a8d945128347b3878a99e791961bbe731d2965469b7c797e64780d287411/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:05:20 compute-0 podman[271853]: 2025-10-11 04:05:20.423993133 +0000 UTC m=+0.184490359 container init 36769bec9ff06cedb43d8bedc068920c4d57831293faad756ca4450df1133439 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 04:05:20 compute-0 podman[271853]: 2025-10-11 04:05:20.429342533 +0000 UTC m=+0.189839749 container start 36769bec9ff06cedb43d8bedc068920c4d57831293faad756ca4450df1133439 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 11 04:05:20 compute-0 nova_compute[259850]: 2025-10-11 04:05:20.436 2 DEBUG nova.compute.manager [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:05:20 compute-0 nova_compute[259850]: 2025-10-11 04:05:20.437 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155520.4352825, e7add65d-7f64-44b0-960b-62ab3f67e50e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:05:20 compute-0 nova_compute[259850]: 2025-10-11 04:05:20.438 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] VM Started (Lifecycle Event)
Oct 11 04:05:20 compute-0 neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8[271868]: [NOTICE]   (271872) : New worker (271874) forked
Oct 11 04:05:20 compute-0 neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8[271868]: [NOTICE]   (271872) : Loading success.
Oct 11 04:05:20 compute-0 nova_compute[259850]: 2025-10-11 04:05:20.460 2 DEBUG nova.virt.libvirt.driver [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:05:20 compute-0 nova_compute[259850]: 2025-10-11 04:05:20.465 2 INFO nova.virt.libvirt.driver [-] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Instance spawned successfully.
Oct 11 04:05:20 compute-0 nova_compute[259850]: 2025-10-11 04:05:20.465 2 DEBUG nova.virt.libvirt.driver [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:05:20 compute-0 nova_compute[259850]: 2025-10-11 04:05:20.473 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:05:20 compute-0 nova_compute[259850]: 2025-10-11 04:05:20.476 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:05:20 compute-0 nova_compute[259850]: 2025-10-11 04:05:20.516 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:05:20 compute-0 nova_compute[259850]: 2025-10-11 04:05:20.517 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155520.4356728, e7add65d-7f64-44b0-960b-62ab3f67e50e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:05:20 compute-0 nova_compute[259850]: 2025-10-11 04:05:20.518 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] VM Paused (Lifecycle Event)
Oct 11 04:05:20 compute-0 nova_compute[259850]: 2025-10-11 04:05:20.522 2 DEBUG nova.virt.libvirt.driver [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:05:20 compute-0 nova_compute[259850]: 2025-10-11 04:05:20.523 2 DEBUG nova.virt.libvirt.driver [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:05:20 compute-0 nova_compute[259850]: 2025-10-11 04:05:20.523 2 DEBUG nova.virt.libvirt.driver [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:05:20 compute-0 nova_compute[259850]: 2025-10-11 04:05:20.524 2 DEBUG nova.virt.libvirt.driver [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:05:20 compute-0 nova_compute[259850]: 2025-10-11 04:05:20.524 2 DEBUG nova.virt.libvirt.driver [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:05:20 compute-0 nova_compute[259850]: 2025-10-11 04:05:20.524 2 DEBUG nova.virt.libvirt.driver [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:05:20 compute-0 nova_compute[259850]: 2025-10-11 04:05:20.539 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:05:20 compute-0 nova_compute[259850]: 2025-10-11 04:05:20.543 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155520.4416432, e7add65d-7f64-44b0-960b-62ab3f67e50e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:05:20 compute-0 nova_compute[259850]: 2025-10-11 04:05:20.543 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] VM Resumed (Lifecycle Event)
Oct 11 04:05:20 compute-0 nova_compute[259850]: 2025-10-11 04:05:20.582 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:05:20 compute-0 nova_compute[259850]: 2025-10-11 04:05:20.586 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:05:20 compute-0 nova_compute[259850]: 2025-10-11 04:05:20.590 2 INFO nova.compute.manager [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Took 6.93 seconds to spawn the instance on the hypervisor.
Oct 11 04:05:20 compute-0 nova_compute[259850]: 2025-10-11 04:05:20.591 2 DEBUG nova.compute.manager [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:05:20 compute-0 nova_compute[259850]: 2025-10-11 04:05:20.614 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:05:20 compute-0 nova_compute[259850]: 2025-10-11 04:05:20.677 2 INFO nova.compute.manager [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Took 7.87 seconds to build instance.
Oct 11 04:05:20 compute-0 nova_compute[259850]: 2025-10-11 04:05:20.699 2 DEBUG oslo_concurrency.lockutils [None req-9752be11-5777-4d60-8045-546a5440448d 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "e7add65d-7f64-44b0-960b-62ab3f67e50e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.964s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:05:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:05:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:05:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:05:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:05:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:05:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:05:20
Oct 11 04:05:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:05:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:05:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'images', 'vms', 'backups', 'cephfs.cephfs.data']
Oct 11 04:05:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:05:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:05:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:05:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:05:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:05:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:05:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:05:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:05:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:05:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:05:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:05:21 compute-0 ceph-mon[74273]: pgmap v1032: 305 pgs: 305 active+clean; 88 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 195 op/s
Oct 11 04:05:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 88 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 195 op/s
Oct 11 04:05:21 compute-0 nova_compute[259850]: 2025-10-11 04:05:21.517 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760155506.5161092, e607828c-0677-46ba-a7a0-b9d21be4149e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:05:21 compute-0 nova_compute[259850]: 2025-10-11 04:05:21.517 2 INFO nova.compute.manager [-] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] VM Stopped (Lifecycle Event)
Oct 11 04:05:21 compute-0 nova_compute[259850]: 2025-10-11 04:05:21.622 2 DEBUG nova.compute.manager [None req-89e4d862-902e-41bb-8249-4064ff5ba16e - - - - - -] [instance: e607828c-0677-46ba-a7a0-b9d21be4149e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:05:21 compute-0 nova_compute[259850]: 2025-10-11 04:05:21.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Oct 11 04:05:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Oct 11 04:05:22 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Oct 11 04:05:22 compute-0 nova_compute[259850]: 2025-10-11 04:05:22.155 2 DEBUG nova.compute.manager [req-b82bb9fc-28f9-4de6-984e-b8d930466068 req-2c934c3c-ba39-4e7d-8299-38af7309e2f7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Received event network-vif-plugged-f42ee9e2-a84a-41b6-ba15-7baeab44cb80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:05:22 compute-0 nova_compute[259850]: 2025-10-11 04:05:22.155 2 DEBUG oslo_concurrency.lockutils [req-b82bb9fc-28f9-4de6-984e-b8d930466068 req-2c934c3c-ba39-4e7d-8299-38af7309e2f7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "e7add65d-7f64-44b0-960b-62ab3f67e50e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:22 compute-0 nova_compute[259850]: 2025-10-11 04:05:22.155 2 DEBUG oslo_concurrency.lockutils [req-b82bb9fc-28f9-4de6-984e-b8d930466068 req-2c934c3c-ba39-4e7d-8299-38af7309e2f7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e7add65d-7f64-44b0-960b-62ab3f67e50e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:22 compute-0 nova_compute[259850]: 2025-10-11 04:05:22.155 2 DEBUG oslo_concurrency.lockutils [req-b82bb9fc-28f9-4de6-984e-b8d930466068 req-2c934c3c-ba39-4e7d-8299-38af7309e2f7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e7add65d-7f64-44b0-960b-62ab3f67e50e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:22 compute-0 nova_compute[259850]: 2025-10-11 04:05:22.155 2 DEBUG nova.compute.manager [req-b82bb9fc-28f9-4de6-984e-b8d930466068 req-2c934c3c-ba39-4e7d-8299-38af7309e2f7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] No waiting events found dispatching network-vif-plugged-f42ee9e2-a84a-41b6-ba15-7baeab44cb80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:05:22 compute-0 nova_compute[259850]: 2025-10-11 04:05:22.156 2 WARNING nova.compute.manager [req-b82bb9fc-28f9-4de6-984e-b8d930466068 req-2c934c3c-ba39-4e7d-8299-38af7309e2f7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Received unexpected event network-vif-plugged-f42ee9e2-a84a-41b6-ba15-7baeab44cb80 for instance with vm_state active and task_state None.
Oct 11 04:05:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:22.953 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:22.954 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:22.955 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:23 compute-0 ceph-mon[74273]: pgmap v1033: 305 pgs: 305 active+clean; 88 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 195 op/s
Oct 11 04:05:23 compute-0 ceph-mon[74273]: osdmap e173: 3 total, 3 up, 3 in
Oct 11 04:05:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.7 MiB/s wr, 344 op/s
Oct 11 04:05:23 compute-0 nova_compute[259850]: 2025-10-11 04:05:23.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:23 compute-0 nova_compute[259850]: 2025-10-11 04:05:23.907 2 DEBUG oslo_concurrency.lockutils [None req-a0a37b20-9ab8-48fa-9ee7-493e179e1e8c 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Acquiring lock "e7add65d-7f64-44b0-960b-62ab3f67e50e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:23 compute-0 nova_compute[259850]: 2025-10-11 04:05:23.908 2 DEBUG oslo_concurrency.lockutils [None req-a0a37b20-9ab8-48fa-9ee7-493e179e1e8c 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "e7add65d-7f64-44b0-960b-62ab3f67e50e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:23 compute-0 nova_compute[259850]: 2025-10-11 04:05:23.908 2 DEBUG oslo_concurrency.lockutils [None req-a0a37b20-9ab8-48fa-9ee7-493e179e1e8c 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Acquiring lock "e7add65d-7f64-44b0-960b-62ab3f67e50e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:23 compute-0 nova_compute[259850]: 2025-10-11 04:05:23.908 2 DEBUG oslo_concurrency.lockutils [None req-a0a37b20-9ab8-48fa-9ee7-493e179e1e8c 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "e7add65d-7f64-44b0-960b-62ab3f67e50e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:23 compute-0 nova_compute[259850]: 2025-10-11 04:05:23.908 2 DEBUG oslo_concurrency.lockutils [None req-a0a37b20-9ab8-48fa-9ee7-493e179e1e8c 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "e7add65d-7f64-44b0-960b-62ab3f67e50e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:23 compute-0 nova_compute[259850]: 2025-10-11 04:05:23.909 2 INFO nova.compute.manager [None req-a0a37b20-9ab8-48fa-9ee7-493e179e1e8c 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Terminating instance
Oct 11 04:05:23 compute-0 nova_compute[259850]: 2025-10-11 04:05:23.910 2 DEBUG nova.compute.manager [None req-a0a37b20-9ab8-48fa-9ee7-493e179e1e8c 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:05:23 compute-0 kernel: tapf42ee9e2-a8 (unregistering): left promiscuous mode
Oct 11 04:05:23 compute-0 NetworkManager[44920]: <info>  [1760155523.9478] device (tapf42ee9e2-a8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:05:24 compute-0 ovn_controller[152025]: 2025-10-11T04:05:24Z|00049|binding|INFO|Releasing lport f42ee9e2-a84a-41b6-ba15-7baeab44cb80 from this chassis (sb_readonly=0)
Oct 11 04:05:24 compute-0 ovn_controller[152025]: 2025-10-11T04:05:24Z|00050|binding|INFO|Setting lport f42ee9e2-a84a-41b6-ba15-7baeab44cb80 down in Southbound
Oct 11 04:05:24 compute-0 ovn_controller[152025]: 2025-10-11T04:05:24Z|00051|binding|INFO|Removing iface tapf42ee9e2-a8 ovn-installed in OVS
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:24.015 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:3c:29 10.100.0.13'], port_security=['fa:16:3e:00:3c:29 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e7add65d-7f64-44b0-960b-62ab3f67e50e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01ca7d7a-ab7e-4753-9e65-58d83786bdc8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a56f57f119b24e77bd165887162ef538', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1d55b596-dded-4eab-874b-8812dbd6943d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3a8a1b08-6831-48aa-9bdb-0e38b6956a06, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=f42ee9e2-a84a-41b6-ba15-7baeab44cb80) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:05:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:24.016 161902 INFO neutron.agent.ovn.metadata.agent [-] Port f42ee9e2-a84a-41b6-ba15-7baeab44cb80 in datapath 01ca7d7a-ab7e-4753-9e65-58d83786bdc8 unbound from our chassis
Oct 11 04:05:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:24.018 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 01ca7d7a-ab7e-4753-9e65-58d83786bdc8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:05:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:24.019 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[f775fc79-52a9-4417-8c38-251b14de1813]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:24.019 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8 namespace which is not needed anymore
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:24 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Oct 11 04:05:24 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 4.279s CPU time.
Oct 11 04:05:24 compute-0 systemd-machined[214869]: Machine qemu-4-instance-00000004 terminated.
Oct 11 04:05:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Oct 11 04:05:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Oct 11 04:05:24 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Oct 11 04:05:24 compute-0 ceph-mon[74273]: pgmap v1035: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.7 MiB/s wr, 344 op/s
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.149 2 INFO nova.virt.libvirt.driver [-] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Instance destroyed successfully.
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.149 2 DEBUG nova.objects.instance [None req-a0a37b20-9ab8-48fa-9ee7-493e179e1e8c 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lazy-loading 'resources' on Instance uuid e7add65d-7f64-44b0-960b-62ab3f67e50e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.167 2 DEBUG nova.virt.libvirt.vif [None req-a0a37b20-9ab8-48fa-9ee7-493e179e1e8c 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:05:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1033832338',display_name='tempest-VolumesActionsTest-instance-1033832338',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1033832338',id=4,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:05:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a56f57f119b24e77bd165887162ef538',ramdisk_id='',reservation_id='r-81tx0alw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-27294957',owner_user_name='tempest-VolumesActionsTest-27294957-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:05:20Z,user_data=None,user_id='715d3ecfd40048a08fd0c9f8dc437cd6',uuid=e7add65d-7f64-44b0-960b-62ab3f67e50e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f42ee9e2-a84a-41b6-ba15-7baeab44cb80", "address": "fa:16:3e:00:3c:29", "network": {"id": "01ca7d7a-ab7e-4753-9e65-58d83786bdc8", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1985263256-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a56f57f119b24e77bd165887162ef538", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf42ee9e2-a8", "ovs_interfaceid": "f42ee9e2-a84a-41b6-ba15-7baeab44cb80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.168 2 DEBUG nova.network.os_vif_util [None req-a0a37b20-9ab8-48fa-9ee7-493e179e1e8c 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Converting VIF {"id": "f42ee9e2-a84a-41b6-ba15-7baeab44cb80", "address": "fa:16:3e:00:3c:29", "network": {"id": "01ca7d7a-ab7e-4753-9e65-58d83786bdc8", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1985263256-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a56f57f119b24e77bd165887162ef538", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf42ee9e2-a8", "ovs_interfaceid": "f42ee9e2-a84a-41b6-ba15-7baeab44cb80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.169 2 DEBUG nova.network.os_vif_util [None req-a0a37b20-9ab8-48fa-9ee7-493e179e1e8c 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:3c:29,bridge_name='br-int',has_traffic_filtering=True,id=f42ee9e2-a84a-41b6-ba15-7baeab44cb80,network=Network(01ca7d7a-ab7e-4753-9e65-58d83786bdc8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf42ee9e2-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.169 2 DEBUG os_vif [None req-a0a37b20-9ab8-48fa-9ee7-493e179e1e8c 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:3c:29,bridge_name='br-int',has_traffic_filtering=True,id=f42ee9e2-a84a-41b6-ba15-7baeab44cb80,network=Network(01ca7d7a-ab7e-4753-9e65-58d83786bdc8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf42ee9e2-a8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.171 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf42ee9e2-a8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.177 2 INFO os_vif [None req-a0a37b20-9ab8-48fa-9ee7-493e179e1e8c 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:3c:29,bridge_name='br-int',has_traffic_filtering=True,id=f42ee9e2-a84a-41b6-ba15-7baeab44cb80,network=Network(01ca7d7a-ab7e-4753-9e65-58d83786bdc8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf42ee9e2-a8')
Oct 11 04:05:24 compute-0 neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8[271868]: [NOTICE]   (271872) : haproxy version is 2.8.14-c23fe91
Oct 11 04:05:24 compute-0 neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8[271868]: [NOTICE]   (271872) : path to executable is /usr/sbin/haproxy
Oct 11 04:05:24 compute-0 neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8[271868]: [WARNING]  (271872) : Exiting Master process...
Oct 11 04:05:24 compute-0 neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8[271868]: [WARNING]  (271872) : Exiting Master process...
Oct 11 04:05:24 compute-0 neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8[271868]: [ALERT]    (271872) : Current worker (271874) exited with code 143 (Terminated)
Oct 11 04:05:24 compute-0 neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8[271868]: [WARNING]  (271872) : All workers exited. Exiting... (0)
Oct 11 04:05:24 compute-0 systemd[1]: libpod-36769bec9ff06cedb43d8bedc068920c4d57831293faad756ca4450df1133439.scope: Deactivated successfully.
Oct 11 04:05:24 compute-0 conmon[271868]: conmon 36769bec9ff06cedb43d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-36769bec9ff06cedb43d8bedc068920c4d57831293faad756ca4450df1133439.scope/container/memory.events
Oct 11 04:05:24 compute-0 podman[271907]: 2025-10-11 04:05:24.18833162 +0000 UTC m=+0.063905295 container died 36769bec9ff06cedb43d8bedc068920c4d57831293faad756ca4450df1133439 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:05:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-36769bec9ff06cedb43d8bedc068920c4d57831293faad756ca4450df1133439-userdata-shm.mount: Deactivated successfully.
Oct 11 04:05:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d79a8d945128347b3878a99e791961bbe731d2965469b7c797e64780d287411-merged.mount: Deactivated successfully.
Oct 11 04:05:24 compute-0 podman[271907]: 2025-10-11 04:05:24.239127966 +0000 UTC m=+0.114701631 container cleanup 36769bec9ff06cedb43d8bedc068920c4d57831293faad756ca4450df1133439 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 04:05:24 compute-0 systemd[1]: libpod-conmon-36769bec9ff06cedb43d8bedc068920c4d57831293faad756ca4450df1133439.scope: Deactivated successfully.
Oct 11 04:05:24 compute-0 podman[271963]: 2025-10-11 04:05:24.305393116 +0000 UTC m=+0.042057482 container remove 36769bec9ff06cedb43d8bedc068920c4d57831293faad756ca4450df1133439 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:05:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:24.314 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[4390c358-2da5-4300-9e65-49e824283b7a]: (4, ('Sat Oct 11 04:05:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8 (36769bec9ff06cedb43d8bedc068920c4d57831293faad756ca4450df1133439)\n36769bec9ff06cedb43d8bedc068920c4d57831293faad756ca4450df1133439\nSat Oct 11 04:05:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8 (36769bec9ff06cedb43d8bedc068920c4d57831293faad756ca4450df1133439)\n36769bec9ff06cedb43d8bedc068920c4d57831293faad756ca4450df1133439\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:24.316 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[ef23ef73-d7ca-4d75-8d2d-2abb440d40b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:24.318 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01ca7d7a-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:24 compute-0 kernel: tap01ca7d7a-a0: left promiscuous mode
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:24.327 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[df7704ff-f185-421f-8be3-839dce786b73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:24.359 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[a4e36e5b-e034-41c3-bdf7-4b4db1815dc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:24.361 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[e31c7e4b-1e58-429c-86b2-97aaa8fcddd5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.374 2 DEBUG nova.compute.manager [req-d2cce4cd-5039-410c-938c-4570c867cdec req-9ea888a3-ccf8-455e-9273-85d962d960a4 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Received event network-vif-unplugged-f42ee9e2-a84a-41b6-ba15-7baeab44cb80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.374 2 DEBUG oslo_concurrency.lockutils [req-d2cce4cd-5039-410c-938c-4570c867cdec req-9ea888a3-ccf8-455e-9273-85d962d960a4 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "e7add65d-7f64-44b0-960b-62ab3f67e50e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.375 2 DEBUG oslo_concurrency.lockutils [req-d2cce4cd-5039-410c-938c-4570c867cdec req-9ea888a3-ccf8-455e-9273-85d962d960a4 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e7add65d-7f64-44b0-960b-62ab3f67e50e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.375 2 DEBUG oslo_concurrency.lockutils [req-d2cce4cd-5039-410c-938c-4570c867cdec req-9ea888a3-ccf8-455e-9273-85d962d960a4 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e7add65d-7f64-44b0-960b-62ab3f67e50e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.375 2 DEBUG nova.compute.manager [req-d2cce4cd-5039-410c-938c-4570c867cdec req-9ea888a3-ccf8-455e-9273-85d962d960a4 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] No waiting events found dispatching network-vif-unplugged-f42ee9e2-a84a-41b6-ba15-7baeab44cb80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.375 2 DEBUG nova.compute.manager [req-d2cce4cd-5039-410c-938c-4570c867cdec req-9ea888a3-ccf8-455e-9273-85d962d960a4 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Received event network-vif-unplugged-f42ee9e2-a84a-41b6-ba15-7baeab44cb80 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 04:05:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:24.382 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[dfe7d7d2-7ade-442d-8f79-b423fae455bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 390471, 'reachable_time': 27951, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271978, 'error': None, 'target': 'ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d01ca7d7a\x2dab7e\x2d4753\x2d9e65\x2d58d83786bdc8.mount: Deactivated successfully.
Oct 11 04:05:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:24.387 162015 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-01ca7d7a-ab7e-4753-9e65-58d83786bdc8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 04:05:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:24.387 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[f0bbbd38-ec81-4b73-a015-7d8904e5ff23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.544 2 INFO nova.virt.libvirt.driver [None req-a0a37b20-9ab8-48fa-9ee7-493e179e1e8c 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Deleting instance files /var/lib/nova/instances/e7add65d-7f64-44b0-960b-62ab3f67e50e_del
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.545 2 INFO nova.virt.libvirt.driver [None req-a0a37b20-9ab8-48fa-9ee7-493e179e1e8c 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Deletion of /var/lib/nova/instances/e7add65d-7f64-44b0-960b-62ab3f67e50e_del complete
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.610 2 INFO nova.compute.manager [None req-a0a37b20-9ab8-48fa-9ee7-493e179e1e8c 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Took 0.70 seconds to destroy the instance on the hypervisor.
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.610 2 DEBUG oslo.service.loopingcall [None req-a0a37b20-9ab8-48fa-9ee7-493e179e1e8c 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.611 2 DEBUG nova.compute.manager [-] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:05:24 compute-0 nova_compute[259850]: 2025-10-11 04:05:24.611 2 DEBUG nova.network.neutron [-] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:05:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:05:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Oct 11 04:05:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Oct 11 04:05:24 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Oct 11 04:05:25 compute-0 ceph-mon[74273]: osdmap e174: 3 total, 3 up, 3 in
Oct 11 04:05:25 compute-0 ceph-mon[74273]: osdmap e175: 3 total, 3 up, 3 in
Oct 11 04:05:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 29 KiB/s wr, 207 op/s
Oct 11 04:05:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Oct 11 04:05:26 compute-0 ceph-mon[74273]: pgmap v1038: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 29 KiB/s wr, 207 op/s
Oct 11 04:05:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Oct 11 04:05:26 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Oct 11 04:05:26 compute-0 nova_compute[259850]: 2025-10-11 04:05:26.400 2 DEBUG nova.network.neutron [-] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:05:26 compute-0 nova_compute[259850]: 2025-10-11 04:05:26.432 2 INFO nova.compute.manager [-] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Took 1.82 seconds to deallocate network for instance.
Oct 11 04:05:26 compute-0 nova_compute[259850]: 2025-10-11 04:05:26.486 2 DEBUG oslo_concurrency.lockutils [None req-a0a37b20-9ab8-48fa-9ee7-493e179e1e8c 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:26 compute-0 nova_compute[259850]: 2025-10-11 04:05:26.486 2 DEBUG oslo_concurrency.lockutils [None req-a0a37b20-9ab8-48fa-9ee7-493e179e1e8c 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:26 compute-0 nova_compute[259850]: 2025-10-11 04:05:26.492 2 DEBUG nova.compute.manager [req-9a5a9cba-4ad4-47e2-8fae-b6b0efc5f6d9 req-27d1d21b-76c8-4c49-ac86-0493cd64b326 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Received event network-vif-plugged-f42ee9e2-a84a-41b6-ba15-7baeab44cb80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:05:26 compute-0 nova_compute[259850]: 2025-10-11 04:05:26.492 2 DEBUG oslo_concurrency.lockutils [req-9a5a9cba-4ad4-47e2-8fae-b6b0efc5f6d9 req-27d1d21b-76c8-4c49-ac86-0493cd64b326 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "e7add65d-7f64-44b0-960b-62ab3f67e50e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:26 compute-0 nova_compute[259850]: 2025-10-11 04:05:26.492 2 DEBUG oslo_concurrency.lockutils [req-9a5a9cba-4ad4-47e2-8fae-b6b0efc5f6d9 req-27d1d21b-76c8-4c49-ac86-0493cd64b326 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e7add65d-7f64-44b0-960b-62ab3f67e50e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:26 compute-0 nova_compute[259850]: 2025-10-11 04:05:26.493 2 DEBUG oslo_concurrency.lockutils [req-9a5a9cba-4ad4-47e2-8fae-b6b0efc5f6d9 req-27d1d21b-76c8-4c49-ac86-0493cd64b326 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e7add65d-7f64-44b0-960b-62ab3f67e50e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:26 compute-0 nova_compute[259850]: 2025-10-11 04:05:26.493 2 DEBUG nova.compute.manager [req-9a5a9cba-4ad4-47e2-8fae-b6b0efc5f6d9 req-27d1d21b-76c8-4c49-ac86-0493cd64b326 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] No waiting events found dispatching network-vif-plugged-f42ee9e2-a84a-41b6-ba15-7baeab44cb80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:05:26 compute-0 nova_compute[259850]: 2025-10-11 04:05:26.493 2 WARNING nova.compute.manager [req-9a5a9cba-4ad4-47e2-8fae-b6b0efc5f6d9 req-27d1d21b-76c8-4c49-ac86-0493cd64b326 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Received unexpected event network-vif-plugged-f42ee9e2-a84a-41b6-ba15-7baeab44cb80 for instance with vm_state active and task_state deleting.
Oct 11 04:05:26 compute-0 nova_compute[259850]: 2025-10-11 04:05:26.493 2 DEBUG nova.compute.manager [req-9a5a9cba-4ad4-47e2-8fae-b6b0efc5f6d9 req-27d1d21b-76c8-4c49-ac86-0493cd64b326 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Received event network-vif-deleted-f42ee9e2-a84a-41b6-ba15-7baeab44cb80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:05:26 compute-0 nova_compute[259850]: 2025-10-11 04:05:26.546 2 DEBUG oslo_concurrency.processutils [None req-a0a37b20-9ab8-48fa-9ee7-493e179e1e8c 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:26 compute-0 nova_compute[259850]: 2025-10-11 04:05:26.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:05:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/870018845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:05:26 compute-0 nova_compute[259850]: 2025-10-11 04:05:26.983 2 DEBUG oslo_concurrency.processutils [None req-a0a37b20-9ab8-48fa-9ee7-493e179e1e8c 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:26 compute-0 nova_compute[259850]: 2025-10-11 04:05:26.990 2 DEBUG nova.compute.provider_tree [None req-a0a37b20-9ab8-48fa-9ee7-493e179e1e8c 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:05:27 compute-0 nova_compute[259850]: 2025-10-11 04:05:27.012 2 DEBUG nova.scheduler.client.report [None req-a0a37b20-9ab8-48fa-9ee7-493e179e1e8c 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:05:27 compute-0 nova_compute[259850]: 2025-10-11 04:05:27.034 2 DEBUG oslo_concurrency.lockutils [None req-a0a37b20-9ab8-48fa-9ee7-493e179e1e8c 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:27 compute-0 nova_compute[259850]: 2025-10-11 04:05:27.068 2 INFO nova.scheduler.client.report [None req-a0a37b20-9ab8-48fa-9ee7-493e179e1e8c 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Deleted allocations for instance e7add65d-7f64-44b0-960b-62ab3f67e50e
Oct 11 04:05:27 compute-0 ceph-mon[74273]: osdmap e176: 3 total, 3 up, 3 in
Oct 11 04:05:27 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/870018845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:05:27 compute-0 nova_compute[259850]: 2025-10-11 04:05:27.352 2 DEBUG oslo_concurrency.lockutils [None req-a0a37b20-9ab8-48fa-9ee7-493e179e1e8c 715d3ecfd40048a08fd0c9f8dc437cd6 a56f57f119b24e77bd165887162ef538 - - default default] Lock "e7add65d-7f64-44b0-960b-62ab3f67e50e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:27 compute-0 podman[272003]: 2025-10-11 04:05:27.354895596 +0000 UTC m=+0.063426661 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 11 04:05:27 compute-0 podman[272002]: 2025-10-11 04:05:27.382745798 +0000 UTC m=+0.093966399 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 11 04:05:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 32 KiB/s wr, 229 op/s
Oct 11 04:05:28 compute-0 ceph-mon[74273]: pgmap v1040: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 32 KiB/s wr, 229 op/s
Oct 11 04:05:29 compute-0 nova_compute[259850]: 2025-10-11 04:05:29.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 6.5 KiB/s wr, 108 op/s
Oct 11 04:05:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:05:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Oct 11 04:05:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Oct 11 04:05:29 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Oct 11 04:05:30 compute-0 nova_compute[259850]: 2025-10-11 04:05:30.105 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760155515.1033049, f362654b-5459-4295-a15a-50dce3bd4232 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:05:30 compute-0 nova_compute[259850]: 2025-10-11 04:05:30.105 2 INFO nova.compute.manager [-] [instance: f362654b-5459-4295-a15a-50dce3bd4232] VM Stopped (Lifecycle Event)
Oct 11 04:05:30 compute-0 nova_compute[259850]: 2025-10-11 04:05:30.132 2 DEBUG nova.compute.manager [None req-e5262a67-7f3d-4887-bcb3-3d0ba45446c4 - - - - - -] [instance: f362654b-5459-4295-a15a-50dce3bd4232] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:05:30 compute-0 ceph-mon[74273]: pgmap v1041: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 6.5 KiB/s wr, 108 op/s
Oct 11 04:05:30 compute-0 ceph-mon[74273]: osdmap e177: 3 total, 3 up, 3 in
Oct 11 04:05:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:05:30 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2588231720' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:05:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:05:30 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2588231720' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:05:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:05:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:05:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:05:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:05:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:05:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:05:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.994977860259165e-07 of space, bias 1.0, pg target 0.00020984933580777494 quantized to 32 (current 32)
Oct 11 04:05:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:05:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:05:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:05:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 11 04:05:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:05:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:05:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:05:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:05:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:05:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:05:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:05:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:05:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:05:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:05:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:05:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:05:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 5.8 KiB/s wr, 97 op/s
Oct 11 04:05:31 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2588231720' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:05:31 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2588231720' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:05:31 compute-0 nova_compute[259850]: 2025-10-11 04:05:31.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:32 compute-0 ceph-mon[74273]: pgmap v1043: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 5.8 KiB/s wr, 97 op/s
Oct 11 04:05:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 177 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 17 MiB/s wr, 142 op/s
Oct 11 04:05:34 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:05:34 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 4874 writes, 21K keys, 4874 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 4874 writes, 4874 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1551 writes, 7011 keys, 1551 commit groups, 1.0 writes per commit group, ingest: 9.61 MB, 0.02 MB/s
                                           Interval WAL: 1551 writes, 1551 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    113.2      0.21              0.10        12    0.018       0      0       0.0       0.0
                                             L6      1/0    7.09 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    162.0    132.6      0.58              0.32        11    0.053     48K   5780       0.0       0.0
                                            Sum      1/0    7.09 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2    118.4    127.4      0.79              0.42        23    0.034     48K   5780       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.1    142.7    144.2      0.31              0.19        10    0.031     23K   2590       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    162.0    132.6      0.58              0.32        11    0.053     48K   5780       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    115.1      0.21              0.10        11    0.019       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.2      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.023, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.09 GB read, 0.05 MB/s read, 0.8 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558495a5d1f0#2 capacity: 304.00 MB usage: 8.68 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000111 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(554,8.29 MB,2.72613%) FilterBlock(24,141.61 KB,0.0454903%) IndexBlock(24,263.20 KB,0.0845508%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 11 04:05:34 compute-0 nova_compute[259850]: 2025-10-11 04:05:34.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:05:34 compute-0 ceph-mon[74273]: pgmap v1044: 305 pgs: 305 active+clean; 177 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 17 MiB/s wr, 142 op/s
Oct 11 04:05:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 177 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 15 MiB/s wr, 122 op/s
Oct 11 04:05:36 compute-0 ceph-mon[74273]: pgmap v1045: 305 pgs: 305 active+clean; 177 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 15 MiB/s wr, 122 op/s
Oct 11 04:05:36 compute-0 nova_compute[259850]: 2025-10-11 04:05:36.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 177 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 14 MiB/s wr, 113 op/s
Oct 11 04:05:38 compute-0 ceph-mon[74273]: pgmap v1046: 305 pgs: 305 active+clean; 177 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 14 MiB/s wr, 113 op/s
Oct 11 04:05:39 compute-0 nova_compute[259850]: 2025-10-11 04:05:39.146 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760155524.1439433, e7add65d-7f64-44b0-960b-62ab3f67e50e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:05:39 compute-0 nova_compute[259850]: 2025-10-11 04:05:39.147 2 INFO nova.compute.manager [-] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] VM Stopped (Lifecycle Event)
Oct 11 04:05:39 compute-0 nova_compute[259850]: 2025-10-11 04:05:39.168 2 DEBUG nova.compute.manager [None req-d5f35086-ce1b-4bd4-af89-f53d0a89d061 - - - - - -] [instance: e7add65d-7f64-44b0-960b-62ab3f67e50e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:05:39 compute-0 nova_compute[259850]: 2025-10-11 04:05:39.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:39 compute-0 podman[272041]: 2025-10-11 04:05:39.416318682 +0000 UTC m=+0.127549821 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:05:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 473 MiB data, 640 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 43 MiB/s wr, 85 op/s
Oct 11 04:05:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:05:40 compute-0 ceph-mon[74273]: pgmap v1047: 305 pgs: 305 active+clean; 473 MiB data, 640 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 43 MiB/s wr, 85 op/s
Oct 11 04:05:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 473 MiB data, 640 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 37 MiB/s wr, 73 op/s
Oct 11 04:05:41 compute-0 nova_compute[259850]: 2025-10-11 04:05:41.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:42 compute-0 ceph-mon[74273]: pgmap v1048: 305 pgs: 305 active+clean; 473 MiB data, 640 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 37 MiB/s wr, 73 op/s
Oct 11 04:05:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 825 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 65 MiB/s wr, 103 op/s
Oct 11 04:05:44 compute-0 nova_compute[259850]: 2025-10-11 04:05:44.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:05:44 compute-0 ceph-mon[74273]: pgmap v1049: 305 pgs: 305 active+clean; 825 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 65 MiB/s wr, 103 op/s
Oct 11 04:05:45 compute-0 podman[272068]: 2025-10-11 04:05:45.37873031 +0000 UTC m=+0.084405900 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 11 04:05:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 825 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 54 MiB/s wr, 63 op/s
Oct 11 04:05:46 compute-0 ceph-mon[74273]: pgmap v1050: 305 pgs: 305 active+clean; 825 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 54 MiB/s wr, 63 op/s
Oct 11 04:05:46 compute-0 nova_compute[259850]: 2025-10-11 04:05:46.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 825 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 54 MiB/s wr, 63 op/s
Oct 11 04:05:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Oct 11 04:05:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Oct 11 04:05:47 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Oct 11 04:05:48 compute-0 ceph-mon[74273]: pgmap v1051: 305 pgs: 305 active+clean; 825 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 54 MiB/s wr, 63 op/s
Oct 11 04:05:48 compute-0 ceph-mon[74273]: osdmap e178: 3 total, 3 up, 3 in
Oct 11 04:05:49 compute-0 nova_compute[259850]: 2025-10-11 04:05:49.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:05:49 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1903793380' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:05:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 1.0 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 59 MiB/s wr, 82 op/s
Oct 11 04:05:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:05:49 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1903793380' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:05:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:05:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1344039338' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:05:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:05:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1344039338' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:05:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:05:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:05:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:05:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:05:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:05:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:05:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Oct 11 04:05:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Oct 11 04:05:50 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Oct 11 04:05:50 compute-0 ceph-mon[74273]: pgmap v1053: 305 pgs: 305 active+clean; 1.0 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 59 MiB/s wr, 82 op/s
Oct 11 04:05:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1344039338' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:05:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1344039338' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:05:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 1.0 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 30 MiB/s wr, 55 op/s
Oct 11 04:05:51 compute-0 nova_compute[259850]: 2025-10-11 04:05:51.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:51 compute-0 ceph-mon[74273]: osdmap e179: 3 total, 3 up, 3 in
Oct 11 04:05:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:05:52 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4050071929' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:05:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:05:52 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4050071929' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:05:52 compute-0 nova_compute[259850]: 2025-10-11 04:05:52.716 2 DEBUG oslo_concurrency.lockutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Acquiring lock "755b8dbf-4912-4ab3-87a0-0fdcfba7efe4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:52 compute-0 nova_compute[259850]: 2025-10-11 04:05:52.716 2 DEBUG oslo_concurrency.lockutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Lock "755b8dbf-4912-4ab3-87a0-0fdcfba7efe4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:52 compute-0 nova_compute[259850]: 2025-10-11 04:05:52.746 2 DEBUG nova.compute.manager [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:05:52 compute-0 nova_compute[259850]: 2025-10-11 04:05:52.825 2 DEBUG oslo_concurrency.lockutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:52 compute-0 nova_compute[259850]: 2025-10-11 04:05:52.825 2 DEBUG oslo_concurrency.lockutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:52 compute-0 nova_compute[259850]: 2025-10-11 04:05:52.835 2 DEBUG nova.virt.hardware [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:05:52 compute-0 nova_compute[259850]: 2025-10-11 04:05:52.836 2 INFO nova.compute.claims [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:05:52 compute-0 ceph-mon[74273]: pgmap v1055: 305 pgs: 305 active+clean; 1.0 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 30 MiB/s wr, 55 op/s
Oct 11 04:05:52 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4050071929' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:05:52 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4050071929' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:05:52 compute-0 nova_compute[259850]: 2025-10-11 04:05:52.959 2 DEBUG oslo_concurrency.processutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:05:53 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4139773975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:05:53 compute-0 nova_compute[259850]: 2025-10-11 04:05:53.431 2 DEBUG oslo_concurrency.processutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:53 compute-0 nova_compute[259850]: 2025-10-11 04:05:53.440 2 DEBUG nova.compute.provider_tree [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:05:53 compute-0 nova_compute[259850]: 2025-10-11 04:05:53.457 2 DEBUG nova.scheduler.client.report [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:05:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 585 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 191 KiB/s rd, 98 MiB/s wr, 324 op/s
Oct 11 04:05:53 compute-0 nova_compute[259850]: 2025-10-11 04:05:53.481 2 DEBUG oslo_concurrency.lockutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:53 compute-0 nova_compute[259850]: 2025-10-11 04:05:53.482 2 DEBUG nova.compute.manager [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:05:53 compute-0 nova_compute[259850]: 2025-10-11 04:05:53.537 2 DEBUG nova.compute.manager [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:05:53 compute-0 nova_compute[259850]: 2025-10-11 04:05:53.538 2 DEBUG nova.network.neutron [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:05:53 compute-0 nova_compute[259850]: 2025-10-11 04:05:53.573 2 INFO nova.virt.libvirt.driver [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:05:53 compute-0 nova_compute[259850]: 2025-10-11 04:05:53.598 2 DEBUG nova.compute.manager [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:05:53 compute-0 nova_compute[259850]: 2025-10-11 04:05:53.703 2 DEBUG nova.compute.manager [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:05:53 compute-0 nova_compute[259850]: 2025-10-11 04:05:53.705 2 DEBUG nova.virt.libvirt.driver [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:05:53 compute-0 nova_compute[259850]: 2025-10-11 04:05:53.706 2 INFO nova.virt.libvirt.driver [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Creating image(s)
Oct 11 04:05:53 compute-0 nova_compute[259850]: 2025-10-11 04:05:53.738 2 DEBUG nova.storage.rbd_utils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] rbd image 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:05:53 compute-0 nova_compute[259850]: 2025-10-11 04:05:53.772 2 DEBUG nova.storage.rbd_utils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] rbd image 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:05:53 compute-0 nova_compute[259850]: 2025-10-11 04:05:53.810 2 DEBUG nova.storage.rbd_utils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] rbd image 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:05:53 compute-0 nova_compute[259850]: 2025-10-11 04:05:53.818 2 DEBUG oslo_concurrency.processutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:53 compute-0 nova_compute[259850]: 2025-10-11 04:05:53.897 2 DEBUG nova.policy [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c25efb567172419289431bb87a4358cb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1e2113337abc4651b6b207f4cda57799', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:05:53 compute-0 nova_compute[259850]: 2025-10-11 04:05:53.908 2 DEBUG oslo_concurrency.processutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:53 compute-0 nova_compute[259850]: 2025-10-11 04:05:53.909 2 DEBUG oslo_concurrency.lockutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Acquiring lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:53 compute-0 nova_compute[259850]: 2025-10-11 04:05:53.910 2 DEBUG oslo_concurrency.lockutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:53 compute-0 nova_compute[259850]: 2025-10-11 04:05:53.910 2 DEBUG oslo_concurrency.lockutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:53 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4139773975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:05:53 compute-0 nova_compute[259850]: 2025-10-11 04:05:53.940 2 DEBUG nova.storage.rbd_utils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] rbd image 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:05:53 compute-0 nova_compute[259850]: 2025-10-11 04:05:53.945 2 DEBUG oslo_concurrency.processutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:54 compute-0 nova_compute[259850]: 2025-10-11 04:05:54.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:54 compute-0 nova_compute[259850]: 2025-10-11 04:05:54.384 2 DEBUG oslo_concurrency.processutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:54 compute-0 nova_compute[259850]: 2025-10-11 04:05:54.478 2 DEBUG nova.storage.rbd_utils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] resizing rbd image 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:05:54 compute-0 nova_compute[259850]: 2025-10-11 04:05:54.595 2 DEBUG nova.network.neutron [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Successfully created port: 3d50c2ef-db4b-49d0-9eeb-d9a45369939d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:05:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:05:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Oct 11 04:05:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Oct 11 04:05:54 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Oct 11 04:05:54 compute-0 nova_compute[259850]: 2025-10-11 04:05:54.756 2 DEBUG nova.objects.instance [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Lazy-loading 'migration_context' on Instance uuid 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:05:54 compute-0 nova_compute[259850]: 2025-10-11 04:05:54.771 2 DEBUG nova.virt.libvirt.driver [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:05:54 compute-0 nova_compute[259850]: 2025-10-11 04:05:54.772 2 DEBUG nova.virt.libvirt.driver [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Ensure instance console log exists: /var/lib/nova/instances/755b8dbf-4912-4ab3-87a0-0fdcfba7efe4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:05:54 compute-0 nova_compute[259850]: 2025-10-11 04:05:54.773 2 DEBUG oslo_concurrency.lockutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:54 compute-0 nova_compute[259850]: 2025-10-11 04:05:54.774 2 DEBUG oslo_concurrency.lockutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:54 compute-0 nova_compute[259850]: 2025-10-11 04:05:54.774 2 DEBUG oslo_concurrency.lockutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:54 compute-0 ceph-mon[74273]: pgmap v1056: 305 pgs: 305 active+clean; 585 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 191 KiB/s rd, 98 MiB/s wr, 324 op/s
Oct 11 04:05:54 compute-0 ceph-mon[74273]: osdmap e180: 3 total, 3 up, 3 in
Oct 11 04:05:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 585 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 174 KiB/s rd, 91 MiB/s wr, 296 op/s
Oct 11 04:05:55 compute-0 nova_compute[259850]: 2025-10-11 04:05:55.884 2 DEBUG nova.network.neutron [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Successfully updated port: 3d50c2ef-db4b-49d0-9eeb-d9a45369939d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:05:55 compute-0 nova_compute[259850]: 2025-10-11 04:05:55.902 2 DEBUG oslo_concurrency.lockutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Acquiring lock "refresh_cache-755b8dbf-4912-4ab3-87a0-0fdcfba7efe4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:05:55 compute-0 nova_compute[259850]: 2025-10-11 04:05:55.903 2 DEBUG oslo_concurrency.lockutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Acquired lock "refresh_cache-755b8dbf-4912-4ab3-87a0-0fdcfba7efe4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:05:55 compute-0 nova_compute[259850]: 2025-10-11 04:05:55.903 2 DEBUG nova.network.neutron [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:05:55 compute-0 nova_compute[259850]: 2025-10-11 04:05:55.999 2 DEBUG nova.compute.manager [req-a2470e51-2075-437a-b207-424429eaa6fd req-c10db33c-0e3c-492d-8646-17af9a37a061 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Received event network-changed-3d50c2ef-db4b-49d0-9eeb-d9a45369939d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:05:55 compute-0 nova_compute[259850]: 2025-10-11 04:05:55.999 2 DEBUG nova.compute.manager [req-a2470e51-2075-437a-b207-424429eaa6fd req-c10db33c-0e3c-492d-8646-17af9a37a061 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Refreshing instance network info cache due to event network-changed-3d50c2ef-db4b-49d0-9eeb-d9a45369939d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:05:56 compute-0 nova_compute[259850]: 2025-10-11 04:05:56.000 2 DEBUG oslo_concurrency.lockutils [req-a2470e51-2075-437a-b207-424429eaa6fd req-c10db33c-0e3c-492d-8646-17af9a37a061 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-755b8dbf-4912-4ab3-87a0-0fdcfba7efe4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:05:56 compute-0 nova_compute[259850]: 2025-10-11 04:05:56.109 2 DEBUG nova.network.neutron [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:05:56 compute-0 nova_compute[259850]: 2025-10-11 04:05:56.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:56 compute-0 nova_compute[259850]: 2025-10-11 04:05:56.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:56 compute-0 nova_compute[259850]: 2025-10-11 04:05:56.897 2 DEBUG nova.network.neutron [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Updating instance_info_cache with network_info: [{"id": "3d50c2ef-db4b-49d0-9eeb-d9a45369939d", "address": "fa:16:3e:7c:7f:a9", "network": {"id": "9137df72-3d02-4421-a504-7793ab98e168", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1470993189-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e2113337abc4651b6b207f4cda57799", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d50c2ef-db", "ovs_interfaceid": "3d50c2ef-db4b-49d0-9eeb-d9a45369939d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:05:56 compute-0 nova_compute[259850]: 2025-10-11 04:05:56.917 2 DEBUG oslo_concurrency.lockutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Releasing lock "refresh_cache-755b8dbf-4912-4ab3-87a0-0fdcfba7efe4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:05:56 compute-0 nova_compute[259850]: 2025-10-11 04:05:56.917 2 DEBUG nova.compute.manager [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Instance network_info: |[{"id": "3d50c2ef-db4b-49d0-9eeb-d9a45369939d", "address": "fa:16:3e:7c:7f:a9", "network": {"id": "9137df72-3d02-4421-a504-7793ab98e168", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1470993189-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e2113337abc4651b6b207f4cda57799", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d50c2ef-db", "ovs_interfaceid": "3d50c2ef-db4b-49d0-9eeb-d9a45369939d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:05:56 compute-0 nova_compute[259850]: 2025-10-11 04:05:56.918 2 DEBUG oslo_concurrency.lockutils [req-a2470e51-2075-437a-b207-424429eaa6fd req-c10db33c-0e3c-492d-8646-17af9a37a061 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-755b8dbf-4912-4ab3-87a0-0fdcfba7efe4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:05:56 compute-0 nova_compute[259850]: 2025-10-11 04:05:56.918 2 DEBUG nova.network.neutron [req-a2470e51-2075-437a-b207-424429eaa6fd req-c10db33c-0e3c-492d-8646-17af9a37a061 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Refreshing network info cache for port 3d50c2ef-db4b-49d0-9eeb-d9a45369939d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:05:56 compute-0 nova_compute[259850]: 2025-10-11 04:05:56.922 2 DEBUG nova.virt.libvirt.driver [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Start _get_guest_xml network_info=[{"id": "3d50c2ef-db4b-49d0-9eeb-d9a45369939d", "address": "fa:16:3e:7c:7f:a9", "network": {"id": "9137df72-3d02-4421-a504-7793ab98e168", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1470993189-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e2113337abc4651b6b207f4cda57799", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d50c2ef-db", "ovs_interfaceid": "3d50c2ef-db4b-49d0-9eeb-d9a45369939d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'image_id': '1a107e2f-1a9d-4b6f-861d-e64bee7d56be'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:05:56 compute-0 nova_compute[259850]: 2025-10-11 04:05:56.927 2 WARNING nova.virt.libvirt.driver [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:05:56 compute-0 nova_compute[259850]: 2025-10-11 04:05:56.934 2 DEBUG nova.virt.libvirt.host [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:05:56 compute-0 nova_compute[259850]: 2025-10-11 04:05:56.934 2 DEBUG nova.virt.libvirt.host [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:05:56 compute-0 nova_compute[259850]: 2025-10-11 04:05:56.941 2 DEBUG nova.virt.libvirt.host [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:05:56 compute-0 nova_compute[259850]: 2025-10-11 04:05:56.942 2 DEBUG nova.virt.libvirt.host [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:05:56 compute-0 nova_compute[259850]: 2025-10-11 04:05:56.943 2 DEBUG nova.virt.libvirt.driver [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:05:56 compute-0 nova_compute[259850]: 2025-10-11 04:05:56.943 2 DEBUG nova.virt.hardware [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:05:56 compute-0 nova_compute[259850]: 2025-10-11 04:05:56.944 2 DEBUG nova.virt.hardware [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:05:56 compute-0 nova_compute[259850]: 2025-10-11 04:05:56.945 2 DEBUG nova.virt.hardware [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:05:56 compute-0 nova_compute[259850]: 2025-10-11 04:05:56.945 2 DEBUG nova.virt.hardware [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:05:56 compute-0 nova_compute[259850]: 2025-10-11 04:05:56.946 2 DEBUG nova.virt.hardware [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:05:56 compute-0 nova_compute[259850]: 2025-10-11 04:05:56.946 2 DEBUG nova.virt.hardware [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:05:56 compute-0 nova_compute[259850]: 2025-10-11 04:05:56.946 2 DEBUG nova.virt.hardware [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:05:56 compute-0 nova_compute[259850]: 2025-10-11 04:05:56.947 2 DEBUG nova.virt.hardware [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:05:56 compute-0 nova_compute[259850]: 2025-10-11 04:05:56.947 2 DEBUG nova.virt.hardware [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:05:56 compute-0 nova_compute[259850]: 2025-10-11 04:05:56.947 2 DEBUG nova.virt.hardware [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:05:56 compute-0 ceph-mon[74273]: pgmap v1058: 305 pgs: 305 active+clean; 585 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 174 KiB/s rd, 91 MiB/s wr, 296 op/s
Oct 11 04:05:56 compute-0 nova_compute[259850]: 2025-10-11 04:05:56.948 2 DEBUG nova.virt.hardware [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:05:56 compute-0 nova_compute[259850]: 2025-10-11 04:05:56.951 2 DEBUG oslo_concurrency.processutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:05:57 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2112930880' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:05:57 compute-0 nova_compute[259850]: 2025-10-11 04:05:57.369 2 DEBUG oslo_concurrency.processutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:57 compute-0 nova_compute[259850]: 2025-10-11 04:05:57.405 2 DEBUG nova.storage.rbd_utils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] rbd image 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:05:57 compute-0 nova_compute[259850]: 2025-10-11 04:05:57.411 2 DEBUG oslo_concurrency.processutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 585 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 158 KiB/s rd, 68 MiB/s wr, 268 op/s
Oct 11 04:05:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:05:57 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/642376276' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:05:57 compute-0 nova_compute[259850]: 2025-10-11 04:05:57.869 2 DEBUG oslo_concurrency.processutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:57 compute-0 nova_compute[259850]: 2025-10-11 04:05:57.872 2 DEBUG nova.virt.libvirt.vif [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:05:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-181778275',display_name='tempest-VolumesActionsTest-instance-181778275',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-181778275',id=5,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1e2113337abc4651b6b207f4cda57799',ramdisk_id='',reservation_id='r-dp3igpp2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-881975202',owner_user_name='tempest-VolumesActionsTest-881975202-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:05:53Z,user_data=None,user_id='c25efb567172419289431bb87a4358cb',uuid=755b8dbf-4912-4ab3-87a0-0fdcfba7efe4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3d50c2ef-db4b-49d0-9eeb-d9a45369939d", "address": "fa:16:3e:7c:7f:a9", "network": {"id": "9137df72-3d02-4421-a504-7793ab98e168", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1470993189-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e2113337abc4651b6b207f4cda57799", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d50c2ef-db", "ovs_interfaceid": "3d50c2ef-db4b-49d0-9eeb-d9a45369939d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:05:57 compute-0 nova_compute[259850]: 2025-10-11 04:05:57.873 2 DEBUG nova.network.os_vif_util [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Converting VIF {"id": "3d50c2ef-db4b-49d0-9eeb-d9a45369939d", "address": "fa:16:3e:7c:7f:a9", "network": {"id": "9137df72-3d02-4421-a504-7793ab98e168", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1470993189-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e2113337abc4651b6b207f4cda57799", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d50c2ef-db", "ovs_interfaceid": "3d50c2ef-db4b-49d0-9eeb-d9a45369939d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:05:57 compute-0 nova_compute[259850]: 2025-10-11 04:05:57.874 2 DEBUG nova.network.os_vif_util [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:7f:a9,bridge_name='br-int',has_traffic_filtering=True,id=3d50c2ef-db4b-49d0-9eeb-d9a45369939d,network=Network(9137df72-3d02-4421-a504-7793ab98e168),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d50c2ef-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:05:57 compute-0 nova_compute[259850]: 2025-10-11 04:05:57.876 2 DEBUG nova.objects.instance [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Lazy-loading 'pci_devices' on Instance uuid 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:05:57 compute-0 nova_compute[259850]: 2025-10-11 04:05:57.898 2 DEBUG nova.virt.libvirt.driver [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:05:57 compute-0 nova_compute[259850]:   <uuid>755b8dbf-4912-4ab3-87a0-0fdcfba7efe4</uuid>
Oct 11 04:05:57 compute-0 nova_compute[259850]:   <name>instance-00000005</name>
Oct 11 04:05:57 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:05:57 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:05:57 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <nova:name>tempest-VolumesActionsTest-instance-181778275</nova:name>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:05:56</nova:creationTime>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:05:57 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:05:57 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:05:57 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:05:57 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:05:57 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:05:57 compute-0 nova_compute[259850]:         <nova:user uuid="c25efb567172419289431bb87a4358cb">tempest-VolumesActionsTest-881975202-project-member</nova:user>
Oct 11 04:05:57 compute-0 nova_compute[259850]:         <nova:project uuid="1e2113337abc4651b6b207f4cda57799">tempest-VolumesActionsTest-881975202</nova:project>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <nova:root type="image" uuid="1a107e2f-1a9d-4b6f-861d-e64bee7d56be"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <nova:ports>
Oct 11 04:05:57 compute-0 nova_compute[259850]:         <nova:port uuid="3d50c2ef-db4b-49d0-9eeb-d9a45369939d">
Oct 11 04:05:57 compute-0 nova_compute[259850]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:         </nova:port>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       </nova:ports>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:05:57 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:05:57 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <system>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <entry name="serial">755b8dbf-4912-4ab3-87a0-0fdcfba7efe4</entry>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <entry name="uuid">755b8dbf-4912-4ab3-87a0-0fdcfba7efe4</entry>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     </system>
Oct 11 04:05:57 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:05:57 compute-0 nova_compute[259850]:   <os>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:   </os>
Oct 11 04:05:57 compute-0 nova_compute[259850]:   <features>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:   </features>
Oct 11 04:05:57 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:05:57 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:05:57 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/755b8dbf-4912-4ab3-87a0-0fdcfba7efe4_disk">
Oct 11 04:05:57 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       </source>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:05:57 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/755b8dbf-4912-4ab3-87a0-0fdcfba7efe4_disk.config">
Oct 11 04:05:57 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       </source>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:05:57 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <interface type="ethernet">
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <mac address="fa:16:3e:7c:7f:a9"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <mtu size="1442"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <target dev="tap3d50c2ef-db"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     </interface>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/755b8dbf-4912-4ab3-87a0-0fdcfba7efe4/console.log" append="off"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <video>
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     </video>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:05:57 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:05:57 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:05:57 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:05:57 compute-0 nova_compute[259850]: </domain>
Oct 11 04:05:57 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:05:57 compute-0 nova_compute[259850]: 2025-10-11 04:05:57.900 2 DEBUG nova.compute.manager [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Preparing to wait for external event network-vif-plugged-3d50c2ef-db4b-49d0-9eeb-d9a45369939d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:05:57 compute-0 nova_compute[259850]: 2025-10-11 04:05:57.901 2 DEBUG oslo_concurrency.lockutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Acquiring lock "755b8dbf-4912-4ab3-87a0-0fdcfba7efe4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:57 compute-0 nova_compute[259850]: 2025-10-11 04:05:57.902 2 DEBUG oslo_concurrency.lockutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Lock "755b8dbf-4912-4ab3-87a0-0fdcfba7efe4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:57 compute-0 nova_compute[259850]: 2025-10-11 04:05:57.902 2 DEBUG oslo_concurrency.lockutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Lock "755b8dbf-4912-4ab3-87a0-0fdcfba7efe4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:57 compute-0 nova_compute[259850]: 2025-10-11 04:05:57.903 2 DEBUG nova.virt.libvirt.vif [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:05:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-181778275',display_name='tempest-VolumesActionsTest-instance-181778275',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-181778275',id=5,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1e2113337abc4651b6b207f4cda57799',ramdisk_id='',reservation_id='r-dp3igpp2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-881975202',owner_user_name='tempest-VolumesActionsTest-881975202-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:05:53Z,user_data=None,user_id='c25efb567172419289431bb87a4358cb',uuid=755b8dbf-4912-4ab3-87a0-0fdcfba7efe4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3d50c2ef-db4b-49d0-9eeb-d9a45369939d", "address": "fa:16:3e:7c:7f:a9", "network": {"id": "9137df72-3d02-4421-a504-7793ab98e168", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1470993189-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e2113337abc4651b6b207f4cda57799", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d50c2ef-db", "ovs_interfaceid": "3d50c2ef-db4b-49d0-9eeb-d9a45369939d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:05:57 compute-0 nova_compute[259850]: 2025-10-11 04:05:57.904 2 DEBUG nova.network.os_vif_util [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Converting VIF {"id": "3d50c2ef-db4b-49d0-9eeb-d9a45369939d", "address": "fa:16:3e:7c:7f:a9", "network": {"id": "9137df72-3d02-4421-a504-7793ab98e168", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1470993189-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e2113337abc4651b6b207f4cda57799", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d50c2ef-db", "ovs_interfaceid": "3d50c2ef-db4b-49d0-9eeb-d9a45369939d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:05:57 compute-0 nova_compute[259850]: 2025-10-11 04:05:57.905 2 DEBUG nova.network.os_vif_util [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:7f:a9,bridge_name='br-int',has_traffic_filtering=True,id=3d50c2ef-db4b-49d0-9eeb-d9a45369939d,network=Network(9137df72-3d02-4421-a504-7793ab98e168),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d50c2ef-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:05:57 compute-0 nova_compute[259850]: 2025-10-11 04:05:57.905 2 DEBUG os_vif [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:7f:a9,bridge_name='br-int',has_traffic_filtering=True,id=3d50c2ef-db4b-49d0-9eeb-d9a45369939d,network=Network(9137df72-3d02-4421-a504-7793ab98e168),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d50c2ef-db') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:05:57 compute-0 nova_compute[259850]: 2025-10-11 04:05:57.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:57 compute-0 nova_compute[259850]: 2025-10-11 04:05:57.907 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:05:57 compute-0 nova_compute[259850]: 2025-10-11 04:05:57.908 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:05:57 compute-0 nova_compute[259850]: 2025-10-11 04:05:57.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:57 compute-0 nova_compute[259850]: 2025-10-11 04:05:57.912 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d50c2ef-db, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:05:57 compute-0 nova_compute[259850]: 2025-10-11 04:05:57.912 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3d50c2ef-db, col_values=(('external_ids', {'iface-id': '3d50c2ef-db4b-49d0-9eeb-d9a45369939d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7c:7f:a9', 'vm-uuid': '755b8dbf-4912-4ab3-87a0-0fdcfba7efe4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:05:57 compute-0 NetworkManager[44920]: <info>  [1760155557.9453] manager: (tap3d50c2ef-db): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Oct 11 04:05:57 compute-0 nova_compute[259850]: 2025-10-11 04:05:57.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:57 compute-0 nova_compute[259850]: 2025-10-11 04:05:57.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:57 compute-0 nova_compute[259850]: 2025-10-11 04:05:57.951 2 INFO os_vif [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:7f:a9,bridge_name='br-int',has_traffic_filtering=True,id=3d50c2ef-db4b-49d0-9eeb-d9a45369939d,network=Network(9137df72-3d02-4421-a504-7793ab98e168),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d50c2ef-db')
Oct 11 04:05:57 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2112930880' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:05:57 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/642376276' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:05:58 compute-0 nova_compute[259850]: 2025-10-11 04:05:58.014 2 DEBUG nova.virt.libvirt.driver [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:05:58 compute-0 nova_compute[259850]: 2025-10-11 04:05:58.015 2 DEBUG nova.virt.libvirt.driver [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:05:58 compute-0 nova_compute[259850]: 2025-10-11 04:05:58.015 2 DEBUG nova.virt.libvirt.driver [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] No VIF found with MAC fa:16:3e:7c:7f:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:05:58 compute-0 nova_compute[259850]: 2025-10-11 04:05:58.015 2 INFO nova.virt.libvirt.driver [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Using config drive
Oct 11 04:05:58 compute-0 nova_compute[259850]: 2025-10-11 04:05:58.039 2 DEBUG nova.storage.rbd_utils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] rbd image 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:05:58 compute-0 podman[272347]: 2025-10-11 04:05:58.083821555 +0000 UTC m=+0.080317266 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251009, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:05:58 compute-0 podman[272346]: 2025-10-11 04:05:58.08828732 +0000 UTC m=+0.089041160 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:05:58 compute-0 nova_compute[259850]: 2025-10-11 04:05:58.220 2 DEBUG nova.network.neutron [req-a2470e51-2075-437a-b207-424429eaa6fd req-c10db33c-0e3c-492d-8646-17af9a37a061 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Updated VIF entry in instance network info cache for port 3d50c2ef-db4b-49d0-9eeb-d9a45369939d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:05:58 compute-0 nova_compute[259850]: 2025-10-11 04:05:58.221 2 DEBUG nova.network.neutron [req-a2470e51-2075-437a-b207-424429eaa6fd req-c10db33c-0e3c-492d-8646-17af9a37a061 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Updating instance_info_cache with network_info: [{"id": "3d50c2ef-db4b-49d0-9eeb-d9a45369939d", "address": "fa:16:3e:7c:7f:a9", "network": {"id": "9137df72-3d02-4421-a504-7793ab98e168", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1470993189-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e2113337abc4651b6b207f4cda57799", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d50c2ef-db", "ovs_interfaceid": "3d50c2ef-db4b-49d0-9eeb-d9a45369939d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:05:58 compute-0 nova_compute[259850]: 2025-10-11 04:05:58.242 2 DEBUG oslo_concurrency.lockutils [req-a2470e51-2075-437a-b207-424429eaa6fd req-c10db33c-0e3c-492d-8646-17af9a37a061 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-755b8dbf-4912-4ab3-87a0-0fdcfba7efe4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:05:58 compute-0 nova_compute[259850]: 2025-10-11 04:05:58.495 2 INFO nova.virt.libvirt.driver [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Creating config drive at /var/lib/nova/instances/755b8dbf-4912-4ab3-87a0-0fdcfba7efe4/disk.config
Oct 11 04:05:58 compute-0 nova_compute[259850]: 2025-10-11 04:05:58.504 2 DEBUG oslo_concurrency.processutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/755b8dbf-4912-4ab3-87a0-0fdcfba7efe4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvh00_uf6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:58 compute-0 nova_compute[259850]: 2025-10-11 04:05:58.648 2 DEBUG oslo_concurrency.processutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/755b8dbf-4912-4ab3-87a0-0fdcfba7efe4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvh00_uf6" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:58 compute-0 nova_compute[259850]: 2025-10-11 04:05:58.687 2 DEBUG nova.storage.rbd_utils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] rbd image 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:05:58 compute-0 nova_compute[259850]: 2025-10-11 04:05:58.691 2 DEBUG oslo_concurrency.processutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/755b8dbf-4912-4ab3-87a0-0fdcfba7efe4/disk.config 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:05:58 compute-0 nova_compute[259850]: 2025-10-11 04:05:58.826 2 DEBUG oslo_concurrency.processutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/755b8dbf-4912-4ab3-87a0-0fdcfba7efe4/disk.config 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:05:58 compute-0 nova_compute[259850]: 2025-10-11 04:05:58.828 2 INFO nova.virt.libvirt.driver [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Deleting local config drive /var/lib/nova/instances/755b8dbf-4912-4ab3-87a0-0fdcfba7efe4/disk.config because it was imported into RBD.
Oct 11 04:05:58 compute-0 kernel: tap3d50c2ef-db: entered promiscuous mode
Oct 11 04:05:58 compute-0 NetworkManager[44920]: <info>  [1760155558.8866] manager: (tap3d50c2ef-db): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Oct 11 04:05:58 compute-0 ovn_controller[152025]: 2025-10-11T04:05:58Z|00052|binding|INFO|Claiming lport 3d50c2ef-db4b-49d0-9eeb-d9a45369939d for this chassis.
Oct 11 04:05:58 compute-0 ovn_controller[152025]: 2025-10-11T04:05:58Z|00053|binding|INFO|3d50c2ef-db4b-49d0-9eeb-d9a45369939d: Claiming fa:16:3e:7c:7f:a9 10.100.0.13
Oct 11 04:05:58 compute-0 nova_compute[259850]: 2025-10-11 04:05:58.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:58 compute-0 nova_compute[259850]: 2025-10-11 04:05:58.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:58.909 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:7f:a9 10.100.0.13'], port_security=['fa:16:3e:7c:7f:a9 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '755b8dbf-4912-4ab3-87a0-0fdcfba7efe4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9137df72-3d02-4421-a504-7793ab98e168', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e2113337abc4651b6b207f4cda57799', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f6fb7e2b-bfc9-4684-a46c-0a9cceefd3a6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f8e8e61-a178-4af4-909e-7e5428e21ee2, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=3d50c2ef-db4b-49d0-9eeb-d9a45369939d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:05:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:58.911 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 3d50c2ef-db4b-49d0-9eeb-d9a45369939d in datapath 9137df72-3d02-4421-a504-7793ab98e168 bound to our chassis
Oct 11 04:05:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:58.914 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9137df72-3d02-4421-a504-7793ab98e168
Oct 11 04:05:58 compute-0 systemd-machined[214869]: New machine qemu-5-instance-00000005.
Oct 11 04:05:58 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Oct 11 04:05:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:58.929 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[ee6d3fdc-80b5-4999-983d-a43697feabd9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:58.930 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9137df72-31 in ovnmeta-9137df72-3d02-4421-a504-7793ab98e168 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 04:05:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:58.933 267637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9137df72-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 04:05:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:58.933 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[48886a9d-2758-4c2c-a854-3505ac90551e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:58.935 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[ca2a44f4-164a-4fea-bfb3-4dd2f6cc43e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:58.952 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[9e75f685-af4d-40c0-88c3-4a494cc3bad8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:58 compute-0 systemd-udevd[272457]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:05:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Oct 11 04:05:58 compute-0 ceph-mon[74273]: pgmap v1059: 305 pgs: 305 active+clean; 585 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 158 KiB/s rd, 68 MiB/s wr, 268 op/s
Oct 11 04:05:58 compute-0 NetworkManager[44920]: <info>  [1760155558.9694] device (tap3d50c2ef-db): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:05:58 compute-0 NetworkManager[44920]: <info>  [1760155558.9710] device (tap3d50c2ef-db): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:05:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Oct 11 04:05:58 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Oct 11 04:05:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:58.984 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[4571ded4-fceb-40d0-bfed-1d9f47a9e498]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:58 compute-0 ovn_controller[152025]: 2025-10-11T04:05:58Z|00054|binding|INFO|Setting lport 3d50c2ef-db4b-49d0-9eeb-d9a45369939d ovn-installed in OVS
Oct 11 04:05:58 compute-0 ovn_controller[152025]: 2025-10-11T04:05:58Z|00055|binding|INFO|Setting lport 3d50c2ef-db4b-49d0-9eeb-d9a45369939d up in Southbound
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:59.027 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[e4791ca2-5c81-406d-87ed-25b3789a666b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:59 compute-0 systemd-udevd[272462]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:05:59 compute-0 NetworkManager[44920]: <info>  [1760155559.0331] manager: (tap9137df72-30): new Veth device (/org/freedesktop/NetworkManager/Devices/41)
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:59.032 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[bcce7184-ef94-4ead-8cc7-23b5a8e5824d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:59.071 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[785f24f2-fda0-45d7-810d-baad6c474501]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:59.074 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[3fa90896-844a-4a15-91c9-678f565ef495]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:59 compute-0 NetworkManager[44920]: <info>  [1760155559.1018] device (tap9137df72-30): carrier: link connected
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:59.110 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[cee0326b-f0b5-4525-855e-4405a0e33106]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:59.133 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[632b0e67-e137-4df4-b05a-d5ae9ad6d68f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9137df72-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:57:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 394413, 'reachable_time': 43268, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272489, 'error': None, 'target': 'ovnmeta-9137df72-3d02-4421-a504-7793ab98e168', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:59.152 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[366fdcca-9e5a-4152-be85-77070c6d14df]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4d:5766'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 394413, 'tstamp': 394413}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272498, 'error': None, 'target': 'ovnmeta-9137df72-3d02-4421-a504-7793ab98e168', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:59.167 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[380184ad-feb2-4502-bbd2-ee0744fb9b20]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9137df72-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:57:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 394413, 'reachable_time': 43268, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272508, 'error': None, 'target': 'ovnmeta-9137df72-3d02-4421-a504-7793ab98e168', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:59.199 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[f489818d-6a29-42d0-90ce-6eadb1624511]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:59.250 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[da8629ca-7126-4ac8-ad6d-9e90a69820b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:59.251 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9137df72-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:59.251 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:59.251 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9137df72-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.252 2 DEBUG nova.compute.manager [req-60edadcb-3a6a-4880-8237-43b74e1fcbf2 req-d226ce88-2a85-44be-be25-6c0d6d1318dc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Received event network-vif-plugged-3d50c2ef-db4b-49d0-9eeb-d9a45369939d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.252 2 DEBUG oslo_concurrency.lockutils [req-60edadcb-3a6a-4880-8237-43b74e1fcbf2 req-d226ce88-2a85-44be-be25-6c0d6d1318dc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "755b8dbf-4912-4ab3-87a0-0fdcfba7efe4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.253 2 DEBUG oslo_concurrency.lockutils [req-60edadcb-3a6a-4880-8237-43b74e1fcbf2 req-d226ce88-2a85-44be-be25-6c0d6d1318dc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "755b8dbf-4912-4ab3-87a0-0fdcfba7efe4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.253 2 DEBUG oslo_concurrency.lockutils [req-60edadcb-3a6a-4880-8237-43b74e1fcbf2 req-d226ce88-2a85-44be-be25-6c0d6d1318dc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "755b8dbf-4912-4ab3-87a0-0fdcfba7efe4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:05:59 compute-0 NetworkManager[44920]: <info>  [1760155559.2541] manager: (tap9137df72-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.254 2 DEBUG nova.compute.manager [req-60edadcb-3a6a-4880-8237-43b74e1fcbf2 req-d226ce88-2a85-44be-be25-6c0d6d1318dc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Processing event network-vif-plugged-3d50c2ef-db4b-49d0-9eeb-d9a45369939d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:05:59 compute-0 kernel: tap9137df72-30: entered promiscuous mode
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:59.257 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9137df72-30, col_values=(('external_ids', {'iface-id': 'a4ebcf04-1e96-429c-ae5a-9786c6ef9662'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:59 compute-0 ovn_controller[152025]: 2025-10-11T04:05:59Z|00056|binding|INFO|Releasing lport a4ebcf04-1e96-429c-ae5a-9786c6ef9662 from this chassis (sb_readonly=0)
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:59.278 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9137df72-3d02-4421-a504-7793ab98e168.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9137df72-3d02-4421-a504-7793ab98e168.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:59.279 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[116f8da5-57d0-4550-a922-e3f604a56752]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:59.279 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]: global
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]:     log         /dev/log local0 debug
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]:     log-tag     haproxy-metadata-proxy-9137df72-3d02-4421-a504-7793ab98e168
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]:     user        root
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]:     group       root
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]:     maxconn     1024
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]:     pidfile     /var/lib/neutron/external/pids/9137df72-3d02-4421-a504-7793ab98e168.pid.haproxy
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]:     daemon
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]: defaults
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]:     log global
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]:     mode http
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]:     option httplog
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]:     option dontlognull
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]:     option http-server-close
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]:     option forwardfor
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]:     retries                 3
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]:     timeout http-request    30s
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]:     timeout connect         30s
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]:     timeout client          32s
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]:     timeout server          32s
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]:     timeout http-keep-alive 30s
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]: listen listener
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]:     bind 169.254.169.254:80
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]:     http-request add-header X-OVN-Network-ID 9137df72-3d02-4421-a504-7793ab98e168
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 04:05:59 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:05:59.280 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9137df72-3d02-4421-a504-7793ab98e168', 'env', 'PROCESS_TAG=haproxy-9137df72-3d02-4421-a504-7793ab98e168', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9137df72-3d02-4421-a504-7793ab98e168.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 04:05:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 88 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 288 KiB/s rd, 131 MiB/s wr, 514 op/s
Oct 11 04:05:59 compute-0 podman[272566]: 2025-10-11 04:05:59.667838589 +0000 UTC m=+0.066439766 container create faa4a59a86b0da78f5a8bdd871988a297cbd03db2ca6c749b48d468c672803f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9137df72-3d02-4421-a504-7793ab98e168, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 11 04:05:59 compute-0 systemd[1]: Started libpod-conmon-faa4a59a86b0da78f5a8bdd871988a297cbd03db2ca6c749b48d468c672803f3.scope.
Oct 11 04:05:59 compute-0 podman[272566]: 2025-10-11 04:05:59.633994099 +0000 UTC m=+0.032595366 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 04:05:59 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:05:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8933da3fa40d5828e2ffe3b02a4a1bae9817385560e4709769a6239397d55dfb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:05:59 compute-0 podman[272566]: 2025-10-11 04:05:59.754466951 +0000 UTC m=+0.153068138 container init faa4a59a86b0da78f5a8bdd871988a297cbd03db2ca6c749b48d468c672803f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9137df72-3d02-4421-a504-7793ab98e168, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:05:59 compute-0 podman[272566]: 2025-10-11 04:05:59.759531693 +0000 UTC m=+0.158132860 container start faa4a59a86b0da78f5a8bdd871988a297cbd03db2ca6c749b48d468c672803f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9137df72-3d02-4421-a504-7793ab98e168, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:05:59 compute-0 neutron-haproxy-ovnmeta-9137df72-3d02-4421-a504-7793ab98e168[272581]: [NOTICE]   (272585) : New worker (272587) forked
Oct 11 04:05:59 compute-0 neutron-haproxy-ovnmeta-9137df72-3d02-4421-a504-7793ab98e168[272581]: [NOTICE]   (272585) : Loading success.
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.800 2 DEBUG nova.compute.manager [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.802 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155559.801507, 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.802 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] VM Started (Lifecycle Event)
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.806 2 DEBUG nova.virt.libvirt.driver [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.813 2 INFO nova.virt.libvirt.driver [-] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Instance spawned successfully.
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.813 2 DEBUG nova.virt.libvirt.driver [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.853 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.859 2 DEBUG nova.virt.libvirt.driver [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.859 2 DEBUG nova.virt.libvirt.driver [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.860 2 DEBUG nova.virt.libvirt.driver [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.861 2 DEBUG nova.virt.libvirt.driver [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.861 2 DEBUG nova.virt.libvirt.driver [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.862 2 DEBUG nova.virt.libvirt.driver [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.868 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.906 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.907 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155559.8017988, 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.907 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] VM Paused (Lifecycle Event)
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.932 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.936 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155559.8057392, 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.936 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] VM Resumed (Lifecycle Event)
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.942 2 INFO nova.compute.manager [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Took 6.24 seconds to spawn the instance on the hypervisor.
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.942 2 DEBUG nova.compute.manager [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.953 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.957 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:05:59 compute-0 ceph-mon[74273]: osdmap e181: 3 total, 3 up, 3 in
Oct 11 04:05:59 compute-0 nova_compute[259850]: 2025-10-11 04:05:59.982 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:06:00 compute-0 nova_compute[259850]: 2025-10-11 04:06:00.010 2 INFO nova.compute.manager [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Took 7.22 seconds to build instance.
Oct 11 04:06:00 compute-0 nova_compute[259850]: 2025-10-11 04:06:00.027 2 DEBUG oslo_concurrency.lockutils [None req-4ebfed2d-14b1-462a-b105-26cb1d26994f c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Lock "755b8dbf-4912-4ab3-87a0-0fdcfba7efe4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.311s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:00 compute-0 ceph-mon[74273]: pgmap v1061: 305 pgs: 305 active+clean; 88 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 288 KiB/s rd, 131 MiB/s wr, 514 op/s
Oct 11 04:06:01 compute-0 nova_compute[259850]: 2025-10-11 04:06:01.336 2 DEBUG nova.compute.manager [req-a979767b-a715-4811-bcf6-0fb929d70c4e req-891163f7-c6f3-4a08-a0cf-60b31f170b57 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Received event network-vif-plugged-3d50c2ef-db4b-49d0-9eeb-d9a45369939d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:06:01 compute-0 nova_compute[259850]: 2025-10-11 04:06:01.336 2 DEBUG oslo_concurrency.lockutils [req-a979767b-a715-4811-bcf6-0fb929d70c4e req-891163f7-c6f3-4a08-a0cf-60b31f170b57 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "755b8dbf-4912-4ab3-87a0-0fdcfba7efe4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:01 compute-0 nova_compute[259850]: 2025-10-11 04:06:01.336 2 DEBUG oslo_concurrency.lockutils [req-a979767b-a715-4811-bcf6-0fb929d70c4e req-891163f7-c6f3-4a08-a0cf-60b31f170b57 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "755b8dbf-4912-4ab3-87a0-0fdcfba7efe4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:01 compute-0 nova_compute[259850]: 2025-10-11 04:06:01.337 2 DEBUG oslo_concurrency.lockutils [req-a979767b-a715-4811-bcf6-0fb929d70c4e req-891163f7-c6f3-4a08-a0cf-60b31f170b57 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "755b8dbf-4912-4ab3-87a0-0fdcfba7efe4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:01 compute-0 nova_compute[259850]: 2025-10-11 04:06:01.337 2 DEBUG nova.compute.manager [req-a979767b-a715-4811-bcf6-0fb929d70c4e req-891163f7-c6f3-4a08-a0cf-60b31f170b57 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] No waiting events found dispatching network-vif-plugged-3d50c2ef-db4b-49d0-9eeb-d9a45369939d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:06:01 compute-0 nova_compute[259850]: 2025-10-11 04:06:01.337 2 WARNING nova.compute.manager [req-a979767b-a715-4811-bcf6-0fb929d70c4e req-891163f7-c6f3-4a08-a0cf-60b31f170b57 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Received unexpected event network-vif-plugged-3d50c2ef-db4b-49d0-9eeb-d9a45369939d for instance with vm_state active and task_state None.
Oct 11 04:06:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 88 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 131 KiB/s rd, 63 MiB/s wr, 245 op/s
Oct 11 04:06:01 compute-0 nova_compute[259850]: 2025-10-11 04:06:01.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:02 compute-0 nova_compute[259850]: 2025-10-11 04:06:02.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:02 compute-0 ceph-mon[74273]: pgmap v1062: 305 pgs: 305 active+clean; 88 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 131 KiB/s rd, 63 MiB/s wr, 245 op/s
Oct 11 04:06:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 88 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 7.5 MiB/s rd, 57 MiB/s wr, 359 op/s
Oct 11 04:06:04 compute-0 sudo[272596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:06:04 compute-0 sudo[272596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:06:04 compute-0 sudo[272596]: pam_unix(sudo:session): session closed for user root
Oct 11 04:06:04 compute-0 sudo[272621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:06:04 compute-0 sudo[272621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:06:04 compute-0 sudo[272621]: pam_unix(sudo:session): session closed for user root
Oct 11 04:06:04 compute-0 sudo[272646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:06:04 compute-0 sudo[272646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:06:04 compute-0 sudo[272646]: pam_unix(sudo:session): session closed for user root
Oct 11 04:06:04 compute-0 sudo[272671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:06:04 compute-0 sudo[272671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:06:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:06:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Oct 11 04:06:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Oct 11 04:06:04 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Oct 11 04:06:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:06:04 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3107527697' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:06:04 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3107527697' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:04 compute-0 sudo[272671]: pam_unix(sudo:session): session closed for user root
Oct 11 04:06:05 compute-0 ceph-mon[74273]: pgmap v1063: 305 pgs: 305 active+clean; 88 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 7.5 MiB/s rd, 57 MiB/s wr, 359 op/s
Oct 11 04:06:05 compute-0 ceph-mon[74273]: osdmap e182: 3 total, 3 up, 3 in
Oct 11 04:06:05 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3107527697' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:05 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3107527697' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:06:05 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:06:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:06:05 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:06:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:06:05 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:06:05 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev f62699fe-3c0e-4b57-be81-1fd44d848630 does not exist
Oct 11 04:06:05 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 96cbdbac-720b-45b0-9b01-994f999d3817 does not exist
Oct 11 04:06:05 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 12a3bb57-2635-4005-b505-1029445990ef does not exist
Oct 11 04:06:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:06:05 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:06:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:06:05 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:06:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:06:05 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:06:05 compute-0 sudo[272727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:06:05 compute-0 sudo[272727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:06:05 compute-0 sudo[272727]: pam_unix(sudo:session): session closed for user root
Oct 11 04:06:05 compute-0 sudo[272752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:06:05 compute-0 sudo[272752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:06:05 compute-0 sudo[272752]: pam_unix(sudo:session): session closed for user root
Oct 11 04:06:05 compute-0 sudo[272777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:06:05 compute-0 sudo[272777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:06:05 compute-0 sudo[272777]: pam_unix(sudo:session): session closed for user root
Oct 11 04:06:05 compute-0 sudo[272802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:06:05 compute-0 sudo[272802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:06:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 88 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 8.1 MiB/s rd, 63 MiB/s wr, 391 op/s
Oct 11 04:06:05 compute-0 nova_compute[259850]: 2025-10-11 04:06:05.646 2 DEBUG oslo_concurrency.lockutils [None req-e167bf5c-b2f3-42c7-b64f-48022489e846 c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Acquiring lock "755b8dbf-4912-4ab3-87a0-0fdcfba7efe4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:05 compute-0 nova_compute[259850]: 2025-10-11 04:06:05.646 2 DEBUG oslo_concurrency.lockutils [None req-e167bf5c-b2f3-42c7-b64f-48022489e846 c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Lock "755b8dbf-4912-4ab3-87a0-0fdcfba7efe4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:05 compute-0 nova_compute[259850]: 2025-10-11 04:06:05.646 2 DEBUG oslo_concurrency.lockutils [None req-e167bf5c-b2f3-42c7-b64f-48022489e846 c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Acquiring lock "755b8dbf-4912-4ab3-87a0-0fdcfba7efe4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:05 compute-0 nova_compute[259850]: 2025-10-11 04:06:05.647 2 DEBUG oslo_concurrency.lockutils [None req-e167bf5c-b2f3-42c7-b64f-48022489e846 c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Lock "755b8dbf-4912-4ab3-87a0-0fdcfba7efe4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:05 compute-0 nova_compute[259850]: 2025-10-11 04:06:05.647 2 DEBUG oslo_concurrency.lockutils [None req-e167bf5c-b2f3-42c7-b64f-48022489e846 c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Lock "755b8dbf-4912-4ab3-87a0-0fdcfba7efe4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:05 compute-0 nova_compute[259850]: 2025-10-11 04:06:05.648 2 INFO nova.compute.manager [None req-e167bf5c-b2f3-42c7-b64f-48022489e846 c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Terminating instance
Oct 11 04:06:05 compute-0 nova_compute[259850]: 2025-10-11 04:06:05.649 2 DEBUG nova.compute.manager [None req-e167bf5c-b2f3-42c7-b64f-48022489e846 c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:06:05 compute-0 podman[272866]: 2025-10-11 04:06:05.688084739 +0000 UTC m=+0.051368903 container create a76170300acd14d19fa38943a842f73a43a52ae320ccaba6f5efb4ff29ee1f67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_snyder, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:06:05 compute-0 kernel: tap3d50c2ef-db (unregistering): left promiscuous mode
Oct 11 04:06:05 compute-0 NetworkManager[44920]: <info>  [1760155565.6997] device (tap3d50c2ef-db): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:06:05 compute-0 ovn_controller[152025]: 2025-10-11T04:06:05Z|00057|binding|INFO|Releasing lport 3d50c2ef-db4b-49d0-9eeb-d9a45369939d from this chassis (sb_readonly=0)
Oct 11 04:06:05 compute-0 ovn_controller[152025]: 2025-10-11T04:06:05Z|00058|binding|INFO|Setting lport 3d50c2ef-db4b-49d0-9eeb-d9a45369939d down in Southbound
Oct 11 04:06:05 compute-0 ovn_controller[152025]: 2025-10-11T04:06:05Z|00059|binding|INFO|Removing iface tap3d50c2ef-db ovn-installed in OVS
Oct 11 04:06:05 compute-0 nova_compute[259850]: 2025-10-11 04:06:05.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:05 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:05.758 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:7f:a9 10.100.0.13'], port_security=['fa:16:3e:7c:7f:a9 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '755b8dbf-4912-4ab3-87a0-0fdcfba7efe4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9137df72-3d02-4421-a504-7793ab98e168', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e2113337abc4651b6b207f4cda57799', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f6fb7e2b-bfc9-4684-a46c-0a9cceefd3a6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f8e8e61-a178-4af4-909e-7e5428e21ee2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=3d50c2ef-db4b-49d0-9eeb-d9a45369939d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:06:05 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:05.759 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 3d50c2ef-db4b-49d0-9eeb-d9a45369939d in datapath 9137df72-3d02-4421-a504-7793ab98e168 unbound from our chassis
Oct 11 04:06:05 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:05.760 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9137df72-3d02-4421-a504-7793ab98e168, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:06:05 compute-0 systemd[1]: Started libpod-conmon-a76170300acd14d19fa38943a842f73a43a52ae320ccaba6f5efb4ff29ee1f67.scope.
Oct 11 04:06:05 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:05.761 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[5343cbbc-8cfe-4d34-969b-a289e8b68670]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:05 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:05.762 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9137df72-3d02-4421-a504-7793ab98e168 namespace which is not needed anymore
Oct 11 04:06:05 compute-0 podman[272866]: 2025-10-11 04:06:05.667671606 +0000 UTC m=+0.030955790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:06:05 compute-0 nova_compute[259850]: 2025-10-11 04:06:05.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:05 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:06:05 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Oct 11 04:06:05 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 6.675s CPU time.
Oct 11 04:06:05 compute-0 podman[272866]: 2025-10-11 04:06:05.812569724 +0000 UTC m=+0.175853918 container init a76170300acd14d19fa38943a842f73a43a52ae320ccaba6f5efb4ff29ee1f67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 04:06:05 compute-0 systemd-machined[214869]: Machine qemu-5-instance-00000005 terminated.
Oct 11 04:06:05 compute-0 podman[272866]: 2025-10-11 04:06:05.824424376 +0000 UTC m=+0.187708580 container start a76170300acd14d19fa38943a842f73a43a52ae320ccaba6f5efb4ff29ee1f67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_snyder, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:06:05 compute-0 podman[272866]: 2025-10-11 04:06:05.827993886 +0000 UTC m=+0.191278090 container attach a76170300acd14d19fa38943a842f73a43a52ae320ccaba6f5efb4ff29ee1f67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_snyder, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 04:06:05 compute-0 peaceful_snyder[272886]: 167 167
Oct 11 04:06:05 compute-0 systemd[1]: libpod-a76170300acd14d19fa38943a842f73a43a52ae320ccaba6f5efb4ff29ee1f67.scope: Deactivated successfully.
Oct 11 04:06:05 compute-0 conmon[272886]: conmon a76170300acd14d19fa3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a76170300acd14d19fa38943a842f73a43a52ae320ccaba6f5efb4ff29ee1f67.scope/container/memory.events
Oct 11 04:06:05 compute-0 podman[272866]: 2025-10-11 04:06:05.832632807 +0000 UTC m=+0.195917011 container died a76170300acd14d19fa38943a842f73a43a52ae320ccaba6f5efb4ff29ee1f67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_snyder, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 04:06:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf99161b6e6fdb402d3c6bf99d9f9d4e737c93ecadae9a109fb9d850a82fc3cd-merged.mount: Deactivated successfully.
Oct 11 04:06:05 compute-0 podman[272866]: 2025-10-11 04:06:05.883212527 +0000 UTC m=+0.246496701 container remove a76170300acd14d19fa38943a842f73a43a52ae320ccaba6f5efb4ff29ee1f67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_snyder, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 04:06:05 compute-0 systemd[1]: libpod-conmon-a76170300acd14d19fa38943a842f73a43a52ae320ccaba6f5efb4ff29ee1f67.scope: Deactivated successfully.
Oct 11 04:06:05 compute-0 nova_compute[259850]: 2025-10-11 04:06:05.900 2 INFO nova.virt.libvirt.driver [-] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Instance destroyed successfully.
Oct 11 04:06:05 compute-0 nova_compute[259850]: 2025-10-11 04:06:05.901 2 DEBUG nova.objects.instance [None req-e167bf5c-b2f3-42c7-b64f-48022489e846 c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Lazy-loading 'resources' on Instance uuid 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:06:05 compute-0 nova_compute[259850]: 2025-10-11 04:06:05.915 2 DEBUG nova.virt.libvirt.vif [None req-e167bf5c-b2f3-42c7-b64f-48022489e846 c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:05:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-181778275',display_name='tempest-VolumesActionsTest-instance-181778275',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-181778275',id=5,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:05:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1e2113337abc4651b6b207f4cda57799',ramdisk_id='',reservation_id='r-dp3igpp2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-881975202',owner_user_name='tempest-VolumesActionsTest-881975202-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:05:59Z,user_data=None,user_id='c25efb567172419289431bb87a4358cb',uuid=755b8dbf-4912-4ab3-87a0-0fdcfba7efe4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3d50c2ef-db4b-49d0-9eeb-d9a45369939d", "address": "fa:16:3e:7c:7f:a9", "network": {"id": "9137df72-3d02-4421-a504-7793ab98e168", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1470993189-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e2113337abc4651b6b207f4cda57799", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d50c2ef-db", "ovs_interfaceid": "3d50c2ef-db4b-49d0-9eeb-d9a45369939d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:06:05 compute-0 nova_compute[259850]: 2025-10-11 04:06:05.916 2 DEBUG nova.network.os_vif_util [None req-e167bf5c-b2f3-42c7-b64f-48022489e846 c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Converting VIF {"id": "3d50c2ef-db4b-49d0-9eeb-d9a45369939d", "address": "fa:16:3e:7c:7f:a9", "network": {"id": "9137df72-3d02-4421-a504-7793ab98e168", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1470993189-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e2113337abc4651b6b207f4cda57799", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d50c2ef-db", "ovs_interfaceid": "3d50c2ef-db4b-49d0-9eeb-d9a45369939d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:06:05 compute-0 nova_compute[259850]: 2025-10-11 04:06:05.917 2 DEBUG nova.network.os_vif_util [None req-e167bf5c-b2f3-42c7-b64f-48022489e846 c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:7f:a9,bridge_name='br-int',has_traffic_filtering=True,id=3d50c2ef-db4b-49d0-9eeb-d9a45369939d,network=Network(9137df72-3d02-4421-a504-7793ab98e168),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d50c2ef-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:06:05 compute-0 neutron-haproxy-ovnmeta-9137df72-3d02-4421-a504-7793ab98e168[272581]: [NOTICE]   (272585) : haproxy version is 2.8.14-c23fe91
Oct 11 04:06:05 compute-0 neutron-haproxy-ovnmeta-9137df72-3d02-4421-a504-7793ab98e168[272581]: [NOTICE]   (272585) : path to executable is /usr/sbin/haproxy
Oct 11 04:06:05 compute-0 neutron-haproxy-ovnmeta-9137df72-3d02-4421-a504-7793ab98e168[272581]: [WARNING]  (272585) : Exiting Master process...
Oct 11 04:06:05 compute-0 nova_compute[259850]: 2025-10-11 04:06:05.917 2 DEBUG os_vif [None req-e167bf5c-b2f3-42c7-b64f-48022489e846 c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:7f:a9,bridge_name='br-int',has_traffic_filtering=True,id=3d50c2ef-db4b-49d0-9eeb-d9a45369939d,network=Network(9137df72-3d02-4421-a504-7793ab98e168),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d50c2ef-db') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:06:05 compute-0 neutron-haproxy-ovnmeta-9137df72-3d02-4421-a504-7793ab98e168[272581]: [ALERT]    (272585) : Current worker (272587) exited with code 143 (Terminated)
Oct 11 04:06:05 compute-0 neutron-haproxy-ovnmeta-9137df72-3d02-4421-a504-7793ab98e168[272581]: [WARNING]  (272585) : All workers exited. Exiting... (0)
Oct 11 04:06:05 compute-0 nova_compute[259850]: 2025-10-11 04:06:05.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:05 compute-0 nova_compute[259850]: 2025-10-11 04:06:05.920 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d50c2ef-db, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:06:05 compute-0 systemd[1]: libpod-faa4a59a86b0da78f5a8bdd871988a297cbd03db2ca6c749b48d468c672803f3.scope: Deactivated successfully.
Oct 11 04:06:05 compute-0 nova_compute[259850]: 2025-10-11 04:06:05.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:05 compute-0 conmon[272581]: conmon faa4a59a86b0da78f5a8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-faa4a59a86b0da78f5a8bdd871988a297cbd03db2ca6c749b48d468c672803f3.scope/container/memory.events
Oct 11 04:06:05 compute-0 nova_compute[259850]: 2025-10-11 04:06:05.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:06:05 compute-0 nova_compute[259850]: 2025-10-11 04:06:05.928 2 INFO os_vif [None req-e167bf5c-b2f3-42c7-b64f-48022489e846 c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:7f:a9,bridge_name='br-int',has_traffic_filtering=True,id=3d50c2ef-db4b-49d0-9eeb-d9a45369939d,network=Network(9137df72-3d02-4421-a504-7793ab98e168),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d50c2ef-db')
Oct 11 04:06:05 compute-0 podman[272914]: 2025-10-11 04:06:05.930825053 +0000 UTC m=+0.067528047 container died faa4a59a86b0da78f5a8bdd871988a297cbd03db2ca6c749b48d468c672803f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9137df72-3d02-4421-a504-7793ab98e168, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 04:06:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-8933da3fa40d5828e2ffe3b02a4a1bae9817385560e4709769a6239397d55dfb-merged.mount: Deactivated successfully.
Oct 11 04:06:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-faa4a59a86b0da78f5a8bdd871988a297cbd03db2ca6c749b48d468c672803f3-userdata-shm.mount: Deactivated successfully.
Oct 11 04:06:05 compute-0 podman[272914]: 2025-10-11 04:06:05.983211394 +0000 UTC m=+0.119914418 container cleanup faa4a59a86b0da78f5a8bdd871988a297cbd03db2ca6c749b48d468c672803f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9137df72-3d02-4421-a504-7793ab98e168, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:06:05 compute-0 systemd[1]: libpod-conmon-faa4a59a86b0da78f5a8bdd871988a297cbd03db2ca6c749b48d468c672803f3.scope: Deactivated successfully.
Oct 11 04:06:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:06:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:06:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:06:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:06:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:06:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:06:06 compute-0 nova_compute[259850]: 2025-10-11 04:06:06.019 2 DEBUG nova.compute.manager [req-d02c51cb-728c-439a-a2c5-301d77b2cb62 req-c29deb2c-b8cd-4e61-a5ae-32e3a312ed4c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Received event network-vif-unplugged-3d50c2ef-db4b-49d0-9eeb-d9a45369939d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:06:06 compute-0 nova_compute[259850]: 2025-10-11 04:06:06.019 2 DEBUG oslo_concurrency.lockutils [req-d02c51cb-728c-439a-a2c5-301d77b2cb62 req-c29deb2c-b8cd-4e61-a5ae-32e3a312ed4c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "755b8dbf-4912-4ab3-87a0-0fdcfba7efe4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:06 compute-0 nova_compute[259850]: 2025-10-11 04:06:06.019 2 DEBUG oslo_concurrency.lockutils [req-d02c51cb-728c-439a-a2c5-301d77b2cb62 req-c29deb2c-b8cd-4e61-a5ae-32e3a312ed4c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "755b8dbf-4912-4ab3-87a0-0fdcfba7efe4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:06 compute-0 nova_compute[259850]: 2025-10-11 04:06:06.020 2 DEBUG oslo_concurrency.lockutils [req-d02c51cb-728c-439a-a2c5-301d77b2cb62 req-c29deb2c-b8cd-4e61-a5ae-32e3a312ed4c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "755b8dbf-4912-4ab3-87a0-0fdcfba7efe4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:06 compute-0 nova_compute[259850]: 2025-10-11 04:06:06.020 2 DEBUG nova.compute.manager [req-d02c51cb-728c-439a-a2c5-301d77b2cb62 req-c29deb2c-b8cd-4e61-a5ae-32e3a312ed4c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] No waiting events found dispatching network-vif-unplugged-3d50c2ef-db4b-49d0-9eeb-d9a45369939d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:06:06 compute-0 nova_compute[259850]: 2025-10-11 04:06:06.020 2 DEBUG nova.compute.manager [req-d02c51cb-728c-439a-a2c5-301d77b2cb62 req-c29deb2c-b8cd-4e61-a5ae-32e3a312ed4c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Received event network-vif-unplugged-3d50c2ef-db4b-49d0-9eeb-d9a45369939d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 04:06:06 compute-0 podman[272978]: 2025-10-11 04:06:06.104256101 +0000 UTC m=+0.085685216 container remove faa4a59a86b0da78f5a8bdd871988a297cbd03db2ca6c749b48d468c672803f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9137df72-3d02-4421-a504-7793ab98e168, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:06:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:06.111 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[9da3161c-b2e5-4a10-8cda-048bd7e4fe05]: (4, ('Sat Oct 11 04:06:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9137df72-3d02-4421-a504-7793ab98e168 (faa4a59a86b0da78f5a8bdd871988a297cbd03db2ca6c749b48d468c672803f3)\nfaa4a59a86b0da78f5a8bdd871988a297cbd03db2ca6c749b48d468c672803f3\nSat Oct 11 04:06:06 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9137df72-3d02-4421-a504-7793ab98e168 (faa4a59a86b0da78f5a8bdd871988a297cbd03db2ca6c749b48d468c672803f3)\nfaa4a59a86b0da78f5a8bdd871988a297cbd03db2ca6c749b48d468c672803f3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:06.114 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[34787897-c89e-4039-950e-c43ff2207f1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:06.117 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9137df72-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:06:06 compute-0 kernel: tap9137df72-30: left promiscuous mode
Oct 11 04:06:06 compute-0 nova_compute[259850]: 2025-10-11 04:06:06.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:06.124 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[2253855f-9539-4cb5-b051-9158928ca08b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:06 compute-0 podman[272989]: 2025-10-11 04:06:06.137308109 +0000 UTC m=+0.091200821 container create 5aca12d1ca9a1f2d73c8d24093c2fdf3bfff6749df7390da69002a32f5686fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 04:06:06 compute-0 nova_compute[259850]: 2025-10-11 04:06:06.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:06.149 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[182af902-c94f-422b-9a8e-e1b361052478]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:06.151 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[023e6216-46d7-4486-9c40-71a3f309190e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:06.164 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[02ea7d83-677a-4c0e-aac4-5eb01f587799]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 394405, 'reachable_time': 40751, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273011, 'error': None, 'target': 'ovnmeta-9137df72-3d02-4421-a504-7793ab98e168', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:06.168 162015 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9137df72-3d02-4421-a504-7793ab98e168 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 04:06:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:06.168 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[b9ce82a4-ff01-4683-89de-d89a097fe6f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:06 compute-0 systemd[1]: Started libpod-conmon-5aca12d1ca9a1f2d73c8d24093c2fdf3bfff6749df7390da69002a32f5686fc8.scope.
Oct 11 04:06:06 compute-0 podman[272989]: 2025-10-11 04:06:06.10812889 +0000 UTC m=+0.062021612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:06:06 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:06:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb62cf2a1cc7052d6a7625a5276ff274ce06cdf448def43342c9c09b02d6e86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:06:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb62cf2a1cc7052d6a7625a5276ff274ce06cdf448def43342c9c09b02d6e86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:06:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb62cf2a1cc7052d6a7625a5276ff274ce06cdf448def43342c9c09b02d6e86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:06:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb62cf2a1cc7052d6a7625a5276ff274ce06cdf448def43342c9c09b02d6e86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:06:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb62cf2a1cc7052d6a7625a5276ff274ce06cdf448def43342c9c09b02d6e86/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:06:06 compute-0 podman[272989]: 2025-10-11 04:06:06.253886812 +0000 UTC m=+0.207779544 container init 5aca12d1ca9a1f2d73c8d24093c2fdf3bfff6749df7390da69002a32f5686fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_joliot, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:06:06 compute-0 podman[272989]: 2025-10-11 04:06:06.269199021 +0000 UTC m=+0.223091723 container start 5aca12d1ca9a1f2d73c8d24093c2fdf3bfff6749df7390da69002a32f5686fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:06:06 compute-0 podman[272989]: 2025-10-11 04:06:06.272105163 +0000 UTC m=+0.225998055 container attach 5aca12d1ca9a1f2d73c8d24093c2fdf3bfff6749df7390da69002a32f5686fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:06:06 compute-0 nova_compute[259850]: 2025-10-11 04:06:06.429 2 INFO nova.virt.libvirt.driver [None req-e167bf5c-b2f3-42c7-b64f-48022489e846 c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Deleting instance files /var/lib/nova/instances/755b8dbf-4912-4ab3-87a0-0fdcfba7efe4_del
Oct 11 04:06:06 compute-0 nova_compute[259850]: 2025-10-11 04:06:06.430 2 INFO nova.virt.libvirt.driver [None req-e167bf5c-b2f3-42c7-b64f-48022489e846 c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Deletion of /var/lib/nova/instances/755b8dbf-4912-4ab3-87a0-0fdcfba7efe4_del complete
Oct 11 04:06:06 compute-0 nova_compute[259850]: 2025-10-11 04:06:06.556 2 INFO nova.compute.manager [None req-e167bf5c-b2f3-42c7-b64f-48022489e846 c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Took 0.91 seconds to destroy the instance on the hypervisor.
Oct 11 04:06:06 compute-0 nova_compute[259850]: 2025-10-11 04:06:06.557 2 DEBUG oslo.service.loopingcall [None req-e167bf5c-b2f3-42c7-b64f-48022489e846 c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:06:06 compute-0 nova_compute[259850]: 2025-10-11 04:06:06.558 2 DEBUG nova.compute.manager [-] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:06:06 compute-0 nova_compute[259850]: 2025-10-11 04:06:06.559 2 DEBUG nova.network.neutron [-] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:06:06 compute-0 systemd[1]: run-netns-ovnmeta\x2d9137df72\x2d3d02\x2d4421\x2da504\x2d7793ab98e168.mount: Deactivated successfully.
Oct 11 04:06:06 compute-0 nova_compute[259850]: 2025-10-11 04:06:06.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:07 compute-0 ceph-mon[74273]: pgmap v1065: 305 pgs: 305 active+clean; 88 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 8.1 MiB/s rd, 63 MiB/s wr, 391 op/s
Oct 11 04:06:07 compute-0 charming_joliot[273015]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:06:07 compute-0 charming_joliot[273015]: --> relative data size: 1.0
Oct 11 04:06:07 compute-0 charming_joliot[273015]: --> All data devices are unavailable
Oct 11 04:06:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 88 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 7.5 MiB/s rd, 19 KiB/s wr, 137 op/s
Oct 11 04:06:07 compute-0 systemd[1]: libpod-5aca12d1ca9a1f2d73c8d24093c2fdf3bfff6749df7390da69002a32f5686fc8.scope: Deactivated successfully.
Oct 11 04:06:07 compute-0 systemd[1]: libpod-5aca12d1ca9a1f2d73c8d24093c2fdf3bfff6749df7390da69002a32f5686fc8.scope: Consumed 1.170s CPU time.
Oct 11 04:06:07 compute-0 podman[272989]: 2025-10-11 04:06:07.495632747 +0000 UTC m=+1.449525459 container died 5aca12d1ca9a1f2d73c8d24093c2fdf3bfff6749df7390da69002a32f5686fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:06:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fb62cf2a1cc7052d6a7625a5276ff274ce06cdf448def43342c9c09b02d6e86-merged.mount: Deactivated successfully.
Oct 11 04:06:07 compute-0 podman[272989]: 2025-10-11 04:06:07.558811811 +0000 UTC m=+1.512704523 container remove 5aca12d1ca9a1f2d73c8d24093c2fdf3bfff6749df7390da69002a32f5686fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:06:07 compute-0 systemd[1]: libpod-conmon-5aca12d1ca9a1f2d73c8d24093c2fdf3bfff6749df7390da69002a32f5686fc8.scope: Deactivated successfully.
Oct 11 04:06:07 compute-0 sudo[272802]: pam_unix(sudo:session): session closed for user root
Oct 11 04:06:07 compute-0 sudo[273056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:06:07 compute-0 sudo[273056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:06:07 compute-0 sudo[273056]: pam_unix(sudo:session): session closed for user root
Oct 11 04:06:07 compute-0 sudo[273081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:06:07 compute-0 sudo[273081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:06:07 compute-0 sudo[273081]: pam_unix(sudo:session): session closed for user root
Oct 11 04:06:07 compute-0 sudo[273106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:06:07 compute-0 sudo[273106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:06:07 compute-0 sudo[273106]: pam_unix(sudo:session): session closed for user root
Oct 11 04:06:07 compute-0 sudo[273131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 04:06:07 compute-0 sudo[273131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:06:08 compute-0 nova_compute[259850]: 2025-10-11 04:06:08.045 2 DEBUG nova.network.neutron [-] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:06:08 compute-0 nova_compute[259850]: 2025-10-11 04:06:08.062 2 INFO nova.compute.manager [-] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Took 1.50 seconds to deallocate network for instance.
Oct 11 04:06:08 compute-0 nova_compute[259850]: 2025-10-11 04:06:08.117 2 DEBUG oslo_concurrency.lockutils [None req-e167bf5c-b2f3-42c7-b64f-48022489e846 c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:08 compute-0 nova_compute[259850]: 2025-10-11 04:06:08.117 2 DEBUG oslo_concurrency.lockutils [None req-e167bf5c-b2f3-42c7-b64f-48022489e846 c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:08 compute-0 nova_compute[259850]: 2025-10-11 04:06:08.121 2 DEBUG nova.compute.manager [req-244a3d08-5712-48b2-ad76-8ae40685ea44 req-5e0669af-48d1-4cf8-9499-3e0565b7c925 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Received event network-vif-plugged-3d50c2ef-db4b-49d0-9eeb-d9a45369939d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:06:08 compute-0 nova_compute[259850]: 2025-10-11 04:06:08.122 2 DEBUG oslo_concurrency.lockutils [req-244a3d08-5712-48b2-ad76-8ae40685ea44 req-5e0669af-48d1-4cf8-9499-3e0565b7c925 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "755b8dbf-4912-4ab3-87a0-0fdcfba7efe4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:08 compute-0 nova_compute[259850]: 2025-10-11 04:06:08.122 2 DEBUG oslo_concurrency.lockutils [req-244a3d08-5712-48b2-ad76-8ae40685ea44 req-5e0669af-48d1-4cf8-9499-3e0565b7c925 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "755b8dbf-4912-4ab3-87a0-0fdcfba7efe4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:08 compute-0 nova_compute[259850]: 2025-10-11 04:06:08.123 2 DEBUG oslo_concurrency.lockutils [req-244a3d08-5712-48b2-ad76-8ae40685ea44 req-5e0669af-48d1-4cf8-9499-3e0565b7c925 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "755b8dbf-4912-4ab3-87a0-0fdcfba7efe4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:08 compute-0 nova_compute[259850]: 2025-10-11 04:06:08.123 2 DEBUG nova.compute.manager [req-244a3d08-5712-48b2-ad76-8ae40685ea44 req-5e0669af-48d1-4cf8-9499-3e0565b7c925 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] No waiting events found dispatching network-vif-plugged-3d50c2ef-db4b-49d0-9eeb-d9a45369939d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:06:08 compute-0 nova_compute[259850]: 2025-10-11 04:06:08.124 2 WARNING nova.compute.manager [req-244a3d08-5712-48b2-ad76-8ae40685ea44 req-5e0669af-48d1-4cf8-9499-3e0565b7c925 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Received unexpected event network-vif-plugged-3d50c2ef-db4b-49d0-9eeb-d9a45369939d for instance with vm_state active and task_state deleting.
Oct 11 04:06:08 compute-0 nova_compute[259850]: 2025-10-11 04:06:08.187 2 DEBUG oslo_concurrency.processutils [None req-e167bf5c-b2f3-42c7-b64f-48022489e846 c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:06:08 compute-0 nova_compute[259850]: 2025-10-11 04:06:08.213 2 DEBUG nova.compute.manager [req-7ff805ac-5197-4e1f-94a7-0631320cd6c7 req-7d872205-f9e5-490b-a3de-35422140d951 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Received event network-vif-deleted-3d50c2ef-db4b-49d0-9eeb-d9a45369939d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:06:08 compute-0 podman[273196]: 2025-10-11 04:06:08.370440444 +0000 UTC m=+0.061166309 container create 586596120cb9cd9d411f12b4eb834e3fc6b40e5c8e8f066f61830c0b3d9e944c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:06:08 compute-0 systemd[1]: Started libpod-conmon-586596120cb9cd9d411f12b4eb834e3fc6b40e5c8e8f066f61830c0b3d9e944c.scope.
Oct 11 04:06:08 compute-0 podman[273196]: 2025-10-11 04:06:08.350391101 +0000 UTC m=+0.041116966 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:06:08 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:06:08 compute-0 podman[273196]: 2025-10-11 04:06:08.468206618 +0000 UTC m=+0.158932473 container init 586596120cb9cd9d411f12b4eb834e3fc6b40e5c8e8f066f61830c0b3d9e944c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:06:08 compute-0 podman[273196]: 2025-10-11 04:06:08.476370767 +0000 UTC m=+0.167096632 container start 586596120cb9cd9d411f12b4eb834e3fc6b40e5c8e8f066f61830c0b3d9e944c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 04:06:08 compute-0 podman[273196]: 2025-10-11 04:06:08.481067409 +0000 UTC m=+0.171793284 container attach 586596120cb9cd9d411f12b4eb834e3fc6b40e5c8e8f066f61830c0b3d9e944c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamarr, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:06:08 compute-0 relaxed_lamarr[273230]: 167 167
Oct 11 04:06:08 compute-0 systemd[1]: libpod-586596120cb9cd9d411f12b4eb834e3fc6b40e5c8e8f066f61830c0b3d9e944c.scope: Deactivated successfully.
Oct 11 04:06:08 compute-0 podman[273196]: 2025-10-11 04:06:08.488476417 +0000 UTC m=+0.179202282 container died 586596120cb9cd9d411f12b4eb834e3fc6b40e5c8e8f066f61830c0b3d9e944c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamarr, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 04:06:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-470fa84d059fae6e485fa465b3f5d19a0caf5b0e16fa0d8c5efe80cf40f4cb81-merged.mount: Deactivated successfully.
Oct 11 04:06:08 compute-0 podman[273196]: 2025-10-11 04:06:08.536431813 +0000 UTC m=+0.227157678 container remove 586596120cb9cd9d411f12b4eb834e3fc6b40e5c8e8f066f61830c0b3d9e944c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 04:06:08 compute-0 systemd[1]: libpod-conmon-586596120cb9cd9d411f12b4eb834e3fc6b40e5c8e8f066f61830c0b3d9e944c.scope: Deactivated successfully.
Oct 11 04:06:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:06:08 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2757919330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:06:08 compute-0 nova_compute[259850]: 2025-10-11 04:06:08.640 2 DEBUG oslo_concurrency.processutils [None req-e167bf5c-b2f3-42c7-b64f-48022489e846 c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:06:08 compute-0 nova_compute[259850]: 2025-10-11 04:06:08.649 2 DEBUG nova.compute.provider_tree [None req-e167bf5c-b2f3-42c7-b64f-48022489e846 c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:06:08 compute-0 nova_compute[259850]: 2025-10-11 04:06:08.687 2 DEBUG nova.scheduler.client.report [None req-e167bf5c-b2f3-42c7-b64f-48022489e846 c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:06:08 compute-0 nova_compute[259850]: 2025-10-11 04:06:08.749 2 DEBUG oslo_concurrency.lockutils [None req-e167bf5c-b2f3-42c7-b64f-48022489e846 c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:08 compute-0 podman[273257]: 2025-10-11 04:06:08.765707239 +0000 UTC m=+0.073488774 container create 4a062cacc9e4520d04d30772ff5d1dd83c1be26e7a1819812d75c736ddf667cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_heisenberg, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:06:08 compute-0 nova_compute[259850]: 2025-10-11 04:06:08.786 2 INFO nova.scheduler.client.report [None req-e167bf5c-b2f3-42c7-b64f-48022489e846 c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Deleted allocations for instance 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4
Oct 11 04:06:08 compute-0 podman[273257]: 2025-10-11 04:06:08.721323773 +0000 UTC m=+0.029105368 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:06:08 compute-0 systemd[1]: Started libpod-conmon-4a062cacc9e4520d04d30772ff5d1dd83c1be26e7a1819812d75c736ddf667cb.scope.
Oct 11 04:06:08 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:06:08 compute-0 nova_compute[259850]: 2025-10-11 04:06:08.867 2 DEBUG oslo_concurrency.lockutils [None req-e167bf5c-b2f3-42c7-b64f-48022489e846 c25efb567172419289431bb87a4358cb 1e2113337abc4651b6b207f4cda57799 - - default default] Lock "755b8dbf-4912-4ab3-87a0-0fdcfba7efe4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.220s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9dd8cac98eacdb561a12ccd7f460f48b9ec649cb53dffd5ee7264a9be2bf1d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9dd8cac98eacdb561a12ccd7f460f48b9ec649cb53dffd5ee7264a9be2bf1d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9dd8cac98eacdb561a12ccd7f460f48b9ec649cb53dffd5ee7264a9be2bf1d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9dd8cac98eacdb561a12ccd7f460f48b9ec649cb53dffd5ee7264a9be2bf1d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:06:08 compute-0 podman[273257]: 2025-10-11 04:06:08.956596387 +0000 UTC m=+0.264377972 container init 4a062cacc9e4520d04d30772ff5d1dd83c1be26e7a1819812d75c736ddf667cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:06:08 compute-0 podman[273257]: 2025-10-11 04:06:08.97523681 +0000 UTC m=+0.283018335 container start 4a062cacc9e4520d04d30772ff5d1dd83c1be26e7a1819812d75c736ddf667cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_heisenberg, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:06:08 compute-0 podman[273257]: 2025-10-11 04:06:08.990220531 +0000 UTC m=+0.298002076 container attach 4a062cacc9e4520d04d30772ff5d1dd83c1be26e7a1819812d75c736ddf667cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:06:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Oct 11 04:06:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Oct 11 04:06:09 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Oct 11 04:06:09 compute-0 ceph-mon[74273]: pgmap v1066: 305 pgs: 305 active+clean; 88 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 7.5 MiB/s rd, 19 KiB/s wr, 137 op/s
Oct 11 04:06:09 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2757919330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:06:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 134 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 8.1 MiB/s rd, 5.3 MiB/s wr, 309 op/s
Oct 11 04:06:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]: {
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:     "0": [
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:         {
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "devices": [
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "/dev/loop3"
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             ],
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "lv_name": "ceph_lv0",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "lv_size": "21470642176",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "name": "ceph_lv0",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "tags": {
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.cluster_name": "ceph",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.crush_device_class": "",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.encrypted": "0",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.osd_id": "0",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.type": "block",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.vdo": "0"
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             },
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "type": "block",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "vg_name": "ceph_vg0"
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:         }
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:     ],
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:     "1": [
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:         {
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "devices": [
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "/dev/loop4"
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             ],
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "lv_name": "ceph_lv1",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "lv_size": "21470642176",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "name": "ceph_lv1",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "tags": {
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.cluster_name": "ceph",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.crush_device_class": "",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.encrypted": "0",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.osd_id": "1",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.type": "block",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.vdo": "0"
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             },
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "type": "block",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "vg_name": "ceph_vg1"
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:         }
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:     ],
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:     "2": [
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:         {
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "devices": [
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "/dev/loop5"
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             ],
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "lv_name": "ceph_lv2",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "lv_size": "21470642176",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "name": "ceph_lv2",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "tags": {
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.cluster_name": "ceph",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.crush_device_class": "",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.encrypted": "0",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.osd_id": "2",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.type": "block",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:                 "ceph.vdo": "0"
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             },
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "type": "block",
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:             "vg_name": "ceph_vg2"
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:         }
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]:     ]
Oct 11 04:06:09 compute-0 vigorous_heisenberg[273273]: }
Oct 11 04:06:09 compute-0 systemd[1]: libpod-4a062cacc9e4520d04d30772ff5d1dd83c1be26e7a1819812d75c736ddf667cb.scope: Deactivated successfully.
Oct 11 04:06:09 compute-0 podman[273257]: 2025-10-11 04:06:09.820628621 +0000 UTC m=+1.128410126 container died 4a062cacc9e4520d04d30772ff5d1dd83c1be26e7a1819812d75c736ddf667cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:06:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9dd8cac98eacdb561a12ccd7f460f48b9ec649cb53dffd5ee7264a9be2bf1d3-merged.mount: Deactivated successfully.
Oct 11 04:06:09 compute-0 podman[273257]: 2025-10-11 04:06:09.880131731 +0000 UTC m=+1.187913246 container remove 4a062cacc9e4520d04d30772ff5d1dd83c1be26e7a1819812d75c736ddf667cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:06:09 compute-0 systemd[1]: libpod-conmon-4a062cacc9e4520d04d30772ff5d1dd83c1be26e7a1819812d75c736ddf667cb.scope: Deactivated successfully.
Oct 11 04:06:09 compute-0 sudo[273131]: pam_unix(sudo:session): session closed for user root
Oct 11 04:06:09 compute-0 podman[273283]: 2025-10-11 04:06:09.97483201 +0000 UTC m=+0.121129721 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 04:06:09 compute-0 sudo[273313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:06:10 compute-0 sudo[273313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:06:10 compute-0 sudo[273313]: pam_unix(sudo:session): session closed for user root
Oct 11 04:06:10 compute-0 sudo[273345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:06:10 compute-0 sudo[273345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:06:10 compute-0 sudo[273345]: pam_unix(sudo:session): session closed for user root
Oct 11 04:06:10 compute-0 ceph-mon[74273]: osdmap e183: 3 total, 3 up, 3 in
Oct 11 04:06:10 compute-0 ceph-mon[74273]: pgmap v1068: 305 pgs: 305 active+clean; 134 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 8.1 MiB/s rd, 5.3 MiB/s wr, 309 op/s
Oct 11 04:06:10 compute-0 sudo[273370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:06:10 compute-0 sudo[273370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:06:10 compute-0 sudo[273370]: pam_unix(sudo:session): session closed for user root
Oct 11 04:06:10 compute-0 sudo[273395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 04:06:10 compute-0 sudo[273395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:06:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:06:10 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1888171320' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:06:10 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1888171320' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:10 compute-0 podman[273461]: 2025-10-11 04:06:10.745688218 +0000 UTC m=+0.091937262 container create 3108347d4168563b7945d456d148468d543df1b1b7a284c27feaa686ac8a8312 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jones, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:06:10 compute-0 podman[273461]: 2025-10-11 04:06:10.687209957 +0000 UTC m=+0.033459071 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:06:10 compute-0 systemd[1]: Started libpod-conmon-3108347d4168563b7945d456d148468d543df1b1b7a284c27feaa686ac8a8312.scope.
Oct 11 04:06:10 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:06:10 compute-0 podman[273461]: 2025-10-11 04:06:10.885501163 +0000 UTC m=+0.231750237 container init 3108347d4168563b7945d456d148468d543df1b1b7a284c27feaa686ac8a8312 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jones, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 04:06:10 compute-0 podman[273461]: 2025-10-11 04:06:10.899727132 +0000 UTC m=+0.245976206 container start 3108347d4168563b7945d456d148468d543df1b1b7a284c27feaa686ac8a8312 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:06:10 compute-0 podman[273461]: 2025-10-11 04:06:10.904332021 +0000 UTC m=+0.250581105 container attach 3108347d4168563b7945d456d148468d543df1b1b7a284c27feaa686ac8a8312 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jones, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:06:10 compute-0 exciting_jones[273477]: 167 167
Oct 11 04:06:10 compute-0 systemd[1]: libpod-3108347d4168563b7945d456d148468d543df1b1b7a284c27feaa686ac8a8312.scope: Deactivated successfully.
Oct 11 04:06:10 compute-0 conmon[273477]: conmon 3108347d4168563b7945 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3108347d4168563b7945d456d148468d543df1b1b7a284c27feaa686ac8a8312.scope/container/memory.events
Oct 11 04:06:10 compute-0 podman[273461]: 2025-10-11 04:06:10.909715922 +0000 UTC m=+0.255964996 container died 3108347d4168563b7945d456d148468d543df1b1b7a284c27feaa686ac8a8312 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:06:10 compute-0 nova_compute[259850]: 2025-10-11 04:06:10.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f95ffd2a0306adc9f689e413785af96a02dec2e5c2ae68253c4bc6d915c0740-merged.mount: Deactivated successfully.
Oct 11 04:06:10 compute-0 podman[273461]: 2025-10-11 04:06:10.955374853 +0000 UTC m=+0.301623917 container remove 3108347d4168563b7945d456d148468d543df1b1b7a284c27feaa686ac8a8312 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jones, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:06:10 compute-0 systemd[1]: libpod-conmon-3108347d4168563b7945d456d148468d543df1b1b7a284c27feaa686ac8a8312.scope: Deactivated successfully.
Oct 11 04:06:11 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1888171320' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:11 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1888171320' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:11 compute-0 podman[273501]: 2025-10-11 04:06:11.141467227 +0000 UTC m=+0.048603796 container create f9e52f4e1c6cec43da5167a7ac5428183fed345bd11584d8fbbcd8a5c47ee355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cray, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 04:06:11 compute-0 systemd[1]: Started libpod-conmon-f9e52f4e1c6cec43da5167a7ac5428183fed345bd11584d8fbbcd8a5c47ee355.scope.
Oct 11 04:06:11 compute-0 podman[273501]: 2025-10-11 04:06:11.12235188 +0000 UTC m=+0.029488449 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:06:11 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:06:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06fef52bfa0b66bce6c65707ad44034e25debf8050cde2f4963215ba3fdb7f4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:06:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06fef52bfa0b66bce6c65707ad44034e25debf8050cde2f4963215ba3fdb7f4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:06:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06fef52bfa0b66bce6c65707ad44034e25debf8050cde2f4963215ba3fdb7f4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:06:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06fef52bfa0b66bce6c65707ad44034e25debf8050cde2f4963215ba3fdb7f4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:06:11 compute-0 podman[273501]: 2025-10-11 04:06:11.257682529 +0000 UTC m=+0.164819078 container init f9e52f4e1c6cec43da5167a7ac5428183fed345bd11584d8fbbcd8a5c47ee355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cray, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:06:11 compute-0 podman[273501]: 2025-10-11 04:06:11.269896682 +0000 UTC m=+0.177033261 container start f9e52f4e1c6cec43da5167a7ac5428183fed345bd11584d8fbbcd8a5c47ee355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cray, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:06:11 compute-0 podman[273501]: 2025-10-11 04:06:11.273540034 +0000 UTC m=+0.180676643 container attach f9e52f4e1c6cec43da5167a7ac5428183fed345bd11584d8fbbcd8a5c47ee355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 04:06:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 134 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 5.3 MiB/s wr, 163 op/s
Oct 11 04:06:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:06:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3216209756' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:06:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3216209756' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:06:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1682851066' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:06:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1682851066' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:12 compute-0 nova_compute[259850]: 2025-10-11 04:06:12.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:12 compute-0 ceph-mon[74273]: pgmap v1069: 305 pgs: 305 active+clean; 134 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 5.3 MiB/s wr, 163 op/s
Oct 11 04:06:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3216209756' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3216209756' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1682851066' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1682851066' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:12 compute-0 charming_cray[273518]: {
Oct 11 04:06:12 compute-0 charming_cray[273518]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 04:06:12 compute-0 charming_cray[273518]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:06:12 compute-0 charming_cray[273518]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:06:12 compute-0 charming_cray[273518]:         "osd_id": 1,
Oct 11 04:06:12 compute-0 charming_cray[273518]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:06:12 compute-0 charming_cray[273518]:         "type": "bluestore"
Oct 11 04:06:12 compute-0 charming_cray[273518]:     },
Oct 11 04:06:12 compute-0 charming_cray[273518]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 04:06:12 compute-0 charming_cray[273518]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:06:12 compute-0 charming_cray[273518]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:06:12 compute-0 charming_cray[273518]:         "osd_id": 2,
Oct 11 04:06:12 compute-0 charming_cray[273518]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:06:12 compute-0 charming_cray[273518]:         "type": "bluestore"
Oct 11 04:06:12 compute-0 charming_cray[273518]:     },
Oct 11 04:06:12 compute-0 charming_cray[273518]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 04:06:12 compute-0 charming_cray[273518]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:06:12 compute-0 charming_cray[273518]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:06:12 compute-0 charming_cray[273518]:         "osd_id": 0,
Oct 11 04:06:12 compute-0 charming_cray[273518]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:06:12 compute-0 charming_cray[273518]:         "type": "bluestore"
Oct 11 04:06:12 compute-0 charming_cray[273518]:     }
Oct 11 04:06:12 compute-0 charming_cray[273518]: }
Oct 11 04:06:12 compute-0 systemd[1]: libpod-f9e52f4e1c6cec43da5167a7ac5428183fed345bd11584d8fbbcd8a5c47ee355.scope: Deactivated successfully.
Oct 11 04:06:12 compute-0 podman[273501]: 2025-10-11 04:06:12.452724475 +0000 UTC m=+1.359861054 container died f9e52f4e1c6cec43da5167a7ac5428183fed345bd11584d8fbbcd8a5c47ee355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:06:12 compute-0 systemd[1]: libpod-f9e52f4e1c6cec43da5167a7ac5428183fed345bd11584d8fbbcd8a5c47ee355.scope: Consumed 1.170s CPU time.
Oct 11 04:06:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-06fef52bfa0b66bce6c65707ad44034e25debf8050cde2f4963215ba3fdb7f4d-merged.mount: Deactivated successfully.
Oct 11 04:06:12 compute-0 podman[273501]: 2025-10-11 04:06:12.541532917 +0000 UTC m=+1.448669506 container remove f9e52f4e1c6cec43da5167a7ac5428183fed345bd11584d8fbbcd8a5c47ee355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:06:12 compute-0 systemd[1]: libpod-conmon-f9e52f4e1c6cec43da5167a7ac5428183fed345bd11584d8fbbcd8a5c47ee355.scope: Deactivated successfully.
Oct 11 04:06:12 compute-0 sudo[273395]: pam_unix(sudo:session): session closed for user root
Oct 11 04:06:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:06:12 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:06:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:06:12 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:06:12 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev cb0d26dc-465c-44eb-9739-29e923835f06 does not exist
Oct 11 04:06:12 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 3f2dbb46-fc9a-453d-b141-2615322f5a88 does not exist
Oct 11 04:06:12 compute-0 sudo[273564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:06:12 compute-0 sudo[273564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:06:12 compute-0 sudo[273564]: pam_unix(sudo:session): session closed for user root
Oct 11 04:06:12 compute-0 sudo[273589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:06:12 compute-0 sudo[273589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:06:12 compute-0 sudo[273589]: pam_unix(sudo:session): session closed for user root
Oct 11 04:06:13 compute-0 nova_compute[259850]: 2025-10-11 04:06:13.045 2 DEBUG oslo_concurrency.lockutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Acquiring lock "2b618038-2466-4671-9914-c69aecf8c771" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:13 compute-0 nova_compute[259850]: 2025-10-11 04:06:13.046 2 DEBUG oslo_concurrency.lockutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "2b618038-2466-4671-9914-c69aecf8c771" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:13 compute-0 nova_compute[259850]: 2025-10-11 04:06:13.071 2 DEBUG nova.compute.manager [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:06:13 compute-0 nova_compute[259850]: 2025-10-11 04:06:13.142 2 DEBUG oslo_concurrency.lockutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:13 compute-0 nova_compute[259850]: 2025-10-11 04:06:13.143 2 DEBUG oslo_concurrency.lockutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:13 compute-0 nova_compute[259850]: 2025-10-11 04:06:13.154 2 DEBUG nova.virt.hardware [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:06:13 compute-0 nova_compute[259850]: 2025-10-11 04:06:13.155 2 INFO nova.compute.claims [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:06:13 compute-0 nova_compute[259850]: 2025-10-11 04:06:13.258 2 DEBUG oslo_concurrency.processutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:06:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 134 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 158 KiB/s rd, 4.9 MiB/s wr, 229 op/s
Oct 11 04:06:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:06:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:06:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:06:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1506492548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:06:13 compute-0 nova_compute[259850]: 2025-10-11 04:06:13.748 2 DEBUG oslo_concurrency.processutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:06:13 compute-0 nova_compute[259850]: 2025-10-11 04:06:13.758 2 DEBUG nova.compute.provider_tree [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:06:13 compute-0 nova_compute[259850]: 2025-10-11 04:06:13.776 2 DEBUG nova.scheduler.client.report [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:06:13 compute-0 nova_compute[259850]: 2025-10-11 04:06:13.800 2 DEBUG oslo_concurrency.lockutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:13 compute-0 nova_compute[259850]: 2025-10-11 04:06:13.801 2 DEBUG nova.compute.manager [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:06:13 compute-0 nova_compute[259850]: 2025-10-11 04:06:13.855 2 DEBUG nova.compute.manager [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:06:13 compute-0 nova_compute[259850]: 2025-10-11 04:06:13.856 2 DEBUG nova.network.neutron [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:06:13 compute-0 nova_compute[259850]: 2025-10-11 04:06:13.880 2 INFO nova.virt.libvirt.driver [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:06:13 compute-0 nova_compute[259850]: 2025-10-11 04:06:13.900 2 DEBUG nova.compute.manager [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:06:14 compute-0 nova_compute[259850]: 2025-10-11 04:06:14.005 2 DEBUG nova.compute.manager [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:06:14 compute-0 nova_compute[259850]: 2025-10-11 04:06:14.007 2 DEBUG nova.virt.libvirt.driver [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:06:14 compute-0 nova_compute[259850]: 2025-10-11 04:06:14.008 2 INFO nova.virt.libvirt.driver [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Creating image(s)
Oct 11 04:06:14 compute-0 nova_compute[259850]: 2025-10-11 04:06:14.043 2 DEBUG nova.storage.rbd_utils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] rbd image 2b618038-2466-4671-9914-c69aecf8c771_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:06:14 compute-0 nova_compute[259850]: 2025-10-11 04:06:14.079 2 DEBUG nova.storage.rbd_utils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] rbd image 2b618038-2466-4671-9914-c69aecf8c771_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:06:14 compute-0 nova_compute[259850]: 2025-10-11 04:06:14.114 2 DEBUG nova.storage.rbd_utils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] rbd image 2b618038-2466-4671-9914-c69aecf8c771_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:06:14 compute-0 nova_compute[259850]: 2025-10-11 04:06:14.119 2 DEBUG oslo_concurrency.processutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:06:14 compute-0 nova_compute[259850]: 2025-10-11 04:06:14.155 2 DEBUG nova.policy [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c5660041067943deb3c73caa6e62f851', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2783729ed466412aac8ceb01d86a0b12', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:06:14 compute-0 nova_compute[259850]: 2025-10-11 04:06:14.161 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:06:14 compute-0 nova_compute[259850]: 2025-10-11 04:06:14.214 2 DEBUG oslo_concurrency.processutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:06:14 compute-0 nova_compute[259850]: 2025-10-11 04:06:14.215 2 DEBUG oslo_concurrency.lockutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Acquiring lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:14 compute-0 nova_compute[259850]: 2025-10-11 04:06:14.216 2 DEBUG oslo_concurrency.lockutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:14 compute-0 nova_compute[259850]: 2025-10-11 04:06:14.216 2 DEBUG oslo_concurrency.lockutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:14 compute-0 nova_compute[259850]: 2025-10-11 04:06:14.243 2 DEBUG nova.storage.rbd_utils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] rbd image 2b618038-2466-4671-9914-c69aecf8c771_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:06:14 compute-0 nova_compute[259850]: 2025-10-11 04:06:14.249 2 DEBUG oslo_concurrency.processutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac 2b618038-2466-4671-9914-c69aecf8c771_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:06:14 compute-0 nova_compute[259850]: 2025-10-11 04:06:14.559 2 DEBUG oslo_concurrency.processutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac 2b618038-2466-4671-9914-c69aecf8c771_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:06:14 compute-0 nova_compute[259850]: 2025-10-11 04:06:14.620 2 DEBUG nova.storage.rbd_utils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] resizing rbd image 2b618038-2466-4671-9914-c69aecf8c771_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:06:14 compute-0 ceph-mon[74273]: pgmap v1070: 305 pgs: 305 active+clean; 134 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 158 KiB/s rd, 4.9 MiB/s wr, 229 op/s
Oct 11 04:06:14 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1506492548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:06:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Oct 11 04:06:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Oct 11 04:06:14 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Oct 11 04:06:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:06:14 compute-0 nova_compute[259850]: 2025-10-11 04:06:14.765 2 DEBUG nova.objects.instance [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lazy-loading 'migration_context' on Instance uuid 2b618038-2466-4671-9914-c69aecf8c771 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:06:14 compute-0 nova_compute[259850]: 2025-10-11 04:06:14.780 2 DEBUG nova.virt.libvirt.driver [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:06:14 compute-0 nova_compute[259850]: 2025-10-11 04:06:14.781 2 DEBUG nova.virt.libvirt.driver [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Ensure instance console log exists: /var/lib/nova/instances/2b618038-2466-4671-9914-c69aecf8c771/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:06:14 compute-0 nova_compute[259850]: 2025-10-11 04:06:14.781 2 DEBUG oslo_concurrency.lockutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:14 compute-0 nova_compute[259850]: 2025-10-11 04:06:14.782 2 DEBUG oslo_concurrency.lockutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:14 compute-0 nova_compute[259850]: 2025-10-11 04:06:14.782 2 DEBUG oslo_concurrency.lockutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:14 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:14.873 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:61:6f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '92:f1:b6:e4:f1:16'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:06:14 compute-0 nova_compute[259850]: 2025-10-11 04:06:14.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:14 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:14.875 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:06:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:06:15 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/96048175' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:06:15 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/96048175' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:15 compute-0 nova_compute[259850]: 2025-10-11 04:06:15.077 2 DEBUG nova.network.neutron [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Successfully created port: 18b8cda5-7bec-4b29-838f-24cad68162af _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:06:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 134 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 173 KiB/s rd, 5.3 MiB/s wr, 250 op/s
Oct 11 04:06:15 compute-0 ceph-mon[74273]: osdmap e184: 3 total, 3 up, 3 in
Oct 11 04:06:15 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/96048175' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:15 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/96048175' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:15 compute-0 nova_compute[259850]: 2025-10-11 04:06:15.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:16 compute-0 nova_compute[259850]: 2025-10-11 04:06:16.077 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:06:16 compute-0 nova_compute[259850]: 2025-10-11 04:06:16.078 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:06:16 compute-0 nova_compute[259850]: 2025-10-11 04:06:16.078 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:06:16 compute-0 nova_compute[259850]: 2025-10-11 04:06:16.191 2 DEBUG nova.network.neutron [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Successfully updated port: 18b8cda5-7bec-4b29-838f-24cad68162af _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:06:16 compute-0 nova_compute[259850]: 2025-10-11 04:06:16.221 2 DEBUG oslo_concurrency.lockutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Acquiring lock "refresh_cache-2b618038-2466-4671-9914-c69aecf8c771" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:06:16 compute-0 nova_compute[259850]: 2025-10-11 04:06:16.221 2 DEBUG oslo_concurrency.lockutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Acquired lock "refresh_cache-2b618038-2466-4671-9914-c69aecf8c771" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:06:16 compute-0 nova_compute[259850]: 2025-10-11 04:06:16.221 2 DEBUG nova.network.neutron [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:06:16 compute-0 nova_compute[259850]: 2025-10-11 04:06:16.340 2 DEBUG nova.compute.manager [req-72b5856a-528f-4944-9821-d70b81ec5840 req-a1486984-0409-4b00-951d-609a2d19cd1e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Received event network-changed-18b8cda5-7bec-4b29-838f-24cad68162af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:06:16 compute-0 nova_compute[259850]: 2025-10-11 04:06:16.341 2 DEBUG nova.compute.manager [req-72b5856a-528f-4944-9821-d70b81ec5840 req-a1486984-0409-4b00-951d-609a2d19cd1e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Refreshing instance network info cache due to event network-changed-18b8cda5-7bec-4b29-838f-24cad68162af. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:06:16 compute-0 nova_compute[259850]: 2025-10-11 04:06:16.341 2 DEBUG oslo_concurrency.lockutils [req-72b5856a-528f-4944-9821-d70b81ec5840 req-a1486984-0409-4b00-951d-609a2d19cd1e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-2b618038-2466-4671-9914-c69aecf8c771" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:06:16 compute-0 nova_compute[259850]: 2025-10-11 04:06:16.388 2 DEBUG nova.network.neutron [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:06:16 compute-0 podman[273802]: 2025-10-11 04:06:16.404978375 +0000 UTC m=+0.100749539 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 11 04:06:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:06:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3685996692' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:06:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3685996692' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:16 compute-0 ceph-mon[74273]: pgmap v1072: 305 pgs: 305 active+clean; 134 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 173 KiB/s rd, 5.3 MiB/s wr, 250 op/s
Oct 11 04:06:16 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3685996692' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:16 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3685996692' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.055 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.094 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.095 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.095 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.095 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.096 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.407 2 DEBUG nova.network.neutron [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Updating instance_info_cache with network_info: [{"id": "18b8cda5-7bec-4b29-838f-24cad68162af", "address": "fa:16:3e:55:96:0a", "network": {"id": "bfa0cc72-c909-48db-80bb-536eb7b52f6e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1615284681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2783729ed466412aac8ceb01d86a0b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b8cda5-7b", "ovs_interfaceid": "18b8cda5-7bec-4b29-838f-24cad68162af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.430 2 DEBUG oslo_concurrency.lockutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Releasing lock "refresh_cache-2b618038-2466-4671-9914-c69aecf8c771" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.430 2 DEBUG nova.compute.manager [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Instance network_info: |[{"id": "18b8cda5-7bec-4b29-838f-24cad68162af", "address": "fa:16:3e:55:96:0a", "network": {"id": "bfa0cc72-c909-48db-80bb-536eb7b52f6e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1615284681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2783729ed466412aac8ceb01d86a0b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b8cda5-7b", "ovs_interfaceid": "18b8cda5-7bec-4b29-838f-24cad68162af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.431 2 DEBUG oslo_concurrency.lockutils [req-72b5856a-528f-4944-9821-d70b81ec5840 req-a1486984-0409-4b00-951d-609a2d19cd1e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-2b618038-2466-4671-9914-c69aecf8c771" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.431 2 DEBUG nova.network.neutron [req-72b5856a-528f-4944-9821-d70b81ec5840 req-a1486984-0409-4b00-951d-609a2d19cd1e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Refreshing network info cache for port 18b8cda5-7bec-4b29-838f-24cad68162af _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.436 2 DEBUG nova.virt.libvirt.driver [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Start _get_guest_xml network_info=[{"id": "18b8cda5-7bec-4b29-838f-24cad68162af", "address": "fa:16:3e:55:96:0a", "network": {"id": "bfa0cc72-c909-48db-80bb-536eb7b52f6e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1615284681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2783729ed466412aac8ceb01d86a0b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b8cda5-7b", "ovs_interfaceid": "18b8cda5-7bec-4b29-838f-24cad68162af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'image_id': '1a107e2f-1a9d-4b6f-861d-e64bee7d56be'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.442 2 WARNING nova.virt.libvirt.driver [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.448 2 DEBUG nova.virt.libvirt.host [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.449 2 DEBUG nova.virt.libvirt.host [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.466 2 DEBUG nova.virt.libvirt.host [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.467 2 DEBUG nova.virt.libvirt.host [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.468 2 DEBUG nova.virt.libvirt.driver [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.469 2 DEBUG nova.virt.hardware [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.470 2 DEBUG nova.virt.hardware [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.470 2 DEBUG nova.virt.hardware [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.470 2 DEBUG nova.virt.hardware [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.471 2 DEBUG nova.virt.hardware [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.471 2 DEBUG nova.virt.hardware [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.472 2 DEBUG nova.virt.hardware [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.472 2 DEBUG nova.virt.hardware [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.473 2 DEBUG nova.virt.hardware [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.473 2 DEBUG nova.virt.hardware [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.474 2 DEBUG nova.virt.hardware [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:06:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 134 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 4.0 KiB/s wr, 83 op/s
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.480 2 DEBUG oslo_concurrency.processutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:06:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:06:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3377011248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.523 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:06:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3377011248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.727 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.728 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4681MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.729 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.729 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:06:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/120594559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.920 2 DEBUG oslo_concurrency.processutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.952 2 DEBUG nova.storage.rbd_utils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] rbd image 2b618038-2466-4671-9914-c69aecf8c771_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:06:17 compute-0 nova_compute[259850]: 2025-10-11 04:06:17.959 2 DEBUG oslo_concurrency.processutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.060 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Instance 2b618038-2466-4671-9914-c69aecf8c771 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.061 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.061 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:06:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:06:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2047000392' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:06:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2047000392' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.205 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:06:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:06:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/312928466' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:06:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/312928466' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:06:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/803737825' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.495 2 DEBUG oslo_concurrency.processutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.497 2 DEBUG nova.virt.libvirt.vif [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:06:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1537074081',display_name='tempest-VolumesSnapshotTestJSON-instance-1537074081',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1537074081',id=6,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJKvBS+sHt0R++EQjY8399i9xRV8xwy8PNGrky3BzxlKGZCtm3DcIWTejUfK1VEDKEydb8PJX5YdahSJhSOa4QWvc1+qljSsnLkUpuPznZoJliIMCS/A+eCn6if+XQhyhA==',key_name='tempest-keypair-1067966574',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2783729ed466412aac8ceb01d86a0b12',ramdisk_id='',reservation_id='r-zfyt80ry',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-180407200',owner_user_name='tempest-VolumesSnapshotTestJSON-180407200-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:06:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5660041067943deb3c73caa6e62f851',uuid=2b618038-2466-4671-9914-c69aecf8c771,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18b8cda5-7bec-4b29-838f-24cad68162af", "address": "fa:16:3e:55:96:0a", "network": {"id": "bfa0cc72-c909-48db-80bb-536eb7b52f6e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1615284681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2783729ed466412aac8ceb01d86a0b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b8cda5-7b", "ovs_interfaceid": "18b8cda5-7bec-4b29-838f-24cad68162af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.497 2 DEBUG nova.network.os_vif_util [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Converting VIF {"id": "18b8cda5-7bec-4b29-838f-24cad68162af", "address": "fa:16:3e:55:96:0a", "network": {"id": "bfa0cc72-c909-48db-80bb-536eb7b52f6e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1615284681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2783729ed466412aac8ceb01d86a0b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b8cda5-7b", "ovs_interfaceid": "18b8cda5-7bec-4b29-838f-24cad68162af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.498 2 DEBUG nova.network.os_vif_util [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:96:0a,bridge_name='br-int',has_traffic_filtering=True,id=18b8cda5-7bec-4b29-838f-24cad68162af,network=Network(bfa0cc72-c909-48db-80bb-536eb7b52f6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b8cda5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.499 2 DEBUG nova.objects.instance [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b618038-2466-4671-9914-c69aecf8c771 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.511 2 DEBUG nova.virt.libvirt.driver [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:06:18 compute-0 nova_compute[259850]:   <uuid>2b618038-2466-4671-9914-c69aecf8c771</uuid>
Oct 11 04:06:18 compute-0 nova_compute[259850]:   <name>instance-00000006</name>
Oct 11 04:06:18 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:06:18 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:06:18 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <nova:name>tempest-VolumesSnapshotTestJSON-instance-1537074081</nova:name>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:06:17</nova:creationTime>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:06:18 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:06:18 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:06:18 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:06:18 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:06:18 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:06:18 compute-0 nova_compute[259850]:         <nova:user uuid="c5660041067943deb3c73caa6e62f851">tempest-VolumesSnapshotTestJSON-180407200-project-member</nova:user>
Oct 11 04:06:18 compute-0 nova_compute[259850]:         <nova:project uuid="2783729ed466412aac8ceb01d86a0b12">tempest-VolumesSnapshotTestJSON-180407200</nova:project>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <nova:root type="image" uuid="1a107e2f-1a9d-4b6f-861d-e64bee7d56be"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <nova:ports>
Oct 11 04:06:18 compute-0 nova_compute[259850]:         <nova:port uuid="18b8cda5-7bec-4b29-838f-24cad68162af">
Oct 11 04:06:18 compute-0 nova_compute[259850]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:         </nova:port>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       </nova:ports>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:06:18 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:06:18 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <system>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <entry name="serial">2b618038-2466-4671-9914-c69aecf8c771</entry>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <entry name="uuid">2b618038-2466-4671-9914-c69aecf8c771</entry>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     </system>
Oct 11 04:06:18 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:06:18 compute-0 nova_compute[259850]:   <os>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:   </os>
Oct 11 04:06:18 compute-0 nova_compute[259850]:   <features>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:   </features>
Oct 11 04:06:18 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:06:18 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:06:18 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/2b618038-2466-4671-9914-c69aecf8c771_disk">
Oct 11 04:06:18 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       </source>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:06:18 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/2b618038-2466-4671-9914-c69aecf8c771_disk.config">
Oct 11 04:06:18 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       </source>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:06:18 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <interface type="ethernet">
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <mac address="fa:16:3e:55:96:0a"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <mtu size="1442"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <target dev="tap18b8cda5-7b"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     </interface>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/2b618038-2466-4671-9914-c69aecf8c771/console.log" append="off"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <video>
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     </video>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:06:18 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:06:18 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:06:18 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:06:18 compute-0 nova_compute[259850]: </domain>
Oct 11 04:06:18 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.511 2 DEBUG nova.compute.manager [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Preparing to wait for external event network-vif-plugged-18b8cda5-7bec-4b29-838f-24cad68162af prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.512 2 DEBUG oslo_concurrency.lockutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Acquiring lock "2b618038-2466-4671-9914-c69aecf8c771-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.512 2 DEBUG oslo_concurrency.lockutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "2b618038-2466-4671-9914-c69aecf8c771-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.512 2 DEBUG oslo_concurrency.lockutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "2b618038-2466-4671-9914-c69aecf8c771-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.513 2 DEBUG nova.virt.libvirt.vif [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:06:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1537074081',display_name='tempest-VolumesSnapshotTestJSON-instance-1537074081',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1537074081',id=6,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJKvBS+sHt0R++EQjY8399i9xRV8xwy8PNGrky3BzxlKGZCtm3DcIWTejUfK1VEDKEydb8PJX5YdahSJhSOa4QWvc1+qljSsnLkUpuPznZoJliIMCS/A+eCn6if+XQhyhA==',key_name='tempest-keypair-1067966574',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2783729ed466412aac8ceb01d86a0b12',ramdisk_id='',reservation_id='r-zfyt80ry',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-180407200',owner_user_name='tempest-VolumesSnapshotTestJSON-180407200-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:06:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5660041067943deb3c73caa6e62f851',uuid=2b618038-2466-4671-9914-c69aecf8c771,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18b8cda5-7bec-4b29-838f-24cad68162af", "address": "fa:16:3e:55:96:0a", "network": {"id": "bfa0cc72-c909-48db-80bb-536eb7b52f6e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1615284681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2783729ed466412aac8ceb01d86a0b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b8cda5-7b", "ovs_interfaceid": "18b8cda5-7bec-4b29-838f-24cad68162af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.513 2 DEBUG nova.network.os_vif_util [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Converting VIF {"id": "18b8cda5-7bec-4b29-838f-24cad68162af", "address": "fa:16:3e:55:96:0a", "network": {"id": "bfa0cc72-c909-48db-80bb-536eb7b52f6e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1615284681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2783729ed466412aac8ceb01d86a0b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b8cda5-7b", "ovs_interfaceid": "18b8cda5-7bec-4b29-838f-24cad68162af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.513 2 DEBUG nova.network.os_vif_util [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:96:0a,bridge_name='br-int',has_traffic_filtering=True,id=18b8cda5-7bec-4b29-838f-24cad68162af,network=Network(bfa0cc72-c909-48db-80bb-536eb7b52f6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b8cda5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.514 2 DEBUG os_vif [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:96:0a,bridge_name='br-int',has_traffic_filtering=True,id=18b8cda5-7bec-4b29-838f-24cad68162af,network=Network(bfa0cc72-c909-48db-80bb-536eb7b52f6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b8cda5-7b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.514 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.515 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.518 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18b8cda5-7b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.518 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap18b8cda5-7b, col_values=(('external_ids', {'iface-id': '18b8cda5-7bec-4b29-838f-24cad68162af', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:55:96:0a', 'vm-uuid': '2b618038-2466-4671-9914-c69aecf8c771'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:06:18 compute-0 NetworkManager[44920]: <info>  [1760155578.5213] manager: (tap18b8cda5-7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.531 2 INFO os_vif [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:96:0a,bridge_name='br-int',has_traffic_filtering=True,id=18b8cda5-7bec-4b29-838f-24cad68162af,network=Network(bfa0cc72-c909-48db-80bb-536eb7b52f6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b8cda5-7b')
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.575 2 DEBUG nova.virt.libvirt.driver [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.575 2 DEBUG nova.virt.libvirt.driver [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.575 2 DEBUG nova.virt.libvirt.driver [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] No VIF found with MAC fa:16:3e:55:96:0a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.576 2 INFO nova.virt.libvirt.driver [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Using config drive
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.592 2 DEBUG nova.storage.rbd_utils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] rbd image 2b618038-2466-4671-9914-c69aecf8c771_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:06:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:06:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1414836973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.664 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.668 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.693 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:06:18 compute-0 ceph-mon[74273]: pgmap v1073: 305 pgs: 305 active+clean; 134 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 4.0 KiB/s wr, 83 op/s
Oct 11 04:06:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/120594559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:06:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2047000392' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2047000392' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/312928466' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/312928466' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/803737825' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:06:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1414836973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.727 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.727 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.998s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.727 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:06:18 compute-0 nova_compute[259850]: 2025-10-11 04:06:18.728 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 11 04:06:19 compute-0 nova_compute[259850]: 2025-10-11 04:06:19.027 2 DEBUG nova.network.neutron [req-72b5856a-528f-4944-9821-d70b81ec5840 req-a1486984-0409-4b00-951d-609a2d19cd1e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Updated VIF entry in instance network info cache for port 18b8cda5-7bec-4b29-838f-24cad68162af. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:06:19 compute-0 nova_compute[259850]: 2025-10-11 04:06:19.028 2 DEBUG nova.network.neutron [req-72b5856a-528f-4944-9821-d70b81ec5840 req-a1486984-0409-4b00-951d-609a2d19cd1e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Updating instance_info_cache with network_info: [{"id": "18b8cda5-7bec-4b29-838f-24cad68162af", "address": "fa:16:3e:55:96:0a", "network": {"id": "bfa0cc72-c909-48db-80bb-536eb7b52f6e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1615284681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2783729ed466412aac8ceb01d86a0b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b8cda5-7b", "ovs_interfaceid": "18b8cda5-7bec-4b29-838f-24cad68162af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:06:19 compute-0 nova_compute[259850]: 2025-10-11 04:06:19.051 2 DEBUG oslo_concurrency.lockutils [req-72b5856a-528f-4944-9821-d70b81ec5840 req-a1486984-0409-4b00-951d-609a2d19cd1e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-2b618038-2466-4671-9914-c69aecf8c771" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:06:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 180 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 131 KiB/s rd, 2.1 MiB/s wr, 180 op/s
Oct 11 04:06:19 compute-0 nova_compute[259850]: 2025-10-11 04:06:19.659 2 INFO nova.virt.libvirt.driver [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Creating config drive at /var/lib/nova/instances/2b618038-2466-4671-9914-c69aecf8c771/disk.config
Oct 11 04:06:19 compute-0 nova_compute[259850]: 2025-10-11 04:06:19.670 2 DEBUG oslo_concurrency.processutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2b618038-2466-4671-9914-c69aecf8c771/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcp_xy9mj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:06:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:06:19 compute-0 nova_compute[259850]: 2025-10-11 04:06:19.817 2 DEBUG oslo_concurrency.processutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2b618038-2466-4671-9914-c69aecf8c771/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcp_xy9mj" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:06:19 compute-0 nova_compute[259850]: 2025-10-11 04:06:19.860 2 DEBUG nova.storage.rbd_utils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] rbd image 2b618038-2466-4671-9914-c69aecf8c771_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:06:19 compute-0 nova_compute[259850]: 2025-10-11 04:06:19.865 2 DEBUG oslo_concurrency.processutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2b618038-2466-4671-9914-c69aecf8c771/disk.config 2b618038-2466-4671-9914-c69aecf8c771_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:06:20 compute-0 nova_compute[259850]: 2025-10-11 04:06:20.056 2 DEBUG oslo_concurrency.processutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2b618038-2466-4671-9914-c69aecf8c771/disk.config 2b618038-2466-4671-9914-c69aecf8c771_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:06:20 compute-0 nova_compute[259850]: 2025-10-11 04:06:20.057 2 INFO nova.virt.libvirt.driver [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Deleting local config drive /var/lib/nova/instances/2b618038-2466-4671-9914-c69aecf8c771/disk.config because it was imported into RBD.
Oct 11 04:06:20 compute-0 kernel: tap18b8cda5-7b: entered promiscuous mode
Oct 11 04:06:20 compute-0 ovn_controller[152025]: 2025-10-11T04:06:20Z|00060|binding|INFO|Claiming lport 18b8cda5-7bec-4b29-838f-24cad68162af for this chassis.
Oct 11 04:06:20 compute-0 ovn_controller[152025]: 2025-10-11T04:06:20Z|00061|binding|INFO|18b8cda5-7bec-4b29-838f-24cad68162af: Claiming fa:16:3e:55:96:0a 10.100.0.12
Oct 11 04:06:20 compute-0 NetworkManager[44920]: <info>  [1760155580.1336] manager: (tap18b8cda5-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Oct 11 04:06:20 compute-0 nova_compute[259850]: 2025-10-11 04:06:20.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:20 compute-0 nova_compute[259850]: 2025-10-11 04:06:20.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:20 compute-0 nova_compute[259850]: 2025-10-11 04:06:20.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:20.163 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:96:0a 10.100.0.12'], port_security=['fa:16:3e:55:96:0a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2b618038-2466-4671-9914-c69aecf8c771', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bfa0cc72-c909-48db-80bb-536eb7b52f6e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2783729ed466412aac8ceb01d86a0b12', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a3924cbd-62fb-41dc-9d4a-3f864682569a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55b0cbfb-9e3c-469a-b06d-75c45688b585, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=18b8cda5-7bec-4b29-838f-24cad68162af) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:20.165 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 18b8cda5-7bec-4b29-838f-24cad68162af in datapath bfa0cc72-c909-48db-80bb-536eb7b52f6e bound to our chassis
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:20.167 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bfa0cc72-c909-48db-80bb-536eb7b52f6e
Oct 11 04:06:20 compute-0 systemd-machined[214869]: New machine qemu-6-instance-00000006.
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:20.183 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[4edb38a8-b109-43bb-949c-073c8f629589]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:20.184 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbfa0cc72-c1 in ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:20.187 267637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbfa0cc72-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:20.188 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[5e289651-708e-4633-a7aa-be8dfcfeff8f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:20.189 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[a4401fa2-f905-41c2-a0d3-37c39f1dbecc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:20 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:20.207 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[e53b0c14-a3e3-4ba4-88c3-3ff86d10783e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:20 compute-0 systemd-udevd[274003]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:06:20 compute-0 nova_compute[259850]: 2025-10-11 04:06:20.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:20.233 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[974edcbc-4a1b-4343-8b7c-afc6b28c3b09]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:20 compute-0 NetworkManager[44920]: <info>  [1760155580.2388] device (tap18b8cda5-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:06:20 compute-0 ovn_controller[152025]: 2025-10-11T04:06:20Z|00062|binding|INFO|Setting lport 18b8cda5-7bec-4b29-838f-24cad68162af ovn-installed in OVS
Oct 11 04:06:20 compute-0 ovn_controller[152025]: 2025-10-11T04:06:20Z|00063|binding|INFO|Setting lport 18b8cda5-7bec-4b29-838f-24cad68162af up in Southbound
Oct 11 04:06:20 compute-0 NetworkManager[44920]: <info>  [1760155580.2410] device (tap18b8cda5-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:06:20 compute-0 nova_compute[259850]: 2025-10-11 04:06:20.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:20.280 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[17e7666b-2b45-4f41-8017-44c94f70f814]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:20 compute-0 NetworkManager[44920]: <info>  [1760155580.2902] manager: (tapbfa0cc72-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:20.288 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[07fbb932-377f-4de2-bf1e-bd29e5ccd3d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:20 compute-0 systemd-udevd[274008]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:20.341 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[09b274e7-3684-4dc5-84a9-a483b214293d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:20.346 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[39eaa9c4-c5f5-4c76-8d64-563b826f5b31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:20 compute-0 NetworkManager[44920]: <info>  [1760155580.3803] device (tapbfa0cc72-c0): carrier: link connected
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:20.389 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[c339fc10-14ab-4fec-b83e-8132d7690cad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:20.410 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[5d81a494-9367-4145-9c36-a1e1a4d12e9e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbfa0cc72-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0c:c6:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396541, 'reachable_time': 26217, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274034, 'error': None, 'target': 'ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:20.433 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[bcd1ac44-5b15-49f5-a50d-3aa7edf48776]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0c:c6bc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396541, 'tstamp': 396541}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274035, 'error': None, 'target': 'ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:20.461 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[8259abd5-2944-4691-a5ec-912e7db62398]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbfa0cc72-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0c:c6:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396541, 'reachable_time': 26217, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274036, 'error': None, 'target': 'ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:20.504 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[c9a4aaec-1738-44ca-b900-50dd5a0943a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:20.594 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[460afc01-077e-451b-8ba6-8086269f37b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:20.596 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbfa0cc72-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:20.596 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:20.597 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbfa0cc72-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:06:20 compute-0 nova_compute[259850]: 2025-10-11 04:06:20.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:20 compute-0 NetworkManager[44920]: <info>  [1760155580.6399] manager: (tapbfa0cc72-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Oct 11 04:06:20 compute-0 kernel: tapbfa0cc72-c0: entered promiscuous mode
Oct 11 04:06:20 compute-0 nova_compute[259850]: 2025-10-11 04:06:20.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:20.645 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbfa0cc72-c0, col_values=(('external_ids', {'iface-id': '0e0216bc-6b9d-4e75-bae2-b1d26e9e502e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:06:20 compute-0 nova_compute[259850]: 2025-10-11 04:06:20.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:20 compute-0 ovn_controller[152025]: 2025-10-11T04:06:20Z|00064|binding|INFO|Releasing lport 0e0216bc-6b9d-4e75-bae2-b1d26e9e502e from this chassis (sb_readonly=0)
Oct 11 04:06:20 compute-0 nova_compute[259850]: 2025-10-11 04:06:20.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:20.684 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bfa0cc72-c909-48db-80bb-536eb7b52f6e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bfa0cc72-c909-48db-80bb-536eb7b52f6e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:20.686 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[0a93ba82-565f-47f2-a984-b9defb64a732]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:20.687 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: global
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]:     log         /dev/log local0 debug
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]:     log-tag     haproxy-metadata-proxy-bfa0cc72-c909-48db-80bb-536eb7b52f6e
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]:     user        root
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]:     group       root
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]:     maxconn     1024
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]:     pidfile     /var/lib/neutron/external/pids/bfa0cc72-c909-48db-80bb-536eb7b52f6e.pid.haproxy
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]:     daemon
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: defaults
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]:     log global
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]:     mode http
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]:     option httplog
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]:     option dontlognull
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]:     option http-server-close
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]:     option forwardfor
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]:     retries                 3
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]:     timeout http-request    30s
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]:     timeout connect         30s
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]:     timeout client          32s
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]:     timeout server          32s
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]:     timeout http-keep-alive 30s
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: listen listener
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]:     bind 169.254.169.254:80
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]:     http-request add-header X-OVN-Network-ID bfa0cc72-c909-48db-80bb-536eb7b52f6e
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 04:06:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:20.688 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e', 'env', 'PROCESS_TAG=haproxy-bfa0cc72-c909-48db-80bb-536eb7b52f6e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bfa0cc72-c909-48db-80bb-536eb7b52f6e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 04:06:20 compute-0 ceph-mon[74273]: pgmap v1074: 305 pgs: 305 active+clean; 180 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 131 KiB/s rd, 2.1 MiB/s wr, 180 op/s
Oct 11 04:06:20 compute-0 nova_compute[259850]: 2025-10-11 04:06:20.745 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:06:20 compute-0 nova_compute[259850]: 2025-10-11 04:06:20.746 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:06:20 compute-0 nova_compute[259850]: 2025-10-11 04:06:20.746 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:06:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:06:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:06:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:06:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:06:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:06:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:06:20 compute-0 nova_compute[259850]: 2025-10-11 04:06:20.776 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 11 04:06:20 compute-0 nova_compute[259850]: 2025-10-11 04:06:20.777 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:06:20 compute-0 nova_compute[259850]: 2025-10-11 04:06:20.778 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:06:20 compute-0 nova_compute[259850]: 2025-10-11 04:06:20.778 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:06:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:06:20
Oct 11 04:06:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:06:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:06:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['volumes', 'backups', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'vms']
Oct 11 04:06:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:06:20 compute-0 nova_compute[259850]: 2025-10-11 04:06:20.885 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760155565.8847134, 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:06:20 compute-0 nova_compute[259850]: 2025-10-11 04:06:20.886 2 INFO nova.compute.manager [-] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] VM Stopped (Lifecycle Event)
Oct 11 04:06:20 compute-0 nova_compute[259850]: 2025-10-11 04:06:20.927 2 DEBUG nova.compute.manager [None req-396f36a3-1010-4f89-b8e2-73be6f4f87e0 - - - - - -] [instance: 755b8dbf-4912-4ab3-87a0-0fdcfba7efe4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:06:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:06:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:06:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:06:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:06:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:06:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:06:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:06:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:06:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:06:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.058 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.082 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.102 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155581.102359, 2b618038-2466-4671-9914-c69aecf8c771 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.103 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 2b618038-2466-4671-9914-c69aecf8c771] VM Started (Lifecycle Event)
Oct 11 04:06:21 compute-0 podman[274110]: 2025-10-11 04:06:21.106490419 +0000 UTC m=+0.047148755 container create 4d85131394ce23a6db996008cb673881082a82f440672359004cd4c96ba48c69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.128 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.135 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155581.10243, 2b618038-2466-4671-9914-c69aecf8c771 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.135 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 2b618038-2466-4671-9914-c69aecf8c771] VM Paused (Lifecycle Event)
Oct 11 04:06:21 compute-0 systemd[1]: Started libpod-conmon-4d85131394ce23a6db996008cb673881082a82f440672359004cd4c96ba48c69.scope.
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.158 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.164 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:06:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:06:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d0b4004b8fcea597d98b58197a5431543b001faf1c93a28a6f1a14c415db1a9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:06:21 compute-0 podman[274110]: 2025-10-11 04:06:21.081797025 +0000 UTC m=+0.022455381 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.189 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 2b618038-2466-4671-9914-c69aecf8c771] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:06:21 compute-0 podman[274110]: 2025-10-11 04:06:21.196375892 +0000 UTC m=+0.137034268 container init 4d85131394ce23a6db996008cb673881082a82f440672359004cd4c96ba48c69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 11 04:06:21 compute-0 podman[274110]: 2025-10-11 04:06:21.202230706 +0000 UTC m=+0.142889052 container start 4d85131394ce23a6db996008cb673881082a82f440672359004cd4c96ba48c69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 04:06:21 compute-0 neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e[274125]: [NOTICE]   (274129) : New worker (274131) forked
Oct 11 04:06:21 compute-0 neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e[274125]: [NOTICE]   (274129) : Loading success.
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.347 2 DEBUG nova.compute.manager [req-e25e03c1-85ce-405c-8c10-9cc5bab7b261 req-45476ddb-b2e8-4b7a-90c7-3a084dcc520f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Received event network-vif-plugged-18b8cda5-7bec-4b29-838f-24cad68162af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.348 2 DEBUG oslo_concurrency.lockutils [req-e25e03c1-85ce-405c-8c10-9cc5bab7b261 req-45476ddb-b2e8-4b7a-90c7-3a084dcc520f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "2b618038-2466-4671-9914-c69aecf8c771-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.348 2 DEBUG oslo_concurrency.lockutils [req-e25e03c1-85ce-405c-8c10-9cc5bab7b261 req-45476ddb-b2e8-4b7a-90c7-3a084dcc520f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "2b618038-2466-4671-9914-c69aecf8c771-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.348 2 DEBUG oslo_concurrency.lockutils [req-e25e03c1-85ce-405c-8c10-9cc5bab7b261 req-45476ddb-b2e8-4b7a-90c7-3a084dcc520f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "2b618038-2466-4671-9914-c69aecf8c771-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.348 2 DEBUG nova.compute.manager [req-e25e03c1-85ce-405c-8c10-9cc5bab7b261 req-45476ddb-b2e8-4b7a-90c7-3a084dcc520f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Processing event network-vif-plugged-18b8cda5-7bec-4b29-838f-24cad68162af _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.349 2 DEBUG nova.compute.manager [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.353 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155581.3534768, 2b618038-2466-4671-9914-c69aecf8c771 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.353 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 2b618038-2466-4671-9914-c69aecf8c771] VM Resumed (Lifecycle Event)
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.369 2 DEBUG nova.virt.libvirt.driver [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.372 2 INFO nova.virt.libvirt.driver [-] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Instance spawned successfully.
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.372 2 DEBUG nova.virt.libvirt.driver [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.377 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.380 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.391 2 DEBUG nova.virt.libvirt.driver [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.392 2 DEBUG nova.virt.libvirt.driver [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.392 2 DEBUG nova.virt.libvirt.driver [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.392 2 DEBUG nova.virt.libvirt.driver [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.393 2 DEBUG nova.virt.libvirt.driver [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.393 2 DEBUG nova.virt.libvirt.driver [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.397 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 2b618038-2466-4671-9914-c69aecf8c771] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.459 2 INFO nova.compute.manager [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Took 7.45 seconds to spawn the instance on the hypervisor.
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.459 2 DEBUG nova.compute.manager [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:06:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 180 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 131 KiB/s rd, 2.1 MiB/s wr, 180 op/s
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.519 2 INFO nova.compute.manager [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Took 8.40 seconds to build instance.
Oct 11 04:06:21 compute-0 nova_compute[259850]: 2025-10-11 04:06:21.543 2 DEBUG oslo_concurrency.lockutils [None req-cb9e36c3-6239-4794-920d-2428fabc3377 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "2b618038-2466-4671-9914-c69aecf8c771" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:21 compute-0 ceph-mgr[74563]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3360631616
Oct 11 04:06:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:06:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/792515831' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:06:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/792515831' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:22 compute-0 nova_compute[259850]: 2025-10-11 04:06:22.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:22 compute-0 ceph-mon[74273]: pgmap v1075: 305 pgs: 305 active+clean; 180 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 131 KiB/s rd, 2.1 MiB/s wr, 180 op/s
Oct 11 04:06:22 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/792515831' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:22 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/792515831' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:22.955 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:22.957 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:22.957 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:23 compute-0 nova_compute[259850]: 2025-10-11 04:06:23.079 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:06:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 180 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 233 op/s
Oct 11 04:06:23 compute-0 nova_compute[259850]: 2025-10-11 04:06:23.497 2 DEBUG nova.compute.manager [req-4f45292b-7414-45e0-81b2-7ac889afdcf1 req-12adb59d-7b87-4152-8bd0-c196286d1609 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Received event network-vif-plugged-18b8cda5-7bec-4b29-838f-24cad68162af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:06:23 compute-0 nova_compute[259850]: 2025-10-11 04:06:23.498 2 DEBUG oslo_concurrency.lockutils [req-4f45292b-7414-45e0-81b2-7ac889afdcf1 req-12adb59d-7b87-4152-8bd0-c196286d1609 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "2b618038-2466-4671-9914-c69aecf8c771-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:23 compute-0 nova_compute[259850]: 2025-10-11 04:06:23.499 2 DEBUG oslo_concurrency.lockutils [req-4f45292b-7414-45e0-81b2-7ac889afdcf1 req-12adb59d-7b87-4152-8bd0-c196286d1609 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "2b618038-2466-4671-9914-c69aecf8c771-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:23 compute-0 nova_compute[259850]: 2025-10-11 04:06:23.499 2 DEBUG oslo_concurrency.lockutils [req-4f45292b-7414-45e0-81b2-7ac889afdcf1 req-12adb59d-7b87-4152-8bd0-c196286d1609 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "2b618038-2466-4671-9914-c69aecf8c771-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:23 compute-0 nova_compute[259850]: 2025-10-11 04:06:23.500 2 DEBUG nova.compute.manager [req-4f45292b-7414-45e0-81b2-7ac889afdcf1 req-12adb59d-7b87-4152-8bd0-c196286d1609 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] No waiting events found dispatching network-vif-plugged-18b8cda5-7bec-4b29-838f-24cad68162af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:06:23 compute-0 nova_compute[259850]: 2025-10-11 04:06:23.500 2 WARNING nova.compute.manager [req-4f45292b-7414-45e0-81b2-7ac889afdcf1 req-12adb59d-7b87-4152-8bd0-c196286d1609 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Received unexpected event network-vif-plugged-18b8cda5-7bec-4b29-838f-24cad68162af for instance with vm_state active and task_state None.
Oct 11 04:06:23 compute-0 nova_compute[259850]: 2025-10-11 04:06:23.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:06:24 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1316779952' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:06:24 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1316779952' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:24 compute-0 nova_compute[259850]: 2025-10-11 04:06:24.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:24 compute-0 NetworkManager[44920]: <info>  [1760155584.5059] manager: (patch-provnet-86cd831a-6a58-4ba8-a51c-57fa1a3acacc-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Oct 11 04:06:24 compute-0 NetworkManager[44920]: <info>  [1760155584.5066] manager: (patch-br-int-to-provnet-86cd831a-6a58-4ba8-a51c-57fa1a3acacc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Oct 11 04:06:24 compute-0 nova_compute[259850]: 2025-10-11 04:06:24.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:24 compute-0 ovn_controller[152025]: 2025-10-11T04:06:24Z|00065|binding|INFO|Releasing lport 0e0216bc-6b9d-4e75-bae2-b1d26e9e502e from this chassis (sb_readonly=0)
Oct 11 04:06:24 compute-0 nova_compute[259850]: 2025-10-11 04:06:24.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:24 compute-0 ceph-mon[74273]: pgmap v1076: 305 pgs: 305 active+clean; 180 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 233 op/s
Oct 11 04:06:24 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1316779952' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:24 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1316779952' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:06:24 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:24.878 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8a473e03-2208-47ae-afcd-05ad744a5969, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:06:25 compute-0 nova_compute[259850]: 2025-10-11 04:06:25.003 2 DEBUG nova.compute.manager [req-dd1edf4a-fb82-408e-aa7c-f3d6f776833e req-56b3642f-9e38-4930-86a2-cc937186a5f6 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Received event network-changed-18b8cda5-7bec-4b29-838f-24cad68162af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:06:25 compute-0 nova_compute[259850]: 2025-10-11 04:06:25.004 2 DEBUG nova.compute.manager [req-dd1edf4a-fb82-408e-aa7c-f3d6f776833e req-56b3642f-9e38-4930-86a2-cc937186a5f6 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Refreshing instance network info cache due to event network-changed-18b8cda5-7bec-4b29-838f-24cad68162af. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:06:25 compute-0 nova_compute[259850]: 2025-10-11 04:06:25.005 2 DEBUG oslo_concurrency.lockutils [req-dd1edf4a-fb82-408e-aa7c-f3d6f776833e req-56b3642f-9e38-4930-86a2-cc937186a5f6 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-2b618038-2466-4671-9914-c69aecf8c771" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:06:25 compute-0 nova_compute[259850]: 2025-10-11 04:06:25.005 2 DEBUG oslo_concurrency.lockutils [req-dd1edf4a-fb82-408e-aa7c-f3d6f776833e req-56b3642f-9e38-4930-86a2-cc937186a5f6 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-2b618038-2466-4671-9914-c69aecf8c771" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:06:25 compute-0 nova_compute[259850]: 2025-10-11 04:06:25.006 2 DEBUG nova.network.neutron [req-dd1edf4a-fb82-408e-aa7c-f3d6f776833e req-56b3642f-9e38-4930-86a2-cc937186a5f6 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Refreshing network info cache for port 18b8cda5-7bec-4b29-838f-24cad68162af _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:06:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 180 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 216 op/s
Oct 11 04:06:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:06:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1030658536' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:06:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1030658536' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:26 compute-0 ceph-mon[74273]: pgmap v1077: 305 pgs: 305 active+clean; 180 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 216 op/s
Oct 11 04:06:26 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1030658536' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:26 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1030658536' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:27 compute-0 nova_compute[259850]: 2025-10-11 04:06:27.017 2 DEBUG nova.network.neutron [req-dd1edf4a-fb82-408e-aa7c-f3d6f776833e req-56b3642f-9e38-4930-86a2-cc937186a5f6 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Updated VIF entry in instance network info cache for port 18b8cda5-7bec-4b29-838f-24cad68162af. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:06:27 compute-0 nova_compute[259850]: 2025-10-11 04:06:27.018 2 DEBUG nova.network.neutron [req-dd1edf4a-fb82-408e-aa7c-f3d6f776833e req-56b3642f-9e38-4930-86a2-cc937186a5f6 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Updating instance_info_cache with network_info: [{"id": "18b8cda5-7bec-4b29-838f-24cad68162af", "address": "fa:16:3e:55:96:0a", "network": {"id": "bfa0cc72-c909-48db-80bb-536eb7b52f6e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1615284681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2783729ed466412aac8ceb01d86a0b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b8cda5-7b", "ovs_interfaceid": "18b8cda5-7bec-4b29-838f-24cad68162af", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:06:27 compute-0 nova_compute[259850]: 2025-10-11 04:06:27.043 2 DEBUG oslo_concurrency.lockutils [req-dd1edf4a-fb82-408e-aa7c-f3d6f776833e req-56b3642f-9e38-4930-86a2-cc937186a5f6 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-2b618038-2466-4671-9914-c69aecf8c771" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:06:27 compute-0 nova_compute[259850]: 2025-10-11 04:06:27.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 180 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 194 op/s
Oct 11 04:06:28 compute-0 podman[274141]: 2025-10-11 04:06:28.350082928 +0000 UTC m=+0.062916937 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:06:28 compute-0 podman[274142]: 2025-10-11 04:06:28.401256185 +0000 UTC m=+0.109333520 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 11 04:06:28 compute-0 nova_compute[259850]: 2025-10-11 04:06:28.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:28 compute-0 ceph-mon[74273]: pgmap v1078: 305 pgs: 305 active+clean; 180 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 194 op/s
Oct 11 04:06:29 compute-0 ovn_controller[152025]: 2025-10-11T04:06:29Z|00066|binding|INFO|Releasing lport 0e0216bc-6b9d-4e75-bae2-b1d26e9e502e from this chassis (sb_readonly=0)
Oct 11 04:06:29 compute-0 nova_compute[259850]: 2025-10-11 04:06:29.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 180 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 222 op/s
Oct 11 04:06:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:06:30 compute-0 ceph-mon[74273]: pgmap v1079: 305 pgs: 305 active+clean; 180 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 222 op/s
Oct 11 04:06:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:06:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:06:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:06:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:06:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003487950323956502 of space, bias 1.0, pg target 0.10463850971869505 quantized to 32 (current 32)
Oct 11 04:06:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:06:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0006926935802891189 of space, bias 1.0, pg target 0.20780807408673568 quantized to 32 (current 32)
Oct 11 04:06:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:06:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:06:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:06:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 11 04:06:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:06:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:06:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:06:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:06:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:06:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:06:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:06:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:06:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:06:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:06:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:06:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:06:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 180 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 130 op/s
Oct 11 04:06:31 compute-0 nova_compute[259850]: 2025-10-11 04:06:31.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:32 compute-0 nova_compute[259850]: 2025-10-11 04:06:32.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:32 compute-0 ceph-mon[74273]: pgmap v1080: 305 pgs: 305 active+clean; 180 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 130 op/s
Oct 11 04:06:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 206 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 173 op/s
Oct 11 04:06:33 compute-0 nova_compute[259850]: 2025-10-11 04:06:33.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:33 compute-0 ovn_controller[152025]: 2025-10-11T04:06:33Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:55:96:0a 10.100.0.12
Oct 11 04:06:33 compute-0 ovn_controller[152025]: 2025-10-11T04:06:33Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:55:96:0a 10.100.0.12
Oct 11 04:06:33 compute-0 ovn_controller[152025]: 2025-10-11T04:06:33Z|00067|binding|INFO|Releasing lport 0e0216bc-6b9d-4e75-bae2-b1d26e9e502e from this chassis (sb_readonly=0)
Oct 11 04:06:34 compute-0 nova_compute[259850]: 2025-10-11 04:06:34.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:06:34 compute-0 ceph-mon[74273]: pgmap v1081: 305 pgs: 305 active+clean; 206 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 173 op/s
Oct 11 04:06:35 compute-0 nova_compute[259850]: 2025-10-11 04:06:35.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 206 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 240 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 11 04:06:36 compute-0 ceph-mon[74273]: pgmap v1082: 305 pgs: 305 active+clean; 206 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 240 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 11 04:06:37 compute-0 nova_compute[259850]: 2025-10-11 04:06:37.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 206 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 240 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 11 04:06:38 compute-0 nova_compute[259850]: 2025-10-11 04:06:38.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:38 compute-0 ceph-mon[74273]: pgmap v1083: 305 pgs: 305 active+clean; 206 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 240 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 11 04:06:39 compute-0 nova_compute[259850]: 2025-10-11 04:06:39.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Oct 11 04:06:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:06:40 compute-0 podman[274179]: 2025-10-11 04:06:40.421446055 +0000 UTC m=+0.118004753 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 04:06:40 compute-0 ceph-mon[74273]: pgmap v1084: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Oct 11 04:06:40 compute-0 nova_compute[259850]: 2025-10-11 04:06:40.899 2 DEBUG oslo_concurrency.lockutils [None req-6f8d4677-b95b-4072-83f9-81fb6c60142b c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Acquiring lock "2b618038-2466-4671-9914-c69aecf8c771" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:40 compute-0 nova_compute[259850]: 2025-10-11 04:06:40.900 2 DEBUG oslo_concurrency.lockutils [None req-6f8d4677-b95b-4072-83f9-81fb6c60142b c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "2b618038-2466-4671-9914-c69aecf8c771" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:40 compute-0 nova_compute[259850]: 2025-10-11 04:06:40.918 2 DEBUG nova.objects.instance [None req-6f8d4677-b95b-4072-83f9-81fb6c60142b c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lazy-loading 'flavor' on Instance uuid 2b618038-2466-4671-9914-c69aecf8c771 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:06:40 compute-0 nova_compute[259850]: 2025-10-11 04:06:40.972 2 INFO nova.virt.libvirt.driver [None req-6f8d4677-b95b-4072-83f9-81fb6c60142b c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Ignoring supplied device name: /dev/vdb
Oct 11 04:06:40 compute-0 nova_compute[259850]: 2025-10-11 04:06:40.990 2 DEBUG oslo_concurrency.lockutils [None req-6f8d4677-b95b-4072-83f9-81fb6c60142b c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "2b618038-2466-4671-9914-c69aecf8c771" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:41 compute-0 nova_compute[259850]: 2025-10-11 04:06:41.212 2 DEBUG oslo_concurrency.lockutils [None req-6f8d4677-b95b-4072-83f9-81fb6c60142b c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Acquiring lock "2b618038-2466-4671-9914-c69aecf8c771" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:41 compute-0 nova_compute[259850]: 2025-10-11 04:06:41.213 2 DEBUG oslo_concurrency.lockutils [None req-6f8d4677-b95b-4072-83f9-81fb6c60142b c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "2b618038-2466-4671-9914-c69aecf8c771" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:41 compute-0 nova_compute[259850]: 2025-10-11 04:06:41.213 2 INFO nova.compute.manager [None req-6f8d4677-b95b-4072-83f9-81fb6c60142b c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Attaching volume 3fa01068-c029-4fc5-a8a6-68ced8aa6a2b to /dev/vdb
Oct 11 04:06:41 compute-0 nova_compute[259850]: 2025-10-11 04:06:41.351 2 DEBUG os_brick.utils [None req-6f8d4677-b95b-4072-83f9-81fb6c60142b c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:06:41 compute-0 nova_compute[259850]: 2025-10-11 04:06:41.353 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:06:41 compute-0 nova_compute[259850]: 2025-10-11 04:06:41.376 675 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:06:41 compute-0 nova_compute[259850]: 2025-10-11 04:06:41.377 675 DEBUG oslo.privsep.daemon [-] privsep: reply[e73e24ea-53a8-4067-a3b1-54d79954992e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:41 compute-0 nova_compute[259850]: 2025-10-11 04:06:41.377 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:06:41 compute-0 nova_compute[259850]: 2025-10-11 04:06:41.386 675 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:06:41 compute-0 nova_compute[259850]: 2025-10-11 04:06:41.386 675 DEBUG oslo.privsep.daemon [-] privsep: reply[9525b185-abe9-45ca-88e3-33803fbaddf2]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e727c2bd432c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:41 compute-0 nova_compute[259850]: 2025-10-11 04:06:41.387 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:06:41 compute-0 nova_compute[259850]: 2025-10-11 04:06:41.395 675 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:06:41 compute-0 nova_compute[259850]: 2025-10-11 04:06:41.395 675 DEBUG oslo.privsep.daemon [-] privsep: reply[bf545393-6ce5-44fb-b377-37ad43f92944]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:41 compute-0 nova_compute[259850]: 2025-10-11 04:06:41.397 675 DEBUG oslo.privsep.daemon [-] privsep: reply[732a8100-8fbf-4d9f-82ee-3fb15c727bc7]: (4, 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:41 compute-0 nova_compute[259850]: 2025-10-11 04:06:41.398 2 DEBUG oslo_concurrency.processutils [None req-6f8d4677-b95b-4072-83f9-81fb6c60142b c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:06:41 compute-0 nova_compute[259850]: 2025-10-11 04:06:41.426 2 DEBUG oslo_concurrency.processutils [None req-6f8d4677-b95b-4072-83f9-81fb6c60142b c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:06:41 compute-0 nova_compute[259850]: 2025-10-11 04:06:41.428 2 DEBUG os_brick.initiator.connectors.lightos [None req-6f8d4677-b95b-4072-83f9-81fb6c60142b c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:06:41 compute-0 nova_compute[259850]: 2025-10-11 04:06:41.428 2 DEBUG os_brick.initiator.connectors.lightos [None req-6f8d4677-b95b-4072-83f9-81fb6c60142b c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:06:41 compute-0 nova_compute[259850]: 2025-10-11 04:06:41.428 2 DEBUG os_brick.initiator.connectors.lightos [None req-6f8d4677-b95b-4072-83f9-81fb6c60142b c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:06:41 compute-0 nova_compute[259850]: 2025-10-11 04:06:41.428 2 DEBUG os_brick.utils [None req-6f8d4677-b95b-4072-83f9-81fb6c60142b c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] <== get_connector_properties: return (76ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e727c2bd432c', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:06:41 compute-0 nova_compute[259850]: 2025-10-11 04:06:41.429 2 DEBUG nova.virt.block_device [None req-6f8d4677-b95b-4072-83f9-81fb6c60142b c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Updating existing volume attachment record: ddd5f1e6-18d1-4add-b4e6-0cb2871cd170 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:06:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 04:06:41 compute-0 nova_compute[259850]: 2025-10-11 04:06:41.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:06:42 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/646754617' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:06:42 compute-0 nova_compute[259850]: 2025-10-11 04:06:42.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:42 compute-0 nova_compute[259850]: 2025-10-11 04:06:42.176 2 DEBUG nova.objects.instance [None req-6f8d4677-b95b-4072-83f9-81fb6c60142b c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lazy-loading 'flavor' on Instance uuid 2b618038-2466-4671-9914-c69aecf8c771 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:06:42 compute-0 nova_compute[259850]: 2025-10-11 04:06:42.199 2 DEBUG nova.virt.libvirt.driver [None req-6f8d4677-b95b-4072-83f9-81fb6c60142b c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Attempting to attach volume 3fa01068-c029-4fc5-a8a6-68ced8aa6a2b with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 11 04:06:42 compute-0 nova_compute[259850]: 2025-10-11 04:06:42.201 2 DEBUG nova.virt.libvirt.guest [None req-6f8d4677-b95b-4072-83f9-81fb6c60142b c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] attach device xml: <disk type="network" device="disk">
Oct 11 04:06:42 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:06:42 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-3fa01068-c029-4fc5-a8a6-68ced8aa6a2b">
Oct 11 04:06:42 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:06:42 compute-0 nova_compute[259850]:   </source>
Oct 11 04:06:42 compute-0 nova_compute[259850]:   <auth username="openstack">
Oct 11 04:06:42 compute-0 nova_compute[259850]:     <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:06:42 compute-0 nova_compute[259850]:   </auth>
Oct 11 04:06:42 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:06:42 compute-0 nova_compute[259850]:   <serial>3fa01068-c029-4fc5-a8a6-68ced8aa6a2b</serial>
Oct 11 04:06:42 compute-0 nova_compute[259850]: </disk>
Oct 11 04:06:42 compute-0 nova_compute[259850]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 11 04:06:42 compute-0 nova_compute[259850]: 2025-10-11 04:06:42.337 2 DEBUG nova.virt.libvirt.driver [None req-6f8d4677-b95b-4072-83f9-81fb6c60142b c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:06:42 compute-0 nova_compute[259850]: 2025-10-11 04:06:42.338 2 DEBUG nova.virt.libvirt.driver [None req-6f8d4677-b95b-4072-83f9-81fb6c60142b c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:06:42 compute-0 nova_compute[259850]: 2025-10-11 04:06:42.338 2 DEBUG nova.virt.libvirt.driver [None req-6f8d4677-b95b-4072-83f9-81fb6c60142b c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:06:42 compute-0 nova_compute[259850]: 2025-10-11 04:06:42.339 2 DEBUG nova.virt.libvirt.driver [None req-6f8d4677-b95b-4072-83f9-81fb6c60142b c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] No VIF found with MAC fa:16:3e:55:96:0a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:06:42 compute-0 nova_compute[259850]: 2025-10-11 04:06:42.559 2 DEBUG oslo_concurrency.lockutils [None req-6f8d4677-b95b-4072-83f9-81fb6c60142b c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "2b618038-2466-4671-9914-c69aecf8c771" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.346s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:42 compute-0 ceph-mon[74273]: pgmap v1085: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 04:06:42 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/646754617' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:06:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Oct 11 04:06:43 compute-0 nova_compute[259850]: 2025-10-11 04:06:43.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:06:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Oct 11 04:06:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Oct 11 04:06:44 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Oct 11 04:06:44 compute-0 ceph-mon[74273]: pgmap v1086: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Oct 11 04:06:45 compute-0 nova_compute[259850]: 2025-10-11 04:06:45.405 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:06:45 compute-0 nova_compute[259850]: 2025-10-11 04:06:45.427 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Triggering sync for uuid 2b618038-2466-4671-9914-c69aecf8c771 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 04:06:45 compute-0 nova_compute[259850]: 2025-10-11 04:06:45.428 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "2b618038-2466-4671-9914-c69aecf8c771" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:45 compute-0 nova_compute[259850]: 2025-10-11 04:06:45.428 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "2b618038-2466-4671-9914-c69aecf8c771" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:45 compute-0 nova_compute[259850]: 2025-10-11 04:06:45.472 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "2b618038-2466-4671-9914-c69aecf8c771" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 89 KiB/s wr, 31 op/s
Oct 11 04:06:45 compute-0 ceph-mon[74273]: osdmap e185: 3 total, 3 up, 3 in
Oct 11 04:06:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:06:46 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/166429992' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Oct 11 04:06:46 compute-0 ceph-mon[74273]: pgmap v1088: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 89 KiB/s wr, 31 op/s
Oct 11 04:06:46 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/166429992' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:06:46 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/166429992' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Oct 11 04:06:46 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Oct 11 04:06:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:06:47 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1548904906' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:06:47 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1548904906' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:47 compute-0 nova_compute[259850]: 2025-10-11 04:06:47.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:47 compute-0 podman[274234]: 2025-10-11 04:06:47.407647243 +0000 UTC m=+0.098562848 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 11 04:06:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 20 KiB/s wr, 10 op/s
Oct 11 04:06:47 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/166429992' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:47 compute-0 ceph-mon[74273]: osdmap e186: 3 total, 3 up, 3 in
Oct 11 04:06:47 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1548904906' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:47 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1548904906' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:47 compute-0 nova_compute[259850]: 2025-10-11 04:06:47.861 2 DEBUG oslo_concurrency.lockutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Acquiring lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:47 compute-0 nova_compute[259850]: 2025-10-11 04:06:47.861 2 DEBUG oslo_concurrency.lockutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Oct 11 04:06:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Oct 11 04:06:47 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Oct 11 04:06:47 compute-0 nova_compute[259850]: 2025-10-11 04:06:47.886 2 DEBUG nova.compute.manager [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:06:47 compute-0 nova_compute[259850]: 2025-10-11 04:06:47.968 2 DEBUG oslo_concurrency.lockutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:47 compute-0 nova_compute[259850]: 2025-10-11 04:06:47.969 2 DEBUG oslo_concurrency.lockutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:47 compute-0 nova_compute[259850]: 2025-10-11 04:06:47.979 2 DEBUG nova.virt.hardware [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:06:47 compute-0 nova_compute[259850]: 2025-10-11 04:06:47.980 2 INFO nova.compute.claims [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:06:48 compute-0 nova_compute[259850]: 2025-10-11 04:06:48.132 2 DEBUG oslo_concurrency.processutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:06:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:06:48 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/218049637' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:06:48 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/218049637' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:48 compute-0 nova_compute[259850]: 2025-10-11 04:06:48.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:06:48 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1514321443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:06:48 compute-0 nova_compute[259850]: 2025-10-11 04:06:48.598 2 DEBUG oslo_concurrency.processutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:06:48 compute-0 nova_compute[259850]: 2025-10-11 04:06:48.608 2 DEBUG nova.compute.provider_tree [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:06:48 compute-0 nova_compute[259850]: 2025-10-11 04:06:48.634 2 DEBUG nova.scheduler.client.report [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:06:48 compute-0 nova_compute[259850]: 2025-10-11 04:06:48.664 2 DEBUG oslo_concurrency.lockutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:48 compute-0 nova_compute[259850]: 2025-10-11 04:06:48.665 2 DEBUG nova.compute.manager [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:06:48 compute-0 nova_compute[259850]: 2025-10-11 04:06:48.728 2 DEBUG nova.compute.manager [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:06:48 compute-0 nova_compute[259850]: 2025-10-11 04:06:48.729 2 DEBUG nova.network.neutron [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:06:48 compute-0 nova_compute[259850]: 2025-10-11 04:06:48.747 2 INFO nova.virt.libvirt.driver [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:06:48 compute-0 nova_compute[259850]: 2025-10-11 04:06:48.767 2 DEBUG nova.compute.manager [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:06:48 compute-0 ceph-mon[74273]: pgmap v1090: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 20 KiB/s wr, 10 op/s
Oct 11 04:06:48 compute-0 ceph-mon[74273]: osdmap e187: 3 total, 3 up, 3 in
Oct 11 04:06:48 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/218049637' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:48 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/218049637' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:48 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1514321443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:06:48 compute-0 nova_compute[259850]: 2025-10-11 04:06:48.894 2 DEBUG nova.compute.manager [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:06:48 compute-0 nova_compute[259850]: 2025-10-11 04:06:48.896 2 DEBUG nova.virt.libvirt.driver [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:06:48 compute-0 nova_compute[259850]: 2025-10-11 04:06:48.897 2 INFO nova.virt.libvirt.driver [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Creating image(s)
Oct 11 04:06:48 compute-0 nova_compute[259850]: 2025-10-11 04:06:48.924 2 DEBUG nova.storage.rbd_utils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] rbd image 26cb0d26-41fd-4cac-a0b5-1c630a0feba1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:06:48 compute-0 nova_compute[259850]: 2025-10-11 04:06:48.960 2 DEBUG nova.storage.rbd_utils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] rbd image 26cb0d26-41fd-4cac-a0b5-1c630a0feba1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:06:49 compute-0 nova_compute[259850]: 2025-10-11 04:06:48.996 2 DEBUG nova.storage.rbd_utils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] rbd image 26cb0d26-41fd-4cac-a0b5-1c630a0feba1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:06:49 compute-0 nova_compute[259850]: 2025-10-11 04:06:49.011 2 DEBUG oslo_concurrency.processutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:06:49 compute-0 nova_compute[259850]: 2025-10-11 04:06:49.041 2 DEBUG nova.policy [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '77635b26e3624f318335b7dd5d5cf9c4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '41596b84442c439b86ce2c239af0242c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:06:49 compute-0 nova_compute[259850]: 2025-10-11 04:06:49.089 2 DEBUG oslo_concurrency.processutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:06:49 compute-0 nova_compute[259850]: 2025-10-11 04:06:49.090 2 DEBUG oslo_concurrency.lockutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Acquiring lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:49 compute-0 nova_compute[259850]: 2025-10-11 04:06:49.090 2 DEBUG oslo_concurrency.lockutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:49 compute-0 nova_compute[259850]: 2025-10-11 04:06:49.091 2 DEBUG oslo_concurrency.lockutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:49 compute-0 nova_compute[259850]: 2025-10-11 04:06:49.110 2 DEBUG nova.storage.rbd_utils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] rbd image 26cb0d26-41fd-4cac-a0b5-1c630a0feba1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:06:49 compute-0 nova_compute[259850]: 2025-10-11 04:06:49.113 2 DEBUG oslo_concurrency.processutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac 26cb0d26-41fd-4cac-a0b5-1c630a0feba1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:06:49 compute-0 nova_compute[259850]: 2025-10-11 04:06:49.338 2 DEBUG oslo_concurrency.processutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac 26cb0d26-41fd-4cac-a0b5-1c630a0feba1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:06:49 compute-0 nova_compute[259850]: 2025-10-11 04:06:49.421 2 DEBUG nova.storage.rbd_utils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] resizing rbd image 26cb0d26-41fd-4cac-a0b5-1c630a0feba1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:06:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 7.8 KiB/s wr, 97 op/s
Oct 11 04:06:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:06:49 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3831932484' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:06:49 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3831932484' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:49 compute-0 nova_compute[259850]: 2025-10-11 04:06:49.534 2 DEBUG nova.objects.instance [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Lazy-loading 'migration_context' on Instance uuid 26cb0d26-41fd-4cac-a0b5-1c630a0feba1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:06:49 compute-0 nova_compute[259850]: 2025-10-11 04:06:49.549 2 DEBUG nova.virt.libvirt.driver [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:06:49 compute-0 nova_compute[259850]: 2025-10-11 04:06:49.549 2 DEBUG nova.virt.libvirt.driver [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Ensure instance console log exists: /var/lib/nova/instances/26cb0d26-41fd-4cac-a0b5-1c630a0feba1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:06:49 compute-0 nova_compute[259850]: 2025-10-11 04:06:49.550 2 DEBUG oslo_concurrency.lockutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:49 compute-0 nova_compute[259850]: 2025-10-11 04:06:49.550 2 DEBUG oslo_concurrency.lockutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:49 compute-0 nova_compute[259850]: 2025-10-11 04:06:49.551 2 DEBUG oslo_concurrency.lockutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:49 compute-0 nova_compute[259850]: 2025-10-11 04:06:49.613 2 DEBUG nova.network.neutron [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Successfully created port: 4bf043b6-53f8-43fd-8fb7-67863dfbfe87 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:06:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:06:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Oct 11 04:06:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Oct 11 04:06:49 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Oct 11 04:06:49 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3831932484' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:49 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3831932484' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:06:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/258581371' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:06:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/258581371' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:06:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:06:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:06:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:06:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:06:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:06:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Oct 11 04:06:50 compute-0 ceph-mon[74273]: pgmap v1092: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 7.8 KiB/s wr, 97 op/s
Oct 11 04:06:50 compute-0 ceph-mon[74273]: osdmap e188: 3 total, 3 up, 3 in
Oct 11 04:06:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/258581371' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/258581371' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Oct 11 04:06:50 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Oct 11 04:06:51 compute-0 nova_compute[259850]: 2025-10-11 04:06:51.002 2 DEBUG nova.network.neutron [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Successfully updated port: 4bf043b6-53f8-43fd-8fb7-67863dfbfe87 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:06:51 compute-0 nova_compute[259850]: 2025-10-11 04:06:51.020 2 DEBUG oslo_concurrency.lockutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Acquiring lock "refresh_cache-26cb0d26-41fd-4cac-a0b5-1c630a0feba1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:06:51 compute-0 nova_compute[259850]: 2025-10-11 04:06:51.021 2 DEBUG oslo_concurrency.lockutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Acquired lock "refresh_cache-26cb0d26-41fd-4cac-a0b5-1c630a0feba1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:06:51 compute-0 nova_compute[259850]: 2025-10-11 04:06:51.021 2 DEBUG nova.network.neutron [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:06:51 compute-0 nova_compute[259850]: 2025-10-11 04:06:51.133 2 DEBUG nova.compute.manager [req-68d9e909-1fe3-44af-b9f6-2bbcfe22e537 req-63c5820e-7495-42ab-b254-f2cb638ecf7a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Received event network-changed-4bf043b6-53f8-43fd-8fb7-67863dfbfe87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:06:51 compute-0 nova_compute[259850]: 2025-10-11 04:06:51.133 2 DEBUG nova.compute.manager [req-68d9e909-1fe3-44af-b9f6-2bbcfe22e537 req-63c5820e-7495-42ab-b254-f2cb638ecf7a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Refreshing instance network info cache due to event network-changed-4bf043b6-53f8-43fd-8fb7-67863dfbfe87. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:06:51 compute-0 nova_compute[259850]: 2025-10-11 04:06:51.134 2 DEBUG oslo_concurrency.lockutils [req-68d9e909-1fe3-44af-b9f6-2bbcfe22e537 req-63c5820e-7495-42ab-b254-f2cb638ecf7a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-26cb0d26-41fd-4cac-a0b5-1c630a0feba1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:06:51 compute-0 nova_compute[259850]: 2025-10-11 04:06:51.213 2 DEBUG nova.network.neutron [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:06:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 10 KiB/s wr, 126 op/s
Oct 11 04:06:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Oct 11 04:06:51 compute-0 ceph-mon[74273]: osdmap e189: 3 total, 3 up, 3 in
Oct 11 04:06:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Oct 11 04:06:51 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Oct 11 04:06:52 compute-0 nova_compute[259850]: 2025-10-11 04:06:52.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:52 compute-0 nova_compute[259850]: 2025-10-11 04:06:52.192 2 DEBUG nova.network.neutron [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Updating instance_info_cache with network_info: [{"id": "4bf043b6-53f8-43fd-8fb7-67863dfbfe87", "address": "fa:16:3e:20:01:d0", "network": {"id": "0ff4b514-1476-4866-8fda-c0b6a7674970", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1613157742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41596b84442c439b86ce2c239af0242c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf043b6-53", "ovs_interfaceid": "4bf043b6-53f8-43fd-8fb7-67863dfbfe87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:06:52 compute-0 nova_compute[259850]: 2025-10-11 04:06:52.212 2 DEBUG oslo_concurrency.lockutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Releasing lock "refresh_cache-26cb0d26-41fd-4cac-a0b5-1c630a0feba1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:06:52 compute-0 nova_compute[259850]: 2025-10-11 04:06:52.212 2 DEBUG nova.compute.manager [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Instance network_info: |[{"id": "4bf043b6-53f8-43fd-8fb7-67863dfbfe87", "address": "fa:16:3e:20:01:d0", "network": {"id": "0ff4b514-1476-4866-8fda-c0b6a7674970", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1613157742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41596b84442c439b86ce2c239af0242c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf043b6-53", "ovs_interfaceid": "4bf043b6-53f8-43fd-8fb7-67863dfbfe87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:06:52 compute-0 nova_compute[259850]: 2025-10-11 04:06:52.213 2 DEBUG oslo_concurrency.lockutils [req-68d9e909-1fe3-44af-b9f6-2bbcfe22e537 req-63c5820e-7495-42ab-b254-f2cb638ecf7a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-26cb0d26-41fd-4cac-a0b5-1c630a0feba1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:06:52 compute-0 nova_compute[259850]: 2025-10-11 04:06:52.213 2 DEBUG nova.network.neutron [req-68d9e909-1fe3-44af-b9f6-2bbcfe22e537 req-63c5820e-7495-42ab-b254-f2cb638ecf7a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Refreshing network info cache for port 4bf043b6-53f8-43fd-8fb7-67863dfbfe87 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:06:52 compute-0 nova_compute[259850]: 2025-10-11 04:06:52.216 2 DEBUG nova.virt.libvirt.driver [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Start _get_guest_xml network_info=[{"id": "4bf043b6-53f8-43fd-8fb7-67863dfbfe87", "address": "fa:16:3e:20:01:d0", "network": {"id": "0ff4b514-1476-4866-8fda-c0b6a7674970", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1613157742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41596b84442c439b86ce2c239af0242c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf043b6-53", "ovs_interfaceid": "4bf043b6-53f8-43fd-8fb7-67863dfbfe87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'image_id': '1a107e2f-1a9d-4b6f-861d-e64bee7d56be'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:06:52 compute-0 nova_compute[259850]: 2025-10-11 04:06:52.222 2 WARNING nova.virt.libvirt.driver [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:06:52 compute-0 nova_compute[259850]: 2025-10-11 04:06:52.226 2 DEBUG nova.virt.libvirt.host [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:06:52 compute-0 nova_compute[259850]: 2025-10-11 04:06:52.226 2 DEBUG nova.virt.libvirt.host [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:06:52 compute-0 nova_compute[259850]: 2025-10-11 04:06:52.231 2 DEBUG nova.virt.libvirt.host [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:06:52 compute-0 nova_compute[259850]: 2025-10-11 04:06:52.231 2 DEBUG nova.virt.libvirt.host [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:06:52 compute-0 nova_compute[259850]: 2025-10-11 04:06:52.232 2 DEBUG nova.virt.libvirt.driver [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:06:52 compute-0 nova_compute[259850]: 2025-10-11 04:06:52.232 2 DEBUG nova.virt.hardware [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:06:52 compute-0 nova_compute[259850]: 2025-10-11 04:06:52.233 2 DEBUG nova.virt.hardware [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:06:52 compute-0 nova_compute[259850]: 2025-10-11 04:06:52.233 2 DEBUG nova.virt.hardware [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:06:52 compute-0 nova_compute[259850]: 2025-10-11 04:06:52.233 2 DEBUG nova.virt.hardware [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:06:52 compute-0 nova_compute[259850]: 2025-10-11 04:06:52.233 2 DEBUG nova.virt.hardware [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:06:52 compute-0 nova_compute[259850]: 2025-10-11 04:06:52.234 2 DEBUG nova.virt.hardware [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:06:52 compute-0 nova_compute[259850]: 2025-10-11 04:06:52.234 2 DEBUG nova.virt.hardware [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:06:52 compute-0 nova_compute[259850]: 2025-10-11 04:06:52.234 2 DEBUG nova.virt.hardware [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:06:52 compute-0 nova_compute[259850]: 2025-10-11 04:06:52.234 2 DEBUG nova.virt.hardware [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:06:52 compute-0 nova_compute[259850]: 2025-10-11 04:06:52.234 2 DEBUG nova.virt.hardware [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:06:52 compute-0 nova_compute[259850]: 2025-10-11 04:06:52.235 2 DEBUG nova.virt.hardware [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:06:52 compute-0 nova_compute[259850]: 2025-10-11 04:06:52.237 2 DEBUG oslo_concurrency.processutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:06:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:06:52 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/486529857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:06:52 compute-0 nova_compute[259850]: 2025-10-11 04:06:52.706 2 DEBUG oslo_concurrency.processutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:06:52 compute-0 nova_compute[259850]: 2025-10-11 04:06:52.736 2 DEBUG nova.storage.rbd_utils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] rbd image 26cb0d26-41fd-4cac-a0b5-1c630a0feba1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:06:52 compute-0 nova_compute[259850]: 2025-10-11 04:06:52.739 2 DEBUG oslo_concurrency.processutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:06:52 compute-0 ceph-mon[74273]: pgmap v1095: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 10 KiB/s wr, 126 op/s
Oct 11 04:06:52 compute-0 ceph-mon[74273]: osdmap e190: 3 total, 3 up, 3 in
Oct 11 04:06:52 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/486529857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:06:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:06:53 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2501642987' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.149 2 DEBUG oslo_concurrency.processutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.151 2 DEBUG nova.virt.libvirt.vif [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:06:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-1267462374',display_name='tempest-VolumesExtendAttachedTest-instance-1267462374',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-1267462374',id=7,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP0kgOIkPKMI3TE10SdB87sqJpLrPFOSBcFu1d0XzE1fj/PPC+I09TagWxQ8fgC7nINR5zBCN03htEgPk6hUhaQB08LyNPHOlKIdJ2drueAUzLNfbv1Latadi6FSu3IqCg==',key_name='tempest-keypair-2088827798',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='41596b84442c439b86ce2c239af0242c',ramdisk_id='',reservation_id='r-zkbeg8os',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesExtendAttachedTest-1136455461',owner_user_name='tempest-VolumesExtendAttachedTest-1136455461-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:06:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='77635b26e3624f318335b7dd5d5cf9c4',uuid=26cb0d26-41fd-4cac-a0b5-1c630a0feba1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4bf043b6-53f8-43fd-8fb7-67863dfbfe87", "address": "fa:16:3e:20:01:d0", "network": {"id": "0ff4b514-1476-4866-8fda-c0b6a7674970", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1613157742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41596b84442c439b86ce2c239af0242c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf043b6-53", "ovs_interfaceid": "4bf043b6-53f8-43fd-8fb7-67863dfbfe87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.152 2 DEBUG nova.network.os_vif_util [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Converting VIF {"id": "4bf043b6-53f8-43fd-8fb7-67863dfbfe87", "address": "fa:16:3e:20:01:d0", "network": {"id": "0ff4b514-1476-4866-8fda-c0b6a7674970", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1613157742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41596b84442c439b86ce2c239af0242c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf043b6-53", "ovs_interfaceid": "4bf043b6-53f8-43fd-8fb7-67863dfbfe87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.153 2 DEBUG nova.network.os_vif_util [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:01:d0,bridge_name='br-int',has_traffic_filtering=True,id=4bf043b6-53f8-43fd-8fb7-67863dfbfe87,network=Network(0ff4b514-1476-4866-8fda-c0b6a7674970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bf043b6-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.154 2 DEBUG nova.objects.instance [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Lazy-loading 'pci_devices' on Instance uuid 26cb0d26-41fd-4cac-a0b5-1c630a0feba1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.173 2 DEBUG nova.virt.libvirt.driver [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:06:53 compute-0 nova_compute[259850]:   <uuid>26cb0d26-41fd-4cac-a0b5-1c630a0feba1</uuid>
Oct 11 04:06:53 compute-0 nova_compute[259850]:   <name>instance-00000007</name>
Oct 11 04:06:53 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:06:53 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:06:53 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <nova:name>tempest-VolumesExtendAttachedTest-instance-1267462374</nova:name>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:06:52</nova:creationTime>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:06:53 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:06:53 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:06:53 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:06:53 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:06:53 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:06:53 compute-0 nova_compute[259850]:         <nova:user uuid="77635b26e3624f318335b7dd5d5cf9c4">tempest-VolumesExtendAttachedTest-1136455461-project-member</nova:user>
Oct 11 04:06:53 compute-0 nova_compute[259850]:         <nova:project uuid="41596b84442c439b86ce2c239af0242c">tempest-VolumesExtendAttachedTest-1136455461</nova:project>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <nova:root type="image" uuid="1a107e2f-1a9d-4b6f-861d-e64bee7d56be"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <nova:ports>
Oct 11 04:06:53 compute-0 nova_compute[259850]:         <nova:port uuid="4bf043b6-53f8-43fd-8fb7-67863dfbfe87">
Oct 11 04:06:53 compute-0 nova_compute[259850]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:         </nova:port>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       </nova:ports>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:06:53 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:06:53 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <system>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <entry name="serial">26cb0d26-41fd-4cac-a0b5-1c630a0feba1</entry>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <entry name="uuid">26cb0d26-41fd-4cac-a0b5-1c630a0feba1</entry>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     </system>
Oct 11 04:06:53 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:06:53 compute-0 nova_compute[259850]:   <os>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:   </os>
Oct 11 04:06:53 compute-0 nova_compute[259850]:   <features>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:   </features>
Oct 11 04:06:53 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:06:53 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:06:53 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/26cb0d26-41fd-4cac-a0b5-1c630a0feba1_disk">
Oct 11 04:06:53 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       </source>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:06:53 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/26cb0d26-41fd-4cac-a0b5-1c630a0feba1_disk.config">
Oct 11 04:06:53 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       </source>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:06:53 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <interface type="ethernet">
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <mac address="fa:16:3e:20:01:d0"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <mtu size="1442"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <target dev="tap4bf043b6-53"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     </interface>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/26cb0d26-41fd-4cac-a0b5-1c630a0feba1/console.log" append="off"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <video>
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     </video>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:06:53 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:06:53 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:06:53 compute-0 nova_compute[259850]: </domain>
Oct 11 04:06:53 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.175 2 DEBUG nova.compute.manager [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Preparing to wait for external event network-vif-plugged-4bf043b6-53f8-43fd-8fb7-67863dfbfe87 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.176 2 DEBUG oslo_concurrency.lockutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Acquiring lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.176 2 DEBUG oslo_concurrency.lockutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.177 2 DEBUG oslo_concurrency.lockutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.178 2 DEBUG nova.virt.libvirt.vif [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:06:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-1267462374',display_name='tempest-VolumesExtendAttachedTest-instance-1267462374',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-1267462374',id=7,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP0kgOIkPKMI3TE10SdB87sqJpLrPFOSBcFu1d0XzE1fj/PPC+I09TagWxQ8fgC7nINR5zBCN03htEgPk6hUhaQB08LyNPHOlKIdJ2drueAUzLNfbv1Latadi6FSu3IqCg==',key_name='tempest-keypair-2088827798',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='41596b84442c439b86ce2c239af0242c',ramdisk_id='',reservation_id='r-zkbeg8os',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesExtendAttachedTest-1136455461',owner_user_name='tempest-VolumesExtendAttachedTest-1136455461-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:06:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='77635b26e3624f318335b7dd5d5cf9c4',uuid=26cb0d26-41fd-4cac-a0b5-1c630a0feba1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4bf043b6-53f8-43fd-8fb7-67863dfbfe87", "address": "fa:16:3e:20:01:d0", "network": {"id": "0ff4b514-1476-4866-8fda-c0b6a7674970", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1613157742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41596b84442c439b86ce2c239af0242c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf043b6-53", "ovs_interfaceid": "4bf043b6-53f8-43fd-8fb7-67863dfbfe87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.179 2 DEBUG nova.network.os_vif_util [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Converting VIF {"id": "4bf043b6-53f8-43fd-8fb7-67863dfbfe87", "address": "fa:16:3e:20:01:d0", "network": {"id": "0ff4b514-1476-4866-8fda-c0b6a7674970", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1613157742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41596b84442c439b86ce2c239af0242c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf043b6-53", "ovs_interfaceid": "4bf043b6-53f8-43fd-8fb7-67863dfbfe87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.180 2 DEBUG nova.network.os_vif_util [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:01:d0,bridge_name='br-int',has_traffic_filtering=True,id=4bf043b6-53f8-43fd-8fb7-67863dfbfe87,network=Network(0ff4b514-1476-4866-8fda-c0b6a7674970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bf043b6-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.180 2 DEBUG os_vif [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:01:d0,bridge_name='br-int',has_traffic_filtering=True,id=4bf043b6-53f8-43fd-8fb7-67863dfbfe87,network=Network(0ff4b514-1476-4866-8fda-c0b6a7674970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bf043b6-53') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.182 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.183 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.188 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bf043b6-53, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.188 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4bf043b6-53, col_values=(('external_ids', {'iface-id': '4bf043b6-53f8-43fd-8fb7-67863dfbfe87', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:01:d0', 'vm-uuid': '26cb0d26-41fd-4cac-a0b5-1c630a0feba1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:06:53 compute-0 NetworkManager[44920]: <info>  [1760155613.2300] manager: (tap4bf043b6-53): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.241 2 INFO os_vif [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:01:d0,bridge_name='br-int',has_traffic_filtering=True,id=4bf043b6-53f8-43fd-8fb7-67863dfbfe87,network=Network(0ff4b514-1476-4866-8fda-c0b6a7674970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bf043b6-53')
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.313 2 DEBUG nova.virt.libvirt.driver [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.314 2 DEBUG nova.virt.libvirt.driver [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.314 2 DEBUG nova.virt.libvirt.driver [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] No VIF found with MAC fa:16:3e:20:01:d0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.315 2 INFO nova.virt.libvirt.driver [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Using config drive
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.354 2 DEBUG nova.storage.rbd_utils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] rbd image 26cb0d26-41fd-4cac-a0b5-1c630a0feba1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.471 2 DEBUG nova.network.neutron [req-68d9e909-1fe3-44af-b9f6-2bbcfe22e537 req-63c5820e-7495-42ab-b254-f2cb638ecf7a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Updated VIF entry in instance network info cache for port 4bf043b6-53f8-43fd-8fb7-67863dfbfe87. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.473 2 DEBUG nova.network.neutron [req-68d9e909-1fe3-44af-b9f6-2bbcfe22e537 req-63c5820e-7495-42ab-b254-f2cb638ecf7a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Updating instance_info_cache with network_info: [{"id": "4bf043b6-53f8-43fd-8fb7-67863dfbfe87", "address": "fa:16:3e:20:01:d0", "network": {"id": "0ff4b514-1476-4866-8fda-c0b6a7674970", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1613157742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41596b84442c439b86ce2c239af0242c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf043b6-53", "ovs_interfaceid": "4bf043b6-53f8-43fd-8fb7-67863dfbfe87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:06:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 260 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 177 KiB/s rd, 3.8 MiB/s wr, 246 op/s
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.498 2 DEBUG oslo_concurrency.lockutils [req-68d9e909-1fe3-44af-b9f6-2bbcfe22e537 req-63c5820e-7495-42ab-b254-f2cb638ecf7a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-26cb0d26-41fd-4cac-a0b5-1c630a0feba1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.599 2 DEBUG oslo_concurrency.lockutils [None req-55639a0b-e91c-4356-a99f-926387d986b2 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Acquiring lock "2b618038-2466-4671-9914-c69aecf8c771" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.600 2 DEBUG oslo_concurrency.lockutils [None req-55639a0b-e91c-4356-a99f-926387d986b2 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "2b618038-2466-4671-9914-c69aecf8c771" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.615 2 INFO nova.compute.manager [None req-55639a0b-e91c-4356-a99f-926387d986b2 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Detaching volume 3fa01068-c029-4fc5-a8a6-68ced8aa6a2b
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.667 2 INFO nova.virt.libvirt.driver [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Creating config drive at /var/lib/nova/instances/26cb0d26-41fd-4cac-a0b5-1c630a0feba1/disk.config
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.679 2 DEBUG oslo_concurrency.processutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/26cb0d26-41fd-4cac-a0b5-1c630a0feba1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpispatytv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.759 2 INFO nova.virt.block_device [None req-55639a0b-e91c-4356-a99f-926387d986b2 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Attempting to driver detach volume 3fa01068-c029-4fc5-a8a6-68ced8aa6a2b from mountpoint /dev/vdb
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.763 2 DEBUG oslo_concurrency.lockutils [None req-25f0fde7-a62d-4988-855e-f4cb83dcef92 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Acquiring lock "2b618038-2466-4671-9914-c69aecf8c771" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.774 2 DEBUG nova.virt.libvirt.driver [None req-55639a0b-e91c-4356-a99f-926387d986b2 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Attempting to detach device vdb from instance 2b618038-2466-4671-9914-c69aecf8c771 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.775 2 DEBUG nova.virt.libvirt.guest [None req-55639a0b-e91c-4356-a99f-926387d986b2 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 04:06:53 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-3fa01068-c029-4fc5-a8a6-68ced8aa6a2b">
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:   </source>
Oct 11 04:06:53 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:   <serial>3fa01068-c029-4fc5-a8a6-68ced8aa6a2b</serial>
Oct 11 04:06:53 compute-0 nova_compute[259850]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]: </disk>
Oct 11 04:06:53 compute-0 nova_compute[259850]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.789 2 INFO nova.virt.libvirt.driver [None req-55639a0b-e91c-4356-a99f-926387d986b2 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Successfully detached device vdb from instance 2b618038-2466-4671-9914-c69aecf8c771 from the persistent domain config.
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.790 2 DEBUG nova.virt.libvirt.driver [None req-55639a0b-e91c-4356-a99f-926387d986b2 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 2b618038-2466-4671-9914-c69aecf8c771 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.790 2 DEBUG nova.virt.libvirt.guest [None req-55639a0b-e91c-4356-a99f-926387d986b2 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 04:06:53 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-3fa01068-c029-4fc5-a8a6-68ced8aa6a2b">
Oct 11 04:06:53 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:   </source>
Oct 11 04:06:53 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]:   <serial>3fa01068-c029-4fc5-a8a6-68ced8aa6a2b</serial>
Oct 11 04:06:53 compute-0 nova_compute[259850]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 04:06:53 compute-0 nova_compute[259850]: </disk>
Oct 11 04:06:53 compute-0 nova_compute[259850]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.821 2 DEBUG oslo_concurrency.processutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/26cb0d26-41fd-4cac-a0b5-1c630a0feba1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpispatytv" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.860 2 DEBUG nova.storage.rbd_utils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] rbd image 26cb0d26-41fd-4cac-a0b5-1c630a0feba1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.865 2 DEBUG oslo_concurrency.processutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/26cb0d26-41fd-4cac-a0b5-1c630a0feba1/disk.config 26cb0d26-41fd-4cac-a0b5-1c630a0feba1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.922 2 DEBUG nova.virt.libvirt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Received event <DeviceRemovedEvent: 1760155613.920513, 2b618038-2466-4671-9914-c69aecf8c771 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.924 2 DEBUG nova.virt.libvirt.driver [None req-55639a0b-e91c-4356-a99f-926387d986b2 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 2b618038-2466-4671-9914-c69aecf8c771 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 11 04:06:53 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2501642987' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:06:53 compute-0 nova_compute[259850]: 2025-10-11 04:06:53.930 2 INFO nova.virt.libvirt.driver [None req-55639a0b-e91c-4356-a99f-926387d986b2 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Successfully detached device vdb from instance 2b618038-2466-4671-9914-c69aecf8c771 from the live domain config.
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.036 2 DEBUG oslo_concurrency.processutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/26cb0d26-41fd-4cac-a0b5-1c630a0feba1/disk.config 26cb0d26-41fd-4cac-a0b5-1c630a0feba1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.037 2 INFO nova.virt.libvirt.driver [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Deleting local config drive /var/lib/nova/instances/26cb0d26-41fd-4cac-a0b5-1c630a0feba1/disk.config because it was imported into RBD.
Oct 11 04:06:54 compute-0 kernel: tap4bf043b6-53: entered promiscuous mode
Oct 11 04:06:54 compute-0 NetworkManager[44920]: <info>  [1760155614.0869] manager: (tap4bf043b6-53): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Oct 11 04:06:54 compute-0 ovn_controller[152025]: 2025-10-11T04:06:54Z|00068|binding|INFO|Claiming lport 4bf043b6-53f8-43fd-8fb7-67863dfbfe87 for this chassis.
Oct 11 04:06:54 compute-0 ovn_controller[152025]: 2025-10-11T04:06:54Z|00069|binding|INFO|4bf043b6-53f8-43fd-8fb7-67863dfbfe87: Claiming fa:16:3e:20:01:d0 10.100.0.13
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:54.094 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:01:d0 10.100.0.13'], port_security=['fa:16:3e:20:01:d0 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '26cb0d26-41fd-4cac-a0b5-1c630a0feba1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ff4b514-1476-4866-8fda-c0b6a7674970', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '41596b84442c439b86ce2c239af0242c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2b79ace0-3591-4cb7-9630-1c1e0585e64d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b302cb1-4964-4a0f-a4e4-38f64faa4a71, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=4bf043b6-53f8-43fd-8fb7-67863dfbfe87) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:54.095 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 4bf043b6-53f8-43fd-8fb7-67863dfbfe87 in datapath 0ff4b514-1476-4866-8fda-c0b6a7674970 bound to our chassis
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:54.096 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0ff4b514-1476-4866-8fda-c0b6a7674970
Oct 11 04:06:54 compute-0 ovn_controller[152025]: 2025-10-11T04:06:54Z|00070|binding|INFO|Setting lport 4bf043b6-53f8-43fd-8fb7-67863dfbfe87 ovn-installed in OVS
Oct 11 04:06:54 compute-0 ovn_controller[152025]: 2025-10-11T04:06:54Z|00071|binding|INFO|Setting lport 4bf043b6-53f8-43fd-8fb7-67863dfbfe87 up in Southbound
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:54.115 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[4f45eea7-839e-46b5-8f4e-f831907ecabb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:54.116 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0ff4b514-11 in ovnmeta-0ff4b514-1476-4866-8fda-c0b6a7674970 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 04:06:54 compute-0 systemd-machined[214869]: New machine qemu-7-instance-00000007.
Oct 11 04:06:54 compute-0 systemd-udevd[274579]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:54.123 267637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0ff4b514-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:54.123 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[345cedcb-953b-49bb-a85f-06f4a6378513]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:54.124 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[e54aae06-d46b-4d53-a521-eef251bac68f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:54 compute-0 NetworkManager[44920]: <info>  [1760155614.1378] device (tap4bf043b6-53): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:06:54 compute-0 NetworkManager[44920]: <info>  [1760155614.1386] device (tap4bf043b6-53): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:06:54 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:54.141 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[fa701d36-9851-467a-a37f-f5fada47e7df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:54.172 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[8f6f62d5-ae6b-4e52-8539-4c79eae95d73]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.188 2 DEBUG nova.objects.instance [None req-55639a0b-e91c-4356-a99f-926387d986b2 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lazy-loading 'flavor' on Instance uuid 2b618038-2466-4671-9914-c69aecf8c771 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:54.205 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[f73be1d3-843a-4a6f-b6d3-ce78a4005f9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:54.210 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[5bde7d36-b8ab-4ed1-9e73-f5f512324fd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:54 compute-0 NetworkManager[44920]: <info>  [1760155614.2109] manager: (tap0ff4b514-10): new Veth device (/org/freedesktop/NetworkManager/Devices/51)
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.239 2 DEBUG oslo_concurrency.lockutils [None req-55639a0b-e91c-4356-a99f-926387d986b2 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "2b618038-2466-4671-9914-c69aecf8c771" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.240 2 DEBUG oslo_concurrency.lockutils [None req-25f0fde7-a62d-4988-855e-f4cb83dcef92 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "2b618038-2466-4671-9914-c69aecf8c771" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.240 2 DEBUG oslo_concurrency.lockutils [None req-25f0fde7-a62d-4988-855e-f4cb83dcef92 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Acquiring lock "2b618038-2466-4671-9914-c69aecf8c771-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.240 2 DEBUG oslo_concurrency.lockutils [None req-25f0fde7-a62d-4988-855e-f4cb83dcef92 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "2b618038-2466-4671-9914-c69aecf8c771-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.240 2 DEBUG oslo_concurrency.lockutils [None req-25f0fde7-a62d-4988-855e-f4cb83dcef92 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "2b618038-2466-4671-9914-c69aecf8c771-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.241 2 INFO nova.compute.manager [None req-25f0fde7-a62d-4988-855e-f4cb83dcef92 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Terminating instance
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.242 2 DEBUG nova.compute.manager [None req-25f0fde7-a62d-4988-855e-f4cb83dcef92 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:54.254 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[aa48e46f-4e2b-483b-9592-f007378bbe2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:54.258 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[e268c9dd-f81e-4cbe-858c-c1354cca35a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:54 compute-0 NetworkManager[44920]: <info>  [1760155614.2860] device (tap0ff4b514-10): carrier: link connected
Oct 11 04:06:54 compute-0 kernel: tap18b8cda5-7b (unregistering): left promiscuous mode
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:54.291 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[6f329f61-c2a0-4335-aa82-b9c2fc1fe912]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:54 compute-0 NetworkManager[44920]: <info>  [1760155614.2960] device (tap18b8cda5-7b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:06:54 compute-0 ovn_controller[152025]: 2025-10-11T04:06:54Z|00072|binding|INFO|Releasing lport 18b8cda5-7bec-4b29-838f-24cad68162af from this chassis (sb_readonly=0)
Oct 11 04:06:54 compute-0 ovn_controller[152025]: 2025-10-11T04:06:54Z|00073|binding|INFO|Setting lport 18b8cda5-7bec-4b29-838f-24cad68162af down in Southbound
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:54.343 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[9137da95-a881-4ab3-9745-deed5fd7df7a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ff4b514-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ee:51:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 399932, 'reachable_time': 19272, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274611, 'error': None, 'target': 'ovnmeta-0ff4b514-1476-4866-8fda-c0b6a7674970', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:54 compute-0 ovn_controller[152025]: 2025-10-11T04:06:54Z|00074|binding|INFO|Removing iface tap18b8cda5-7b ovn-installed in OVS
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:54.353 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:96:0a 10.100.0.12'], port_security=['fa:16:3e:55:96:0a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2b618038-2466-4671-9914-c69aecf8c771', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bfa0cc72-c909-48db-80bb-536eb7b52f6e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2783729ed466412aac8ceb01d86a0b12', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a3924cbd-62fb-41dc-9d4a-3f864682569a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55b0cbfb-9e3c-469a-b06d-75c45688b585, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=18b8cda5-7bec-4b29-838f-24cad68162af) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:54.367 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[b447940e-d216-4c4f-986c-7f6c5965f4ed]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feee:51b2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 399932, 'tstamp': 399932}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274615, 'error': None, 'target': 'ovnmeta-0ff4b514-1476-4866-8fda-c0b6a7674970', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:54 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:54.382 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[f6c2d5ac-50d9-43af-9fe2-82be1074f170]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ff4b514-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ee:51:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 399932, 'reachable_time': 19272, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274616, 'error': None, 'target': 'ovnmeta-0ff4b514-1476-4866-8fda-c0b6a7674970', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:54 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 13.978s CPU time.
Oct 11 04:06:54 compute-0 systemd-machined[214869]: Machine qemu-6-instance-00000006 terminated.
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.392 2 DEBUG nova.compute.manager [req-b65588e4-3993-40b0-81df-3346a5fdde27 req-78349160-7c2e-4797-8150-b4de284a7852 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Received event network-vif-plugged-4bf043b6-53f8-43fd-8fb7-67863dfbfe87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.394 2 DEBUG oslo_concurrency.lockutils [req-b65588e4-3993-40b0-81df-3346a5fdde27 req-78349160-7c2e-4797-8150-b4de284a7852 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.394 2 DEBUG oslo_concurrency.lockutils [req-b65588e4-3993-40b0-81df-3346a5fdde27 req-78349160-7c2e-4797-8150-b4de284a7852 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.394 2 DEBUG oslo_concurrency.lockutils [req-b65588e4-3993-40b0-81df-3346a5fdde27 req-78349160-7c2e-4797-8150-b4de284a7852 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.395 2 DEBUG nova.compute.manager [req-b65588e4-3993-40b0-81df-3346a5fdde27 req-78349160-7c2e-4797-8150-b4de284a7852 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Processing event network-vif-plugged-4bf043b6-53f8-43fd-8fb7-67863dfbfe87 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:54.413 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[2e232028-895d-402b-b4ef-8bdf41684cdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:54.468 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[7fa9fd99-3a34-46f0-ab5b-0226577fc5fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:54.469 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ff4b514-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:54.470 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:54.471 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ff4b514-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:54 compute-0 NetworkManager[44920]: <info>  [1760155614.4752] manager: (tap0ff4b514-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.476 2 INFO nova.virt.libvirt.driver [-] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Instance destroyed successfully.
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.477 2 DEBUG nova.objects.instance [None req-25f0fde7-a62d-4988-855e-f4cb83dcef92 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lazy-loading 'resources' on Instance uuid 2b618038-2466-4671-9914-c69aecf8c771 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:06:54 compute-0 kernel: tap0ff4b514-10: entered promiscuous mode
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:54.485 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0ff4b514-10, col_values=(('external_ids', {'iface-id': 'cf7b4ae4-21e0-4118-8c7e-eb76be39896d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:54 compute-0 ovn_controller[152025]: 2025-10-11T04:06:54Z|00075|binding|INFO|Releasing lport cf7b4ae4-21e0-4118-8c7e-eb76be39896d from this chassis (sb_readonly=0)
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.491 2 DEBUG nova.virt.libvirt.vif [None req-25f0fde7-a62d-4988-855e-f4cb83dcef92 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:06:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1537074081',display_name='tempest-VolumesSnapshotTestJSON-instance-1537074081',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1537074081',id=6,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJKvBS+sHt0R++EQjY8399i9xRV8xwy8PNGrky3BzxlKGZCtm3DcIWTejUfK1VEDKEydb8PJX5YdahSJhSOa4QWvc1+qljSsnLkUpuPznZoJliIMCS/A+eCn6if+XQhyhA==',key_name='tempest-keypair-1067966574',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:06:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2783729ed466412aac8ceb01d86a0b12',ramdisk_id='',reservation_id='r-zfyt80ry',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesSnapshotTestJSON-180407200',owner_user_name='tempest-VolumesSnapshotTestJSON-180407200-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:06:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5660041067943deb3c73caa6e62f851',uuid=2b618038-2466-4671-9914-c69aecf8c771,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "18b8cda5-7bec-4b29-838f-24cad68162af", "address": "fa:16:3e:55:96:0a", "network": {"id": "bfa0cc72-c909-48db-80bb-536eb7b52f6e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1615284681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2783729ed466412aac8ceb01d86a0b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b8cda5-7b", "ovs_interfaceid": "18b8cda5-7bec-4b29-838f-24cad68162af", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.491 2 DEBUG nova.network.os_vif_util [None req-25f0fde7-a62d-4988-855e-f4cb83dcef92 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Converting VIF {"id": "18b8cda5-7bec-4b29-838f-24cad68162af", "address": "fa:16:3e:55:96:0a", "network": {"id": "bfa0cc72-c909-48db-80bb-536eb7b52f6e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1615284681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2783729ed466412aac8ceb01d86a0b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b8cda5-7b", "ovs_interfaceid": "18b8cda5-7bec-4b29-838f-24cad68162af", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.492 2 DEBUG nova.network.os_vif_util [None req-25f0fde7-a62d-4988-855e-f4cb83dcef92 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:55:96:0a,bridge_name='br-int',has_traffic_filtering=True,id=18b8cda5-7bec-4b29-838f-24cad68162af,network=Network(bfa0cc72-c909-48db-80bb-536eb7b52f6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b8cda5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.494 2 DEBUG os_vif [None req-25f0fde7-a62d-4988-855e-f4cb83dcef92 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:55:96:0a,bridge_name='br-int',has_traffic_filtering=True,id=18b8cda5-7bec-4b29-838f-24cad68162af,network=Network(bfa0cc72-c909-48db-80bb-536eb7b52f6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b8cda5-7b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.496 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18b8cda5-7b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:54.521 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0ff4b514-1476-4866-8fda-c0b6a7674970.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0ff4b514-1476-4866-8fda-c0b6a7674970.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.523 2 INFO os_vif [None req-25f0fde7-a62d-4988-855e-f4cb83dcef92 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:55:96:0a,bridge_name='br-int',has_traffic_filtering=True,id=18b8cda5-7bec-4b29-838f-24cad68162af,network=Network(bfa0cc72-c909-48db-80bb-536eb7b52f6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b8cda5-7b')
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:54.522 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[3ec35851-2b9d-42f8-a004-2ea2bd5b2fe6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:54.523 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: global
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]:     log         /dev/log local0 debug
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]:     log-tag     haproxy-metadata-proxy-0ff4b514-1476-4866-8fda-c0b6a7674970
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]:     user        root
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]:     group       root
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]:     maxconn     1024
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]:     pidfile     /var/lib/neutron/external/pids/0ff4b514-1476-4866-8fda-c0b6a7674970.pid.haproxy
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]:     daemon
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: defaults
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]:     log global
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]:     mode http
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]:     option httplog
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]:     option dontlognull
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]:     option http-server-close
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]:     option forwardfor
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]:     retries                 3
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]:     timeout http-request    30s
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]:     timeout connect         30s
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]:     timeout client          32s
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]:     timeout server          32s
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]:     timeout http-keep-alive 30s
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: listen listener
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]:     bind 169.254.169.254:80
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]:     http-request add-header X-OVN-Network-ID 0ff4b514-1476-4866-8fda-c0b6a7674970
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 04:06:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:54.523 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0ff4b514-1476-4866-8fda-c0b6a7674970', 'env', 'PROCESS_TAG=haproxy-0ff4b514-1476-4866-8fda-c0b6a7674970', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0ff4b514-1476-4866-8fda-c0b6a7674970.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.546 2 DEBUG nova.compute.manager [req-b0136cd9-448b-407a-878b-68d6570ae5fc req-6887a89e-7913-49d8-93fb-b2d5532a41a7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Received event network-vif-unplugged-18b8cda5-7bec-4b29-838f-24cad68162af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.547 2 DEBUG oslo_concurrency.lockutils [req-b0136cd9-448b-407a-878b-68d6570ae5fc req-6887a89e-7913-49d8-93fb-b2d5532a41a7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "2b618038-2466-4671-9914-c69aecf8c771-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.547 2 DEBUG oslo_concurrency.lockutils [req-b0136cd9-448b-407a-878b-68d6570ae5fc req-6887a89e-7913-49d8-93fb-b2d5532a41a7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "2b618038-2466-4671-9914-c69aecf8c771-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.547 2 DEBUG oslo_concurrency.lockutils [req-b0136cd9-448b-407a-878b-68d6570ae5fc req-6887a89e-7913-49d8-93fb-b2d5532a41a7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "2b618038-2466-4671-9914-c69aecf8c771-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.547 2 DEBUG nova.compute.manager [req-b0136cd9-448b-407a-878b-68d6570ae5fc req-6887a89e-7913-49d8-93fb-b2d5532a41a7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] No waiting events found dispatching network-vif-unplugged-18b8cda5-7bec-4b29-838f-24cad68162af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.548 2 DEBUG nova.compute.manager [req-b0136cd9-448b-407a-878b-68d6570ae5fc req-6887a89e-7913-49d8-93fb-b2d5532a41a7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Received event network-vif-unplugged-18b8cda5-7bec-4b29-838f-24cad68162af for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 04:06:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:06:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Oct 11 04:06:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Oct 11 04:06:54 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.905 2 INFO nova.virt.libvirt.driver [None req-25f0fde7-a62d-4988-855e-f4cb83dcef92 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Deleting instance files /var/lib/nova/instances/2b618038-2466-4671-9914-c69aecf8c771_del
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.905 2 INFO nova.virt.libvirt.driver [None req-25f0fde7-a62d-4988-855e-f4cb83dcef92 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Deletion of /var/lib/nova/instances/2b618038-2466-4671-9914-c69aecf8c771_del complete
Oct 11 04:06:54 compute-0 ceph-mon[74273]: pgmap v1097: 305 pgs: 305 active+clean; 260 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 177 KiB/s rd, 3.8 MiB/s wr, 246 op/s
Oct 11 04:06:54 compute-0 ceph-mon[74273]: osdmap e191: 3 total, 3 up, 3 in
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.970 2 INFO nova.compute.manager [None req-25f0fde7-a62d-4988-855e-f4cb83dcef92 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Took 0.73 seconds to destroy the instance on the hypervisor.
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.971 2 DEBUG oslo.service.loopingcall [None req-25f0fde7-a62d-4988-855e-f4cb83dcef92 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.971 2 DEBUG nova.compute.manager [-] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:06:54 compute-0 nova_compute[259850]: 2025-10-11 04:06:54.972 2 DEBUG nova.network.neutron [-] [instance: 2b618038-2466-4671-9914-c69aecf8c771] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:06:54 compute-0 podman[274720]: 2025-10-11 04:06:54.977323867 +0000 UTC m=+0.064594874 container create c965dbe07b521b13b8944f59378341e4d74893fcebd29bb3c2b6ae3da81e7039 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0ff4b514-1476-4866-8fda-c0b6a7674970, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:06:55 compute-0 systemd[1]: Started libpod-conmon-c965dbe07b521b13b8944f59378341e4d74893fcebd29bb3c2b6ae3da81e7039.scope.
Oct 11 04:06:55 compute-0 podman[274720]: 2025-10-11 04:06:54.940218846 +0000 UTC m=+0.027489943 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 04:06:55 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1403aaf7db73fed7a15f7d8a17dad363c21ac3f38b601da06cbfe18c9333162/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:06:55 compute-0 podman[274720]: 2025-10-11 04:06:55.085234407 +0000 UTC m=+0.172505464 container init c965dbe07b521b13b8944f59378341e4d74893fcebd29bb3c2b6ae3da81e7039 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0ff4b514-1476-4866-8fda-c0b6a7674970, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0)
Oct 11 04:06:55 compute-0 podman[274720]: 2025-10-11 04:06:55.099199699 +0000 UTC m=+0.186470726 container start c965dbe07b521b13b8944f59378341e4d74893fcebd29bb3c2b6ae3da81e7039 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0ff4b514-1476-4866-8fda-c0b6a7674970, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 11 04:06:55 compute-0 neutron-haproxy-ovnmeta-0ff4b514-1476-4866-8fda-c0b6a7674970[274736]: [NOTICE]   (274740) : New worker (274742) forked
Oct 11 04:06:55 compute-0 neutron-haproxy-ovnmeta-0ff4b514-1476-4866-8fda-c0b6a7674970[274736]: [NOTICE]   (274740) : Loading success.
Oct 11 04:06:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:55.158 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 18b8cda5-7bec-4b29-838f-24cad68162af in datapath bfa0cc72-c909-48db-80bb-536eb7b52f6e unbound from our chassis
Oct 11 04:06:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:55.160 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bfa0cc72-c909-48db-80bb-536eb7b52f6e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:06:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:55.162 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[6083ac22-bd82-4801-8b7e-7fb600b02c9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:55.163 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e namespace which is not needed anymore
Oct 11 04:06:55 compute-0 nova_compute[259850]: 2025-10-11 04:06:55.254 2 DEBUG nova.compute.manager [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:06:55 compute-0 nova_compute[259850]: 2025-10-11 04:06:55.254 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155615.2535524, 26cb0d26-41fd-4cac-a0b5-1c630a0feba1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:06:55 compute-0 nova_compute[259850]: 2025-10-11 04:06:55.255 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] VM Started (Lifecycle Event)
Oct 11 04:06:55 compute-0 nova_compute[259850]: 2025-10-11 04:06:55.261 2 DEBUG nova.virt.libvirt.driver [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:06:55 compute-0 nova_compute[259850]: 2025-10-11 04:06:55.264 2 INFO nova.virt.libvirt.driver [-] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Instance spawned successfully.
Oct 11 04:06:55 compute-0 nova_compute[259850]: 2025-10-11 04:06:55.264 2 DEBUG nova.virt.libvirt.driver [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:06:55 compute-0 nova_compute[259850]: 2025-10-11 04:06:55.278 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:06:55 compute-0 nova_compute[259850]: 2025-10-11 04:06:55.283 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:06:55 compute-0 nova_compute[259850]: 2025-10-11 04:06:55.287 2 DEBUG nova.virt.libvirt.driver [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:06:55 compute-0 nova_compute[259850]: 2025-10-11 04:06:55.288 2 DEBUG nova.virt.libvirt.driver [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:06:55 compute-0 nova_compute[259850]: 2025-10-11 04:06:55.288 2 DEBUG nova.virt.libvirt.driver [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:06:55 compute-0 nova_compute[259850]: 2025-10-11 04:06:55.288 2 DEBUG nova.virt.libvirt.driver [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:06:55 compute-0 nova_compute[259850]: 2025-10-11 04:06:55.289 2 DEBUG nova.virt.libvirt.driver [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:06:55 compute-0 nova_compute[259850]: 2025-10-11 04:06:55.289 2 DEBUG nova.virt.libvirt.driver [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:06:55 compute-0 nova_compute[259850]: 2025-10-11 04:06:55.304 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:06:55 compute-0 nova_compute[259850]: 2025-10-11 04:06:55.304 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155615.2564971, 26cb0d26-41fd-4cac-a0b5-1c630a0feba1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:06:55 compute-0 nova_compute[259850]: 2025-10-11 04:06:55.305 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] VM Paused (Lifecycle Event)
Oct 11 04:06:55 compute-0 neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e[274125]: [NOTICE]   (274129) : haproxy version is 2.8.14-c23fe91
Oct 11 04:06:55 compute-0 neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e[274125]: [NOTICE]   (274129) : path to executable is /usr/sbin/haproxy
Oct 11 04:06:55 compute-0 neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e[274125]: [WARNING]  (274129) : Exiting Master process...
Oct 11 04:06:55 compute-0 neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e[274125]: [WARNING]  (274129) : Exiting Master process...
Oct 11 04:06:55 compute-0 neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e[274125]: [ALERT]    (274129) : Current worker (274131) exited with code 143 (Terminated)
Oct 11 04:06:55 compute-0 neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e[274125]: [WARNING]  (274129) : All workers exited. Exiting... (0)
Oct 11 04:06:55 compute-0 systemd[1]: libpod-4d85131394ce23a6db996008cb673881082a82f440672359004cd4c96ba48c69.scope: Deactivated successfully.
Oct 11 04:06:55 compute-0 podman[274768]: 2025-10-11 04:06:55.331655404 +0000 UTC m=+0.051264170 container died 4d85131394ce23a6db996008cb673881082a82f440672359004cd4c96ba48c69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 11 04:06:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4d85131394ce23a6db996008cb673881082a82f440672359004cd4c96ba48c69-userdata-shm.mount: Deactivated successfully.
Oct 11 04:06:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d0b4004b8fcea597d98b58197a5431543b001faf1c93a28a6f1a14c415db1a9-merged.mount: Deactivated successfully.
Oct 11 04:06:55 compute-0 nova_compute[259850]: 2025-10-11 04:06:55.362 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:06:55 compute-0 nova_compute[259850]: 2025-10-11 04:06:55.366 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155615.2574146, 26cb0d26-41fd-4cac-a0b5-1c630a0feba1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:06:55 compute-0 nova_compute[259850]: 2025-10-11 04:06:55.366 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] VM Resumed (Lifecycle Event)
Oct 11 04:06:55 compute-0 podman[274768]: 2025-10-11 04:06:55.376454321 +0000 UTC m=+0.096063087 container cleanup 4d85131394ce23a6db996008cb673881082a82f440672359004cd4c96ba48c69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:06:55 compute-0 systemd[1]: libpod-conmon-4d85131394ce23a6db996008cb673881082a82f440672359004cd4c96ba48c69.scope: Deactivated successfully.
Oct 11 04:06:55 compute-0 nova_compute[259850]: 2025-10-11 04:06:55.398 2 INFO nova.compute.manager [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Took 6.50 seconds to spawn the instance on the hypervisor.
Oct 11 04:06:55 compute-0 nova_compute[259850]: 2025-10-11 04:06:55.399 2 DEBUG nova.compute.manager [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:06:55 compute-0 nova_compute[259850]: 2025-10-11 04:06:55.400 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:06:55 compute-0 nova_compute[259850]: 2025-10-11 04:06:55.405 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:06:55 compute-0 podman[274798]: 2025-10-11 04:06:55.436433425 +0000 UTC m=+0.038321567 container remove 4d85131394ce23a6db996008cb673881082a82f440672359004cd4c96ba48c69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:06:55 compute-0 nova_compute[259850]: 2025-10-11 04:06:55.443 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:06:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:55.445 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[f328b0ec-86e9-446c-b6e5-1870bb483c10]: (4, ('Sat Oct 11 04:06:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e (4d85131394ce23a6db996008cb673881082a82f440672359004cd4c96ba48c69)\n4d85131394ce23a6db996008cb673881082a82f440672359004cd4c96ba48c69\nSat Oct 11 04:06:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e (4d85131394ce23a6db996008cb673881082a82f440672359004cd4c96ba48c69)\n4d85131394ce23a6db996008cb673881082a82f440672359004cd4c96ba48c69\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:55.447 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[4101c4d6-4968-4399-a76d-e86d1e64e98b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:55.449 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbfa0cc72-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:06:55 compute-0 nova_compute[259850]: 2025-10-11 04:06:55.480 2 INFO nova.compute.manager [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Took 7.54 seconds to build instance.
Oct 11 04:06:55 compute-0 nova_compute[259850]: 2025-10-11 04:06:55.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 260 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 133 KiB/s rd, 3.8 MiB/s wr, 190 op/s
Oct 11 04:06:55 compute-0 kernel: tapbfa0cc72-c0: left promiscuous mode
Oct 11 04:06:55 compute-0 nova_compute[259850]: 2025-10-11 04:06:55.508 2 DEBUG oslo_concurrency.lockutils [None req-3126fc17-491e-4b76-a7d6-401b0ab0a9cb 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:55 compute-0 nova_compute[259850]: 2025-10-11 04:06:55.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:55.530 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[045c49b2-784d-4c12-a7d4-e7a654544b5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:55.564 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[71716df4-a21a-43c7-9069-5db5a8f6adf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:55.566 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[cd302144-6855-4883-b5ae-a2ffe125f269]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:55.586 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0e14e0-5fce-4f8e-8b54-794437063ea9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396530, 'reachable_time': 24503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274814, 'error': None, 'target': 'ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:55 compute-0 systemd[1]: run-netns-ovnmeta\x2dbfa0cc72\x2dc909\x2d48db\x2d80bb\x2d536eb7b52f6e.mount: Deactivated successfully.
Oct 11 04:06:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:55.589 162015 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 04:06:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:06:55.589 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[c635d724-30b7-4bd9-8b15-60ad8dec588f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:06:56 compute-0 nova_compute[259850]: 2025-10-11 04:06:56.467 2 DEBUG nova.compute.manager [req-e973949c-d859-44dc-896b-306b20723baa req-ad33d370-2537-497c-9c93-e1329741063f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Received event network-vif-plugged-4bf043b6-53f8-43fd-8fb7-67863dfbfe87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:06:56 compute-0 nova_compute[259850]: 2025-10-11 04:06:56.468 2 DEBUG oslo_concurrency.lockutils [req-e973949c-d859-44dc-896b-306b20723baa req-ad33d370-2537-497c-9c93-e1329741063f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:56 compute-0 nova_compute[259850]: 2025-10-11 04:06:56.468 2 DEBUG oslo_concurrency.lockutils [req-e973949c-d859-44dc-896b-306b20723baa req-ad33d370-2537-497c-9c93-e1329741063f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:56 compute-0 nova_compute[259850]: 2025-10-11 04:06:56.468 2 DEBUG oslo_concurrency.lockutils [req-e973949c-d859-44dc-896b-306b20723baa req-ad33d370-2537-497c-9c93-e1329741063f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:56 compute-0 nova_compute[259850]: 2025-10-11 04:06:56.469 2 DEBUG nova.compute.manager [req-e973949c-d859-44dc-896b-306b20723baa req-ad33d370-2537-497c-9c93-e1329741063f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] No waiting events found dispatching network-vif-plugged-4bf043b6-53f8-43fd-8fb7-67863dfbfe87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:06:56 compute-0 nova_compute[259850]: 2025-10-11 04:06:56.469 2 WARNING nova.compute.manager [req-e973949c-d859-44dc-896b-306b20723baa req-ad33d370-2537-497c-9c93-e1329741063f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Received unexpected event network-vif-plugged-4bf043b6-53f8-43fd-8fb7-67863dfbfe87 for instance with vm_state active and task_state None.
Oct 11 04:06:56 compute-0 nova_compute[259850]: 2025-10-11 04:06:56.667 2 DEBUG nova.compute.manager [req-d54b0826-14e4-4c1e-98b9-88c915def473 req-fb20dffa-706f-42c3-b2aa-e4b72d54e377 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Received event network-vif-plugged-18b8cda5-7bec-4b29-838f-24cad68162af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:06:56 compute-0 nova_compute[259850]: 2025-10-11 04:06:56.668 2 DEBUG oslo_concurrency.lockutils [req-d54b0826-14e4-4c1e-98b9-88c915def473 req-fb20dffa-706f-42c3-b2aa-e4b72d54e377 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "2b618038-2466-4671-9914-c69aecf8c771-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:56 compute-0 nova_compute[259850]: 2025-10-11 04:06:56.668 2 DEBUG oslo_concurrency.lockutils [req-d54b0826-14e4-4c1e-98b9-88c915def473 req-fb20dffa-706f-42c3-b2aa-e4b72d54e377 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "2b618038-2466-4671-9914-c69aecf8c771-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:56 compute-0 nova_compute[259850]: 2025-10-11 04:06:56.668 2 DEBUG oslo_concurrency.lockutils [req-d54b0826-14e4-4c1e-98b9-88c915def473 req-fb20dffa-706f-42c3-b2aa-e4b72d54e377 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "2b618038-2466-4671-9914-c69aecf8c771-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:56 compute-0 nova_compute[259850]: 2025-10-11 04:06:56.669 2 DEBUG nova.compute.manager [req-d54b0826-14e4-4c1e-98b9-88c915def473 req-fb20dffa-706f-42c3-b2aa-e4b72d54e377 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] No waiting events found dispatching network-vif-plugged-18b8cda5-7bec-4b29-838f-24cad68162af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:06:56 compute-0 nova_compute[259850]: 2025-10-11 04:06:56.669 2 WARNING nova.compute.manager [req-d54b0826-14e4-4c1e-98b9-88c915def473 req-fb20dffa-706f-42c3-b2aa-e4b72d54e377 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Received unexpected event network-vif-plugged-18b8cda5-7bec-4b29-838f-24cad68162af for instance with vm_state active and task_state deleting.
Oct 11 04:06:56 compute-0 ceph-mon[74273]: pgmap v1099: 305 pgs: 305 active+clean; 260 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 133 KiB/s rd, 3.8 MiB/s wr, 190 op/s
Oct 11 04:06:57 compute-0 nova_compute[259850]: 2025-10-11 04:06:57.055 2 DEBUG nova.network.neutron [-] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:06:57 compute-0 nova_compute[259850]: 2025-10-11 04:06:57.112 2 INFO nova.compute.manager [-] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Took 2.14 seconds to deallocate network for instance.
Oct 11 04:06:57 compute-0 nova_compute[259850]: 2025-10-11 04:06:57.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:57 compute-0 nova_compute[259850]: 2025-10-11 04:06:57.224 2 WARNING nova.volume.cinder [None req-25f0fde7-a62d-4988-855e-f4cb83dcef92 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Attachment ddd5f1e6-18d1-4add-b4e6-0cb2871cd170 does not exist. Ignoring.: cinderclient.exceptions.NotFound: Volume attachment could not be found with filter: attachment_id = ddd5f1e6-18d1-4add-b4e6-0cb2871cd170. (HTTP 404) (Request-ID: req-ca8a3e95-5793-4932-8a15-47a2de764e7b)
Oct 11 04:06:57 compute-0 nova_compute[259850]: 2025-10-11 04:06:57.224 2 INFO nova.compute.manager [None req-25f0fde7-a62d-4988-855e-f4cb83dcef92 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Took 0.11 seconds to detach 1 volumes for instance.
Oct 11 04:06:57 compute-0 nova_compute[259850]: 2025-10-11 04:06:57.268 2 DEBUG oslo_concurrency.lockutils [None req-25f0fde7-a62d-4988-855e-f4cb83dcef92 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:06:57 compute-0 nova_compute[259850]: 2025-10-11 04:06:57.269 2 DEBUG oslo_concurrency.lockutils [None req-25f0fde7-a62d-4988-855e-f4cb83dcef92 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:06:57 compute-0 nova_compute[259850]: 2025-10-11 04:06:57.348 2 DEBUG oslo_concurrency.processutils [None req-25f0fde7-a62d-4988-855e-f4cb83dcef92 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:06:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 260 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 3.2 MiB/s wr, 162 op/s
Oct 11 04:06:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:06:57 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1612214278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:06:57 compute-0 nova_compute[259850]: 2025-10-11 04:06:57.829 2 DEBUG oslo_concurrency.processutils [None req-25f0fde7-a62d-4988-855e-f4cb83dcef92 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:06:57 compute-0 nova_compute[259850]: 2025-10-11 04:06:57.837 2 DEBUG nova.compute.provider_tree [None req-25f0fde7-a62d-4988-855e-f4cb83dcef92 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:06:57 compute-0 nova_compute[259850]: 2025-10-11 04:06:57.856 2 DEBUG nova.scheduler.client.report [None req-25f0fde7-a62d-4988-855e-f4cb83dcef92 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:06:57 compute-0 nova_compute[259850]: 2025-10-11 04:06:57.880 2 DEBUG oslo_concurrency.lockutils [None req-25f0fde7-a62d-4988-855e-f4cb83dcef92 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:57 compute-0 nova_compute[259850]: 2025-10-11 04:06:57.909 2 INFO nova.scheduler.client.report [None req-25f0fde7-a62d-4988-855e-f4cb83dcef92 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Deleted allocations for instance 2b618038-2466-4671-9914-c69aecf8c771
Oct 11 04:06:57 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1612214278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:06:57 compute-0 nova_compute[259850]: 2025-10-11 04:06:57.986 2 DEBUG oslo_concurrency.lockutils [None req-25f0fde7-a62d-4988-855e-f4cb83dcef92 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "2b618038-2466-4671-9914-c69aecf8c771" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:06:58 compute-0 nova_compute[259850]: 2025-10-11 04:06:58.590 2 DEBUG nova.compute.manager [req-2543d798-29b3-46cc-9046-26047675a61a req-957bc4dc-e8d3-4195-b6e8-a0b1d790eb0f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Received event network-vif-deleted-18b8cda5-7bec-4b29-838f-24cad68162af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:06:58 compute-0 ceph-mon[74273]: pgmap v1100: 305 pgs: 305 active+clean; 260 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 3.2 MiB/s wr, 162 op/s
Oct 11 04:06:59 compute-0 podman[274838]: 2025-10-11 04:06:59.340838583 +0000 UTC m=+0.049959673 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible)
Oct 11 04:06:59 compute-0 podman[274837]: 2025-10-11 04:06:59.36708319 +0000 UTC m=+0.074455901 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:06:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 180 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.7 MiB/s wr, 295 op/s
Oct 11 04:06:59 compute-0 nova_compute[259850]: 2025-10-11 04:06:59.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:06:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:06:59 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2956014448' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:06:59 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2956014448' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:06:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:06:59 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2956014448' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:06:59 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2956014448' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:07:00 compute-0 nova_compute[259850]: 2025-10-11 04:07:00.667 2 DEBUG nova.compute.manager [req-5d6ca862-22ec-44b8-9dee-1dc5129c0812 req-ec295d07-4d07-4791-a05a-767ccee658fc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Received event network-changed-4bf043b6-53f8-43fd-8fb7-67863dfbfe87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:07:00 compute-0 nova_compute[259850]: 2025-10-11 04:07:00.668 2 DEBUG nova.compute.manager [req-5d6ca862-22ec-44b8-9dee-1dc5129c0812 req-ec295d07-4d07-4791-a05a-767ccee658fc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Refreshing instance network info cache due to event network-changed-4bf043b6-53f8-43fd-8fb7-67863dfbfe87. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:07:00 compute-0 nova_compute[259850]: 2025-10-11 04:07:00.668 2 DEBUG oslo_concurrency.lockutils [req-5d6ca862-22ec-44b8-9dee-1dc5129c0812 req-ec295d07-4d07-4791-a05a-767ccee658fc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-26cb0d26-41fd-4cac-a0b5-1c630a0feba1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:07:00 compute-0 nova_compute[259850]: 2025-10-11 04:07:00.668 2 DEBUG oslo_concurrency.lockutils [req-5d6ca862-22ec-44b8-9dee-1dc5129c0812 req-ec295d07-4d07-4791-a05a-767ccee658fc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-26cb0d26-41fd-4cac-a0b5-1c630a0feba1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:07:00 compute-0 nova_compute[259850]: 2025-10-11 04:07:00.669 2 DEBUG nova.network.neutron [req-5d6ca862-22ec-44b8-9dee-1dc5129c0812 req-ec295d07-4d07-4791-a05a-767ccee658fc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Refreshing network info cache for port 4bf043b6-53f8-43fd-8fb7-67863dfbfe87 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:07:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Oct 11 04:07:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Oct 11 04:07:00 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Oct 11 04:07:00 compute-0 ceph-mon[74273]: pgmap v1101: 305 pgs: 305 active+clean; 180 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.7 MiB/s wr, 295 op/s
Oct 11 04:07:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 180 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 23 KiB/s wr, 162 op/s
Oct 11 04:07:01 compute-0 nova_compute[259850]: 2025-10-11 04:07:01.688 2 DEBUG nova.network.neutron [req-5d6ca862-22ec-44b8-9dee-1dc5129c0812 req-ec295d07-4d07-4791-a05a-767ccee658fc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Updated VIF entry in instance network info cache for port 4bf043b6-53f8-43fd-8fb7-67863dfbfe87. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:07:01 compute-0 nova_compute[259850]: 2025-10-11 04:07:01.689 2 DEBUG nova.network.neutron [req-5d6ca862-22ec-44b8-9dee-1dc5129c0812 req-ec295d07-4d07-4791-a05a-767ccee658fc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Updating instance_info_cache with network_info: [{"id": "4bf043b6-53f8-43fd-8fb7-67863dfbfe87", "address": "fa:16:3e:20:01:d0", "network": {"id": "0ff4b514-1476-4866-8fda-c0b6a7674970", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1613157742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41596b84442c439b86ce2c239af0242c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf043b6-53", "ovs_interfaceid": "4bf043b6-53f8-43fd-8fb7-67863dfbfe87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:07:01 compute-0 nova_compute[259850]: 2025-10-11 04:07:01.708 2 DEBUG oslo_concurrency.lockutils [req-5d6ca862-22ec-44b8-9dee-1dc5129c0812 req-ec295d07-4d07-4791-a05a-767ccee658fc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-26cb0d26-41fd-4cac-a0b5-1c630a0feba1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:07:01 compute-0 ceph-mon[74273]: osdmap e192: 3 total, 3 up, 3 in
Oct 11 04:07:02 compute-0 nova_compute[259850]: 2025-10-11 04:07:02.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:07:02 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3951646056' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:07:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:07:02 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3951646056' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:07:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Oct 11 04:07:02 compute-0 ceph-mon[74273]: pgmap v1103: 305 pgs: 305 active+clean; 180 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 23 KiB/s wr, 162 op/s
Oct 11 04:07:02 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3951646056' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:07:02 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3951646056' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:07:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Oct 11 04:07:03 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Oct 11 04:07:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 134 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 2.7 MiB/s wr, 284 op/s
Oct 11 04:07:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Oct 11 04:07:04 compute-0 ceph-mon[74273]: osdmap e193: 3 total, 3 up, 3 in
Oct 11 04:07:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Oct 11 04:07:04 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Oct 11 04:07:04 compute-0 nova_compute[259850]: 2025-10-11 04:07:04.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:07:05 compute-0 ceph-mon[74273]: pgmap v1105: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 134 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 2.7 MiB/s wr, 284 op/s
Oct 11 04:07:05 compute-0 ceph-mon[74273]: osdmap e194: 3 total, 3 up, 3 in
Oct 11 04:07:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 134 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 3.5 MiB/s wr, 162 op/s
Oct 11 04:07:06 compute-0 ovn_controller[152025]: 2025-10-11T04:07:06Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:20:01:d0 10.100.0.13
Oct 11 04:07:06 compute-0 ovn_controller[152025]: 2025-10-11T04:07:06Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:20:01:d0 10.100.0.13
Oct 11 04:07:07 compute-0 ceph-mon[74273]: pgmap v1107: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 134 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 3.5 MiB/s wr, 162 op/s
Oct 11 04:07:07 compute-0 nova_compute[259850]: 2025-10-11 04:07:07.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 134 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 3.3 MiB/s wr, 149 op/s
Oct 11 04:07:09 compute-0 ceph-mon[74273]: pgmap v1108: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 134 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 3.3 MiB/s wr, 149 op/s
Oct 11 04:07:09 compute-0 nova_compute[259850]: 2025-10-11 04:07:09.472 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760155614.4710677, 2b618038-2466-4671-9914-c69aecf8c771 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:07:09 compute-0 nova_compute[259850]: 2025-10-11 04:07:09.473 2 INFO nova.compute.manager [-] [instance: 2b618038-2466-4671-9914-c69aecf8c771] VM Stopped (Lifecycle Event)
Oct 11 04:07:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 167 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 552 KiB/s rd, 5.9 MiB/s wr, 227 op/s
Oct 11 04:07:09 compute-0 nova_compute[259850]: 2025-10-11 04:07:09.501 2 DEBUG nova.compute.manager [None req-2570f8e6-debb-4b2d-82d8-bc0ba5da70c6 - - - - - -] [instance: 2b618038-2466-4671-9914-c69aecf8c771] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:07:09 compute-0 nova_compute[259850]: 2025-10-11 04:07:09.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:07:10 compute-0 nova_compute[259850]: 2025-10-11 04:07:10.766 2 DEBUG oslo_concurrency.lockutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Acquiring lock "388b5700-0501-4cb9-99cd-6d259e00afa4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:10 compute-0 nova_compute[259850]: 2025-10-11 04:07:10.767 2 DEBUG oslo_concurrency.lockutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:10 compute-0 nova_compute[259850]: 2025-10-11 04:07:10.790 2 DEBUG nova.compute.manager [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:07:10 compute-0 nova_compute[259850]: 2025-10-11 04:07:10.874 2 DEBUG oslo_concurrency.lockutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:10 compute-0 nova_compute[259850]: 2025-10-11 04:07:10.875 2 DEBUG oslo_concurrency.lockutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:10 compute-0 nova_compute[259850]: 2025-10-11 04:07:10.888 2 DEBUG nova.virt.hardware [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:07:10 compute-0 nova_compute[259850]: 2025-10-11 04:07:10.889 2 INFO nova.compute.claims [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:07:11 compute-0 nova_compute[259850]: 2025-10-11 04:07:11.014 2 DEBUG oslo_concurrency.processutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:07:11 compute-0 ceph-mon[74273]: pgmap v1109: 305 pgs: 305 active+clean; 167 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 552 KiB/s rd, 5.9 MiB/s wr, 227 op/s
Oct 11 04:07:11 compute-0 podman[274898]: 2025-10-11 04:07:11.402116927 +0000 UTC m=+0.114260978 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Oct 11 04:07:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 167 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 473 KiB/s rd, 3.6 MiB/s wr, 143 op/s
Oct 11 04:07:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:07:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/489480306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:07:11 compute-0 nova_compute[259850]: 2025-10-11 04:07:11.521 2 DEBUG oslo_concurrency.processutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:07:11 compute-0 nova_compute[259850]: 2025-10-11 04:07:11.525 2 DEBUG nova.compute.provider_tree [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:07:11 compute-0 nova_compute[259850]: 2025-10-11 04:07:11.552 2 DEBUG nova.scheduler.client.report [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:07:11 compute-0 nova_compute[259850]: 2025-10-11 04:07:11.570 2 DEBUG oslo_concurrency.lockutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:11 compute-0 nova_compute[259850]: 2025-10-11 04:07:11.570 2 DEBUG nova.compute.manager [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:07:11 compute-0 nova_compute[259850]: 2025-10-11 04:07:11.624 2 DEBUG nova.compute.manager [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:07:11 compute-0 nova_compute[259850]: 2025-10-11 04:07:11.624 2 DEBUG nova.network.neutron [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:07:11 compute-0 nova_compute[259850]: 2025-10-11 04:07:11.645 2 INFO nova.virt.libvirt.driver [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:07:11 compute-0 nova_compute[259850]: 2025-10-11 04:07:11.668 2 DEBUG nova.compute.manager [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:07:11 compute-0 nova_compute[259850]: 2025-10-11 04:07:11.745 2 DEBUG nova.compute.manager [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:07:11 compute-0 nova_compute[259850]: 2025-10-11 04:07:11.746 2 DEBUG nova.virt.libvirt.driver [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:07:11 compute-0 nova_compute[259850]: 2025-10-11 04:07:11.747 2 INFO nova.virt.libvirt.driver [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Creating image(s)
Oct 11 04:07:11 compute-0 nova_compute[259850]: 2025-10-11 04:07:11.770 2 DEBUG nova.storage.rbd_utils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] rbd image 388b5700-0501-4cb9-99cd-6d259e00afa4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:07:11 compute-0 nova_compute[259850]: 2025-10-11 04:07:11.801 2 DEBUG nova.storage.rbd_utils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] rbd image 388b5700-0501-4cb9-99cd-6d259e00afa4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:07:11 compute-0 nova_compute[259850]: 2025-10-11 04:07:11.826 2 DEBUG nova.storage.rbd_utils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] rbd image 388b5700-0501-4cb9-99cd-6d259e00afa4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:07:11 compute-0 nova_compute[259850]: 2025-10-11 04:07:11.830 2 DEBUG oslo_concurrency.processutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:07:11 compute-0 nova_compute[259850]: 2025-10-11 04:07:11.865 2 DEBUG nova.policy [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c5660041067943deb3c73caa6e62f851', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2783729ed466412aac8ceb01d86a0b12', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:07:11 compute-0 nova_compute[259850]: 2025-10-11 04:07:11.886 2 DEBUG oslo_concurrency.processutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:07:11 compute-0 nova_compute[259850]: 2025-10-11 04:07:11.887 2 DEBUG oslo_concurrency.lockutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Acquiring lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:11 compute-0 nova_compute[259850]: 2025-10-11 04:07:11.888 2 DEBUG oslo_concurrency.lockutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:11 compute-0 nova_compute[259850]: 2025-10-11 04:07:11.888 2 DEBUG oslo_concurrency.lockutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:11 compute-0 nova_compute[259850]: 2025-10-11 04:07:11.914 2 DEBUG nova.storage.rbd_utils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] rbd image 388b5700-0501-4cb9-99cd-6d259e00afa4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:07:11 compute-0 nova_compute[259850]: 2025-10-11 04:07:11.918 2 DEBUG oslo_concurrency.processutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac 388b5700-0501-4cb9-99cd-6d259e00afa4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:07:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/489480306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:07:12 compute-0 nova_compute[259850]: 2025-10-11 04:07:12.158 2 DEBUG oslo_concurrency.processutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac 388b5700-0501-4cb9-99cd-6d259e00afa4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:07:12 compute-0 nova_compute[259850]: 2025-10-11 04:07:12.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:12 compute-0 nova_compute[259850]: 2025-10-11 04:07:12.220 2 DEBUG nova.storage.rbd_utils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] resizing rbd image 388b5700-0501-4cb9-99cd-6d259e00afa4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:07:12 compute-0 nova_compute[259850]: 2025-10-11 04:07:12.326 2 DEBUG nova.objects.instance [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lazy-loading 'migration_context' on Instance uuid 388b5700-0501-4cb9-99cd-6d259e00afa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:07:12 compute-0 nova_compute[259850]: 2025-10-11 04:07:12.348 2 DEBUG nova.virt.libvirt.driver [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:07:12 compute-0 nova_compute[259850]: 2025-10-11 04:07:12.349 2 DEBUG nova.virt.libvirt.driver [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Ensure instance console log exists: /var/lib/nova/instances/388b5700-0501-4cb9-99cd-6d259e00afa4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:07:12 compute-0 nova_compute[259850]: 2025-10-11 04:07:12.350 2 DEBUG oslo_concurrency.lockutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:12 compute-0 nova_compute[259850]: 2025-10-11 04:07:12.351 2 DEBUG oslo_concurrency.lockutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:12 compute-0 nova_compute[259850]: 2025-10-11 04:07:12.351 2 DEBUG oslo_concurrency.lockutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:12 compute-0 nova_compute[259850]: 2025-10-11 04:07:12.492 2 DEBUG nova.network.neutron [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Successfully created port: c1e60e1e-9066-4ce9-9064-2a732e2a407d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:07:12 compute-0 sudo[275094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:07:12 compute-0 sudo[275094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:07:12 compute-0 sudo[275094]: pam_unix(sudo:session): session closed for user root
Oct 11 04:07:13 compute-0 sudo[275119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:07:13 compute-0 sudo[275119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:07:13 compute-0 sudo[275119]: pam_unix(sudo:session): session closed for user root
Oct 11 04:07:13 compute-0 ceph-mon[74273]: pgmap v1110: 305 pgs: 305 active+clean; 167 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 473 KiB/s rd, 3.6 MiB/s wr, 143 op/s
Oct 11 04:07:13 compute-0 sudo[275144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:07:13 compute-0 sudo[275144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:07:13 compute-0 sudo[275144]: pam_unix(sudo:session): session closed for user root
Oct 11 04:07:13 compute-0 sudo[275169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:07:13 compute-0 sudo[275169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:07:13 compute-0 nova_compute[259850]: 2025-10-11 04:07:13.206 2 DEBUG nova.network.neutron [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Successfully updated port: c1e60e1e-9066-4ce9-9064-2a732e2a407d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:07:13 compute-0 nova_compute[259850]: 2025-10-11 04:07:13.226 2 DEBUG oslo_concurrency.lockutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Acquiring lock "refresh_cache-388b5700-0501-4cb9-99cd-6d259e00afa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:07:13 compute-0 nova_compute[259850]: 2025-10-11 04:07:13.226 2 DEBUG oslo_concurrency.lockutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Acquired lock "refresh_cache-388b5700-0501-4cb9-99cd-6d259e00afa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:07:13 compute-0 nova_compute[259850]: 2025-10-11 04:07:13.226 2 DEBUG nova.network.neutron [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:07:13 compute-0 nova_compute[259850]: 2025-10-11 04:07:13.310 2 DEBUG nova.compute.manager [req-c91a25a4-96a3-4764-8e12-33aff2cf0823 req-7e19c25c-1b4a-4377-a37b-47f902ff9690 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Received event network-changed-c1e60e1e-9066-4ce9-9064-2a732e2a407d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:07:13 compute-0 nova_compute[259850]: 2025-10-11 04:07:13.311 2 DEBUG nova.compute.manager [req-c91a25a4-96a3-4764-8e12-33aff2cf0823 req-7e19c25c-1b4a-4377-a37b-47f902ff9690 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Refreshing instance network info cache due to event network-changed-c1e60e1e-9066-4ce9-9064-2a732e2a407d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:07:13 compute-0 nova_compute[259850]: 2025-10-11 04:07:13.311 2 DEBUG oslo_concurrency.lockutils [req-c91a25a4-96a3-4764-8e12-33aff2cf0823 req-7e19c25c-1b4a-4377-a37b-47f902ff9690 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-388b5700-0501-4cb9-99cd-6d259e00afa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:07:13 compute-0 nova_compute[259850]: 2025-10-11 04:07:13.364 2 DEBUG nova.network.neutron [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:07:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 213 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 397 KiB/s rd, 4.7 MiB/s wr, 117 op/s
Oct 11 04:07:13 compute-0 sudo[275169]: pam_unix(sudo:session): session closed for user root
Oct 11 04:07:13 compute-0 sudo[275227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:07:13 compute-0 sudo[275227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:07:13 compute-0 sudo[275227]: pam_unix(sudo:session): session closed for user root
Oct 11 04:07:13 compute-0 sudo[275252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:07:13 compute-0 sudo[275252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:07:13 compute-0 sudo[275252]: pam_unix(sudo:session): session closed for user root
Oct 11 04:07:14 compute-0 sudo[275277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:07:14 compute-0 sudo[275277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:07:14 compute-0 sudo[275277]: pam_unix(sudo:session): session closed for user root
Oct 11 04:07:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Oct 11 04:07:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Oct 11 04:07:14 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Oct 11 04:07:14 compute-0 sudo[275302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 11 04:07:14 compute-0 sudo[275302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.277 2 DEBUG nova.network.neutron [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Updating instance_info_cache with network_info: [{"id": "c1e60e1e-9066-4ce9-9064-2a732e2a407d", "address": "fa:16:3e:22:3d:d4", "network": {"id": "bfa0cc72-c909-48db-80bb-536eb7b52f6e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1615284681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2783729ed466412aac8ceb01d86a0b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1e60e1e-90", "ovs_interfaceid": "c1e60e1e-9066-4ce9-9064-2a732e2a407d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.298 2 DEBUG oslo_concurrency.lockutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Releasing lock "refresh_cache-388b5700-0501-4cb9-99cd-6d259e00afa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.298 2 DEBUG nova.compute.manager [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Instance network_info: |[{"id": "c1e60e1e-9066-4ce9-9064-2a732e2a407d", "address": "fa:16:3e:22:3d:d4", "network": {"id": "bfa0cc72-c909-48db-80bb-536eb7b52f6e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1615284681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2783729ed466412aac8ceb01d86a0b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1e60e1e-90", "ovs_interfaceid": "c1e60e1e-9066-4ce9-9064-2a732e2a407d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.299 2 DEBUG oslo_concurrency.lockutils [req-c91a25a4-96a3-4764-8e12-33aff2cf0823 req-7e19c25c-1b4a-4377-a37b-47f902ff9690 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-388b5700-0501-4cb9-99cd-6d259e00afa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.299 2 DEBUG nova.network.neutron [req-c91a25a4-96a3-4764-8e12-33aff2cf0823 req-7e19c25c-1b4a-4377-a37b-47f902ff9690 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Refreshing network info cache for port c1e60e1e-9066-4ce9-9064-2a732e2a407d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.304 2 DEBUG nova.virt.libvirt.driver [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Start _get_guest_xml network_info=[{"id": "c1e60e1e-9066-4ce9-9064-2a732e2a407d", "address": "fa:16:3e:22:3d:d4", "network": {"id": "bfa0cc72-c909-48db-80bb-536eb7b52f6e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1615284681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2783729ed466412aac8ceb01d86a0b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1e60e1e-90", "ovs_interfaceid": "c1e60e1e-9066-4ce9-9064-2a732e2a407d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'image_id': '1a107e2f-1a9d-4b6f-861d-e64bee7d56be'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.310 2 WARNING nova.virt.libvirt.driver [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.315 2 DEBUG nova.virt.libvirt.host [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.315 2 DEBUG nova.virt.libvirt.host [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.323 2 DEBUG nova.virt.libvirt.host [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.324 2 DEBUG nova.virt.libvirt.host [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.324 2 DEBUG nova.virt.libvirt.driver [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.324 2 DEBUG nova.virt.hardware [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.325 2 DEBUG nova.virt.hardware [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.325 2 DEBUG nova.virt.hardware [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.325 2 DEBUG nova.virt.hardware [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.325 2 DEBUG nova.virt.hardware [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.325 2 DEBUG nova.virt.hardware [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.326 2 DEBUG nova.virt.hardware [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.326 2 DEBUG nova.virt.hardware [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.326 2 DEBUG nova.virt.hardware [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.326 2 DEBUG nova.virt.hardware [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.326 2 DEBUG nova.virt.hardware [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.328 2 DEBUG oslo_concurrency.processutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:07:14 compute-0 sudo[275302]: pam_unix(sudo:session): session closed for user root
Oct 11 04:07:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:07:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:07:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:07:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:07:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:07:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:07:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:07:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:07:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:07:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:07:14 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 051152a4-15d2-4da6-908f-b00d47314a88 does not exist
Oct 11 04:07:14 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev aca02a77-6947-4ec5-a836-c90e0df4aa5c does not exist
Oct 11 04:07:14 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 554c54a5-6787-49b1-bb9c-f9eaed727bc6 does not exist
Oct 11 04:07:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:07:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:07:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:07:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:07:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:07:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:14 compute-0 sudo[275365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:07:14 compute-0 sudo[275365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:07:14 compute-0 sudo[275365]: pam_unix(sudo:session): session closed for user root
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.578 2 DEBUG oslo_concurrency.lockutils [None req-aa309f2f-8802-4fd5-9e86-848570874f03 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Acquiring lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.579 2 DEBUG oslo_concurrency.lockutils [None req-aa309f2f-8802-4fd5-9e86-848570874f03 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.600 2 DEBUG nova.objects.instance [None req-aa309f2f-8802-4fd5-9e86-848570874f03 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Lazy-loading 'flavor' on Instance uuid 26cb0d26-41fd-4cac-a0b5-1c630a0feba1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.627 2 INFO nova.virt.libvirt.driver [None req-aa309f2f-8802-4fd5-9e86-848570874f03 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Ignoring supplied device name: /dev/vdb
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.643 2 DEBUG oslo_concurrency.lockutils [None req-aa309f2f-8802-4fd5-9e86-848570874f03 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:14 compute-0 sudo[275390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:07:14 compute-0 sudo[275390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:07:14 compute-0 sudo[275390]: pam_unix(sudo:session): session closed for user root
Oct 11 04:07:14 compute-0 sudo[275415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:07:14 compute-0 sudo[275415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:07:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:07:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/968574785' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:07:14 compute-0 sudo[275415]: pam_unix(sudo:session): session closed for user root
Oct 11 04:07:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:07:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Oct 11 04:07:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Oct 11 04:07:14 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.774 2 DEBUG oslo_concurrency.processutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.811 2 DEBUG nova.storage.rbd_utils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] rbd image 388b5700-0501-4cb9-99cd-6d259e00afa4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.815 2 DEBUG oslo_concurrency.processutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:07:14 compute-0 sudo[275442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:07:14 compute-0 sudo[275442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.872 2 DEBUG oslo_concurrency.lockutils [None req-aa309f2f-8802-4fd5-9e86-848570874f03 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Acquiring lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.873 2 DEBUG oslo_concurrency.lockutils [None req-aa309f2f-8802-4fd5-9e86-848570874f03 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.874 2 INFO nova.compute.manager [None req-aa309f2f-8802-4fd5-9e86-848570874f03 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Attaching volume edb1a073-56fc-4b59-ae21-06d01b779d30 to /dev/vdb
Oct 11 04:07:14 compute-0 nova_compute[259850]: 2025-10-11 04:07:14.997 2 DEBUG os_brick.utils [None req-aa309f2f-8802-4fd5-9e86-848570874f03 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.000 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.009 675 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.009 675 DEBUG oslo.privsep.daemon [-] privsep: reply[2485ccbc-7414-4e88-9322-7b78da467f9f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.010 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.022 675 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.022 675 DEBUG oslo.privsep.daemon [-] privsep: reply[76ca11cb-4a20-42e4-8094-d85760309a34]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e727c2bd432c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.024 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.038 675 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.038 675 DEBUG oslo.privsep.daemon [-] privsep: reply[33866fcf-509e-4349-9346-00abdfb8a91d]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.040 675 DEBUG oslo.privsep.daemon [-] privsep: reply[76907940-4bdf-4ad1-bd2f-632541b0799b]: (4, 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.040 2 DEBUG oslo_concurrency.processutils [None req-aa309f2f-8802-4fd5-9e86-848570874f03 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.070 2 DEBUG oslo_concurrency.processutils [None req-aa309f2f-8802-4fd5-9e86-848570874f03 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.073 2 DEBUG os_brick.initiator.connectors.lightos [None req-aa309f2f-8802-4fd5-9e86-848570874f03 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.074 2 DEBUG os_brick.initiator.connectors.lightos [None req-aa309f2f-8802-4fd5-9e86-848570874f03 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.074 2 DEBUG os_brick.initiator.connectors.lightos [None req-aa309f2f-8802-4fd5-9e86-848570874f03 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.075 2 DEBUG os_brick.utils [None req-aa309f2f-8802-4fd5-9e86-848570874f03 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] <== get_connector_properties: return (76ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e727c2bd432c', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.075 2 DEBUG nova.virt.block_device [None req-aa309f2f-8802-4fd5-9e86-848570874f03 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Updating existing volume attachment record: 9ae687e1-571b-4e08-b7bc-093abf140514 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:07:15 compute-0 ceph-mon[74273]: pgmap v1111: 305 pgs: 305 active+clean; 213 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 397 KiB/s rd, 4.7 MiB/s wr, 117 op/s
Oct 11 04:07:15 compute-0 ceph-mon[74273]: osdmap e195: 3 total, 3 up, 3 in
Oct 11 04:07:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:07:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:07:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:07:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:07:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:07:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:07:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:07:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:07:15 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/968574785' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:07:15 compute-0 ceph-mon[74273]: osdmap e196: 3 total, 3 up, 3 in
Oct 11 04:07:15 compute-0 podman[275551]: 2025-10-11 04:07:15.226576951 +0000 UTC m=+0.049812989 container create f3abf32b3899d8120abb09a958b093a22103fc0e7b1dcf048ff63d404791add9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:07:15 compute-0 systemd[1]: Started libpod-conmon-f3abf32b3899d8120abb09a958b093a22103fc0e7b1dcf048ff63d404791add9.scope.
Oct 11 04:07:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:07:15 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1619627801' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:07:15 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.296 2 DEBUG oslo_concurrency.processutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.299 2 DEBUG nova.virt.libvirt.vif [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:07:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1516925865',display_name='tempest-VolumesSnapshotTestJSON-instance-1516925865',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1516925865',id=8,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCqCtLkaBuP2+T82JevRpLfW+XDuidnc8c74aRC6BKydU2gPEclXEWf/mgVaUQf4ae+qFmwaHq0kdMt+x79T/LDdPi0iOEprVv7WxGP4WYENsjiYxUPMO1UNuH+JM4CShA==',key_name='tempest-keypair-1835134019',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2783729ed466412aac8ceb01d86a0b12',ramdisk_id='',reservation_id='r-2dmaj0zh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-180407200',owner_user_name='tempest-VolumesSnapshotTestJSON-180407200-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:07:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5660041067943deb3c73caa6e62f851',uuid=388b5700-0501-4cb9-99cd-6d259e00afa4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1e60e1e-9066-4ce9-9064-2a732e2a407d", "address": "fa:16:3e:22:3d:d4", "network": {"id": "bfa0cc72-c909-48db-80bb-536eb7b52f6e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1615284681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2783729ed466412aac8ceb01d86a0b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1e60e1e-90", "ovs_interfaceid": "c1e60e1e-9066-4ce9-9064-2a732e2a407d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.300 2 DEBUG nova.network.os_vif_util [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Converting VIF {"id": "c1e60e1e-9066-4ce9-9064-2a732e2a407d", "address": "fa:16:3e:22:3d:d4", "network": {"id": "bfa0cc72-c909-48db-80bb-536eb7b52f6e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1615284681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2783729ed466412aac8ceb01d86a0b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1e60e1e-90", "ovs_interfaceid": "c1e60e1e-9066-4ce9-9064-2a732e2a407d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.301 2 DEBUG nova.network.os_vif_util [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:3d:d4,bridge_name='br-int',has_traffic_filtering=True,id=c1e60e1e-9066-4ce9-9064-2a732e2a407d,network=Network(bfa0cc72-c909-48db-80bb-536eb7b52f6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1e60e1e-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:07:15 compute-0 podman[275551]: 2025-10-11 04:07:15.207908097 +0000 UTC m=+0.031144185 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.302 2 DEBUG nova.objects.instance [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lazy-loading 'pci_devices' on Instance uuid 388b5700-0501-4cb9-99cd-6d259e00afa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:07:15 compute-0 podman[275551]: 2025-10-11 04:07:15.30634418 +0000 UTC m=+0.129580238 container init f3abf32b3899d8120abb09a958b093a22103fc0e7b1dcf048ff63d404791add9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:07:15 compute-0 podman[275551]: 2025-10-11 04:07:15.314078947 +0000 UTC m=+0.137314975 container start f3abf32b3899d8120abb09a958b093a22103fc0e7b1dcf048ff63d404791add9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kare, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 04:07:15 compute-0 podman[275551]: 2025-10-11 04:07:15.316866526 +0000 UTC m=+0.140102594 container attach f3abf32b3899d8120abb09a958b093a22103fc0e7b1dcf048ff63d404791add9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kare, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:07:15 compute-0 romantic_kare[275568]: 167 167
Oct 11 04:07:15 compute-0 systemd[1]: libpod-f3abf32b3899d8120abb09a958b093a22103fc0e7b1dcf048ff63d404791add9.scope: Deactivated successfully.
Oct 11 04:07:15 compute-0 podman[275551]: 2025-10-11 04:07:15.319563301 +0000 UTC m=+0.142799349 container died f3abf32b3899d8120abb09a958b093a22103fc0e7b1dcf048ff63d404791add9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.321 2 DEBUG nova.virt.libvirt.driver [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:07:15 compute-0 nova_compute[259850]:   <uuid>388b5700-0501-4cb9-99cd-6d259e00afa4</uuid>
Oct 11 04:07:15 compute-0 nova_compute[259850]:   <name>instance-00000008</name>
Oct 11 04:07:15 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:07:15 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:07:15 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <nova:name>tempest-VolumesSnapshotTestJSON-instance-1516925865</nova:name>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:07:14</nova:creationTime>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:07:15 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:07:15 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:07:15 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:07:15 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:07:15 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:07:15 compute-0 nova_compute[259850]:         <nova:user uuid="c5660041067943deb3c73caa6e62f851">tempest-VolumesSnapshotTestJSON-180407200-project-member</nova:user>
Oct 11 04:07:15 compute-0 nova_compute[259850]:         <nova:project uuid="2783729ed466412aac8ceb01d86a0b12">tempest-VolumesSnapshotTestJSON-180407200</nova:project>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <nova:root type="image" uuid="1a107e2f-1a9d-4b6f-861d-e64bee7d56be"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <nova:ports>
Oct 11 04:07:15 compute-0 nova_compute[259850]:         <nova:port uuid="c1e60e1e-9066-4ce9-9064-2a732e2a407d">
Oct 11 04:07:15 compute-0 nova_compute[259850]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:         </nova:port>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       </nova:ports>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:07:15 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:07:15 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <system>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <entry name="serial">388b5700-0501-4cb9-99cd-6d259e00afa4</entry>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <entry name="uuid">388b5700-0501-4cb9-99cd-6d259e00afa4</entry>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     </system>
Oct 11 04:07:15 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:07:15 compute-0 nova_compute[259850]:   <os>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:   </os>
Oct 11 04:07:15 compute-0 nova_compute[259850]:   <features>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:   </features>
Oct 11 04:07:15 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:07:15 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:07:15 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/388b5700-0501-4cb9-99cd-6d259e00afa4_disk">
Oct 11 04:07:15 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       </source>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:07:15 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/388b5700-0501-4cb9-99cd-6d259e00afa4_disk.config">
Oct 11 04:07:15 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       </source>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:07:15 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <interface type="ethernet">
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <mac address="fa:16:3e:22:3d:d4"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <mtu size="1442"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <target dev="tapc1e60e1e-90"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     </interface>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/388b5700-0501-4cb9-99cd-6d259e00afa4/console.log" append="off"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <video>
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     </video>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:07:15 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:07:15 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:07:15 compute-0 nova_compute[259850]: </domain>
Oct 11 04:07:15 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.322 2 DEBUG nova.compute.manager [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Preparing to wait for external event network-vif-plugged-c1e60e1e-9066-4ce9-9064-2a732e2a407d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.322 2 DEBUG oslo_concurrency.lockutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Acquiring lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.323 2 DEBUG oslo_concurrency.lockutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.323 2 DEBUG oslo_concurrency.lockutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.324 2 DEBUG nova.virt.libvirt.vif [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:07:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1516925865',display_name='tempest-VolumesSnapshotTestJSON-instance-1516925865',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1516925865',id=8,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCqCtLkaBuP2+T82JevRpLfW+XDuidnc8c74aRC6BKydU2gPEclXEWf/mgVaUQf4ae+qFmwaHq0kdMt+x79T/LDdPi0iOEprVv7WxGP4WYENsjiYxUPMO1UNuH+JM4CShA==',key_name='tempest-keypair-1835134019',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2783729ed466412aac8ceb01d86a0b12',ramdisk_id='',reservation_id='r-2dmaj0zh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-180407200',owner_user_name='tempest-VolumesSnapshotTestJSON-180407200-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:07:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5660041067943deb3c73caa6e62f851',uuid=388b5700-0501-4cb9-99cd-6d259e00afa4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1e60e1e-9066-4ce9-9064-2a732e2a407d", "address": "fa:16:3e:22:3d:d4", "network": {"id": "bfa0cc72-c909-48db-80bb-536eb7b52f6e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1615284681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2783729ed466412aac8ceb01d86a0b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1e60e1e-90", "ovs_interfaceid": "c1e60e1e-9066-4ce9-9064-2a732e2a407d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.324 2 DEBUG nova.network.os_vif_util [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Converting VIF {"id": "c1e60e1e-9066-4ce9-9064-2a732e2a407d", "address": "fa:16:3e:22:3d:d4", "network": {"id": "bfa0cc72-c909-48db-80bb-536eb7b52f6e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1615284681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2783729ed466412aac8ceb01d86a0b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1e60e1e-90", "ovs_interfaceid": "c1e60e1e-9066-4ce9-9064-2a732e2a407d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.324 2 DEBUG nova.network.os_vif_util [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:3d:d4,bridge_name='br-int',has_traffic_filtering=True,id=c1e60e1e-9066-4ce9-9064-2a732e2a407d,network=Network(bfa0cc72-c909-48db-80bb-536eb7b52f6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1e60e1e-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.325 2 DEBUG os_vif [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:3d:d4,bridge_name='br-int',has_traffic_filtering=True,id=c1e60e1e-9066-4ce9-9064-2a732e2a407d,network=Network(bfa0cc72-c909-48db-80bb-536eb7b52f6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1e60e1e-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.325 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.326 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.330 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1e60e1e-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.331 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc1e60e1e-90, col_values=(('external_ids', {'iface-id': 'c1e60e1e-9066-4ce9-9064-2a732e2a407d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:22:3d:d4', 'vm-uuid': '388b5700-0501-4cb9-99cd-6d259e00afa4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:15 compute-0 NetworkManager[44920]: <info>  [1760155635.3336] manager: (tapc1e60e1e-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.348 2 INFO os_vif [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:3d:d4,bridge_name='br-int',has_traffic_filtering=True,id=c1e60e1e-9066-4ce9-9064-2a732e2a407d,network=Network(bfa0cc72-c909-48db-80bb-536eb7b52f6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1e60e1e-90')
Oct 11 04:07:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-975c93220d209c7b27e0aafcae158c4756f5c33f7023bd103722aa7b75319b8f-merged.mount: Deactivated successfully.
Oct 11 04:07:15 compute-0 podman[275551]: 2025-10-11 04:07:15.368815453 +0000 UTC m=+0.192051531 container remove f3abf32b3899d8120abb09a958b093a22103fc0e7b1dcf048ff63d404791add9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 04:07:15 compute-0 systemd[1]: libpod-conmon-f3abf32b3899d8120abb09a958b093a22103fc0e7b1dcf048ff63d404791add9.scope: Deactivated successfully.
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.414 2 DEBUG nova.virt.libvirt.driver [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.415 2 DEBUG nova.virt.libvirt.driver [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.415 2 DEBUG nova.virt.libvirt.driver [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] No VIF found with MAC fa:16:3e:22:3d:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.416 2 INFO nova.virt.libvirt.driver [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Using config drive
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.442 2 DEBUG nova.storage.rbd_utils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] rbd image 388b5700-0501-4cb9-99cd-6d259e00afa4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:07:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 213 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 496 KiB/s rd, 5.9 MiB/s wr, 147 op/s
Oct 11 04:07:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:07:15 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/457126879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:07:15 compute-0 podman[275614]: 2025-10-11 04:07:15.643280867 +0000 UTC m=+0.074321247 container create 03e61a8737ddc8b7c3f8b8b1fbb4875c035816481b4d6973405cd38d8522a4d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bohr, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 04:07:15 compute-0 systemd[1]: Started libpod-conmon-03e61a8737ddc8b7c3f8b8b1fbb4875c035816481b4d6973405cd38d8522a4d7.scope.
Oct 11 04:07:15 compute-0 podman[275614]: 2025-10-11 04:07:15.612920565 +0000 UTC m=+0.043961025 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:07:15 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:07:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9239b446d1ce39a5cf0629847d030e01b0b977695d2448351bd9f1bfd4c546b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:07:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9239b446d1ce39a5cf0629847d030e01b0b977695d2448351bd9f1bfd4c546b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:07:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9239b446d1ce39a5cf0629847d030e01b0b977695d2448351bd9f1bfd4c546b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:07:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9239b446d1ce39a5cf0629847d030e01b0b977695d2448351bd9f1bfd4c546b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:07:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9239b446d1ce39a5cf0629847d030e01b0b977695d2448351bd9f1bfd4c546b2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.746 2 DEBUG nova.objects.instance [None req-aa309f2f-8802-4fd5-9e86-848570874f03 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Lazy-loading 'flavor' on Instance uuid 26cb0d26-41fd-4cac-a0b5-1c630a0feba1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:07:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Oct 11 04:07:15 compute-0 podman[275614]: 2025-10-11 04:07:15.760028545 +0000 UTC m=+0.191069015 container init 03e61a8737ddc8b7c3f8b8b1fbb4875c035816481b4d6973405cd38d8522a4d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 04:07:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Oct 11 04:07:15 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.770 2 DEBUG nova.virt.libvirt.driver [None req-aa309f2f-8802-4fd5-9e86-848570874f03 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Attempting to attach volume edb1a073-56fc-4b59-ae21-06d01b779d30 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.773 2 DEBUG nova.virt.libvirt.guest [None req-aa309f2f-8802-4fd5-9e86-848570874f03 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] attach device xml: <disk type="network" device="disk">
Oct 11 04:07:15 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-edb1a073-56fc-4b59-ae21-06d01b779d30">
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:   </source>
Oct 11 04:07:15 compute-0 nova_compute[259850]:   <auth username="openstack">
Oct 11 04:07:15 compute-0 nova_compute[259850]:     <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:   </auth>
Oct 11 04:07:15 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:07:15 compute-0 nova_compute[259850]:   <serial>edb1a073-56fc-4b59-ae21-06d01b779d30</serial>
Oct 11 04:07:15 compute-0 nova_compute[259850]: </disk>
Oct 11 04:07:15 compute-0 nova_compute[259850]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 11 04:07:15 compute-0 podman[275614]: 2025-10-11 04:07:15.77804734 +0000 UTC m=+0.209087720 container start 03e61a8737ddc8b7c3f8b8b1fbb4875c035816481b4d6973405cd38d8522a4d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bohr, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:07:15 compute-0 podman[275614]: 2025-10-11 04:07:15.784780919 +0000 UTC m=+0.215821339 container attach 03e61a8737ddc8b7c3f8b8b1fbb4875c035816481b4d6973405cd38d8522a4d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bohr, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.880 2 DEBUG nova.virt.libvirt.driver [None req-aa309f2f-8802-4fd5-9e86-848570874f03 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.881 2 DEBUG nova.virt.libvirt.driver [None req-aa309f2f-8802-4fd5-9e86-848570874f03 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.882 2 DEBUG nova.virt.libvirt.driver [None req-aa309f2f-8802-4fd5-9e86-848570874f03 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.882 2 DEBUG nova.virt.libvirt.driver [None req-aa309f2f-8802-4fd5-9e86-848570874f03 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] No VIF found with MAC fa:16:3e:20:01:d0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.925 2 INFO nova.virt.libvirt.driver [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Creating config drive at /var/lib/nova/instances/388b5700-0501-4cb9-99cd-6d259e00afa4/disk.config
Oct 11 04:07:15 compute-0 nova_compute[259850]: 2025-10-11 04:07:15.932 2 DEBUG oslo_concurrency.processutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/388b5700-0501-4cb9-99cd-6d259e00afa4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprd09vqbh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:07:16 compute-0 nova_compute[259850]: 2025-10-11 04:07:16.073 2 DEBUG oslo_concurrency.processutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/388b5700-0501-4cb9-99cd-6d259e00afa4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprd09vqbh" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:07:16 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1619627801' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:07:16 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/457126879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:07:16 compute-0 ceph-mon[74273]: osdmap e197: 3 total, 3 up, 3 in
Oct 11 04:07:16 compute-0 nova_compute[259850]: 2025-10-11 04:07:16.115 2 DEBUG nova.storage.rbd_utils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] rbd image 388b5700-0501-4cb9-99cd-6d259e00afa4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:07:16 compute-0 nova_compute[259850]: 2025-10-11 04:07:16.119 2 DEBUG oslo_concurrency.processutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/388b5700-0501-4cb9-99cd-6d259e00afa4/disk.config 388b5700-0501-4cb9-99cd-6d259e00afa4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:07:16 compute-0 nova_compute[259850]: 2025-10-11 04:07:16.159 2 DEBUG nova.network.neutron [req-c91a25a4-96a3-4764-8e12-33aff2cf0823 req-7e19c25c-1b4a-4377-a37b-47f902ff9690 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Updated VIF entry in instance network info cache for port c1e60e1e-9066-4ce9-9064-2a732e2a407d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:07:16 compute-0 nova_compute[259850]: 2025-10-11 04:07:16.160 2 DEBUG nova.network.neutron [req-c91a25a4-96a3-4764-8e12-33aff2cf0823 req-7e19c25c-1b4a-4377-a37b-47f902ff9690 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Updating instance_info_cache with network_info: [{"id": "c1e60e1e-9066-4ce9-9064-2a732e2a407d", "address": "fa:16:3e:22:3d:d4", "network": {"id": "bfa0cc72-c909-48db-80bb-536eb7b52f6e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1615284681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2783729ed466412aac8ceb01d86a0b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1e60e1e-90", "ovs_interfaceid": "c1e60e1e-9066-4ce9-9064-2a732e2a407d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:07:16 compute-0 nova_compute[259850]: 2025-10-11 04:07:16.175 2 DEBUG oslo_concurrency.lockutils [None req-aa309f2f-8802-4fd5-9e86-848570874f03 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.302s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:16 compute-0 nova_compute[259850]: 2025-10-11 04:07:16.204 2 DEBUG oslo_concurrency.lockutils [req-c91a25a4-96a3-4764-8e12-33aff2cf0823 req-7e19c25c-1b4a-4377-a37b-47f902ff9690 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-388b5700-0501-4cb9-99cd-6d259e00afa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:07:16 compute-0 nova_compute[259850]: 2025-10-11 04:07:16.292 2 DEBUG oslo_concurrency.processutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/388b5700-0501-4cb9-99cd-6d259e00afa4/disk.config 388b5700-0501-4cb9-99cd-6d259e00afa4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:07:16 compute-0 nova_compute[259850]: 2025-10-11 04:07:16.293 2 INFO nova.virt.libvirt.driver [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Deleting local config drive /var/lib/nova/instances/388b5700-0501-4cb9-99cd-6d259e00afa4/disk.config because it was imported into RBD.
Oct 11 04:07:16 compute-0 kernel: tapc1e60e1e-90: entered promiscuous mode
Oct 11 04:07:16 compute-0 NetworkManager[44920]: <info>  [1760155636.3574] manager: (tapc1e60e1e-90): new Tun device (/org/freedesktop/NetworkManager/Devices/54)
Oct 11 04:07:16 compute-0 ovn_controller[152025]: 2025-10-11T04:07:16Z|00076|binding|INFO|Claiming lport c1e60e1e-9066-4ce9-9064-2a732e2a407d for this chassis.
Oct 11 04:07:16 compute-0 nova_compute[259850]: 2025-10-11 04:07:16.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:16 compute-0 ovn_controller[152025]: 2025-10-11T04:07:16Z|00077|binding|INFO|c1e60e1e-9066-4ce9-9064-2a732e2a407d: Claiming fa:16:3e:22:3d:d4 10.100.0.11
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:16.367 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:3d:d4 10.100.0.11'], port_security=['fa:16:3e:22:3d:d4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '388b5700-0501-4cb9-99cd-6d259e00afa4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bfa0cc72-c909-48db-80bb-536eb7b52f6e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2783729ed466412aac8ceb01d86a0b12', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bdf40caf-e662-46c0-a51e-c7e0a77b4c10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55b0cbfb-9e3c-469a-b06d-75c45688b585, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=c1e60e1e-9066-4ce9-9064-2a732e2a407d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:16.369 161902 INFO neutron.agent.ovn.metadata.agent [-] Port c1e60e1e-9066-4ce9-9064-2a732e2a407d in datapath bfa0cc72-c909-48db-80bb-536eb7b52f6e bound to our chassis
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:16.370 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bfa0cc72-c909-48db-80bb-536eb7b52f6e
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:16.386 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[b542f6f8-b39d-4829-b3e1-aa99c1f5eac0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:16.387 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbfa0cc72-c1 in ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:16.390 267637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbfa0cc72-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:16.390 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[39283f21-235e-48b7-a2a1-f7788f311b00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:16.391 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[61d7fb94-b91d-4315-bb38-3599bb7e49f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:16 compute-0 nova_compute[259850]: 2025-10-11 04:07:16.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:16 compute-0 ovn_controller[152025]: 2025-10-11T04:07:16Z|00078|binding|INFO|Setting lport c1e60e1e-9066-4ce9-9064-2a732e2a407d ovn-installed in OVS
Oct 11 04:07:16 compute-0 ovn_controller[152025]: 2025-10-11T04:07:16Z|00079|binding|INFO|Setting lport c1e60e1e-9066-4ce9-9064-2a732e2a407d up in Southbound
Oct 11 04:07:16 compute-0 nova_compute[259850]: 2025-10-11 04:07:16.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:16.414 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[e65a965f-ff39-4796-af71-7b1f70e2c01c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:16 compute-0 systemd-udevd[275711]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:07:16 compute-0 systemd-machined[214869]: New machine qemu-8-instance-00000008.
Oct 11 04:07:16 compute-0 NetworkManager[44920]: <info>  [1760155636.4288] device (tapc1e60e1e-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:07:16 compute-0 NetworkManager[44920]: <info>  [1760155636.4322] device (tapc1e60e1e-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:07:16 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:16.439 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[2183d74b-b97b-4150-af05-c2d5883e7c7a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:16.478 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[bf78d95a-ae67-4fa3-9ed1-0e02f2829103]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:16 compute-0 systemd-udevd[275714]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:07:16 compute-0 NetworkManager[44920]: <info>  [1760155636.4867] manager: (tapbfa0cc72-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/55)
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:16.484 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[6d1dfcf8-367d-45c8-bfd9-6a2f06682db4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:16.537 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[0c0ef7c1-3519-4104-8c34-d294bd8b5d95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:16.540 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[387081ec-fe3b-42f9-a9e7-1d2cb0059358]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:16 compute-0 NetworkManager[44920]: <info>  [1760155636.5689] device (tapbfa0cc72-c0): carrier: link connected
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:16.578 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[dc5943bf-92b8-435c-afe2-b903ee1bc4ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:16 compute-0 nova_compute[259850]: 2025-10-11 04:07:16.596 2 DEBUG nova.compute.manager [req-6998358a-c196-4a51-bb7d-4f89f4f6d7d4 req-1b7ff902-a963-4246-939f-cbf75cbf1698 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Received event network-vif-plugged-c1e60e1e-9066-4ce9-9064-2a732e2a407d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:07:16 compute-0 nova_compute[259850]: 2025-10-11 04:07:16.596 2 DEBUG oslo_concurrency.lockutils [req-6998358a-c196-4a51-bb7d-4f89f4f6d7d4 req-1b7ff902-a963-4246-939f-cbf75cbf1698 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:16 compute-0 nova_compute[259850]: 2025-10-11 04:07:16.597 2 DEBUG oslo_concurrency.lockutils [req-6998358a-c196-4a51-bb7d-4f89f4f6d7d4 req-1b7ff902-a963-4246-939f-cbf75cbf1698 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:16 compute-0 nova_compute[259850]: 2025-10-11 04:07:16.598 2 DEBUG oslo_concurrency.lockutils [req-6998358a-c196-4a51-bb7d-4f89f4f6d7d4 req-1b7ff902-a963-4246-939f-cbf75cbf1698 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:16 compute-0 nova_compute[259850]: 2025-10-11 04:07:16.598 2 DEBUG nova.compute.manager [req-6998358a-c196-4a51-bb7d-4f89f4f6d7d4 req-1b7ff902-a963-4246-939f-cbf75cbf1698 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Processing event network-vif-plugged-c1e60e1e-9066-4ce9-9064-2a732e2a407d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:16.596 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[fdfd8fdf-3e87-42be-a514-3b52ee15c529]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbfa0cc72-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0c:c6:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402160, 'reachable_time': 24831, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275750, 'error': None, 'target': 'ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:16.619 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[b6e76085-d68c-4c92-9510-2518560ca731]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0c:c6bc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402160, 'tstamp': 402160}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275751, 'error': None, 'target': 'ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:16.644 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[1911134f-122c-492d-82a5-d95028092e70]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbfa0cc72-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0c:c6:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402160, 'reachable_time': 24831, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275754, 'error': None, 'target': 'ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:16.692 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[1b9ffd0b-7931-41f2-8086-98ff4810e806]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:16.779 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[1547fc24-9c3f-4c68-8ad9-2845e50d24c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:16.780 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbfa0cc72-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:16.781 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:16.781 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbfa0cc72-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:07:16 compute-0 NetworkManager[44920]: <info>  [1760155636.7831] manager: (tapbfa0cc72-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Oct 11 04:07:16 compute-0 nova_compute[259850]: 2025-10-11 04:07:16.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:16 compute-0 kernel: tapbfa0cc72-c0: entered promiscuous mode
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:16.786 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbfa0cc72-c0, col_values=(('external_ids', {'iface-id': '0e0216bc-6b9d-4e75-bae2-b1d26e9e502e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:07:16 compute-0 nova_compute[259850]: 2025-10-11 04:07:16.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:16 compute-0 ovn_controller[152025]: 2025-10-11T04:07:16Z|00080|binding|INFO|Releasing lport 0e0216bc-6b9d-4e75-bae2-b1d26e9e502e from this chassis (sb_readonly=0)
Oct 11 04:07:16 compute-0 nova_compute[259850]: 2025-10-11 04:07:16.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:16.791 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bfa0cc72-c909-48db-80bb-536eb7b52f6e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bfa0cc72-c909-48db-80bb-536eb7b52f6e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:16.794 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[13dfec34-c487-4156-8dbc-7eb3ab27404b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:16.795 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: global
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]:     log         /dev/log local0 debug
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]:     log-tag     haproxy-metadata-proxy-bfa0cc72-c909-48db-80bb-536eb7b52f6e
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]:     user        root
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]:     group       root
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]:     maxconn     1024
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]:     pidfile     /var/lib/neutron/external/pids/bfa0cc72-c909-48db-80bb-536eb7b52f6e.pid.haproxy
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]:     daemon
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: defaults
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]:     log global
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]:     mode http
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]:     option httplog
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]:     option dontlognull
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]:     option http-server-close
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]:     option forwardfor
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]:     retries                 3
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]:     timeout http-request    30s
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]:     timeout connect         30s
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]:     timeout client          32s
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]:     timeout server          32s
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]:     timeout http-keep-alive 30s
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: listen listener
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]:     bind 169.254.169.254:80
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]:     http-request add-header X-OVN-Network-ID bfa0cc72-c909-48db-80bb-536eb7b52f6e
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 04:07:16 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:16.796 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e', 'env', 'PROCESS_TAG=haproxy-bfa0cc72-c909-48db-80bb-536eb7b52f6e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bfa0cc72-c909-48db-80bb-536eb7b52f6e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 04:07:16 compute-0 nova_compute[259850]: 2025-10-11 04:07:16.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:16 compute-0 affectionate_bohr[275631]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:07:16 compute-0 affectionate_bohr[275631]: --> relative data size: 1.0
Oct 11 04:07:16 compute-0 affectionate_bohr[275631]: --> All data devices are unavailable
Oct 11 04:07:16 compute-0 systemd[1]: libpod-03e61a8737ddc8b7c3f8b8b1fbb4875c035816481b4d6973405cd38d8522a4d7.scope: Deactivated successfully.
Oct 11 04:07:16 compute-0 systemd[1]: libpod-03e61a8737ddc8b7c3f8b8b1fbb4875c035816481b4d6973405cd38d8522a4d7.scope: Consumed 1.068s CPU time.
Oct 11 04:07:16 compute-0 podman[275614]: 2025-10-11 04:07:16.935031358 +0000 UTC m=+1.366071738 container died 03e61a8737ddc8b7c3f8b8b1fbb4875c035816481b4d6973405cd38d8522a4d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bohr, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:07:16 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:07:16 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 11K writes, 46K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 3626 syncs, 3.23 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5949 writes, 22K keys, 5949 commit groups, 1.0 writes per commit group, ingest: 11.69 MB, 0.02 MB/s
                                           Interval WAL: 5949 writes, 2659 syncs, 2.24 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 04:07:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-9239b446d1ce39a5cf0629847d030e01b0b977695d2448351bd9f1bfd4c546b2-merged.mount: Deactivated successfully.
Oct 11 04:07:16 compute-0 podman[275614]: 2025-10-11 04:07:16.999048885 +0000 UTC m=+1.430089255 container remove 03e61a8737ddc8b7c3f8b8b1fbb4875c035816481b4d6973405cd38d8522a4d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bohr, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 04:07:17 compute-0 systemd[1]: libpod-conmon-03e61a8737ddc8b7c3f8b8b1fbb4875c035816481b4d6973405cd38d8522a4d7.scope: Deactivated successfully.
Oct 11 04:07:17 compute-0 sudo[275442]: pam_unix(sudo:session): session closed for user root
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.061 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.061 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.061 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.061 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.084 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.084 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.084 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.084 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.085 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:07:17 compute-0 sudo[275832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:07:17 compute-0 ceph-mon[74273]: pgmap v1114: 305 pgs: 305 active+clean; 213 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 496 KiB/s rd, 5.9 MiB/s wr, 147 op/s
Oct 11 04:07:17 compute-0 sudo[275832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:07:17 compute-0 sudo[275832]: pam_unix(sudo:session): session closed for user root
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:17 compute-0 sudo[275877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:07:17 compute-0 sudo[275877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:07:17 compute-0 sudo[275877]: pam_unix(sudo:session): session closed for user root
Oct 11 04:07:17 compute-0 podman[275878]: 2025-10-11 04:07:17.181970809 +0000 UTC m=+0.054835840 container create aa2834ec2b2907c46876b9a994a1535364f9d2d5b15352f714b838966666e514 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 11 04:07:17 compute-0 systemd[1]: Started libpod-conmon-aa2834ec2b2907c46876b9a994a1535364f9d2d5b15352f714b838966666e514.scope.
Oct 11 04:07:17 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:07:17 compute-0 sudo[275914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:07:17 compute-0 sudo[275914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:07:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bec204985a50fe418888bb5afd3a33c0d3dd65aa8cd08290d8a0462b3e5f00af/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:07:17 compute-0 sudo[275914]: pam_unix(sudo:session): session closed for user root
Oct 11 04:07:17 compute-0 podman[275878]: 2025-10-11 04:07:17.155083805 +0000 UTC m=+0.027948846 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.258 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155637.257732, 388b5700-0501-4cb9-99cd-6d259e00afa4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.259 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] VM Started (Lifecycle Event)
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.262 2 DEBUG nova.compute.manager [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:07:17 compute-0 podman[275878]: 2025-10-11 04:07:17.270611768 +0000 UTC m=+0.143476879 container init aa2834ec2b2907c46876b9a994a1535364f9d2d5b15352f714b838966666e514 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:07:17 compute-0 podman[275878]: 2025-10-11 04:07:17.280259088 +0000 UTC m=+0.153124119 container start aa2834ec2b2907c46876b9a994a1535364f9d2d5b15352f714b838966666e514 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.280 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.281 2 DEBUG nova.virt.libvirt.driver [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.285 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.287 2 INFO nova.virt.libvirt.driver [-] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Instance spawned successfully.
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.287 2 DEBUG nova.virt.libvirt.driver [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:07:17 compute-0 neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e[275955]: [NOTICE]   (275981) : New worker (275989) forked
Oct 11 04:07:17 compute-0 neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e[275955]: [NOTICE]   (275981) : Loading success.
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.313 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.314 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155637.2578473, 388b5700-0501-4cb9-99cd-6d259e00afa4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:07:17 compute-0 sudo[275963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.314 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] VM Paused (Lifecycle Event)
Oct 11 04:07:17 compute-0 sudo[275963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.322 2 DEBUG nova.virt.libvirt.driver [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.322 2 DEBUG nova.virt.libvirt.driver [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.323 2 DEBUG nova.virt.libvirt.driver [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.323 2 DEBUG nova.virt.libvirt.driver [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.323 2 DEBUG nova.virt.libvirt.driver [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.324 2 DEBUG nova.virt.libvirt.driver [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.333 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.338 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155637.2647145, 388b5700-0501-4cb9-99cd-6d259e00afa4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.338 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] VM Resumed (Lifecycle Event)
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.354 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.357 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.382 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.389 2 INFO nova.compute.manager [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Took 5.64 seconds to spawn the instance on the hypervisor.
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.390 2 DEBUG nova.compute.manager [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.446 2 INFO nova.compute.manager [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Took 6.60 seconds to build instance.
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.470 2 DEBUG oslo_concurrency.lockutils [None req-d6fcebc5-358d-4f4d-8f1e-340f5d7759df c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 213 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.5 MiB/s wr, 56 op/s
Oct 11 04:07:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:07:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2185953177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.594 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.666 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.666 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.666 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.671 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.671 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:07:17 compute-0 podman[276041]: 2025-10-11 04:07:17.683340043 +0000 UTC m=+0.046767174 container create 4ff6103ac7b7e1352c398d5bc8655e52c9284d56ab4b06a71433e1932ce914d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:07:17 compute-0 systemd[1]: Started libpod-conmon-4ff6103ac7b7e1352c398d5bc8655e52c9284d56ab4b06a71433e1932ce914d1.scope.
Oct 11 04:07:17 compute-0 podman[276040]: 2025-10-11 04:07:17.732997137 +0000 UTC m=+0.089798492 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 11 04:07:17 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:07:17 compute-0 podman[276041]: 2025-10-11 04:07:17.665451741 +0000 UTC m=+0.028878882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:07:17 compute-0 podman[276041]: 2025-10-11 04:07:17.759530042 +0000 UTC m=+0.122957183 container init 4ff6103ac7b7e1352c398d5bc8655e52c9284d56ab4b06a71433e1932ce914d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kalam, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:07:17 compute-0 podman[276041]: 2025-10-11 04:07:17.769548973 +0000 UTC m=+0.132976084 container start 4ff6103ac7b7e1352c398d5bc8655e52c9284d56ab4b06a71433e1932ce914d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:07:17 compute-0 podman[276041]: 2025-10-11 04:07:17.773050501 +0000 UTC m=+0.136477652 container attach 4ff6103ac7b7e1352c398d5bc8655e52c9284d56ab4b06a71433e1932ce914d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kalam, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:07:17 compute-0 focused_kalam[276075]: 167 167
Oct 11 04:07:17 compute-0 systemd[1]: libpod-4ff6103ac7b7e1352c398d5bc8655e52c9284d56ab4b06a71433e1932ce914d1.scope: Deactivated successfully.
Oct 11 04:07:17 compute-0 conmon[276075]: conmon 4ff6103ac7b7e1352c39 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4ff6103ac7b7e1352c398d5bc8655e52c9284d56ab4b06a71433e1932ce914d1.scope/container/memory.events
Oct 11 04:07:17 compute-0 podman[276041]: 2025-10-11 04:07:17.78049424 +0000 UTC m=+0.143921361 container died 4ff6103ac7b7e1352c398d5bc8655e52c9284d56ab4b06a71433e1932ce914d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 04:07:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e1bf0381c3bfb02ba139de627f85a0c487ce15c030de49f1bbd3a4d395b8a98-merged.mount: Deactivated successfully.
Oct 11 04:07:17 compute-0 podman[276041]: 2025-10-11 04:07:17.825087322 +0000 UTC m=+0.188514443 container remove 4ff6103ac7b7e1352c398d5bc8655e52c9284d56ab4b06a71433e1932ce914d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:07:17 compute-0 systemd[1]: libpod-conmon-4ff6103ac7b7e1352c398d5bc8655e52c9284d56ab4b06a71433e1932ce914d1.scope: Deactivated successfully.
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.893 2 DEBUG nova.compute.manager [req-cd1e8d5d-9898-4da9-8cd9-2a0d78bae6f3 req-9f24a7c0-cd9a-4164-be24-cf1787974c89 407a16c34d6f4e07bd2919006b3d8fef 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Received event volume-extended-edb1a073-56fc-4b59-ae21-06d01b779d30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.909 2 DEBUG nova.compute.manager [req-cd1e8d5d-9898-4da9-8cd9-2a0d78bae6f3 req-9f24a7c0-cd9a-4164-be24-cf1787974c89 407a16c34d6f4e07bd2919006b3d8fef 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Handling volume-extended event for volume edb1a073-56fc-4b59-ae21-06d01b779d30 extend_volume /usr/lib/python3.9/site-packages/nova/compute/manager.py:10896
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.916 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.917 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4324MB free_disk=59.9221076965332GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.917 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.917 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.923 2 INFO nova.compute.manager [req-cd1e8d5d-9898-4da9-8cd9-2a0d78bae6f3 req-9f24a7c0-cd9a-4164-be24-cf1787974c89 407a16c34d6f4e07bd2919006b3d8fef 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Cinder extended volume edb1a073-56fc-4b59-ae21-06d01b779d30; extending it to detect new size
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.985 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Instance 26cb0d26-41fd-4cac-a0b5-1c630a0feba1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.985 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Instance 388b5700-0501-4cb9-99cd-6d259e00afa4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.985 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:07:17 compute-0 nova_compute[259850]: 2025-10-11 04:07:17.986 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:07:18 compute-0 podman[276098]: 2025-10-11 04:07:18.008738087 +0000 UTC m=+0.039555461 container create a981ddd12b401cc45fe9eaa011533ed2369fb4bb3b5a6eca8d87264e951c1f30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_dijkstra, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 04:07:18 compute-0 nova_compute[259850]: 2025-10-11 04:07:18.032 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:07:18 compute-0 systemd[1]: Started libpod-conmon-a981ddd12b401cc45fe9eaa011533ed2369fb4bb3b5a6eca8d87264e951c1f30.scope.
Oct 11 04:07:18 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:07:18 compute-0 podman[276098]: 2025-10-11 04:07:17.992298526 +0000 UTC m=+0.023115910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:07:18 compute-0 nova_compute[259850]: 2025-10-11 04:07:18.089 2 DEBUG nova.virt.libvirt.driver [req-cd1e8d5d-9898-4da9-8cd9-2a0d78bae6f3 req-9f24a7c0-cd9a-4164-be24-cf1787974c89 407a16c34d6f4e07bd2919006b3d8fef 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Resizing target device vdb to 2147483648 _resize_attached_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2756
Oct 11 04:07:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Oct 11 04:07:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b4a375e190340c91912d54dcfac0c14a932bf94813b8032f12fc87c6f0a79a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:07:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Oct 11 04:07:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b4a375e190340c91912d54dcfac0c14a932bf94813b8032f12fc87c6f0a79a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:07:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b4a375e190340c91912d54dcfac0c14a932bf94813b8032f12fc87c6f0a79a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:07:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b4a375e190340c91912d54dcfac0c14a932bf94813b8032f12fc87c6f0a79a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:07:18 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Oct 11 04:07:18 compute-0 ceph-mon[74273]: pgmap v1116: 305 pgs: 305 active+clean; 213 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.5 MiB/s wr, 56 op/s
Oct 11 04:07:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2185953177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:07:18 compute-0 podman[276098]: 2025-10-11 04:07:18.11287117 +0000 UTC m=+0.143688544 container init a981ddd12b401cc45fe9eaa011533ed2369fb4bb3b5a6eca8d87264e951c1f30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_dijkstra, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 04:07:18 compute-0 podman[276098]: 2025-10-11 04:07:18.127786449 +0000 UTC m=+0.158603913 container start a981ddd12b401cc45fe9eaa011533ed2369fb4bb3b5a6eca8d87264e951c1f30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_dijkstra, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:07:18 compute-0 podman[276098]: 2025-10-11 04:07:18.133305534 +0000 UTC m=+0.164122928 container attach a981ddd12b401cc45fe9eaa011533ed2369fb4bb3b5a6eca8d87264e951c1f30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_dijkstra, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 04:07:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:07:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3505788621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:07:18 compute-0 nova_compute[259850]: 2025-10-11 04:07:18.495 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:07:18 compute-0 nova_compute[259850]: 2025-10-11 04:07:18.502 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:07:18 compute-0 nova_compute[259850]: 2025-10-11 04:07:18.522 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:07:18 compute-0 nova_compute[259850]: 2025-10-11 04:07:18.551 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:07:18 compute-0 nova_compute[259850]: 2025-10-11 04:07:18.552 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:18 compute-0 nova_compute[259850]: 2025-10-11 04:07:18.691 2 DEBUG nova.compute.manager [req-0a0e2784-2900-48ea-a0ac-964bc16d0d85 req-08c07ef4-de8b-4592-ae9e-155a12006ebc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Received event network-vif-plugged-c1e60e1e-9066-4ce9-9064-2a732e2a407d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:07:18 compute-0 nova_compute[259850]: 2025-10-11 04:07:18.692 2 DEBUG oslo_concurrency.lockutils [req-0a0e2784-2900-48ea-a0ac-964bc16d0d85 req-08c07ef4-de8b-4592-ae9e-155a12006ebc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:18 compute-0 nova_compute[259850]: 2025-10-11 04:07:18.693 2 DEBUG oslo_concurrency.lockutils [req-0a0e2784-2900-48ea-a0ac-964bc16d0d85 req-08c07ef4-de8b-4592-ae9e-155a12006ebc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:18 compute-0 nova_compute[259850]: 2025-10-11 04:07:18.693 2 DEBUG oslo_concurrency.lockutils [req-0a0e2784-2900-48ea-a0ac-964bc16d0d85 req-08c07ef4-de8b-4592-ae9e-155a12006ebc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:18 compute-0 nova_compute[259850]: 2025-10-11 04:07:18.694 2 DEBUG nova.compute.manager [req-0a0e2784-2900-48ea-a0ac-964bc16d0d85 req-08c07ef4-de8b-4592-ae9e-155a12006ebc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] No waiting events found dispatching network-vif-plugged-c1e60e1e-9066-4ce9-9064-2a732e2a407d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:07:18 compute-0 nova_compute[259850]: 2025-10-11 04:07:18.694 2 WARNING nova.compute.manager [req-0a0e2784-2900-48ea-a0ac-964bc16d0d85 req-08c07ef4-de8b-4592-ae9e-155a12006ebc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Received unexpected event network-vif-plugged-c1e60e1e-9066-4ce9-9064-2a732e2a407d for instance with vm_state active and task_state None.
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]: {
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:     "0": [
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:         {
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "devices": [
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "/dev/loop3"
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             ],
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "lv_name": "ceph_lv0",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "lv_size": "21470642176",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "name": "ceph_lv0",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "tags": {
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.cluster_name": "ceph",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.crush_device_class": "",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.encrypted": "0",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.osd_id": "0",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.type": "block",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.vdo": "0"
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             },
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "type": "block",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "vg_name": "ceph_vg0"
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:         }
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:     ],
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:     "1": [
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:         {
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "devices": [
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "/dev/loop4"
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             ],
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "lv_name": "ceph_lv1",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "lv_size": "21470642176",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "name": "ceph_lv1",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "tags": {
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.cluster_name": "ceph",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.crush_device_class": "",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.encrypted": "0",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.osd_id": "1",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.type": "block",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.vdo": "0"
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             },
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "type": "block",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "vg_name": "ceph_vg1"
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:         }
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:     ],
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:     "2": [
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:         {
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "devices": [
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "/dev/loop5"
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             ],
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "lv_name": "ceph_lv2",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "lv_size": "21470642176",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "name": "ceph_lv2",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "tags": {
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.cluster_name": "ceph",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.crush_device_class": "",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.encrypted": "0",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.osd_id": "2",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.type": "block",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:                 "ceph.vdo": "0"
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             },
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "type": "block",
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:             "vg_name": "ceph_vg2"
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:         }
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]:     ]
Oct 11 04:07:18 compute-0 unruffled_dijkstra[276115]: }
Oct 11 04:07:18 compute-0 systemd[1]: libpod-a981ddd12b401cc45fe9eaa011533ed2369fb4bb3b5a6eca8d87264e951c1f30.scope: Deactivated successfully.
Oct 11 04:07:18 compute-0 podman[276098]: 2025-10-11 04:07:18.920875771 +0000 UTC m=+0.951693175 container died a981ddd12b401cc45fe9eaa011533ed2369fb4bb3b5a6eca8d87264e951c1f30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:07:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b4a375e190340c91912d54dcfac0c14a932bf94813b8032f12fc87c6f0a79a2-merged.mount: Deactivated successfully.
Oct 11 04:07:19 compute-0 podman[276098]: 2025-10-11 04:07:19.008793848 +0000 UTC m=+1.039611242 container remove a981ddd12b401cc45fe9eaa011533ed2369fb4bb3b5a6eca8d87264e951c1f30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_dijkstra, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 04:07:19 compute-0 systemd[1]: libpod-conmon-a981ddd12b401cc45fe9eaa011533ed2369fb4bb3b5a6eca8d87264e951c1f30.scope: Deactivated successfully.
Oct 11 04:07:19 compute-0 sudo[275963]: pam_unix(sudo:session): session closed for user root
Oct 11 04:07:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Oct 11 04:07:19 compute-0 ceph-mon[74273]: osdmap e198: 3 total, 3 up, 3 in
Oct 11 04:07:19 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3505788621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:07:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Oct 11 04:07:19 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Oct 11 04:07:19 compute-0 sudo[276160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:07:19 compute-0 sudo[276160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:07:19 compute-0 sudo[276160]: pam_unix(sudo:session): session closed for user root
Oct 11 04:07:19 compute-0 sudo[276185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:07:19 compute-0 sudo[276185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:07:19 compute-0 nova_compute[259850]: 2025-10-11 04:07:19.272 2 DEBUG oslo_concurrency.lockutils [None req-98242cb5-4889-4532-a4ee-f9eb7dc1fb94 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Acquiring lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:19 compute-0 nova_compute[259850]: 2025-10-11 04:07:19.272 2 DEBUG oslo_concurrency.lockutils [None req-98242cb5-4889-4532-a4ee-f9eb7dc1fb94 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:19 compute-0 sudo[276185]: pam_unix(sudo:session): session closed for user root
Oct 11 04:07:19 compute-0 nova_compute[259850]: 2025-10-11 04:07:19.286 2 INFO nova.compute.manager [None req-98242cb5-4889-4532-a4ee-f9eb7dc1fb94 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Detaching volume edb1a073-56fc-4b59-ae21-06d01b779d30
Oct 11 04:07:19 compute-0 sudo[276210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:07:19 compute-0 sudo[276210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:07:19 compute-0 sudo[276210]: pam_unix(sudo:session): session closed for user root
Oct 11 04:07:19 compute-0 nova_compute[259850]: 2025-10-11 04:07:19.402 2 INFO nova.virt.block_device [None req-98242cb5-4889-4532-a4ee-f9eb7dc1fb94 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Attempting to driver detach volume edb1a073-56fc-4b59-ae21-06d01b779d30 from mountpoint /dev/vdb
Oct 11 04:07:19 compute-0 nova_compute[259850]: 2025-10-11 04:07:19.412 2 DEBUG nova.virt.libvirt.driver [None req-98242cb5-4889-4532-a4ee-f9eb7dc1fb94 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Attempting to detach device vdb from instance 26cb0d26-41fd-4cac-a0b5-1c630a0feba1 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 11 04:07:19 compute-0 nova_compute[259850]: 2025-10-11 04:07:19.412 2 DEBUG nova.virt.libvirt.guest [None req-98242cb5-4889-4532-a4ee-f9eb7dc1fb94 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 04:07:19 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:07:19 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-edb1a073-56fc-4b59-ae21-06d01b779d30">
Oct 11 04:07:19 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:07:19 compute-0 nova_compute[259850]:   </source>
Oct 11 04:07:19 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:07:19 compute-0 nova_compute[259850]:   <serial>edb1a073-56fc-4b59-ae21-06d01b779d30</serial>
Oct 11 04:07:19 compute-0 nova_compute[259850]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 04:07:19 compute-0 nova_compute[259850]: </disk>
Oct 11 04:07:19 compute-0 nova_compute[259850]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:07:19 compute-0 sudo[276235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 04:07:19 compute-0 sudo[276235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:07:19 compute-0 nova_compute[259850]: 2025-10-11 04:07:19.421 2 INFO nova.virt.libvirt.driver [None req-98242cb5-4889-4532-a4ee-f9eb7dc1fb94 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Successfully detached device vdb from instance 26cb0d26-41fd-4cac-a0b5-1c630a0feba1 from the persistent domain config.
Oct 11 04:07:19 compute-0 nova_compute[259850]: 2025-10-11 04:07:19.421 2 DEBUG nova.virt.libvirt.driver [None req-98242cb5-4889-4532-a4ee-f9eb7dc1fb94 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 26cb0d26-41fd-4cac-a0b5-1c630a0feba1 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 11 04:07:19 compute-0 nova_compute[259850]: 2025-10-11 04:07:19.422 2 DEBUG nova.virt.libvirt.guest [None req-98242cb5-4889-4532-a4ee-f9eb7dc1fb94 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 04:07:19 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:07:19 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-edb1a073-56fc-4b59-ae21-06d01b779d30">
Oct 11 04:07:19 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:07:19 compute-0 nova_compute[259850]:   </source>
Oct 11 04:07:19 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:07:19 compute-0 nova_compute[259850]:   <serial>edb1a073-56fc-4b59-ae21-06d01b779d30</serial>
Oct 11 04:07:19 compute-0 nova_compute[259850]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 04:07:19 compute-0 nova_compute[259850]: </disk>
Oct 11 04:07:19 compute-0 nova_compute[259850]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:07:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 213 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 550 KiB/s rd, 72 KiB/s wr, 125 op/s
Oct 11 04:07:19 compute-0 nova_compute[259850]: 2025-10-11 04:07:19.547 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:07:19 compute-0 nova_compute[259850]: 2025-10-11 04:07:19.548 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:07:19 compute-0 nova_compute[259850]: 2025-10-11 04:07:19.548 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:07:19 compute-0 nova_compute[259850]: 2025-10-11 04:07:19.548 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:07:19 compute-0 nova_compute[259850]: 2025-10-11 04:07:19.558 2 DEBUG nova.virt.libvirt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Received event <DeviceRemovedEvent: 1760155639.5578792, 26cb0d26-41fd-4cac-a0b5-1c630a0feba1 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 11 04:07:19 compute-0 nova_compute[259850]: 2025-10-11 04:07:19.561 2 DEBUG nova.virt.libvirt.driver [None req-98242cb5-4889-4532-a4ee-f9eb7dc1fb94 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 26cb0d26-41fd-4cac-a0b5-1c630a0feba1 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 11 04:07:19 compute-0 nova_compute[259850]: 2025-10-11 04:07:19.563 2 INFO nova.virt.libvirt.driver [None req-98242cb5-4889-4532-a4ee-f9eb7dc1fb94 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Successfully detached device vdb from instance 26cb0d26-41fd-4cac-a0b5-1c630a0feba1 from the live domain config.
Oct 11 04:07:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:07:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/960825542' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:07:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:07:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/960825542' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:07:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:07:19 compute-0 nova_compute[259850]: 2025-10-11 04:07:19.757 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "refresh_cache-26cb0d26-41fd-4cac-a0b5-1c630a0feba1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:07:19 compute-0 nova_compute[259850]: 2025-10-11 04:07:19.758 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquired lock "refresh_cache-26cb0d26-41fd-4cac-a0b5-1c630a0feba1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:07:19 compute-0 nova_compute[259850]: 2025-10-11 04:07:19.758 2 DEBUG nova.network.neutron [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 04:07:19 compute-0 nova_compute[259850]: 2025-10-11 04:07:19.758 2 DEBUG nova.objects.instance [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 26cb0d26-41fd-4cac-a0b5-1c630a0feba1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:07:19 compute-0 nova_compute[259850]: 2025-10-11 04:07:19.815 2 DEBUG nova.objects.instance [None req-98242cb5-4889-4532-a4ee-f9eb7dc1fb94 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Lazy-loading 'flavor' on Instance uuid 26cb0d26-41fd-4cac-a0b5-1c630a0feba1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:07:19 compute-0 podman[276305]: 2025-10-11 04:07:19.838353784 +0000 UTC m=+0.039986863 container create d98fb3a81e7afaae97c72b4058cc750a8630de3f8b2392b90cbc35dd504ea208 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:07:19 compute-0 nova_compute[259850]: 2025-10-11 04:07:19.855 2 DEBUG oslo_concurrency.lockutils [None req-98242cb5-4889-4532-a4ee-f9eb7dc1fb94 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:19 compute-0 systemd[1]: Started libpod-conmon-d98fb3a81e7afaae97c72b4058cc750a8630de3f8b2392b90cbc35dd504ea208.scope.
Oct 11 04:07:19 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:07:19 compute-0 podman[276305]: 2025-10-11 04:07:19.820656878 +0000 UTC m=+0.022289997 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:07:19 compute-0 podman[276305]: 2025-10-11 04:07:19.920251793 +0000 UTC m=+0.121884892 container init d98fb3a81e7afaae97c72b4058cc750a8630de3f8b2392b90cbc35dd504ea208 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:07:19 compute-0 podman[276305]: 2025-10-11 04:07:19.926270842 +0000 UTC m=+0.127903941 container start d98fb3a81e7afaae97c72b4058cc750a8630de3f8b2392b90cbc35dd504ea208 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 04:07:19 compute-0 podman[276305]: 2025-10-11 04:07:19.929092122 +0000 UTC m=+0.130725221 container attach d98fb3a81e7afaae97c72b4058cc750a8630de3f8b2392b90cbc35dd504ea208 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:07:19 compute-0 crazy_buck[276321]: 167 167
Oct 11 04:07:19 compute-0 systemd[1]: libpod-d98fb3a81e7afaae97c72b4058cc750a8630de3f8b2392b90cbc35dd504ea208.scope: Deactivated successfully.
Oct 11 04:07:19 compute-0 podman[276305]: 2025-10-11 04:07:19.932139857 +0000 UTC m=+0.133772936 container died d98fb3a81e7afaae97c72b4058cc750a8630de3f8b2392b90cbc35dd504ea208 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:07:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b1577fc56345a4317c89e8197c04609844eda65d7b4efd580dd66a2881b27cf-merged.mount: Deactivated successfully.
Oct 11 04:07:19 compute-0 podman[276305]: 2025-10-11 04:07:19.961706557 +0000 UTC m=+0.163339636 container remove d98fb3a81e7afaae97c72b4058cc750a8630de3f8b2392b90cbc35dd504ea208 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:07:19 compute-0 systemd[1]: libpod-conmon-d98fb3a81e7afaae97c72b4058cc750a8630de3f8b2392b90cbc35dd504ea208.scope: Deactivated successfully.
Oct 11 04:07:20 compute-0 ceph-mon[74273]: osdmap e199: 3 total, 3 up, 3 in
Oct 11 04:07:20 compute-0 ceph-mon[74273]: pgmap v1119: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 213 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 550 KiB/s rd, 72 KiB/s wr, 125 op/s
Oct 11 04:07:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/960825542' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:07:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/960825542' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:07:20.137140) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155640137204, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2316, "num_deletes": 263, "total_data_size": 3328221, "memory_usage": 3392608, "flush_reason": "Manual Compaction"}
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155640154297, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3265016, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21185, "largest_seqno": 23500, "table_properties": {"data_size": 3254449, "index_size": 6741, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 23202, "raw_average_key_size": 21, "raw_value_size": 3232785, "raw_average_value_size": 2957, "num_data_blocks": 297, "num_entries": 1093, "num_filter_entries": 1093, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760155464, "oldest_key_time": 1760155464, "file_creation_time": 1760155640, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 17179 microseconds, and 6864 cpu microseconds.
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:07:20.154340) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3265016 bytes OK
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:07:20.154360) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:07:20.155819) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:07:20.155834) EVENT_LOG_v1 {"time_micros": 1760155640155829, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:07:20.155852) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3318202, prev total WAL file size 3318202, number of live WAL files 2.
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:07:20.156863) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3188KB)], [50(7258KB)]
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155640156887, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10698137, "oldest_snapshot_seqno": -1}
Oct 11 04:07:20 compute-0 podman[276346]: 2025-10-11 04:07:20.16310478 +0000 UTC m=+0.053285916 container create 238b29c7d256e244a386e5d2c385983bcf3966367ba0c4d9039a5d4bff232474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_goldberg, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:07:20 compute-0 systemd[1]: Started libpod-conmon-238b29c7d256e244a386e5d2c385983bcf3966367ba0c4d9039a5d4bff232474.scope.
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5004 keys, 8951847 bytes, temperature: kUnknown
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155640211423, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 8951847, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8915233, "index_size": 23000, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 123181, "raw_average_key_size": 24, "raw_value_size": 8821847, "raw_average_value_size": 1762, "num_data_blocks": 955, "num_entries": 5004, "num_filter_entries": 5004, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153731, "oldest_key_time": 0, "file_creation_time": 1760155640, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:07:20.211966) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 8951847 bytes
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:07:20.214032) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.0 rd, 163.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.1 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 5536, records dropped: 532 output_compression: NoCompression
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:07:20.214063) EVENT_LOG_v1 {"time_micros": 1760155640214048, "job": 26, "event": "compaction_finished", "compaction_time_micros": 54868, "compaction_time_cpu_micros": 31875, "output_level": 6, "num_output_files": 1, "total_output_size": 8951847, "num_input_records": 5536, "num_output_records": 5004, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155640215188, "job": 26, "event": "table_file_deletion", "file_number": 52}
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155640220349, "job": 26, "event": "table_file_deletion", "file_number": 50}
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:07:20.156797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:07:20.220399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:07:20.220406) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:07:20.220409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:07:20.220412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:07:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:07:20.220415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:07:20 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:07:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6024290ab3cd46e41ede1145f9f231245bbc2be81a3691ae0d66cd007d40aba6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:07:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6024290ab3cd46e41ede1145f9f231245bbc2be81a3691ae0d66cd007d40aba6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:07:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6024290ab3cd46e41ede1145f9f231245bbc2be81a3691ae0d66cd007d40aba6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:07:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6024290ab3cd46e41ede1145f9f231245bbc2be81a3691ae0d66cd007d40aba6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:07:20 compute-0 podman[276346]: 2025-10-11 04:07:20.140102165 +0000 UTC m=+0.030283391 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:07:20 compute-0 podman[276346]: 2025-10-11 04:07:20.241207943 +0000 UTC m=+0.131389129 container init 238b29c7d256e244a386e5d2c385983bcf3966367ba0c4d9039a5d4bff232474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 04:07:20 compute-0 podman[276346]: 2025-10-11 04:07:20.254454055 +0000 UTC m=+0.144635191 container start 238b29c7d256e244a386e5d2c385983bcf3966367ba0c4d9039a5d4bff232474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 04:07:20 compute-0 podman[276346]: 2025-10-11 04:07:20.258371245 +0000 UTC m=+0.148552391 container attach 238b29c7d256e244a386e5d2c385983bcf3966367ba0c4d9039a5d4bff232474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:07:20 compute-0 nova_compute[259850]: 2025-10-11 04:07:20.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:20 compute-0 nova_compute[259850]: 2025-10-11 04:07:20.636 2 DEBUG oslo_concurrency.lockutils [None req-3aef24c9-0a12-49f9-807f-ff227d4de7f4 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Acquiring lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:20 compute-0 nova_compute[259850]: 2025-10-11 04:07:20.637 2 DEBUG oslo_concurrency.lockutils [None req-3aef24c9-0a12-49f9-807f-ff227d4de7f4 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:20 compute-0 nova_compute[259850]: 2025-10-11 04:07:20.637 2 DEBUG oslo_concurrency.lockutils [None req-3aef24c9-0a12-49f9-807f-ff227d4de7f4 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Acquiring lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:20 compute-0 nova_compute[259850]: 2025-10-11 04:07:20.637 2 DEBUG oslo_concurrency.lockutils [None req-3aef24c9-0a12-49f9-807f-ff227d4de7f4 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:20 compute-0 nova_compute[259850]: 2025-10-11 04:07:20.638 2 DEBUG oslo_concurrency.lockutils [None req-3aef24c9-0a12-49f9-807f-ff227d4de7f4 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:20 compute-0 nova_compute[259850]: 2025-10-11 04:07:20.639 2 INFO nova.compute.manager [None req-3aef24c9-0a12-49f9-807f-ff227d4de7f4 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Terminating instance
Oct 11 04:07:20 compute-0 nova_compute[259850]: 2025-10-11 04:07:20.640 2 DEBUG nova.compute.manager [None req-3aef24c9-0a12-49f9-807f-ff227d4de7f4 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:07:20 compute-0 kernel: tap4bf043b6-53 (unregistering): left promiscuous mode
Oct 11 04:07:20 compute-0 NetworkManager[44920]: <info>  [1760155640.7078] device (tap4bf043b6-53): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:07:20 compute-0 nova_compute[259850]: 2025-10-11 04:07:20.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:20 compute-0 ovn_controller[152025]: 2025-10-11T04:07:20Z|00081|binding|INFO|Releasing lport 4bf043b6-53f8-43fd-8fb7-67863dfbfe87 from this chassis (sb_readonly=0)
Oct 11 04:07:20 compute-0 ovn_controller[152025]: 2025-10-11T04:07:20Z|00082|binding|INFO|Setting lport 4bf043b6-53f8-43fd-8fb7-67863dfbfe87 down in Southbound
Oct 11 04:07:20 compute-0 ovn_controller[152025]: 2025-10-11T04:07:20Z|00083|binding|INFO|Removing iface tap4bf043b6-53 ovn-installed in OVS
Oct 11 04:07:20 compute-0 nova_compute[259850]: 2025-10-11 04:07:20.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:07:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:07:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:20.771 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:01:d0 10.100.0.13'], port_security=['fa:16:3e:20:01:d0 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '26cb0d26-41fd-4cac-a0b5-1c630a0feba1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ff4b514-1476-4866-8fda-c0b6a7674970', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '41596b84442c439b86ce2c239af0242c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2b79ace0-3591-4cb7-9630-1c1e0585e64d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.196'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b302cb1-4964-4a0f-a4e4-38f64faa4a71, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=4bf043b6-53f8-43fd-8fb7-67863dfbfe87) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:07:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:20.772 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 4bf043b6-53f8-43fd-8fb7-67863dfbfe87 in datapath 0ff4b514-1476-4866-8fda-c0b6a7674970 unbound from our chassis
Oct 11 04:07:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:20.774 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0ff4b514-1476-4866-8fda-c0b6a7674970, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:07:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:20.775 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[f0deb15c-e22c-42fa-b825-c7a38a7db649]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:07:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:07:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:20.775 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0ff4b514-1476-4866-8fda-c0b6a7674970 namespace which is not needed anymore
Oct 11 04:07:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:07:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:07:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:07:20
Oct 11 04:07:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:07:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:07:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['images', 'volumes', 'default.rgw.control', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'backups', 'cephfs.cephfs.meta', '.mgr']
Oct 11 04:07:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:07:20 compute-0 nova_compute[259850]: 2025-10-11 04:07:20.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:20 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Oct 11 04:07:20 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 13.574s CPU time.
Oct 11 04:07:20 compute-0 nova_compute[259850]: 2025-10-11 04:07:20.804 2 DEBUG nova.compute.manager [req-b89807e9-ad19-4240-80d9-5766a3ad5c29 req-48b9d583-e366-4eb2-8dbe-605d517976ec f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Received event network-changed-c1e60e1e-9066-4ce9-9064-2a732e2a407d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:07:20 compute-0 nova_compute[259850]: 2025-10-11 04:07:20.804 2 DEBUG nova.compute.manager [req-b89807e9-ad19-4240-80d9-5766a3ad5c29 req-48b9d583-e366-4eb2-8dbe-605d517976ec f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Refreshing instance network info cache due to event network-changed-c1e60e1e-9066-4ce9-9064-2a732e2a407d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:07:20 compute-0 nova_compute[259850]: 2025-10-11 04:07:20.805 2 DEBUG oslo_concurrency.lockutils [req-b89807e9-ad19-4240-80d9-5766a3ad5c29 req-48b9d583-e366-4eb2-8dbe-605d517976ec f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-388b5700-0501-4cb9-99cd-6d259e00afa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:07:20 compute-0 nova_compute[259850]: 2025-10-11 04:07:20.805 2 DEBUG oslo_concurrency.lockutils [req-b89807e9-ad19-4240-80d9-5766a3ad5c29 req-48b9d583-e366-4eb2-8dbe-605d517976ec f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-388b5700-0501-4cb9-99cd-6d259e00afa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:07:20 compute-0 nova_compute[259850]: 2025-10-11 04:07:20.805 2 DEBUG nova.network.neutron [req-b89807e9-ad19-4240-80d9-5766a3ad5c29 req-48b9d583-e366-4eb2-8dbe-605d517976ec f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Refreshing network info cache for port c1e60e1e-9066-4ce9-9064-2a732e2a407d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:07:20 compute-0 systemd-machined[214869]: Machine qemu-7-instance-00000007 terminated.
Oct 11 04:07:20 compute-0 nova_compute[259850]: 2025-10-11 04:07:20.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:20 compute-0 nova_compute[259850]: 2025-10-11 04:07:20.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:20 compute-0 nova_compute[259850]: 2025-10-11 04:07:20.888 2 INFO nova.virt.libvirt.driver [-] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Instance destroyed successfully.
Oct 11 04:07:20 compute-0 nova_compute[259850]: 2025-10-11 04:07:20.889 2 DEBUG nova.objects.instance [None req-3aef24c9-0a12-49f9-807f-ff227d4de7f4 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Lazy-loading 'resources' on Instance uuid 26cb0d26-41fd-4cac-a0b5-1c630a0feba1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:07:20 compute-0 nova_compute[259850]: 2025-10-11 04:07:20.905 2 DEBUG nova.virt.libvirt.vif [None req-3aef24c9-0a12-49f9-807f-ff227d4de7f4 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:06:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-1267462374',display_name='tempest-VolumesExtendAttachedTest-instance-1267462374',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-1267462374',id=7,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP0kgOIkPKMI3TE10SdB87sqJpLrPFOSBcFu1d0XzE1fj/PPC+I09TagWxQ8fgC7nINR5zBCN03htEgPk6hUhaQB08LyNPHOlKIdJ2drueAUzLNfbv1Latadi6FSu3IqCg==',key_name='tempest-keypair-2088827798',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:06:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='41596b84442c439b86ce2c239af0242c',ramdisk_id='',reservation_id='r-zkbeg8os',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesExtendAttachedTest-1136455461',owner_user_name='tempest-VolumesExtendAttachedTest-1136455461-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:06:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='77635b26e3624f318335b7dd5d5cf9c4',uuid=26cb0d26-41fd-4cac-a0b5-1c630a0feba1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4bf043b6-53f8-43fd-8fb7-67863dfbfe87", "address": "fa:16:3e:20:01:d0", "network": {"id": "0ff4b514-1476-4866-8fda-c0b6a7674970", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1613157742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41596b84442c439b86ce2c239af0242c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf043b6-53", "ovs_interfaceid": "4bf043b6-53f8-43fd-8fb7-67863dfbfe87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:07:20 compute-0 nova_compute[259850]: 2025-10-11 04:07:20.907 2 DEBUG nova.network.os_vif_util [None req-3aef24c9-0a12-49f9-807f-ff227d4de7f4 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Converting VIF {"id": "4bf043b6-53f8-43fd-8fb7-67863dfbfe87", "address": "fa:16:3e:20:01:d0", "network": {"id": "0ff4b514-1476-4866-8fda-c0b6a7674970", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1613157742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41596b84442c439b86ce2c239af0242c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf043b6-53", "ovs_interfaceid": "4bf043b6-53f8-43fd-8fb7-67863dfbfe87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:07:20 compute-0 nova_compute[259850]: 2025-10-11 04:07:20.908 2 DEBUG nova.network.os_vif_util [None req-3aef24c9-0a12-49f9-807f-ff227d4de7f4 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:20:01:d0,bridge_name='br-int',has_traffic_filtering=True,id=4bf043b6-53f8-43fd-8fb7-67863dfbfe87,network=Network(0ff4b514-1476-4866-8fda-c0b6a7674970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bf043b6-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:07:20 compute-0 nova_compute[259850]: 2025-10-11 04:07:20.909 2 DEBUG os_vif [None req-3aef24c9-0a12-49f9-807f-ff227d4de7f4 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:20:01:d0,bridge_name='br-int',has_traffic_filtering=True,id=4bf043b6-53f8-43fd-8fb7-67863dfbfe87,network=Network(0ff4b514-1476-4866-8fda-c0b6a7674970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bf043b6-53') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:07:20 compute-0 nova_compute[259850]: 2025-10-11 04:07:20.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:20 compute-0 nova_compute[259850]: 2025-10-11 04:07:20.911 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bf043b6-53, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:07:20 compute-0 nova_compute[259850]: 2025-10-11 04:07:20.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:20 compute-0 nova_compute[259850]: 2025-10-11 04:07:20.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:20 compute-0 nova_compute[259850]: 2025-10-11 04:07:20.917 2 INFO os_vif [None req-3aef24c9-0a12-49f9-807f-ff227d4de7f4 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:20:01:d0,bridge_name='br-int',has_traffic_filtering=True,id=4bf043b6-53f8-43fd-8fb7-67863dfbfe87,network=Network(0ff4b514-1476-4866-8fda-c0b6a7674970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bf043b6-53')
Oct 11 04:07:20 compute-0 neutron-haproxy-ovnmeta-0ff4b514-1476-4866-8fda-c0b6a7674970[274736]: [NOTICE]   (274740) : haproxy version is 2.8.14-c23fe91
Oct 11 04:07:20 compute-0 neutron-haproxy-ovnmeta-0ff4b514-1476-4866-8fda-c0b6a7674970[274736]: [NOTICE]   (274740) : path to executable is /usr/sbin/haproxy
Oct 11 04:07:20 compute-0 neutron-haproxy-ovnmeta-0ff4b514-1476-4866-8fda-c0b6a7674970[274736]: [WARNING]  (274740) : Exiting Master process...
Oct 11 04:07:20 compute-0 neutron-haproxy-ovnmeta-0ff4b514-1476-4866-8fda-c0b6a7674970[274736]: [ALERT]    (274740) : Current worker (274742) exited with code 143 (Terminated)
Oct 11 04:07:20 compute-0 neutron-haproxy-ovnmeta-0ff4b514-1476-4866-8fda-c0b6a7674970[274736]: [WARNING]  (274740) : All workers exited. Exiting... (0)
Oct 11 04:07:20 compute-0 systemd[1]: libpod-c965dbe07b521b13b8944f59378341e4d74893fcebd29bb3c2b6ae3da81e7039.scope: Deactivated successfully.
Oct 11 04:07:20 compute-0 podman[276398]: 2025-10-11 04:07:20.974363373 +0000 UTC m=+0.057345041 container died c965dbe07b521b13b8944f59378341e4d74893fcebd29bb3c2b6ae3da81e7039 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0ff4b514-1476-4866-8fda-c0b6a7674970, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:07:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:07:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:07:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:07:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:07:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:07:20 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:07:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:07:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:07:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:07:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:07:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c965dbe07b521b13b8944f59378341e4d74893fcebd29bb3c2b6ae3da81e7039-userdata-shm.mount: Deactivated successfully.
Oct 11 04:07:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1403aaf7db73fed7a15f7d8a17dad363c21ac3f38b601da06cbfe18c9333162-merged.mount: Deactivated successfully.
Oct 11 04:07:21 compute-0 podman[276398]: 2025-10-11 04:07:21.053581327 +0000 UTC m=+0.136563005 container cleanup c965dbe07b521b13b8944f59378341e4d74893fcebd29bb3c2b6ae3da81e7039 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0ff4b514-1476-4866-8fda-c0b6a7674970, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:07:21 compute-0 systemd[1]: libpod-conmon-c965dbe07b521b13b8944f59378341e4d74893fcebd29bb3c2b6ae3da81e7039.scope: Deactivated successfully.
Oct 11 04:07:21 compute-0 podman[276457]: 2025-10-11 04:07:21.12244253 +0000 UTC m=+0.040347204 container remove c965dbe07b521b13b8944f59378341e4d74893fcebd29bb3c2b6ae3da81e7039 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0ff4b514-1476-4866-8fda-c0b6a7674970, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 11 04:07:21 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:21.130 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[29928df6-29f4-4e06-a9b0-b7441dac0207]: (4, ('Sat Oct 11 04:07:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0ff4b514-1476-4866-8fda-c0b6a7674970 (c965dbe07b521b13b8944f59378341e4d74893fcebd29bb3c2b6ae3da81e7039)\nc965dbe07b521b13b8944f59378341e4d74893fcebd29bb3c2b6ae3da81e7039\nSat Oct 11 04:07:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0ff4b514-1476-4866-8fda-c0b6a7674970 (c965dbe07b521b13b8944f59378341e4d74893fcebd29bb3c2b6ae3da81e7039)\nc965dbe07b521b13b8944f59378341e4d74893fcebd29bb3c2b6ae3da81e7039\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:21 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:21.133 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[97323ea2-a708-474a-b344-1ef8d538b027]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:21 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:21.134 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ff4b514-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:07:21 compute-0 nova_compute[259850]: 2025-10-11 04:07:21.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:21 compute-0 kernel: tap0ff4b514-10: left promiscuous mode
Oct 11 04:07:21 compute-0 nova_compute[259850]: 2025-10-11 04:07:21.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:21 compute-0 nova_compute[259850]: 2025-10-11 04:07:21.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:21 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:21.184 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[111fce58-358a-4d25-a599-b5ab7cec4ba2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:21 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:21.215 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[0babb7eb-ab38-4519-a243-d770047af6ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:21 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:21.217 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[50334aaa-5ef8-44fd-9702-651bb9a456ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:21 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:21.236 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[dd2928c3-0ea4-4834-8481-29d6d49f309b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 399923, 'reachable_time': 18938, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276482, 'error': None, 'target': 'ovnmeta-0ff4b514-1476-4866-8fda-c0b6a7674970', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:21 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:21.239 162015 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0ff4b514-1476-4866-8fda-c0b6a7674970 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 04:07:21 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:21.240 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[abd7c880-1c40-4569-9e54-cf73208af73e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:21 compute-0 systemd[1]: run-netns-ovnmeta\x2d0ff4b514\x2d1476\x2d4866\x2d8fda\x2dc0b6a7674970.mount: Deactivated successfully.
Oct 11 04:07:21 compute-0 serene_goldberg[276363]: {
Oct 11 04:07:21 compute-0 serene_goldberg[276363]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 04:07:21 compute-0 serene_goldberg[276363]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:07:21 compute-0 serene_goldberg[276363]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:07:21 compute-0 serene_goldberg[276363]:         "osd_id": 1,
Oct 11 04:07:21 compute-0 serene_goldberg[276363]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:07:21 compute-0 serene_goldberg[276363]:         "type": "bluestore"
Oct 11 04:07:21 compute-0 serene_goldberg[276363]:     },
Oct 11 04:07:21 compute-0 serene_goldberg[276363]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 04:07:21 compute-0 serene_goldberg[276363]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:07:21 compute-0 serene_goldberg[276363]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:07:21 compute-0 serene_goldberg[276363]:         "osd_id": 2,
Oct 11 04:07:21 compute-0 serene_goldberg[276363]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:07:21 compute-0 serene_goldberg[276363]:         "type": "bluestore"
Oct 11 04:07:21 compute-0 serene_goldberg[276363]:     },
Oct 11 04:07:21 compute-0 serene_goldberg[276363]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 04:07:21 compute-0 serene_goldberg[276363]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:07:21 compute-0 serene_goldberg[276363]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:07:21 compute-0 serene_goldberg[276363]:         "osd_id": 0,
Oct 11 04:07:21 compute-0 serene_goldberg[276363]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:07:21 compute-0 serene_goldberg[276363]:         "type": "bluestore"
Oct 11 04:07:21 compute-0 serene_goldberg[276363]:     }
Oct 11 04:07:21 compute-0 serene_goldberg[276363]: }
Oct 11 04:07:21 compute-0 systemd[1]: libpod-238b29c7d256e244a386e5d2c385983bcf3966367ba0c4d9039a5d4bff232474.scope: Deactivated successfully.
Oct 11 04:07:21 compute-0 nova_compute[259850]: 2025-10-11 04:07:21.324 2 DEBUG nova.network.neutron [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Updating instance_info_cache with network_info: [{"id": "4bf043b6-53f8-43fd-8fb7-67863dfbfe87", "address": "fa:16:3e:20:01:d0", "network": {"id": "0ff4b514-1476-4866-8fda-c0b6a7674970", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1613157742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41596b84442c439b86ce2c239af0242c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf043b6-53", "ovs_interfaceid": "4bf043b6-53f8-43fd-8fb7-67863dfbfe87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:07:21 compute-0 nova_compute[259850]: 2025-10-11 04:07:21.339 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Releasing lock "refresh_cache-26cb0d26-41fd-4cac-a0b5-1c630a0feba1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:07:21 compute-0 nova_compute[259850]: 2025-10-11 04:07:21.339 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 04:07:21 compute-0 nova_compute[259850]: 2025-10-11 04:07:21.340 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:07:21 compute-0 nova_compute[259850]: 2025-10-11 04:07:21.340 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:07:21 compute-0 nova_compute[259850]: 2025-10-11 04:07:21.340 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:07:21 compute-0 nova_compute[259850]: 2025-10-11 04:07:21.360 2 INFO nova.virt.libvirt.driver [None req-3aef24c9-0a12-49f9-807f-ff227d4de7f4 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Deleting instance files /var/lib/nova/instances/26cb0d26-41fd-4cac-a0b5-1c630a0feba1_del
Oct 11 04:07:21 compute-0 nova_compute[259850]: 2025-10-11 04:07:21.361 2 INFO nova.virt.libvirt.driver [None req-3aef24c9-0a12-49f9-807f-ff227d4de7f4 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Deletion of /var/lib/nova/instances/26cb0d26-41fd-4cac-a0b5-1c630a0feba1_del complete
Oct 11 04:07:21 compute-0 podman[276488]: 2025-10-11 04:07:21.380310898 +0000 UTC m=+0.047609327 container died 238b29c7d256e244a386e5d2c385983bcf3966367ba0c4d9039a5d4bff232474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:07:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-6024290ab3cd46e41ede1145f9f231245bbc2be81a3691ae0d66cd007d40aba6-merged.mount: Deactivated successfully.
Oct 11 04:07:21 compute-0 nova_compute[259850]: 2025-10-11 04:07:21.425 2 INFO nova.compute.manager [None req-3aef24c9-0a12-49f9-807f-ff227d4de7f4 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 11 04:07:21 compute-0 nova_compute[259850]: 2025-10-11 04:07:21.427 2 DEBUG oslo.service.loopingcall [None req-3aef24c9-0a12-49f9-807f-ff227d4de7f4 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:07:21 compute-0 nova_compute[259850]: 2025-10-11 04:07:21.427 2 DEBUG nova.compute.manager [-] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:07:21 compute-0 nova_compute[259850]: 2025-10-11 04:07:21.428 2 DEBUG nova.network.neutron [-] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:07:21 compute-0 podman[276488]: 2025-10-11 04:07:21.457509925 +0000 UTC m=+0.124808284 container remove 238b29c7d256e244a386e5d2c385983bcf3966367ba0c4d9039a5d4bff232474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_goldberg, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:07:21 compute-0 systemd[1]: libpod-conmon-238b29c7d256e244a386e5d2c385983bcf3966367ba0c4d9039a5d4bff232474.scope: Deactivated successfully.
Oct 11 04:07:21 compute-0 sudo[276235]: pam_unix(sudo:session): session closed for user root
Oct 11 04:07:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:07:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 213 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 434 KiB/s rd, 57 KiB/s wr, 99 op/s
Oct 11 04:07:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:07:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:07:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:07:21 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev fbe26b04-a213-4733-9651-922b8320ce0d does not exist
Oct 11 04:07:21 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev c29198ed-d4b2-4336-9355-89fa2e787442 does not exist
Oct 11 04:07:21 compute-0 sudo[276502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:07:21 compute-0 sudo[276502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:07:21 compute-0 sudo[276502]: pam_unix(sudo:session): session closed for user root
Oct 11 04:07:21 compute-0 sudo[276527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:07:21 compute-0 sudo[276527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:07:21 compute-0 sudo[276527]: pam_unix(sudo:session): session closed for user root
Oct 11 04:07:21 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:07:21 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 12K writes, 51K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 12K writes, 3675 syncs, 3.40 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5507 writes, 22K keys, 5507 commit groups, 1.0 writes per commit group, ingest: 11.05 MB, 0.02 MB/s
                                           Interval WAL: 5507 writes, 2396 syncs, 2.30 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 04:07:22 compute-0 nova_compute[259850]: 2025-10-11 04:07:22.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:22 compute-0 nova_compute[259850]: 2025-10-11 04:07:22.406 2 DEBUG nova.network.neutron [req-b89807e9-ad19-4240-80d9-5766a3ad5c29 req-48b9d583-e366-4eb2-8dbe-605d517976ec f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Updated VIF entry in instance network info cache for port c1e60e1e-9066-4ce9-9064-2a732e2a407d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:07:22 compute-0 nova_compute[259850]: 2025-10-11 04:07:22.407 2 DEBUG nova.network.neutron [req-b89807e9-ad19-4240-80d9-5766a3ad5c29 req-48b9d583-e366-4eb2-8dbe-605d517976ec f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Updating instance_info_cache with network_info: [{"id": "c1e60e1e-9066-4ce9-9064-2a732e2a407d", "address": "fa:16:3e:22:3d:d4", "network": {"id": "bfa0cc72-c909-48db-80bb-536eb7b52f6e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1615284681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2783729ed466412aac8ceb01d86a0b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1e60e1e-90", "ovs_interfaceid": "c1e60e1e-9066-4ce9-9064-2a732e2a407d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:07:22 compute-0 nova_compute[259850]: 2025-10-11 04:07:22.430 2 DEBUG nova.network.neutron [-] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:07:22 compute-0 nova_compute[259850]: 2025-10-11 04:07:22.433 2 DEBUG oslo_concurrency.lockutils [req-b89807e9-ad19-4240-80d9-5766a3ad5c29 req-48b9d583-e366-4eb2-8dbe-605d517976ec f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-388b5700-0501-4cb9-99cd-6d259e00afa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:07:22 compute-0 nova_compute[259850]: 2025-10-11 04:07:22.445 2 INFO nova.compute.manager [-] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Took 1.02 seconds to deallocate network for instance.
Oct 11 04:07:22 compute-0 nova_compute[259850]: 2025-10-11 04:07:22.489 2 DEBUG oslo_concurrency.lockutils [None req-3aef24c9-0a12-49f9-807f-ff227d4de7f4 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:22 compute-0 nova_compute[259850]: 2025-10-11 04:07:22.490 2 DEBUG oslo_concurrency.lockutils [None req-3aef24c9-0a12-49f9-807f-ff227d4de7f4 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:22 compute-0 ceph-mon[74273]: pgmap v1120: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 213 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 434 KiB/s rd, 57 KiB/s wr, 99 op/s
Oct 11 04:07:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:07:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:07:22 compute-0 nova_compute[259850]: 2025-10-11 04:07:22.580 2 DEBUG oslo_concurrency.processutils [None req-3aef24c9-0a12-49f9-807f-ff227d4de7f4 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:07:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:22.605 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:61:6f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '92:f1:b6:e4:f1:16'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:07:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:22.607 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:07:22 compute-0 nova_compute[259850]: 2025-10-11 04:07:22.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:22 compute-0 nova_compute[259850]: 2025-10-11 04:07:22.926 2 DEBUG nova.compute.manager [req-635eb249-8146-4666-ae84-c94bbdcb9938 req-7265c303-637b-4933-aac9-51361cd0979c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Received event network-vif-unplugged-4bf043b6-53f8-43fd-8fb7-67863dfbfe87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:07:22 compute-0 nova_compute[259850]: 2025-10-11 04:07:22.926 2 DEBUG oslo_concurrency.lockutils [req-635eb249-8146-4666-ae84-c94bbdcb9938 req-7265c303-637b-4933-aac9-51361cd0979c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:22 compute-0 nova_compute[259850]: 2025-10-11 04:07:22.927 2 DEBUG oslo_concurrency.lockutils [req-635eb249-8146-4666-ae84-c94bbdcb9938 req-7265c303-637b-4933-aac9-51361cd0979c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:22 compute-0 nova_compute[259850]: 2025-10-11 04:07:22.927 2 DEBUG oslo_concurrency.lockutils [req-635eb249-8146-4666-ae84-c94bbdcb9938 req-7265c303-637b-4933-aac9-51361cd0979c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:22 compute-0 nova_compute[259850]: 2025-10-11 04:07:22.927 2 DEBUG nova.compute.manager [req-635eb249-8146-4666-ae84-c94bbdcb9938 req-7265c303-637b-4933-aac9-51361cd0979c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] No waiting events found dispatching network-vif-unplugged-4bf043b6-53f8-43fd-8fb7-67863dfbfe87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:07:22 compute-0 nova_compute[259850]: 2025-10-11 04:07:22.927 2 WARNING nova.compute.manager [req-635eb249-8146-4666-ae84-c94bbdcb9938 req-7265c303-637b-4933-aac9-51361cd0979c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Received unexpected event network-vif-unplugged-4bf043b6-53f8-43fd-8fb7-67863dfbfe87 for instance with vm_state deleted and task_state None.
Oct 11 04:07:22 compute-0 nova_compute[259850]: 2025-10-11 04:07:22.928 2 DEBUG nova.compute.manager [req-635eb249-8146-4666-ae84-c94bbdcb9938 req-7265c303-637b-4933-aac9-51361cd0979c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Received event network-vif-plugged-4bf043b6-53f8-43fd-8fb7-67863dfbfe87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:07:22 compute-0 nova_compute[259850]: 2025-10-11 04:07:22.928 2 DEBUG oslo_concurrency.lockutils [req-635eb249-8146-4666-ae84-c94bbdcb9938 req-7265c303-637b-4933-aac9-51361cd0979c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:22 compute-0 nova_compute[259850]: 2025-10-11 04:07:22.928 2 DEBUG oslo_concurrency.lockutils [req-635eb249-8146-4666-ae84-c94bbdcb9938 req-7265c303-637b-4933-aac9-51361cd0979c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:22 compute-0 nova_compute[259850]: 2025-10-11 04:07:22.928 2 DEBUG oslo_concurrency.lockutils [req-635eb249-8146-4666-ae84-c94bbdcb9938 req-7265c303-637b-4933-aac9-51361cd0979c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:22 compute-0 nova_compute[259850]: 2025-10-11 04:07:22.929 2 DEBUG nova.compute.manager [req-635eb249-8146-4666-ae84-c94bbdcb9938 req-7265c303-637b-4933-aac9-51361cd0979c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] No waiting events found dispatching network-vif-plugged-4bf043b6-53f8-43fd-8fb7-67863dfbfe87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:07:22 compute-0 nova_compute[259850]: 2025-10-11 04:07:22.929 2 WARNING nova.compute.manager [req-635eb249-8146-4666-ae84-c94bbdcb9938 req-7265c303-637b-4933-aac9-51361cd0979c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Received unexpected event network-vif-plugged-4bf043b6-53f8-43fd-8fb7-67863dfbfe87 for instance with vm_state deleted and task_state None.
Oct 11 04:07:22 compute-0 nova_compute[259850]: 2025-10-11 04:07:22.929 2 DEBUG nova.compute.manager [req-635eb249-8146-4666-ae84-c94bbdcb9938 req-7265c303-637b-4933-aac9-51361cd0979c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Received event network-vif-deleted-4bf043b6-53f8-43fd-8fb7-67863dfbfe87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:07:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:22.956 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:22.957 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:22.957 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:07:23 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3749986056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:07:23 compute-0 nova_compute[259850]: 2025-10-11 04:07:23.022 2 DEBUG oslo_concurrency.processutils [None req-3aef24c9-0a12-49f9-807f-ff227d4de7f4 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:07:23 compute-0 nova_compute[259850]: 2025-10-11 04:07:23.027 2 DEBUG nova.compute.provider_tree [None req-3aef24c9-0a12-49f9-807f-ff227d4de7f4 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:07:23 compute-0 nova_compute[259850]: 2025-10-11 04:07:23.042 2 DEBUG nova.scheduler.client.report [None req-3aef24c9-0a12-49f9-807f-ff227d4de7f4 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:07:23 compute-0 nova_compute[259850]: 2025-10-11 04:07:23.063 2 DEBUG oslo_concurrency.lockutils [None req-3aef24c9-0a12-49f9-807f-ff227d4de7f4 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:23 compute-0 nova_compute[259850]: 2025-10-11 04:07:23.100 2 INFO nova.scheduler.client.report [None req-3aef24c9-0a12-49f9-807f-ff227d4de7f4 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Deleted allocations for instance 26cb0d26-41fd-4cac-a0b5-1c630a0feba1
Oct 11 04:07:23 compute-0 nova_compute[259850]: 2025-10-11 04:07:23.183 2 DEBUG oslo_concurrency.lockutils [None req-3aef24c9-0a12-49f9-807f-ff227d4de7f4 77635b26e3624f318335b7dd5d5cf9c4 41596b84442c439b86ce2c239af0242c - - default default] Lock "26cb0d26-41fd-4cac-a0b5-1c630a0feba1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 134 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 50 KiB/s wr, 252 op/s
Oct 11 04:07:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Oct 11 04:07:23 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3749986056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:07:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Oct 11 04:07:23 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Oct 11 04:07:23 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:23.610 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8a473e03-2208-47ae-afcd-05ad744a5969, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:07:24 compute-0 ceph-mon[74273]: pgmap v1121: 305 pgs: 305 active+clean; 134 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 50 KiB/s wr, 252 op/s
Oct 11 04:07:24 compute-0 ceph-mon[74273]: osdmap e200: 3 total, 3 up, 3 in
Oct 11 04:07:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:07:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Oct 11 04:07:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Oct 11 04:07:24 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Oct 11 04:07:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:07:25 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3173233615' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:07:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:07:25 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3173233615' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:07:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 134 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 7.5 KiB/s wr, 213 op/s
Oct 11 04:07:25 compute-0 ceph-mon[74273]: osdmap e201: 3 total, 3 up, 3 in
Oct 11 04:07:25 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3173233615' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:07:25 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3173233615' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:07:25 compute-0 nova_compute[259850]: 2025-10-11 04:07:25.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Oct 11 04:07:26 compute-0 ceph-mon[74273]: pgmap v1124: 305 pgs: 305 active+clean; 134 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 7.5 KiB/s wr, 213 op/s
Oct 11 04:07:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Oct 11 04:07:26 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Oct 11 04:07:27 compute-0 nova_compute[259850]: 2025-10-11 04:07:27.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:27 compute-0 ceph-osd[89722]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:07:27 compute-0 ceph-osd[89722]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 44K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 3104 syncs, 3.50 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5192 writes, 21K keys, 5192 commit groups, 1.0 writes per commit group, ingest: 12.46 MB, 0.02 MB/s
                                           Interval WAL: 5192 writes, 2217 syncs, 2.34 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 04:07:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:07:27 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3011872999' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:07:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:07:27 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3011872999' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:07:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 134 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 8.0 KiB/s wr, 227 op/s
Oct 11 04:07:27 compute-0 ceph-mon[74273]: osdmap e202: 3 total, 3 up, 3 in
Oct 11 04:07:27 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3011872999' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:07:27 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3011872999' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:07:27 compute-0 ceph-mgr[74563]: [devicehealth INFO root] Check health
Oct 11 04:07:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:07:28 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/668816044' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:07:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:07:28 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/668816044' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:07:28 compute-0 ceph-mon[74273]: pgmap v1126: 305 pgs: 305 active+clean; 134 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 8.0 KiB/s wr, 227 op/s
Oct 11 04:07:28 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/668816044' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:07:28 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/668816044' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:07:29 compute-0 ovn_controller[152025]: 2025-10-11T04:07:29Z|00084|binding|INFO|Releasing lport 0e0216bc-6b9d-4e75-bae2-b1d26e9e502e from this chassis (sb_readonly=0)
Oct 11 04:07:29 compute-0 nova_compute[259850]: 2025-10-11 04:07:29.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 137 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 131 KiB/s rd, 327 KiB/s wr, 152 op/s
Oct 11 04:07:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:07:29 compute-0 ovn_controller[152025]: 2025-10-11T04:07:29Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:22:3d:d4 10.100.0.11
Oct 11 04:07:29 compute-0 ovn_controller[152025]: 2025-10-11T04:07:29Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:22:3d:d4 10.100.0.11
Oct 11 04:07:30 compute-0 podman[276575]: 2025-10-11 04:07:30.373975463 +0000 UTC m=+0.071368324 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, container_name=iscsid, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 04:07:30 compute-0 podman[276574]: 2025-10-11 04:07:30.37493558 +0000 UTC m=+0.071152838 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 11 04:07:30 compute-0 ceph-mon[74273]: pgmap v1127: 305 pgs: 305 active+clean; 137 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 131 KiB/s rd, 327 KiB/s wr, 152 op/s
Oct 11 04:07:30 compute-0 nova_compute[259850]: 2025-10-11 04:07:30.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:07:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:07:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:07:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:07:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00037950934427297 of space, bias 1.0, pg target 0.113852803281891 quantized to 32 (current 32)
Oct 11 04:07:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:07:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003469509018688546 of space, bias 1.0, pg target 0.10408527056065639 quantized to 32 (current 32)
Oct 11 04:07:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:07:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:07:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:07:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 11 04:07:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:07:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:07:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:07:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:07:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:07:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:07:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:07:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:07:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:07:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:07:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:07:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:07:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 137 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 246 KiB/s wr, 114 op/s
Oct 11 04:07:32 compute-0 ovn_controller[152025]: 2025-10-11T04:07:32Z|00085|binding|INFO|Releasing lport 0e0216bc-6b9d-4e75-bae2-b1d26e9e502e from this chassis (sb_readonly=0)
Oct 11 04:07:32 compute-0 nova_compute[259850]: 2025-10-11 04:07:32.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:32 compute-0 nova_compute[259850]: 2025-10-11 04:07:32.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:32 compute-0 ceph-mon[74273]: pgmap v1128: 305 pgs: 305 active+clean; 137 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 246 KiB/s wr, 114 op/s
Oct 11 04:07:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 167 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 538 KiB/s rd, 2.9 MiB/s wr, 205 op/s
Oct 11 04:07:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:07:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Oct 11 04:07:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Oct 11 04:07:34 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Oct 11 04:07:34 compute-0 ceph-mon[74273]: pgmap v1129: 305 pgs: 305 active+clean; 167 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 538 KiB/s rd, 2.9 MiB/s wr, 205 op/s
Oct 11 04:07:34 compute-0 ceph-mon[74273]: osdmap e203: 3 total, 3 up, 3 in
Oct 11 04:07:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 167 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 540 KiB/s rd, 2.9 MiB/s wr, 206 op/s
Oct 11 04:07:35 compute-0 nova_compute[259850]: 2025-10-11 04:07:35.882 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760155640.8774316, 26cb0d26-41fd-4cac-a0b5-1c630a0feba1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:07:35 compute-0 nova_compute[259850]: 2025-10-11 04:07:35.883 2 INFO nova.compute.manager [-] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] VM Stopped (Lifecycle Event)
Oct 11 04:07:35 compute-0 nova_compute[259850]: 2025-10-11 04:07:35.919 2 DEBUG nova.compute.manager [None req-6b202884-595a-4028-8a7b-e16ba539eeb4 - - - - - -] [instance: 26cb0d26-41fd-4cac-a0b5-1c630a0feba1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:07:35 compute-0 nova_compute[259850]: 2025-10-11 04:07:35.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:36 compute-0 nova_compute[259850]: 2025-10-11 04:07:36.455 2 DEBUG oslo_concurrency.lockutils [None req-da546eda-6762-48a9-8925-f6dac873a32c c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Acquiring lock "388b5700-0501-4cb9-99cd-6d259e00afa4" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:36 compute-0 nova_compute[259850]: 2025-10-11 04:07:36.456 2 DEBUG oslo_concurrency.lockutils [None req-da546eda-6762-48a9-8925-f6dac873a32c c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:36 compute-0 nova_compute[259850]: 2025-10-11 04:07:36.479 2 DEBUG nova.objects.instance [None req-da546eda-6762-48a9-8925-f6dac873a32c c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lazy-loading 'flavor' on Instance uuid 388b5700-0501-4cb9-99cd-6d259e00afa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:07:36 compute-0 nova_compute[259850]: 2025-10-11 04:07:36.503 2 INFO nova.virt.libvirt.driver [None req-da546eda-6762-48a9-8925-f6dac873a32c c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Ignoring supplied device name: /dev/vdb
Oct 11 04:07:36 compute-0 nova_compute[259850]: 2025-10-11 04:07:36.518 2 DEBUG oslo_concurrency.lockutils [None req-da546eda-6762-48a9-8925-f6dac873a32c c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:36 compute-0 ceph-mon[74273]: pgmap v1131: 305 pgs: 305 active+clean; 167 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 540 KiB/s rd, 2.9 MiB/s wr, 206 op/s
Oct 11 04:07:36 compute-0 nova_compute[259850]: 2025-10-11 04:07:36.949 2 DEBUG oslo_concurrency.lockutils [None req-da546eda-6762-48a9-8925-f6dac873a32c c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Acquiring lock "388b5700-0501-4cb9-99cd-6d259e00afa4" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:36 compute-0 nova_compute[259850]: 2025-10-11 04:07:36.950 2 DEBUG oslo_concurrency.lockutils [None req-da546eda-6762-48a9-8925-f6dac873a32c c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:36 compute-0 nova_compute[259850]: 2025-10-11 04:07:36.950 2 INFO nova.compute.manager [None req-da546eda-6762-48a9-8925-f6dac873a32c c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Attaching volume cd4b7ebf-cc74-45bd-bc65-4350159aa8a0 to /dev/vdb
Oct 11 04:07:37 compute-0 nova_compute[259850]: 2025-10-11 04:07:37.128 2 DEBUG os_brick.utils [None req-da546eda-6762-48a9-8925-f6dac873a32c c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:07:37 compute-0 nova_compute[259850]: 2025-10-11 04:07:37.130 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:07:37 compute-0 nova_compute[259850]: 2025-10-11 04:07:37.143 675 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:07:37 compute-0 nova_compute[259850]: 2025-10-11 04:07:37.143 675 DEBUG oslo.privsep.daemon [-] privsep: reply[fc831a3f-c22b-4447-811b-53eb20938564]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:37 compute-0 nova_compute[259850]: 2025-10-11 04:07:37.145 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:07:37 compute-0 nova_compute[259850]: 2025-10-11 04:07:37.155 675 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:07:37 compute-0 nova_compute[259850]: 2025-10-11 04:07:37.155 675 DEBUG oslo.privsep.daemon [-] privsep: reply[f0197868-b08b-4f68-9e14-21a9b25ae047]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e727c2bd432c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:37 compute-0 nova_compute[259850]: 2025-10-11 04:07:37.157 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:07:37 compute-0 nova_compute[259850]: 2025-10-11 04:07:37.173 675 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:07:37 compute-0 nova_compute[259850]: 2025-10-11 04:07:37.174 675 DEBUG oslo.privsep.daemon [-] privsep: reply[ab6461fb-d40a-4ca1-9605-738037cc5573]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:37 compute-0 nova_compute[259850]: 2025-10-11 04:07:37.177 675 DEBUG oslo.privsep.daemon [-] privsep: reply[111a97a2-737b-415b-b9ce-d9b073b20b8f]: (4, 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:37 compute-0 nova_compute[259850]: 2025-10-11 04:07:37.178 2 DEBUG oslo_concurrency.processutils [None req-da546eda-6762-48a9-8925-f6dac873a32c c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:07:37 compute-0 nova_compute[259850]: 2025-10-11 04:07:37.204 2 DEBUG oslo_concurrency.processutils [None req-da546eda-6762-48a9-8925-f6dac873a32c c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:07:37 compute-0 nova_compute[259850]: 2025-10-11 04:07:37.208 2 DEBUG os_brick.initiator.connectors.lightos [None req-da546eda-6762-48a9-8925-f6dac873a32c c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:07:37 compute-0 nova_compute[259850]: 2025-10-11 04:07:37.209 2 DEBUG os_brick.initiator.connectors.lightos [None req-da546eda-6762-48a9-8925-f6dac873a32c c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:07:37 compute-0 nova_compute[259850]: 2025-10-11 04:07:37.209 2 DEBUG os_brick.initiator.connectors.lightos [None req-da546eda-6762-48a9-8925-f6dac873a32c c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:07:37 compute-0 nova_compute[259850]: 2025-10-11 04:07:37.210 2 DEBUG os_brick.utils [None req-da546eda-6762-48a9-8925-f6dac873a32c c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] <== get_connector_properties: return (80ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e727c2bd432c', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:07:37 compute-0 nova_compute[259850]: 2025-10-11 04:07:37.211 2 DEBUG nova.virt.block_device [None req-da546eda-6762-48a9-8925-f6dac873a32c c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Updating existing volume attachment record: ee26f169-9217-484a-9615-accb64c4d12a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:07:37 compute-0 nova_compute[259850]: 2025-10-11 04:07:37.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 167 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 470 KiB/s rd, 2.6 MiB/s wr, 179 op/s
Oct 11 04:07:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:07:38 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3418933428' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:07:38 compute-0 nova_compute[259850]: 2025-10-11 04:07:38.330 2 DEBUG nova.objects.instance [None req-da546eda-6762-48a9-8925-f6dac873a32c c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lazy-loading 'flavor' on Instance uuid 388b5700-0501-4cb9-99cd-6d259e00afa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:07:38 compute-0 nova_compute[259850]: 2025-10-11 04:07:38.362 2 DEBUG nova.virt.libvirt.driver [None req-da546eda-6762-48a9-8925-f6dac873a32c c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Attempting to attach volume cd4b7ebf-cc74-45bd-bc65-4350159aa8a0 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 11 04:07:38 compute-0 nova_compute[259850]: 2025-10-11 04:07:38.368 2 DEBUG nova.virt.libvirt.guest [None req-da546eda-6762-48a9-8925-f6dac873a32c c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] attach device xml: <disk type="network" device="disk">
Oct 11 04:07:38 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:07:38 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-cd4b7ebf-cc74-45bd-bc65-4350159aa8a0">
Oct 11 04:07:38 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:07:38 compute-0 nova_compute[259850]:   </source>
Oct 11 04:07:38 compute-0 nova_compute[259850]:   <auth username="openstack">
Oct 11 04:07:38 compute-0 nova_compute[259850]:     <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:07:38 compute-0 nova_compute[259850]:   </auth>
Oct 11 04:07:38 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:07:38 compute-0 nova_compute[259850]:   <serial>cd4b7ebf-cc74-45bd-bc65-4350159aa8a0</serial>
Oct 11 04:07:38 compute-0 nova_compute[259850]: </disk>
Oct 11 04:07:38 compute-0 nova_compute[259850]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 11 04:07:38 compute-0 nova_compute[259850]: 2025-10-11 04:07:38.499 2 DEBUG nova.virt.libvirt.driver [None req-da546eda-6762-48a9-8925-f6dac873a32c c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:07:38 compute-0 nova_compute[259850]: 2025-10-11 04:07:38.500 2 DEBUG nova.virt.libvirt.driver [None req-da546eda-6762-48a9-8925-f6dac873a32c c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:07:38 compute-0 nova_compute[259850]: 2025-10-11 04:07:38.501 2 DEBUG nova.virt.libvirt.driver [None req-da546eda-6762-48a9-8925-f6dac873a32c c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:07:38 compute-0 nova_compute[259850]: 2025-10-11 04:07:38.501 2 DEBUG nova.virt.libvirt.driver [None req-da546eda-6762-48a9-8925-f6dac873a32c c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] No VIF found with MAC fa:16:3e:22:3d:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:07:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:07:38 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/44690651' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:07:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:07:38 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/44690651' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:07:38 compute-0 ceph-mon[74273]: pgmap v1132: 305 pgs: 305 active+clean; 167 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 470 KiB/s rd, 2.6 MiB/s wr, 179 op/s
Oct 11 04:07:38 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3418933428' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:07:38 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/44690651' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:07:38 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/44690651' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:07:38 compute-0 nova_compute[259850]: 2025-10-11 04:07:38.901 2 DEBUG oslo_concurrency.lockutils [None req-da546eda-6762-48a9-8925-f6dac873a32c c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 167 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 393 KiB/s rd, 2.4 MiB/s wr, 89 op/s
Oct 11 04:07:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:07:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Oct 11 04:07:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Oct 11 04:07:39 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Oct 11 04:07:40 compute-0 ceph-mon[74273]: pgmap v1133: 305 pgs: 305 active+clean; 167 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 393 KiB/s rd, 2.4 MiB/s wr, 89 op/s
Oct 11 04:07:40 compute-0 ceph-mon[74273]: osdmap e204: 3 total, 3 up, 3 in
Oct 11 04:07:40 compute-0 nova_compute[259850]: 2025-10-11 04:07:40.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 167 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 19 KiB/s wr, 2 op/s
Oct 11 04:07:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Oct 11 04:07:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Oct 11 04:07:41 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Oct 11 04:07:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:07:41 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1981391619' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:07:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:07:41 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1981391619' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:07:42 compute-0 nova_compute[259850]: 2025-10-11 04:07:42.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:42 compute-0 podman[276642]: 2025-10-11 04:07:42.461099884 +0000 UTC m=+0.154730774 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251009, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:07:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Oct 11 04:07:42 compute-0 ceph-mon[74273]: pgmap v1135: 305 pgs: 305 active+clean; 167 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 19 KiB/s wr, 2 op/s
Oct 11 04:07:42 compute-0 ceph-mon[74273]: osdmap e205: 3 total, 3 up, 3 in
Oct 11 04:07:42 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1981391619' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:07:42 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1981391619' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:07:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Oct 11 04:07:42 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Oct 11 04:07:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 167 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 33 KiB/s wr, 106 op/s
Oct 11 04:07:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Oct 11 04:07:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Oct 11 04:07:43 compute-0 ceph-mon[74273]: osdmap e206: 3 total, 3 up, 3 in
Oct 11 04:07:43 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Oct 11 04:07:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:07:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Oct 11 04:07:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Oct 11 04:07:44 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Oct 11 04:07:44 compute-0 ceph-mon[74273]: pgmap v1138: 305 pgs: 305 active+clean; 167 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 33 KiB/s wr, 106 op/s
Oct 11 04:07:44 compute-0 ceph-mon[74273]: osdmap e207: 3 total, 3 up, 3 in
Oct 11 04:07:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 167 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 11 KiB/s wr, 155 op/s
Oct 11 04:07:45 compute-0 nova_compute[259850]: 2025-10-11 04:07:45.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Oct 11 04:07:45 compute-0 ceph-mon[74273]: osdmap e208: 3 total, 3 up, 3 in
Oct 11 04:07:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Oct 11 04:07:45 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Oct 11 04:07:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Oct 11 04:07:46 compute-0 ceph-mon[74273]: pgmap v1141: 305 pgs: 305 active+clean; 167 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 11 KiB/s wr, 155 op/s
Oct 11 04:07:46 compute-0 ceph-mon[74273]: osdmap e209: 3 total, 3 up, 3 in
Oct 11 04:07:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Oct 11 04:07:46 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Oct 11 04:07:47 compute-0 nova_compute[259850]: 2025-10-11 04:07:47.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 167 MiB data, 346 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:07:47 compute-0 ceph-mon[74273]: osdmap e210: 3 total, 3 up, 3 in
Oct 11 04:07:48 compute-0 nova_compute[259850]: 2025-10-11 04:07:48.127 2 DEBUG oslo_concurrency.lockutils [None req-b898d667-0d05-4e89-be30-1815c2d969d0 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Acquiring lock "388b5700-0501-4cb9-99cd-6d259e00afa4" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:48 compute-0 nova_compute[259850]: 2025-10-11 04:07:48.127 2 DEBUG oslo_concurrency.lockutils [None req-b898d667-0d05-4e89-be30-1815c2d969d0 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:48 compute-0 nova_compute[259850]: 2025-10-11 04:07:48.147 2 INFO nova.compute.manager [None req-b898d667-0d05-4e89-be30-1815c2d969d0 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Detaching volume cd4b7ebf-cc74-45bd-bc65-4350159aa8a0
Oct 11 04:07:48 compute-0 nova_compute[259850]: 2025-10-11 04:07:48.321 2 INFO nova.virt.block_device [None req-b898d667-0d05-4e89-be30-1815c2d969d0 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Attempting to driver detach volume cd4b7ebf-cc74-45bd-bc65-4350159aa8a0 from mountpoint /dev/vdb
Oct 11 04:07:48 compute-0 nova_compute[259850]: 2025-10-11 04:07:48.334 2 DEBUG nova.virt.libvirt.driver [None req-b898d667-0d05-4e89-be30-1815c2d969d0 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Attempting to detach device vdb from instance 388b5700-0501-4cb9-99cd-6d259e00afa4 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 11 04:07:48 compute-0 nova_compute[259850]: 2025-10-11 04:07:48.335 2 DEBUG nova.virt.libvirt.guest [None req-b898d667-0d05-4e89-be30-1815c2d969d0 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 04:07:48 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:07:48 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-cd4b7ebf-cc74-45bd-bc65-4350159aa8a0">
Oct 11 04:07:48 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:07:48 compute-0 nova_compute[259850]:   </source>
Oct 11 04:07:48 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:07:48 compute-0 nova_compute[259850]:   <serial>cd4b7ebf-cc74-45bd-bc65-4350159aa8a0</serial>
Oct 11 04:07:48 compute-0 nova_compute[259850]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 04:07:48 compute-0 nova_compute[259850]: </disk>
Oct 11 04:07:48 compute-0 nova_compute[259850]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:07:48 compute-0 nova_compute[259850]: 2025-10-11 04:07:48.337 2 DEBUG oslo_concurrency.lockutils [None req-77734dc7-a664-43df-98f2-48f05a04a645 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Acquiring lock "388b5700-0501-4cb9-99cd-6d259e00afa4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:48 compute-0 nova_compute[259850]: 2025-10-11 04:07:48.346 2 INFO nova.virt.libvirt.driver [None req-b898d667-0d05-4e89-be30-1815c2d969d0 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Successfully detached device vdb from instance 388b5700-0501-4cb9-99cd-6d259e00afa4 from the persistent domain config.
Oct 11 04:07:48 compute-0 nova_compute[259850]: 2025-10-11 04:07:48.347 2 DEBUG nova.virt.libvirt.driver [None req-b898d667-0d05-4e89-be30-1815c2d969d0 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 388b5700-0501-4cb9-99cd-6d259e00afa4 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 11 04:07:48 compute-0 nova_compute[259850]: 2025-10-11 04:07:48.348 2 DEBUG nova.virt.libvirt.guest [None req-b898d667-0d05-4e89-be30-1815c2d969d0 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 04:07:48 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:07:48 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-cd4b7ebf-cc74-45bd-bc65-4350159aa8a0">
Oct 11 04:07:48 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:07:48 compute-0 nova_compute[259850]:   </source>
Oct 11 04:07:48 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:07:48 compute-0 nova_compute[259850]:   <serial>cd4b7ebf-cc74-45bd-bc65-4350159aa8a0</serial>
Oct 11 04:07:48 compute-0 nova_compute[259850]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 04:07:48 compute-0 nova_compute[259850]: </disk>
Oct 11 04:07:48 compute-0 nova_compute[259850]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:07:48 compute-0 podman[276668]: 2025-10-11 04:07:48.370992368 +0000 UTC m=+0.080728997 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 11 04:07:48 compute-0 nova_compute[259850]: 2025-10-11 04:07:48.458 2 DEBUG nova.virt.libvirt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Received event <DeviceRemovedEvent: 1760155668.4579995, 388b5700-0501-4cb9-99cd-6d259e00afa4 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 11 04:07:48 compute-0 nova_compute[259850]: 2025-10-11 04:07:48.461 2 DEBUG nova.virt.libvirt.driver [None req-b898d667-0d05-4e89-be30-1815c2d969d0 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 388b5700-0501-4cb9-99cd-6d259e00afa4 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 11 04:07:48 compute-0 nova_compute[259850]: 2025-10-11 04:07:48.463 2 INFO nova.virt.libvirt.driver [None req-b898d667-0d05-4e89-be30-1815c2d969d0 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Successfully detached device vdb from instance 388b5700-0501-4cb9-99cd-6d259e00afa4 from the live domain config.
Oct 11 04:07:48 compute-0 nova_compute[259850]: 2025-10-11 04:07:48.734 2 DEBUG nova.objects.instance [None req-b898d667-0d05-4e89-be30-1815c2d969d0 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lazy-loading 'flavor' on Instance uuid 388b5700-0501-4cb9-99cd-6d259e00afa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:07:48 compute-0 nova_compute[259850]: 2025-10-11 04:07:48.801 2 DEBUG oslo_concurrency.lockutils [None req-b898d667-0d05-4e89-be30-1815c2d969d0 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:48 compute-0 nova_compute[259850]: 2025-10-11 04:07:48.803 2 DEBUG oslo_concurrency.lockutils [None req-77734dc7-a664-43df-98f2-48f05a04a645 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:48 compute-0 nova_compute[259850]: 2025-10-11 04:07:48.803 2 DEBUG oslo_concurrency.lockutils [None req-77734dc7-a664-43df-98f2-48f05a04a645 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Acquiring lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:48 compute-0 nova_compute[259850]: 2025-10-11 04:07:48.804 2 DEBUG oslo_concurrency.lockutils [None req-77734dc7-a664-43df-98f2-48f05a04a645 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:48 compute-0 nova_compute[259850]: 2025-10-11 04:07:48.804 2 DEBUG oslo_concurrency.lockutils [None req-77734dc7-a664-43df-98f2-48f05a04a645 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:48 compute-0 nova_compute[259850]: 2025-10-11 04:07:48.806 2 INFO nova.compute.manager [None req-77734dc7-a664-43df-98f2-48f05a04a645 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Terminating instance
Oct 11 04:07:48 compute-0 nova_compute[259850]: 2025-10-11 04:07:48.808 2 DEBUG nova.compute.manager [None req-77734dc7-a664-43df-98f2-48f05a04a645 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:07:48 compute-0 kernel: tapc1e60e1e-90 (unregistering): left promiscuous mode
Oct 11 04:07:48 compute-0 NetworkManager[44920]: <info>  [1760155668.8730] device (tapc1e60e1e-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:07:48 compute-0 nova_compute[259850]: 2025-10-11 04:07:48.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:48 compute-0 ovn_controller[152025]: 2025-10-11T04:07:48Z|00086|binding|INFO|Releasing lport c1e60e1e-9066-4ce9-9064-2a732e2a407d from this chassis (sb_readonly=0)
Oct 11 04:07:48 compute-0 ovn_controller[152025]: 2025-10-11T04:07:48Z|00087|binding|INFO|Setting lport c1e60e1e-9066-4ce9-9064-2a732e2a407d down in Southbound
Oct 11 04:07:48 compute-0 ovn_controller[152025]: 2025-10-11T04:07:48Z|00088|binding|INFO|Removing iface tapc1e60e1e-90 ovn-installed in OVS
Oct 11 04:07:48 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:48.888 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:3d:d4 10.100.0.11'], port_security=['fa:16:3e:22:3d:d4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '388b5700-0501-4cb9-99cd-6d259e00afa4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bfa0cc72-c909-48db-80bb-536eb7b52f6e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2783729ed466412aac8ceb01d86a0b12', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bdf40caf-e662-46c0-a51e-c7e0a77b4c10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.221'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55b0cbfb-9e3c-469a-b06d-75c45688b585, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=c1e60e1e-9066-4ce9-9064-2a732e2a407d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:07:48 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:48.891 161902 INFO neutron.agent.ovn.metadata.agent [-] Port c1e60e1e-9066-4ce9-9064-2a732e2a407d in datapath bfa0cc72-c909-48db-80bb-536eb7b52f6e unbound from our chassis
Oct 11 04:07:48 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:48.893 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bfa0cc72-c909-48db-80bb-536eb7b52f6e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:07:48 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:48.894 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[597b2816-8499-415c-99f7-328bafc57a48]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:48 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:48.895 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e namespace which is not needed anymore
Oct 11 04:07:48 compute-0 nova_compute[259850]: 2025-10-11 04:07:48.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:48 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Oct 11 04:07:48 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 13.725s CPU time.
Oct 11 04:07:48 compute-0 systemd-machined[214869]: Machine qemu-8-instance-00000008 terminated.
Oct 11 04:07:48 compute-0 ceph-mon[74273]: pgmap v1144: 305 pgs: 305 active+clean; 167 MiB data, 346 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:07:49 compute-0 kernel: tapc1e60e1e-90: entered promiscuous mode
Oct 11 04:07:49 compute-0 NetworkManager[44920]: <info>  [1760155669.0345] manager: (tapc1e60e1e-90): new Tun device (/org/freedesktop/NetworkManager/Devices/57)
Oct 11 04:07:49 compute-0 ovn_controller[152025]: 2025-10-11T04:07:49Z|00089|binding|INFO|Claiming lport c1e60e1e-9066-4ce9-9064-2a732e2a407d for this chassis.
Oct 11 04:07:49 compute-0 ovn_controller[152025]: 2025-10-11T04:07:49Z|00090|binding|INFO|c1e60e1e-9066-4ce9-9064-2a732e2a407d: Claiming fa:16:3e:22:3d:d4 10.100.0.11
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:49 compute-0 systemd-udevd[276691]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:07:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:49.048 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:3d:d4 10.100.0.11'], port_security=['fa:16:3e:22:3d:d4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '388b5700-0501-4cb9-99cd-6d259e00afa4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bfa0cc72-c909-48db-80bb-536eb7b52f6e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2783729ed466412aac8ceb01d86a0b12', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bdf40caf-e662-46c0-a51e-c7e0a77b4c10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.221'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55b0cbfb-9e3c-469a-b06d-75c45688b585, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=c1e60e1e-9066-4ce9-9064-2a732e2a407d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:07:49 compute-0 kernel: tapc1e60e1e-90 (unregistering): left promiscuous mode
Oct 11 04:07:49 compute-0 virtnodedevd[260173]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 11 04:07:49 compute-0 virtnodedevd[260173]: hostname: compute-0
Oct 11 04:07:49 compute-0 virtnodedevd[260173]: ethtool ioctl error on tapc1e60e1e-90: No such device
Oct 11 04:07:49 compute-0 virtnodedevd[260173]: ethtool ioctl error on tapc1e60e1e-90: No such device
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:49 compute-0 ovn_controller[152025]: 2025-10-11T04:07:49Z|00091|binding|INFO|Setting lport c1e60e1e-9066-4ce9-9064-2a732e2a407d ovn-installed in OVS
Oct 11 04:07:49 compute-0 ovn_controller[152025]: 2025-10-11T04:07:49Z|00092|binding|INFO|Setting lport c1e60e1e-9066-4ce9-9064-2a732e2a407d up in Southbound
Oct 11 04:07:49 compute-0 virtnodedevd[260173]: ethtool ioctl error on tapc1e60e1e-90: No such device
Oct 11 04:07:49 compute-0 ovn_controller[152025]: 2025-10-11T04:07:49Z|00093|binding|INFO|Releasing lport c1e60e1e-9066-4ce9-9064-2a732e2a407d from this chassis (sb_readonly=0)
Oct 11 04:07:49 compute-0 ovn_controller[152025]: 2025-10-11T04:07:49Z|00094|binding|INFO|Setting lport c1e60e1e-9066-4ce9-9064-2a732e2a407d down in Southbound
Oct 11 04:07:49 compute-0 ovn_controller[152025]: 2025-10-11T04:07:49Z|00095|binding|INFO|Removing iface tapc1e60e1e-90 ovn-installed in OVS
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:49 compute-0 virtnodedevd[260173]: ethtool ioctl error on tapc1e60e1e-90: No such device
Oct 11 04:07:49 compute-0 virtnodedevd[260173]: ethtool ioctl error on tapc1e60e1e-90: No such device
Oct 11 04:07:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:49.094 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:3d:d4 10.100.0.11'], port_security=['fa:16:3e:22:3d:d4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '388b5700-0501-4cb9-99cd-6d259e00afa4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bfa0cc72-c909-48db-80bb-536eb7b52f6e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2783729ed466412aac8ceb01d86a0b12', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bdf40caf-e662-46c0-a51e-c7e0a77b4c10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.221'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55b0cbfb-9e3c-469a-b06d-75c45688b585, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=c1e60e1e-9066-4ce9-9064-2a732e2a407d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.094 2 INFO nova.virt.libvirt.driver [-] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Instance destroyed successfully.
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.095 2 DEBUG nova.objects.instance [None req-77734dc7-a664-43df-98f2-48f05a04a645 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lazy-loading 'resources' on Instance uuid 388b5700-0501-4cb9-99cd-6d259e00afa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.100 2 DEBUG nova.compute.manager [req-080ee5f8-ff46-49cf-ae50-7facead27482 req-833d0040-4f72-45a0-96c6-1466f57a334a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Received event network-vif-unplugged-c1e60e1e-9066-4ce9-9064-2a732e2a407d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.101 2 DEBUG oslo_concurrency.lockutils [req-080ee5f8-ff46-49cf-ae50-7facead27482 req-833d0040-4f72-45a0-96c6-1466f57a334a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.101 2 DEBUG oslo_concurrency.lockutils [req-080ee5f8-ff46-49cf-ae50-7facead27482 req-833d0040-4f72-45a0-96c6-1466f57a334a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.101 2 DEBUG oslo_concurrency.lockutils [req-080ee5f8-ff46-49cf-ae50-7facead27482 req-833d0040-4f72-45a0-96c6-1466f57a334a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.101 2 DEBUG nova.compute.manager [req-080ee5f8-ff46-49cf-ae50-7facead27482 req-833d0040-4f72-45a0-96c6-1466f57a334a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] No waiting events found dispatching network-vif-unplugged-c1e60e1e-9066-4ce9-9064-2a732e2a407d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.101 2 DEBUG nova.compute.manager [req-080ee5f8-ff46-49cf-ae50-7facead27482 req-833d0040-4f72-45a0-96c6-1466f57a334a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Received event network-vif-unplugged-c1e60e1e-9066-4ce9-9064-2a732e2a407d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 04:07:49 compute-0 virtnodedevd[260173]: ethtool ioctl error on tapc1e60e1e-90: No such device
Oct 11 04:07:49 compute-0 neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e[275955]: [NOTICE]   (275981) : haproxy version is 2.8.14-c23fe91
Oct 11 04:07:49 compute-0 neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e[275955]: [NOTICE]   (275981) : path to executable is /usr/sbin/haproxy
Oct 11 04:07:49 compute-0 neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e[275955]: [WARNING]  (275981) : Exiting Master process...
Oct 11 04:07:49 compute-0 neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e[275955]: [ALERT]    (275981) : Current worker (275989) exited with code 143 (Terminated)
Oct 11 04:07:49 compute-0 neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e[275955]: [WARNING]  (275981) : All workers exited. Exiting... (0)
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:49 compute-0 systemd[1]: libpod-aa2834ec2b2907c46876b9a994a1535364f9d2d5b15352f714b838966666e514.scope: Deactivated successfully.
Oct 11 04:07:49 compute-0 virtnodedevd[260173]: ethtool ioctl error on tapc1e60e1e-90: No such device
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.112 2 DEBUG nova.virt.libvirt.vif [None req-77734dc7-a664-43df-98f2-48f05a04a645 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:07:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1516925865',display_name='tempest-VolumesSnapshotTestJSON-instance-1516925865',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1516925865',id=8,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCqCtLkaBuP2+T82JevRpLfW+XDuidnc8c74aRC6BKydU2gPEclXEWf/mgVaUQf4ae+qFmwaHq0kdMt+x79T/LDdPi0iOEprVv7WxGP4WYENsjiYxUPMO1UNuH+JM4CShA==',key_name='tempest-keypair-1835134019',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:07:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2783729ed466412aac8ceb01d86a0b12',ramdisk_id='',reservation_id='r-2dmaj0zh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesSnapshotTestJSON-180407200',owner_user_name='tempest-VolumesSnapshotTestJSON-180407200-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:07:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5660041067943deb3c73caa6e62f851',uuid=388b5700-0501-4cb9-99cd-6d259e00afa4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c1e60e1e-9066-4ce9-9064-2a732e2a407d", "address": "fa:16:3e:22:3d:d4", "network": {"id": "bfa0cc72-c909-48db-80bb-536eb7b52f6e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1615284681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2783729ed466412aac8ceb01d86a0b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1e60e1e-90", "ovs_interfaceid": "c1e60e1e-9066-4ce9-9064-2a732e2a407d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.112 2 DEBUG nova.network.os_vif_util [None req-77734dc7-a664-43df-98f2-48f05a04a645 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Converting VIF {"id": "c1e60e1e-9066-4ce9-9064-2a732e2a407d", "address": "fa:16:3e:22:3d:d4", "network": {"id": "bfa0cc72-c909-48db-80bb-536eb7b52f6e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1615284681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2783729ed466412aac8ceb01d86a0b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1e60e1e-90", "ovs_interfaceid": "c1e60e1e-9066-4ce9-9064-2a732e2a407d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.113 2 DEBUG nova.network.os_vif_util [None req-77734dc7-a664-43df-98f2-48f05a04a645 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:22:3d:d4,bridge_name='br-int',has_traffic_filtering=True,id=c1e60e1e-9066-4ce9-9064-2a732e2a407d,network=Network(bfa0cc72-c909-48db-80bb-536eb7b52f6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1e60e1e-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.114 2 DEBUG os_vif [None req-77734dc7-a664-43df-98f2-48f05a04a645 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:22:3d:d4,bridge_name='br-int',has_traffic_filtering=True,id=c1e60e1e-9066-4ce9-9064-2a732e2a407d,network=Network(bfa0cc72-c909-48db-80bb-536eb7b52f6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1e60e1e-90') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.115 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1e60e1e-90, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:49 compute-0 podman[276712]: 2025-10-11 04:07:49.117686538 +0000 UTC m=+0.079835282 container died aa2834ec2b2907c46876b9a994a1535364f9d2d5b15352f714b838966666e514 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:07:49 compute-0 virtnodedevd[260173]: ethtool ioctl error on tapc1e60e1e-90: No such device
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.120 2 INFO os_vif [None req-77734dc7-a664-43df-98f2-48f05a04a645 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:22:3d:d4,bridge_name='br-int',has_traffic_filtering=True,id=c1e60e1e-9066-4ce9-9064-2a732e2a407d,network=Network(bfa0cc72-c909-48db-80bb-536eb7b52f6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1e60e1e-90')
Oct 11 04:07:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aa2834ec2b2907c46876b9a994a1535364f9d2d5b15352f714b838966666e514-userdata-shm.mount: Deactivated successfully.
Oct 11 04:07:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-bec204985a50fe418888bb5afd3a33c0d3dd65aa8cd08290d8a0462b3e5f00af-merged.mount: Deactivated successfully.
Oct 11 04:07:49 compute-0 podman[276712]: 2025-10-11 04:07:49.154981545 +0000 UTC m=+0.117130279 container cleanup aa2834ec2b2907c46876b9a994a1535364f9d2d5b15352f714b838966666e514 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 04:07:49 compute-0 systemd[1]: libpod-conmon-aa2834ec2b2907c46876b9a994a1535364f9d2d5b15352f714b838966666e514.scope: Deactivated successfully.
Oct 11 04:07:49 compute-0 podman[276781]: 2025-10-11 04:07:49.235953388 +0000 UTC m=+0.055488189 container remove aa2834ec2b2907c46876b9a994a1535364f9d2d5b15352f714b838966666e514 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 04:07:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:49.242 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[efa55893-f40c-4637-8a88-7d459f06357c]: (4, ('Sat Oct 11 04:07:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e (aa2834ec2b2907c46876b9a994a1535364f9d2d5b15352f714b838966666e514)\naa2834ec2b2907c46876b9a994a1535364f9d2d5b15352f714b838966666e514\nSat Oct 11 04:07:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e (aa2834ec2b2907c46876b9a994a1535364f9d2d5b15352f714b838966666e514)\naa2834ec2b2907c46876b9a994a1535364f9d2d5b15352f714b838966666e514\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:49.245 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[84fb0814-0339-4338-bf69-1662f1adbd24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:49.246 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbfa0cc72-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:49 compute-0 kernel: tapbfa0cc72-c0: left promiscuous mode
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:49.268 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[c0e5c452-3a7d-42ba-9b6a-84b51ffd7a62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:49.292 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[b7a6a260-f6dc-4bb7-a22f-6978f55b0a20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:49.293 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[7e0341f2-db73-4e1e-a2f3-9dcda24a3fe0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:49.319 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[c54562d5-a963-4155-9570-31bb9509bc7f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402150, 'reachable_time': 18105, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276796, 'error': None, 'target': 'ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:49 compute-0 systemd[1]: run-netns-ovnmeta\x2dbfa0cc72\x2dc909\x2d48db\x2d80bb\x2d536eb7b52f6e.mount: Deactivated successfully.
Oct 11 04:07:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:49.323 162015 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bfa0cc72-c909-48db-80bb-536eb7b52f6e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 04:07:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:49.323 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[9d9c1d80-375f-41b0-b886-e0ab90af135d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:49.324 161902 INFO neutron.agent.ovn.metadata.agent [-] Port c1e60e1e-9066-4ce9-9064-2a732e2a407d in datapath bfa0cc72-c909-48db-80bb-536eb7b52f6e unbound from our chassis
Oct 11 04:07:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:49.325 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bfa0cc72-c909-48db-80bb-536eb7b52f6e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:07:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:49.326 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[3e5759f8-0a6d-405b-b876-34c4d27b6095]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:49.326 161902 INFO neutron.agent.ovn.metadata.agent [-] Port c1e60e1e-9066-4ce9-9064-2a732e2a407d in datapath bfa0cc72-c909-48db-80bb-536eb7b52f6e unbound from our chassis
Oct 11 04:07:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:49.327 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bfa0cc72-c909-48db-80bb-536eb7b52f6e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:07:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:07:49.327 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[bd910fd1-7175-4faf-86d1-ab8c821b1fc8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:07:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 167 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 5.4 KiB/s wr, 110 op/s
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.546 2 INFO nova.virt.libvirt.driver [None req-77734dc7-a664-43df-98f2-48f05a04a645 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Deleting instance files /var/lib/nova/instances/388b5700-0501-4cb9-99cd-6d259e00afa4_del
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.547 2 INFO nova.virt.libvirt.driver [None req-77734dc7-a664-43df-98f2-48f05a04a645 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Deletion of /var/lib/nova/instances/388b5700-0501-4cb9-99cd-6d259e00afa4_del complete
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.603 2 INFO nova.compute.manager [None req-77734dc7-a664-43df-98f2-48f05a04a645 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Took 0.79 seconds to destroy the instance on the hypervisor.
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.603 2 DEBUG oslo.service.loopingcall [None req-77734dc7-a664-43df-98f2-48f05a04a645 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.603 2 DEBUG nova.compute.manager [-] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:07:49 compute-0 nova_compute[259850]: 2025-10-11 04:07:49.604 2 DEBUG nova.network.neutron [-] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:07:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:07:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Oct 11 04:07:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Oct 11 04:07:49 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Oct 11 04:07:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:07:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4219195442' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:07:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:07:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4219195442' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:07:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:07:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:07:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:07:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:07:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:07:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:07:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Oct 11 04:07:50 compute-0 ceph-mon[74273]: pgmap v1145: 305 pgs: 305 active+clean; 167 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 5.4 KiB/s wr, 110 op/s
Oct 11 04:07:50 compute-0 ceph-mon[74273]: osdmap e211: 3 total, 3 up, 3 in
Oct 11 04:07:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4219195442' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:07:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4219195442' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:07:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Oct 11 04:07:51 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.141 2 DEBUG nova.network.neutron [-] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.163 2 INFO nova.compute.manager [-] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Took 1.56 seconds to deallocate network for instance.
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.220 2 DEBUG nova.compute.manager [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Received event network-vif-plugged-c1e60e1e-9066-4ce9-9064-2a732e2a407d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.220 2 DEBUG oslo_concurrency.lockutils [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.221 2 DEBUG oslo_concurrency.lockutils [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.221 2 DEBUG oslo_concurrency.lockutils [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.222 2 DEBUG nova.compute.manager [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] No waiting events found dispatching network-vif-plugged-c1e60e1e-9066-4ce9-9064-2a732e2a407d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.222 2 WARNING nova.compute.manager [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Received unexpected event network-vif-plugged-c1e60e1e-9066-4ce9-9064-2a732e2a407d for instance with vm_state active and task_state deleting.
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.223 2 DEBUG nova.compute.manager [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Received event network-vif-plugged-c1e60e1e-9066-4ce9-9064-2a732e2a407d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.223 2 DEBUG oslo_concurrency.lockutils [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.224 2 DEBUG oslo_concurrency.lockutils [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.224 2 DEBUG oslo_concurrency.lockutils [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.225 2 DEBUG nova.compute.manager [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] No waiting events found dispatching network-vif-plugged-c1e60e1e-9066-4ce9-9064-2a732e2a407d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.225 2 WARNING nova.compute.manager [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Received unexpected event network-vif-plugged-c1e60e1e-9066-4ce9-9064-2a732e2a407d for instance with vm_state active and task_state deleting.
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.225 2 DEBUG nova.compute.manager [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Received event network-vif-plugged-c1e60e1e-9066-4ce9-9064-2a732e2a407d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.226 2 DEBUG oslo_concurrency.lockutils [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.226 2 DEBUG oslo_concurrency.lockutils [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.227 2 DEBUG oslo_concurrency.lockutils [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.227 2 DEBUG nova.compute.manager [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] No waiting events found dispatching network-vif-plugged-c1e60e1e-9066-4ce9-9064-2a732e2a407d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.228 2 WARNING nova.compute.manager [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Received unexpected event network-vif-plugged-c1e60e1e-9066-4ce9-9064-2a732e2a407d for instance with vm_state active and task_state deleting.
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.228 2 DEBUG nova.compute.manager [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Received event network-vif-unplugged-c1e60e1e-9066-4ce9-9064-2a732e2a407d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.228 2 DEBUG oslo_concurrency.lockutils [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.229 2 DEBUG oslo_concurrency.lockutils [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.229 2 DEBUG oslo_concurrency.lockutils [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.230 2 DEBUG nova.compute.manager [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] No waiting events found dispatching network-vif-unplugged-c1e60e1e-9066-4ce9-9064-2a732e2a407d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.230 2 DEBUG nova.compute.manager [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Received event network-vif-unplugged-c1e60e1e-9066-4ce9-9064-2a732e2a407d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.230 2 DEBUG nova.compute.manager [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Received event network-vif-plugged-c1e60e1e-9066-4ce9-9064-2a732e2a407d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.231 2 DEBUG oslo_concurrency.lockutils [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.231 2 DEBUG oslo_concurrency.lockutils [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.232 2 DEBUG oslo_concurrency.lockutils [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.232 2 DEBUG nova.compute.manager [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] No waiting events found dispatching network-vif-plugged-c1e60e1e-9066-4ce9-9064-2a732e2a407d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.232 2 WARNING nova.compute.manager [req-651c5188-26ba-46ab-b09b-43f948ca9f8c req-3e55f14b-6ede-4a0f-ab2f-83280d17e753 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Received unexpected event network-vif-plugged-c1e60e1e-9066-4ce9-9064-2a732e2a407d for instance with vm_state active and task_state deleting.
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.296 2 WARNING nova.volume.cinder [None req-77734dc7-a664-43df-98f2-48f05a04a645 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Attachment ee26f169-9217-484a-9615-accb64c4d12a does not exist. Ignoring.: cinderclient.exceptions.NotFound: Volume attachment could not be found with filter: attachment_id = ee26f169-9217-484a-9615-accb64c4d12a. (HTTP 404) (Request-ID: req-63f5a520-0e5f-41b9-8330-678d7005e90f)
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.297 2 INFO nova.compute.manager [None req-77734dc7-a664-43df-98f2-48f05a04a645 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Took 0.13 seconds to detach 1 volumes for instance.
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.348 2 DEBUG oslo_concurrency.lockutils [None req-77734dc7-a664-43df-98f2-48f05a04a645 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.349 2 DEBUG oslo_concurrency.lockutils [None req-77734dc7-a664-43df-98f2-48f05a04a645 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.430 2 DEBUG oslo_concurrency.processutils [None req-77734dc7-a664-43df-98f2-48f05a04a645 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:07:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 167 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 5.4 KiB/s wr, 110 op/s
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:07:51 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3888720050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.902 2 DEBUG oslo_concurrency.processutils [None req-77734dc7-a664-43df-98f2-48f05a04a645 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.907 2 DEBUG nova.compute.provider_tree [None req-77734dc7-a664-43df-98f2-48f05a04a645 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.950 2 DEBUG nova.scheduler.client.report [None req-77734dc7-a664-43df-98f2-48f05a04a645 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:07:51 compute-0 nova_compute[259850]: 2025-10-11 04:07:51.973 2 DEBUG oslo_concurrency.lockutils [None req-77734dc7-a664-43df-98f2-48f05a04a645 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:52 compute-0 nova_compute[259850]: 2025-10-11 04:07:52.004 2 INFO nova.scheduler.client.report [None req-77734dc7-a664-43df-98f2-48f05a04a645 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Deleted allocations for instance 388b5700-0501-4cb9-99cd-6d259e00afa4
Oct 11 04:07:52 compute-0 ceph-mon[74273]: osdmap e212: 3 total, 3 up, 3 in
Oct 11 04:07:52 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3888720050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:07:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Oct 11 04:07:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Oct 11 04:07:52 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Oct 11 04:07:52 compute-0 nova_compute[259850]: 2025-10-11 04:07:52.065 2 DEBUG oslo_concurrency.lockutils [None req-77734dc7-a664-43df-98f2-48f05a04a645 c5660041067943deb3c73caa6e62f851 2783729ed466412aac8ceb01d86a0b12 - - default default] Lock "388b5700-0501-4cb9-99cd-6d259e00afa4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:07:52 compute-0 nova_compute[259850]: 2025-10-11 04:07:52.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:53 compute-0 ceph-mon[74273]: pgmap v1148: 305 pgs: 305 active+clean; 167 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 5.4 KiB/s wr, 110 op/s
Oct 11 04:07:53 compute-0 ceph-mon[74273]: osdmap e213: 3 total, 3 up, 3 in
Oct 11 04:07:53 compute-0 nova_compute[259850]: 2025-10-11 04:07:53.332 2 DEBUG nova.compute.manager [req-2e12d5fc-c215-4df2-920d-70bf51747cd0 req-5d026971-073f-4474-b181-19c720b62f6d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Received event network-vif-deleted-c1e60e1e-9066-4ce9-9064-2a732e2a407d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:07:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 88 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 158 KiB/s rd, 12 KiB/s wr, 220 op/s
Oct 11 04:07:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:07:53 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3336464025' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:07:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:07:53 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3336464025' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:07:54 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3336464025' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:07:54 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3336464025' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:07:54 compute-0 nova_compute[259850]: 2025-10-11 04:07:54.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:07:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Oct 11 04:07:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Oct 11 04:07:54 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Oct 11 04:07:55 compute-0 ceph-mon[74273]: pgmap v1150: 305 pgs: 305 active+clean; 88 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 158 KiB/s rd, 12 KiB/s wr, 220 op/s
Oct 11 04:07:55 compute-0 ceph-mon[74273]: osdmap e214: 3 total, 3 up, 3 in
Oct 11 04:07:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 88 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 7.9 KiB/s wr, 127 op/s
Oct 11 04:07:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Oct 11 04:07:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Oct 11 04:07:56 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Oct 11 04:07:57 compute-0 ceph-mon[74273]: pgmap v1152: 305 pgs: 305 active+clean; 88 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 7.9 KiB/s wr, 127 op/s
Oct 11 04:07:57 compute-0 ceph-mon[74273]: osdmap e215: 3 total, 3 up, 3 in
Oct 11 04:07:57 compute-0 nova_compute[259850]: 2025-10-11 04:07:57.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 88 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 7.3 KiB/s wr, 117 op/s
Oct 11 04:07:59 compute-0 ceph-mon[74273]: pgmap v1154: 305 pgs: 305 active+clean; 88 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 7.3 KiB/s wr, 117 op/s
Oct 11 04:07:59 compute-0 nova_compute[259850]: 2025-10-11 04:07:59.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:07:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 88 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 8.3 KiB/s wr, 144 op/s
Oct 11 04:07:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:07:59 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4135454000' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:07:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:07:59 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4135454000' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:07:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:08:00 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4135454000' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:08:00 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4135454000' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:08:01 compute-0 ceph-mon[74273]: pgmap v1155: 305 pgs: 305 active+clean; 88 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 8.3 KiB/s wr, 144 op/s
Oct 11 04:08:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Oct 11 04:08:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Oct 11 04:08:01 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Oct 11 04:08:01 compute-0 podman[276821]: 2025-10-11 04:08:01.364786863 +0000 UTC m=+0.068220516 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=iscsid, container_name=iscsid, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:08:01 compute-0 podman[276820]: 2025-10-11 04:08:01.380492664 +0000 UTC m=+0.078213646 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:08:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 88 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 2.7 KiB/s wr, 54 op/s
Oct 11 04:08:01 compute-0 nova_compute[259850]: 2025-10-11 04:08:01.622 2 DEBUG oslo_concurrency.lockutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Acquiring lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:01 compute-0 nova_compute[259850]: 2025-10-11 04:08:01.623 2 DEBUG oslo_concurrency.lockutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:01 compute-0 nova_compute[259850]: 2025-10-11 04:08:01.648 2 DEBUG nova.compute.manager [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:08:01 compute-0 nova_compute[259850]: 2025-10-11 04:08:01.732 2 DEBUG oslo_concurrency.lockutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:01 compute-0 nova_compute[259850]: 2025-10-11 04:08:01.733 2 DEBUG oslo_concurrency.lockutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:01 compute-0 nova_compute[259850]: 2025-10-11 04:08:01.744 2 DEBUG nova.virt.hardware [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:08:01 compute-0 nova_compute[259850]: 2025-10-11 04:08:01.744 2 INFO nova.compute.claims [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:08:01 compute-0 nova_compute[259850]: 2025-10-11 04:08:01.891 2 DEBUG oslo_concurrency.processutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:08:02 compute-0 ceph-mon[74273]: osdmap e216: 3 total, 3 up, 3 in
Oct 11 04:08:02 compute-0 ceph-mon[74273]: pgmap v1157: 305 pgs: 305 active+clean; 88 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 2.7 KiB/s wr, 54 op/s
Oct 11 04:08:02 compute-0 nova_compute[259850]: 2025-10-11 04:08:02.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:08:02 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2795480090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:08:02 compute-0 nova_compute[259850]: 2025-10-11 04:08:02.364 2 DEBUG oslo_concurrency.processutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:08:02 compute-0 nova_compute[259850]: 2025-10-11 04:08:02.374 2 DEBUG nova.compute.provider_tree [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:08:02 compute-0 nova_compute[259850]: 2025-10-11 04:08:02.394 2 DEBUG nova.scheduler.client.report [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:08:02 compute-0 nova_compute[259850]: 2025-10-11 04:08:02.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:02 compute-0 nova_compute[259850]: 2025-10-11 04:08:02.419 2 DEBUG oslo_concurrency.lockutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:02 compute-0 nova_compute[259850]: 2025-10-11 04:08:02.419 2 DEBUG nova.compute.manager [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:08:02 compute-0 nova_compute[259850]: 2025-10-11 04:08:02.470 2 DEBUG nova.compute.manager [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:08:02 compute-0 nova_compute[259850]: 2025-10-11 04:08:02.471 2 DEBUG nova.network.neutron [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:08:02 compute-0 nova_compute[259850]: 2025-10-11 04:08:02.491 2 INFO nova.virt.libvirt.driver [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:08:02 compute-0 nova_compute[259850]: 2025-10-11 04:08:02.516 2 DEBUG nova.compute.manager [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:08:02 compute-0 nova_compute[259850]: 2025-10-11 04:08:02.625 2 DEBUG nova.compute.manager [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:08:02 compute-0 nova_compute[259850]: 2025-10-11 04:08:02.627 2 DEBUG nova.virt.libvirt.driver [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:08:02 compute-0 nova_compute[259850]: 2025-10-11 04:08:02.628 2 INFO nova.virt.libvirt.driver [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Creating image(s)
Oct 11 04:08:02 compute-0 nova_compute[259850]: 2025-10-11 04:08:02.671 2 DEBUG nova.storage.rbd_utils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] rbd image 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:08:02 compute-0 nova_compute[259850]: 2025-10-11 04:08:02.708 2 DEBUG nova.storage.rbd_utils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] rbd image 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:08:02 compute-0 nova_compute[259850]: 2025-10-11 04:08:02.749 2 DEBUG nova.storage.rbd_utils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] rbd image 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:08:02 compute-0 nova_compute[259850]: 2025-10-11 04:08:02.755 2 DEBUG oslo_concurrency.processutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:08:02 compute-0 nova_compute[259850]: 2025-10-11 04:08:02.831 2 DEBUG oslo_concurrency.processutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:08:02 compute-0 nova_compute[259850]: 2025-10-11 04:08:02.834 2 DEBUG oslo_concurrency.lockutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Acquiring lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:02 compute-0 nova_compute[259850]: 2025-10-11 04:08:02.835 2 DEBUG oslo_concurrency.lockutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:02 compute-0 nova_compute[259850]: 2025-10-11 04:08:02.836 2 DEBUG oslo_concurrency.lockutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:02 compute-0 nova_compute[259850]: 2025-10-11 04:08:02.869 2 DEBUG nova.storage.rbd_utils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] rbd image 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:08:02 compute-0 nova_compute[259850]: 2025-10-11 04:08:02.874 2 DEBUG oslo_concurrency.processutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:08:02 compute-0 nova_compute[259850]: 2025-10-11 04:08:02.938 2 DEBUG nova.policy [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fc44058c9b8d47d1907c195c404898c8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c04e56df694d49fdbb22c39773dfc036', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:08:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:08:03 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2385764097' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:08:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:08:03 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2385764097' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:08:03 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2795480090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:08:03 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2385764097' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:08:03 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2385764097' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:08:03 compute-0 nova_compute[259850]: 2025-10-11 04:08:03.507 2 DEBUG oslo_concurrency.processutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:08:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 88 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 4.9 KiB/s wr, 112 op/s
Oct 11 04:08:03 compute-0 nova_compute[259850]: 2025-10-11 04:08:03.591 2 DEBUG nova.storage.rbd_utils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] resizing rbd image 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:08:03 compute-0 nova_compute[259850]: 2025-10-11 04:08:03.713 2 DEBUG nova.objects.instance [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lazy-loading 'migration_context' on Instance uuid 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:08:03 compute-0 nova_compute[259850]: 2025-10-11 04:08:03.732 2 DEBUG nova.virt.libvirt.driver [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:08:03 compute-0 nova_compute[259850]: 2025-10-11 04:08:03.733 2 DEBUG nova.virt.libvirt.driver [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Ensure instance console log exists: /var/lib/nova/instances/5814e0c3-8afc-4d2d-98eb-6da773bfb7c7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:08:03 compute-0 nova_compute[259850]: 2025-10-11 04:08:03.734 2 DEBUG oslo_concurrency.lockutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:03 compute-0 nova_compute[259850]: 2025-10-11 04:08:03.734 2 DEBUG oslo_concurrency.lockutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:03 compute-0 nova_compute[259850]: 2025-10-11 04:08:03.735 2 DEBUG oslo_concurrency.lockutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:04 compute-0 nova_compute[259850]: 2025-10-11 04:08:04.089 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760155669.0833025, 388b5700-0501-4cb9-99cd-6d259e00afa4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:08:04 compute-0 nova_compute[259850]: 2025-10-11 04:08:04.089 2 INFO nova.compute.manager [-] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] VM Stopped (Lifecycle Event)
Oct 11 04:08:04 compute-0 nova_compute[259850]: 2025-10-11 04:08:04.093 2 DEBUG nova.network.neutron [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Successfully created port: 05e93c0e-0ca7-4152-9b30-cb802b90de1f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:08:04 compute-0 nova_compute[259850]: 2025-10-11 04:08:04.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:04 compute-0 nova_compute[259850]: 2025-10-11 04:08:04.126 2 DEBUG nova.compute.manager [None req-8ac62b29-7a18-4b8a-8460-7ce6a4e2f79a - - - - - -] [instance: 388b5700-0501-4cb9-99cd-6d259e00afa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:08:04 compute-0 ceph-mon[74273]: pgmap v1158: 305 pgs: 305 active+clean; 88 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 4.9 KiB/s wr, 112 op/s
Oct 11 04:08:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:08:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Oct 11 04:08:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Oct 11 04:08:04 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Oct 11 04:08:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 88 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 4.9 KiB/s wr, 112 op/s
Oct 11 04:08:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Oct 11 04:08:05 compute-0 ceph-mon[74273]: osdmap e217: 3 total, 3 up, 3 in
Oct 11 04:08:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Oct 11 04:08:05 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Oct 11 04:08:05 compute-0 nova_compute[259850]: 2025-10-11 04:08:05.975 2 DEBUG nova.network.neutron [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Successfully updated port: 05e93c0e-0ca7-4152-9b30-cb802b90de1f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:08:05 compute-0 nova_compute[259850]: 2025-10-11 04:08:05.995 2 DEBUG oslo_concurrency.lockutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Acquiring lock "refresh_cache-5814e0c3-8afc-4d2d-98eb-6da773bfb7c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:08:05 compute-0 nova_compute[259850]: 2025-10-11 04:08:05.995 2 DEBUG oslo_concurrency.lockutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Acquired lock "refresh_cache-5814e0c3-8afc-4d2d-98eb-6da773bfb7c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:08:05 compute-0 nova_compute[259850]: 2025-10-11 04:08:05.996 2 DEBUG nova.network.neutron [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:08:06 compute-0 nova_compute[259850]: 2025-10-11 04:08:06.183 2 DEBUG nova.compute.manager [req-97afd353-a15e-4e3b-b9ae-61b35b2b4e6d req-a4f262ad-3284-419c-9f02-d9055dc3ef0a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Received event network-changed-05e93c0e-0ca7-4152-9b30-cb802b90de1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:08:06 compute-0 nova_compute[259850]: 2025-10-11 04:08:06.183 2 DEBUG nova.compute.manager [req-97afd353-a15e-4e3b-b9ae-61b35b2b4e6d req-a4f262ad-3284-419c-9f02-d9055dc3ef0a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Refreshing instance network info cache due to event network-changed-05e93c0e-0ca7-4152-9b30-cb802b90de1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:08:06 compute-0 nova_compute[259850]: 2025-10-11 04:08:06.184 2 DEBUG oslo_concurrency.lockutils [req-97afd353-a15e-4e3b-b9ae-61b35b2b4e6d req-a4f262ad-3284-419c-9f02-d9055dc3ef0a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-5814e0c3-8afc-4d2d-98eb-6da773bfb7c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:08:06 compute-0 nova_compute[259850]: 2025-10-11 04:08:06.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:06 compute-0 nova_compute[259850]: 2025-10-11 04:08:06.267 2 DEBUG nova.network.neutron [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:08:06 compute-0 ceph-mon[74273]: pgmap v1160: 305 pgs: 305 active+clean; 88 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 4.9 KiB/s wr, 112 op/s
Oct 11 04:08:06 compute-0 ceph-mon[74273]: osdmap e218: 3 total, 3 up, 3 in
Oct 11 04:08:07 compute-0 nova_compute[259850]: 2025-10-11 04:08:07.165 2 DEBUG nova.network.neutron [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Updating instance_info_cache with network_info: [{"id": "05e93c0e-0ca7-4152-9b30-cb802b90de1f", "address": "fa:16:3e:fc:44:96", "network": {"id": "8cb72c94-41d7-40be-8ef7-9351e1b06d48", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1596968619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c04e56df694d49fdbb22c39773dfc036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e93c0e-0c", "ovs_interfaceid": "05e93c0e-0ca7-4152-9b30-cb802b90de1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:08:07 compute-0 nova_compute[259850]: 2025-10-11 04:08:07.191 2 DEBUG oslo_concurrency.lockutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Releasing lock "refresh_cache-5814e0c3-8afc-4d2d-98eb-6da773bfb7c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:08:07 compute-0 nova_compute[259850]: 2025-10-11 04:08:07.191 2 DEBUG nova.compute.manager [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Instance network_info: |[{"id": "05e93c0e-0ca7-4152-9b30-cb802b90de1f", "address": "fa:16:3e:fc:44:96", "network": {"id": "8cb72c94-41d7-40be-8ef7-9351e1b06d48", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1596968619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c04e56df694d49fdbb22c39773dfc036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e93c0e-0c", "ovs_interfaceid": "05e93c0e-0ca7-4152-9b30-cb802b90de1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:08:07 compute-0 nova_compute[259850]: 2025-10-11 04:08:07.192 2 DEBUG oslo_concurrency.lockutils [req-97afd353-a15e-4e3b-b9ae-61b35b2b4e6d req-a4f262ad-3284-419c-9f02-d9055dc3ef0a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-5814e0c3-8afc-4d2d-98eb-6da773bfb7c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:08:07 compute-0 nova_compute[259850]: 2025-10-11 04:08:07.193 2 DEBUG nova.network.neutron [req-97afd353-a15e-4e3b-b9ae-61b35b2b4e6d req-a4f262ad-3284-419c-9f02-d9055dc3ef0a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Refreshing network info cache for port 05e93c0e-0ca7-4152-9b30-cb802b90de1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:08:07 compute-0 nova_compute[259850]: 2025-10-11 04:08:07.198 2 DEBUG nova.virt.libvirt.driver [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Start _get_guest_xml network_info=[{"id": "05e93c0e-0ca7-4152-9b30-cb802b90de1f", "address": "fa:16:3e:fc:44:96", "network": {"id": "8cb72c94-41d7-40be-8ef7-9351e1b06d48", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1596968619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c04e56df694d49fdbb22c39773dfc036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e93c0e-0c", "ovs_interfaceid": "05e93c0e-0ca7-4152-9b30-cb802b90de1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'image_id': '1a107e2f-1a9d-4b6f-861d-e64bee7d56be'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:08:07 compute-0 nova_compute[259850]: 2025-10-11 04:08:07.204 2 WARNING nova.virt.libvirt.driver [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:08:07 compute-0 nova_compute[259850]: 2025-10-11 04:08:07.213 2 DEBUG nova.virt.libvirt.host [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:08:07 compute-0 nova_compute[259850]: 2025-10-11 04:08:07.213 2 DEBUG nova.virt.libvirt.host [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:08:07 compute-0 nova_compute[259850]: 2025-10-11 04:08:07.223 2 DEBUG nova.virt.libvirt.host [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:08:07 compute-0 nova_compute[259850]: 2025-10-11 04:08:07.225 2 DEBUG nova.virt.libvirt.host [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:08:07 compute-0 nova_compute[259850]: 2025-10-11 04:08:07.227 2 DEBUG nova.virt.libvirt.driver [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:08:07 compute-0 nova_compute[259850]: 2025-10-11 04:08:07.227 2 DEBUG nova.virt.hardware [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:08:07 compute-0 nova_compute[259850]: 2025-10-11 04:08:07.228 2 DEBUG nova.virt.hardware [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:08:07 compute-0 nova_compute[259850]: 2025-10-11 04:08:07.228 2 DEBUG nova.virt.hardware [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:08:07 compute-0 nova_compute[259850]: 2025-10-11 04:08:07.228 2 DEBUG nova.virt.hardware [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:08:07 compute-0 nova_compute[259850]: 2025-10-11 04:08:07.228 2 DEBUG nova.virt.hardware [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:08:07 compute-0 nova_compute[259850]: 2025-10-11 04:08:07.229 2 DEBUG nova.virt.hardware [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:08:07 compute-0 nova_compute[259850]: 2025-10-11 04:08:07.229 2 DEBUG nova.virt.hardware [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:08:07 compute-0 nova_compute[259850]: 2025-10-11 04:08:07.229 2 DEBUG nova.virt.hardware [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:08:07 compute-0 nova_compute[259850]: 2025-10-11 04:08:07.230 2 DEBUG nova.virt.hardware [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:08:07 compute-0 nova_compute[259850]: 2025-10-11 04:08:07.230 2 DEBUG nova.virt.hardware [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:08:07 compute-0 nova_compute[259850]: 2025-10-11 04:08:07.230 2 DEBUG nova.virt.hardware [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:08:07 compute-0 nova_compute[259850]: 2025-10-11 04:08:07.233 2 DEBUG oslo_concurrency.processutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:08:07 compute-0 nova_compute[259850]: 2025-10-11 04:08:07.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 88 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 3.3 KiB/s wr, 82 op/s
Oct 11 04:08:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:08:07 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/988911569' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:08:07 compute-0 nova_compute[259850]: 2025-10-11 04:08:07.708 2 DEBUG oslo_concurrency.processutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:08:07 compute-0 nova_compute[259850]: 2025-10-11 04:08:07.743 2 DEBUG nova.storage.rbd_utils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] rbd image 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:08:07 compute-0 nova_compute[259850]: 2025-10-11 04:08:07.748 2 DEBUG oslo_concurrency.processutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:08:07 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/988911569' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:08:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:08:08 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3838124643' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.291 2 DEBUG oslo_concurrency.processutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.293 2 DEBUG nova.virt.libvirt.vif [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:08:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1031640988',display_name='tempest-VolumesBackupsTest-instance-1031640988',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1031640988',id=9,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNVqTNHLi0WBIbfKKzHgX7uX1c7Db4lpCGYQPDzDbX5PeMXwJgA86ENR9AHoUIPJm52kGc03LyhHVLcWEZvMPuNYEOXd0aovsRUC5Fu4Wy9sztYBoemBH/MUmHd01HKxGw==',key_name='tempest-keypair-1402827230',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c04e56df694d49fdbb22c39773dfc036',ramdisk_id='',reservation_id='r-glbrhlmo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-722883341',owner_user_name='tempest-VolumesBackupsTest-722883341-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:08:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fc44058c9b8d47d1907c195c404898c8',uuid=5814e0c3-8afc-4d2d-98eb-6da773bfb7c7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05e93c0e-0ca7-4152-9b30-cb802b90de1f", "address": "fa:16:3e:fc:44:96", "network": {"id": "8cb72c94-41d7-40be-8ef7-9351e1b06d48", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1596968619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c04e56df694d49fdbb22c39773dfc036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e93c0e-0c", "ovs_interfaceid": "05e93c0e-0ca7-4152-9b30-cb802b90de1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.294 2 DEBUG nova.network.os_vif_util [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Converting VIF {"id": "05e93c0e-0ca7-4152-9b30-cb802b90de1f", "address": "fa:16:3e:fc:44:96", "network": {"id": "8cb72c94-41d7-40be-8ef7-9351e1b06d48", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1596968619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c04e56df694d49fdbb22c39773dfc036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e93c0e-0c", "ovs_interfaceid": "05e93c0e-0ca7-4152-9b30-cb802b90de1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.295 2 DEBUG nova.network.os_vif_util [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:44:96,bridge_name='br-int',has_traffic_filtering=True,id=05e93c0e-0ca7-4152-9b30-cb802b90de1f,network=Network(8cb72c94-41d7-40be-8ef7-9351e1b06d48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05e93c0e-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.297 2 DEBUG nova.objects.instance [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.334 2 DEBUG nova.virt.libvirt.driver [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:08:08 compute-0 nova_compute[259850]:   <uuid>5814e0c3-8afc-4d2d-98eb-6da773bfb7c7</uuid>
Oct 11 04:08:08 compute-0 nova_compute[259850]:   <name>instance-00000009</name>
Oct 11 04:08:08 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:08:08 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:08:08 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <nova:name>tempest-VolumesBackupsTest-instance-1031640988</nova:name>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:08:07</nova:creationTime>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:08:08 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:08:08 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:08:08 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:08:08 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:08:08 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:08:08 compute-0 nova_compute[259850]:         <nova:user uuid="fc44058c9b8d47d1907c195c404898c8">tempest-VolumesBackupsTest-722883341-project-member</nova:user>
Oct 11 04:08:08 compute-0 nova_compute[259850]:         <nova:project uuid="c04e56df694d49fdbb22c39773dfc036">tempest-VolumesBackupsTest-722883341</nova:project>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <nova:root type="image" uuid="1a107e2f-1a9d-4b6f-861d-e64bee7d56be"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <nova:ports>
Oct 11 04:08:08 compute-0 nova_compute[259850]:         <nova:port uuid="05e93c0e-0ca7-4152-9b30-cb802b90de1f">
Oct 11 04:08:08 compute-0 nova_compute[259850]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:         </nova:port>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       </nova:ports>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:08:08 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:08:08 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <system>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <entry name="serial">5814e0c3-8afc-4d2d-98eb-6da773bfb7c7</entry>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <entry name="uuid">5814e0c3-8afc-4d2d-98eb-6da773bfb7c7</entry>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     </system>
Oct 11 04:08:08 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:08:08 compute-0 nova_compute[259850]:   <os>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:   </os>
Oct 11 04:08:08 compute-0 nova_compute[259850]:   <features>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:   </features>
Oct 11 04:08:08 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:08:08 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:08:08 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/5814e0c3-8afc-4d2d-98eb-6da773bfb7c7_disk">
Oct 11 04:08:08 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       </source>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:08:08 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/5814e0c3-8afc-4d2d-98eb-6da773bfb7c7_disk.config">
Oct 11 04:08:08 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       </source>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:08:08 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <interface type="ethernet">
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <mac address="fa:16:3e:fc:44:96"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <mtu size="1442"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <target dev="tap05e93c0e-0c"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     </interface>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/5814e0c3-8afc-4d2d-98eb-6da773bfb7c7/console.log" append="off"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <video>
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     </video>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:08:08 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:08:08 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:08:08 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:08:08 compute-0 nova_compute[259850]: </domain>
Oct 11 04:08:08 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.336 2 DEBUG nova.compute.manager [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Preparing to wait for external event network-vif-plugged-05e93c0e-0ca7-4152-9b30-cb802b90de1f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.336 2 DEBUG oslo_concurrency.lockutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Acquiring lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.337 2 DEBUG oslo_concurrency.lockutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.337 2 DEBUG oslo_concurrency.lockutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.338 2 DEBUG nova.virt.libvirt.vif [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:08:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1031640988',display_name='tempest-VolumesBackupsTest-instance-1031640988',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1031640988',id=9,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNVqTNHLi0WBIbfKKzHgX7uX1c7Db4lpCGYQPDzDbX5PeMXwJgA86ENR9AHoUIPJm52kGc03LyhHVLcWEZvMPuNYEOXd0aovsRUC5Fu4Wy9sztYBoemBH/MUmHd01HKxGw==',key_name='tempest-keypair-1402827230',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c04e56df694d49fdbb22c39773dfc036',ramdisk_id='',reservation_id='r-glbrhlmo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-722883341',owner_user_name='tempest-VolumesBackupsTest-722883341-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:08:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fc44058c9b8d47d1907c195c404898c8',uuid=5814e0c3-8afc-4d2d-98eb-6da773bfb7c7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05e93c0e-0ca7-4152-9b30-cb802b90de1f", "address": "fa:16:3e:fc:44:96", "network": {"id": "8cb72c94-41d7-40be-8ef7-9351e1b06d48", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1596968619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c04e56df694d49fdbb22c39773dfc036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e93c0e-0c", "ovs_interfaceid": "05e93c0e-0ca7-4152-9b30-cb802b90de1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.338 2 DEBUG nova.network.os_vif_util [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Converting VIF {"id": "05e93c0e-0ca7-4152-9b30-cb802b90de1f", "address": "fa:16:3e:fc:44:96", "network": {"id": "8cb72c94-41d7-40be-8ef7-9351e1b06d48", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1596968619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c04e56df694d49fdbb22c39773dfc036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e93c0e-0c", "ovs_interfaceid": "05e93c0e-0ca7-4152-9b30-cb802b90de1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.339 2 DEBUG nova.network.os_vif_util [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:44:96,bridge_name='br-int',has_traffic_filtering=True,id=05e93c0e-0ca7-4152-9b30-cb802b90de1f,network=Network(8cb72c94-41d7-40be-8ef7-9351e1b06d48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05e93c0e-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.339 2 DEBUG os_vif [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:44:96,bridge_name='br-int',has_traffic_filtering=True,id=05e93c0e-0ca7-4152-9b30-cb802b90de1f,network=Network(8cb72c94-41d7-40be-8ef7-9351e1b06d48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05e93c0e-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.340 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.341 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.344 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05e93c0e-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.345 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05e93c0e-0c, col_values=(('external_ids', {'iface-id': '05e93c0e-0ca7-4152-9b30-cb802b90de1f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:44:96', 'vm-uuid': '5814e0c3-8afc-4d2d-98eb-6da773bfb7c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:08:08 compute-0 NetworkManager[44920]: <info>  [1760155688.3831] manager: (tap05e93c0e-0c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.389 2 INFO os_vif [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:44:96,bridge_name='br-int',has_traffic_filtering=True,id=05e93c0e-0ca7-4152-9b30-cb802b90de1f,network=Network(8cb72c94-41d7-40be-8ef7-9351e1b06d48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05e93c0e-0c')
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.458 2 DEBUG nova.virt.libvirt.driver [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.459 2 DEBUG nova.virt.libvirt.driver [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.459 2 DEBUG nova.virt.libvirt.driver [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] No VIF found with MAC fa:16:3e:fc:44:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.460 2 INFO nova.virt.libvirt.driver [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Using config drive
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.488 2 DEBUG nova.storage.rbd_utils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] rbd image 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.621 2 DEBUG nova.network.neutron [req-97afd353-a15e-4e3b-b9ae-61b35b2b4e6d req-a4f262ad-3284-419c-9f02-d9055dc3ef0a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Updated VIF entry in instance network info cache for port 05e93c0e-0ca7-4152-9b30-cb802b90de1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.621 2 DEBUG nova.network.neutron [req-97afd353-a15e-4e3b-b9ae-61b35b2b4e6d req-a4f262ad-3284-419c-9f02-d9055dc3ef0a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Updating instance_info_cache with network_info: [{"id": "05e93c0e-0ca7-4152-9b30-cb802b90de1f", "address": "fa:16:3e:fc:44:96", "network": {"id": "8cb72c94-41d7-40be-8ef7-9351e1b06d48", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1596968619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c04e56df694d49fdbb22c39773dfc036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e93c0e-0c", "ovs_interfaceid": "05e93c0e-0ca7-4152-9b30-cb802b90de1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.650 2 DEBUG oslo_concurrency.lockutils [req-97afd353-a15e-4e3b-b9ae-61b35b2b4e6d req-a4f262ad-3284-419c-9f02-d9055dc3ef0a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-5814e0c3-8afc-4d2d-98eb-6da773bfb7c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:08:08 compute-0 ceph-mon[74273]: pgmap v1162: 305 pgs: 305 active+clean; 88 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 3.3 KiB/s wr, 82 op/s
Oct 11 04:08:08 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3838124643' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.991 2 INFO nova.virt.libvirt.driver [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Creating config drive at /var/lib/nova/instances/5814e0c3-8afc-4d2d-98eb-6da773bfb7c7/disk.config
Oct 11 04:08:08 compute-0 nova_compute[259850]: 2025-10-11 04:08:08.997 2 DEBUG oslo_concurrency.processutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5814e0c3-8afc-4d2d-98eb-6da773bfb7c7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps1f49vgo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:08:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:08:09 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/673283600' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:08:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:08:09 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/673283600' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:08:09 compute-0 nova_compute[259850]: 2025-10-11 04:08:09.134 2 DEBUG oslo_concurrency.processutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5814e0c3-8afc-4d2d-98eb-6da773bfb7c7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps1f49vgo" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:08:09 compute-0 nova_compute[259850]: 2025-10-11 04:08:09.161 2 DEBUG nova.storage.rbd_utils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] rbd image 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:08:09 compute-0 nova_compute[259850]: 2025-10-11 04:08:09.166 2 DEBUG oslo_concurrency.processutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5814e0c3-8afc-4d2d-98eb-6da773bfb7c7/disk.config 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:08:09 compute-0 nova_compute[259850]: 2025-10-11 04:08:09.367 2 DEBUG oslo_concurrency.processutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5814e0c3-8afc-4d2d-98eb-6da773bfb7c7/disk.config 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:08:09 compute-0 nova_compute[259850]: 2025-10-11 04:08:09.368 2 INFO nova.virt.libvirt.driver [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Deleting local config drive /var/lib/nova/instances/5814e0c3-8afc-4d2d-98eb-6da773bfb7c7/disk.config because it was imported into RBD.
Oct 11 04:08:09 compute-0 kernel: tap05e93c0e-0c: entered promiscuous mode
Oct 11 04:08:09 compute-0 NetworkManager[44920]: <info>  [1760155689.4368] manager: (tap05e93c0e-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Oct 11 04:08:09 compute-0 ovn_controller[152025]: 2025-10-11T04:08:09Z|00096|binding|INFO|Claiming lport 05e93c0e-0ca7-4152-9b30-cb802b90de1f for this chassis.
Oct 11 04:08:09 compute-0 nova_compute[259850]: 2025-10-11 04:08:09.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:09 compute-0 ovn_controller[152025]: 2025-10-11T04:08:09Z|00097|binding|INFO|05e93c0e-0ca7-4152-9b30-cb802b90de1f: Claiming fa:16:3e:fc:44:96 10.100.0.11
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:09.471 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:44:96 10.100.0.11'], port_security=['fa:16:3e:fc:44:96 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '5814e0c3-8afc-4d2d-98eb-6da773bfb7c7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8cb72c94-41d7-40be-8ef7-9351e1b06d48', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c04e56df694d49fdbb22c39773dfc036', 'neutron:revision_number': '2', 'neutron:security_group_ids': '19f99e73-7b96-4627-a9e6-b29b26da7418', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3458ebb-1a6a-4cc8-a158-43868faee92e, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=05e93c0e-0ca7-4152-9b30-cb802b90de1f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:09.473 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 05e93c0e-0ca7-4152-9b30-cb802b90de1f in datapath 8cb72c94-41d7-40be-8ef7-9351e1b06d48 bound to our chassis
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:09.475 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8cb72c94-41d7-40be-8ef7-9351e1b06d48
Oct 11 04:08:09 compute-0 systemd-udevd[277186]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:08:09 compute-0 systemd-machined[214869]: New machine qemu-9-instance-00000009.
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:09.494 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[99e5263f-c144-46f8-bdc7-7779fcc13011]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:09.496 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8cb72c94-41 in ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:09.499 267637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8cb72c94-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:09.499 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[460604a6-c3cf-42f6-ab78-8eb2d01383c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:09 compute-0 ovn_controller[152025]: 2025-10-11T04:08:09Z|00098|binding|INFO|Setting lport 05e93c0e-0ca7-4152-9b30-cb802b90de1f ovn-installed in OVS
Oct 11 04:08:09 compute-0 ovn_controller[152025]: 2025-10-11T04:08:09Z|00099|binding|INFO|Setting lport 05e93c0e-0ca7-4152-9b30-cb802b90de1f up in Southbound
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:09.501 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[ef790252-60cc-4639-bcbd-68ca50d34bb0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:09 compute-0 nova_compute[259850]: 2025-10-11 04:08:09.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:09 compute-0 nova_compute[259850]: 2025-10-11 04:08:09.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:09 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:09.514 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[f6f1126f-248b-47f1-a6f0-212726ab17ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:09 compute-0 NetworkManager[44920]: <info>  [1760155689.5213] device (tap05e93c0e-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:08:09 compute-0 NetworkManager[44920]: <info>  [1760155689.5233] device (tap05e93c0e-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:08:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 134 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 2.7 MiB/s wr, 141 op/s
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:09.541 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[9cbb648b-66df-4662-8b11-bff217126238]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:09.581 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[c7fe91dd-0940-4ee3-835e-9dc7adf4b97f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:09 compute-0 NetworkManager[44920]: <info>  [1760155689.5903] manager: (tap8cb72c94-40): new Veth device (/org/freedesktop/NetworkManager/Devices/60)
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:09.589 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[08a92620-cdf7-4203-b4f3-e0db31939d92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:09 compute-0 systemd-udevd[277189]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:09.636 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[1bc31464-1227-4d3c-99e7-7fff3af139fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:09.641 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[62865287-2f45-47b4-b65b-9542e92a72da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:09 compute-0 NetworkManager[44920]: <info>  [1760155689.6758] device (tap8cb72c94-40): carrier: link connected
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:09.683 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[13b4feaa-8b90-4f7e-8c43-2a5b78adf304]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:09.707 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[c114abf2-3b8a-4212-a51c-a09bca6e2567]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8cb72c94-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:21:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407471, 'reachable_time': 27962, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277218, 'error': None, 'target': 'ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:09.729 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[1b10531e-0741-4a3f-bc01-f6633a6b2bdf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe36:210e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407471, 'tstamp': 407471}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277219, 'error': None, 'target': 'ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:09.754 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[9026343b-fa7b-43bb-995b-ffc24780a8e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8cb72c94-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:21:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407471, 'reachable_time': 27962, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277220, 'error': None, 'target': 'ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:08:09 compute-0 nova_compute[259850]: 2025-10-11 04:08:09.776 2 DEBUG nova.compute.manager [req-4f6d0766-a4aa-4ccb-b015-2d2d98c0c45a req-98617cca-5956-440b-8390-406abd989c2d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Received event network-vif-plugged-05e93c0e-0ca7-4152-9b30-cb802b90de1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:08:09 compute-0 nova_compute[259850]: 2025-10-11 04:08:09.777 2 DEBUG oslo_concurrency.lockutils [req-4f6d0766-a4aa-4ccb-b015-2d2d98c0c45a req-98617cca-5956-440b-8390-406abd989c2d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:09 compute-0 nova_compute[259850]: 2025-10-11 04:08:09.778 2 DEBUG oslo_concurrency.lockutils [req-4f6d0766-a4aa-4ccb-b015-2d2d98c0c45a req-98617cca-5956-440b-8390-406abd989c2d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:09 compute-0 nova_compute[259850]: 2025-10-11 04:08:09.778 2 DEBUG oslo_concurrency.lockutils [req-4f6d0766-a4aa-4ccb-b015-2d2d98c0c45a req-98617cca-5956-440b-8390-406abd989c2d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:09 compute-0 nova_compute[259850]: 2025-10-11 04:08:09.779 2 DEBUG nova.compute.manager [req-4f6d0766-a4aa-4ccb-b015-2d2d98c0c45a req-98617cca-5956-440b-8390-406abd989c2d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Processing event network-vif-plugged-05e93c0e-0ca7-4152-9b30-cb802b90de1f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:09.809 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[3ddd486d-573c-48bc-9652-3c3696430dd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:09 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/673283600' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:08:09 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/673283600' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:09.899 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[3d3c5b5a-c7d1-4214-99a9-30e55a0d3518]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:09.904 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8cb72c94-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:09.904 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:09.905 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8cb72c94-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:08:09 compute-0 nova_compute[259850]: 2025-10-11 04:08:09.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:09 compute-0 NetworkManager[44920]: <info>  [1760155689.9082] manager: (tap8cb72c94-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Oct 11 04:08:09 compute-0 kernel: tap8cb72c94-40: entered promiscuous mode
Oct 11 04:08:09 compute-0 nova_compute[259850]: 2025-10-11 04:08:09.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:09.912 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8cb72c94-40, col_values=(('external_ids', {'iface-id': '34d69504-322d-456b-93e7-c4c1d52774df'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:08:09 compute-0 nova_compute[259850]: 2025-10-11 04:08:09.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:09 compute-0 ovn_controller[152025]: 2025-10-11T04:08:09Z|00100|binding|INFO|Releasing lport 34d69504-322d-456b-93e7-c4c1d52774df from this chassis (sb_readonly=0)
Oct 11 04:08:09 compute-0 nova_compute[259850]: 2025-10-11 04:08:09.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:09.915 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8cb72c94-41d7-40be-8ef7-9351e1b06d48.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8cb72c94-41d7-40be-8ef7-9351e1b06d48.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:09.917 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[b0acf9a5-b3ad-49cf-81ff-ef7d93defbb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:09.918 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: global
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]:     log         /dev/log local0 debug
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]:     log-tag     haproxy-metadata-proxy-8cb72c94-41d7-40be-8ef7-9351e1b06d48
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]:     user        root
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]:     group       root
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]:     maxconn     1024
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]:     pidfile     /var/lib/neutron/external/pids/8cb72c94-41d7-40be-8ef7-9351e1b06d48.pid.haproxy
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]:     daemon
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: defaults
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]:     log global
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]:     mode http
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]:     option httplog
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]:     option dontlognull
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]:     option http-server-close
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]:     option forwardfor
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]:     retries                 3
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]:     timeout http-request    30s
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]:     timeout connect         30s
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]:     timeout client          32s
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]:     timeout server          32s
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]:     timeout http-keep-alive 30s
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: listen listener
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]:     bind 169.254.169.254:80
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]:     http-request add-header X-OVN-Network-ID 8cb72c94-41d7-40be-8ef7-9351e1b06d48
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 04:08:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:09.918 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48', 'env', 'PROCESS_TAG=haproxy-8cb72c94-41d7-40be-8ef7-9351e1b06d48', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8cb72c94-41d7-40be-8ef7-9351e1b06d48.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 04:08:09 compute-0 nova_compute[259850]: 2025-10-11 04:08:09.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:10 compute-0 podman[277294]: 2025-10-11 04:08:10.288359619 +0000 UTC m=+0.059986585 container create 415a72eaefed018b58afb9f793f2c0b92e16bad6cea422b96f2721d49a00689f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:08:10 compute-0 systemd[1]: Started libpod-conmon-415a72eaefed018b58afb9f793f2c0b92e16bad6cea422b96f2721d49a00689f.scope.
Oct 11 04:08:10 compute-0 podman[277294]: 2025-10-11 04:08:10.25277319 +0000 UTC m=+0.024400156 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 04:08:10 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/306b951c3e253160853bcdc8a8338aea333d3aad898bcaa4a62724d51fb3f167/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:08:10 compute-0 podman[277294]: 2025-10-11 04:08:10.402831982 +0000 UTC m=+0.174459038 container init 415a72eaefed018b58afb9f793f2c0b92e16bad6cea422b96f2721d49a00689f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 04:08:10 compute-0 podman[277294]: 2025-10-11 04:08:10.41310403 +0000 UTC m=+0.184731026 container start 415a72eaefed018b58afb9f793f2c0b92e16bad6cea422b96f2721d49a00689f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 04:08:10 compute-0 neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48[277309]: [NOTICE]   (277313) : New worker (277315) forked
Oct 11 04:08:10 compute-0 neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48[277309]: [NOTICE]   (277313) : Loading success.
Oct 11 04:08:10 compute-0 nova_compute[259850]: 2025-10-11 04:08:10.559 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155690.5585, 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:08:10 compute-0 nova_compute[259850]: 2025-10-11 04:08:10.560 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] VM Started (Lifecycle Event)
Oct 11 04:08:10 compute-0 nova_compute[259850]: 2025-10-11 04:08:10.564 2 DEBUG nova.compute.manager [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:08:10 compute-0 nova_compute[259850]: 2025-10-11 04:08:10.569 2 DEBUG nova.virt.libvirt.driver [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:08:10 compute-0 nova_compute[259850]: 2025-10-11 04:08:10.573 2 INFO nova.virt.libvirt.driver [-] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Instance spawned successfully.
Oct 11 04:08:10 compute-0 nova_compute[259850]: 2025-10-11 04:08:10.573 2 DEBUG nova.virt.libvirt.driver [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:08:10 compute-0 nova_compute[259850]: 2025-10-11 04:08:10.589 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:08:10 compute-0 nova_compute[259850]: 2025-10-11 04:08:10.595 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:08:10 compute-0 nova_compute[259850]: 2025-10-11 04:08:10.601 2 DEBUG nova.virt.libvirt.driver [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:08:10 compute-0 nova_compute[259850]: 2025-10-11 04:08:10.601 2 DEBUG nova.virt.libvirt.driver [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:08:10 compute-0 nova_compute[259850]: 2025-10-11 04:08:10.602 2 DEBUG nova.virt.libvirt.driver [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:08:10 compute-0 nova_compute[259850]: 2025-10-11 04:08:10.602 2 DEBUG nova.virt.libvirt.driver [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:08:10 compute-0 nova_compute[259850]: 2025-10-11 04:08:10.603 2 DEBUG nova.virt.libvirt.driver [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:08:10 compute-0 nova_compute[259850]: 2025-10-11 04:08:10.603 2 DEBUG nova.virt.libvirt.driver [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:08:10 compute-0 nova_compute[259850]: 2025-10-11 04:08:10.637 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:08:10 compute-0 nova_compute[259850]: 2025-10-11 04:08:10.638 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155690.5587804, 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:08:10 compute-0 nova_compute[259850]: 2025-10-11 04:08:10.638 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] VM Paused (Lifecycle Event)
Oct 11 04:08:10 compute-0 nova_compute[259850]: 2025-10-11 04:08:10.665 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:08:10 compute-0 nova_compute[259850]: 2025-10-11 04:08:10.669 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155690.568353, 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:08:10 compute-0 nova_compute[259850]: 2025-10-11 04:08:10.670 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] VM Resumed (Lifecycle Event)
Oct 11 04:08:10 compute-0 nova_compute[259850]: 2025-10-11 04:08:10.675 2 INFO nova.compute.manager [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Took 8.05 seconds to spawn the instance on the hypervisor.
Oct 11 04:08:10 compute-0 nova_compute[259850]: 2025-10-11 04:08:10.676 2 DEBUG nova.compute.manager [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:08:10 compute-0 nova_compute[259850]: 2025-10-11 04:08:10.689 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:08:10 compute-0 nova_compute[259850]: 2025-10-11 04:08:10.696 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:08:10 compute-0 nova_compute[259850]: 2025-10-11 04:08:10.727 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:08:10 compute-0 nova_compute[259850]: 2025-10-11 04:08:10.752 2 INFO nova.compute.manager [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Took 9.06 seconds to build instance.
Oct 11 04:08:10 compute-0 nova_compute[259850]: 2025-10-11 04:08:10.774 2 DEBUG oslo_concurrency.lockutils [None req-2e595185-ce74-4345-8f77-04ce87399792 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Oct 11 04:08:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Oct 11 04:08:10 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Oct 11 04:08:10 compute-0 ceph-mon[74273]: pgmap v1163: 305 pgs: 305 active+clean; 134 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 2.7 MiB/s wr, 141 op/s
Oct 11 04:08:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 134 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 3.2 MiB/s wr, 89 op/s
Oct 11 04:08:11 compute-0 nova_compute[259850]: 2025-10-11 04:08:11.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:08:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2458900322' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:08:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:08:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2458900322' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:08:11 compute-0 ceph-mon[74273]: osdmap e219: 3 total, 3 up, 3 in
Oct 11 04:08:11 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2458900322' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:08:11 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2458900322' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:08:11 compute-0 nova_compute[259850]: 2025-10-11 04:08:11.901 2 DEBUG nova.compute.manager [req-b62a53ad-1602-42c9-931a-cb433cb9fbdf req-75e34e1b-95a0-41a8-9aaa-01ebc5c9672e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Received event network-vif-plugged-05e93c0e-0ca7-4152-9b30-cb802b90de1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:08:11 compute-0 nova_compute[259850]: 2025-10-11 04:08:11.902 2 DEBUG oslo_concurrency.lockutils [req-b62a53ad-1602-42c9-931a-cb433cb9fbdf req-75e34e1b-95a0-41a8-9aaa-01ebc5c9672e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:11 compute-0 nova_compute[259850]: 2025-10-11 04:08:11.902 2 DEBUG oslo_concurrency.lockutils [req-b62a53ad-1602-42c9-931a-cb433cb9fbdf req-75e34e1b-95a0-41a8-9aaa-01ebc5c9672e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:11 compute-0 nova_compute[259850]: 2025-10-11 04:08:11.903 2 DEBUG oslo_concurrency.lockutils [req-b62a53ad-1602-42c9-931a-cb433cb9fbdf req-75e34e1b-95a0-41a8-9aaa-01ebc5c9672e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:11 compute-0 nova_compute[259850]: 2025-10-11 04:08:11.903 2 DEBUG nova.compute.manager [req-b62a53ad-1602-42c9-931a-cb433cb9fbdf req-75e34e1b-95a0-41a8-9aaa-01ebc5c9672e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] No waiting events found dispatching network-vif-plugged-05e93c0e-0ca7-4152-9b30-cb802b90de1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:08:11 compute-0 nova_compute[259850]: 2025-10-11 04:08:11.903 2 WARNING nova.compute.manager [req-b62a53ad-1602-42c9-931a-cb433cb9fbdf req-75e34e1b-95a0-41a8-9aaa-01ebc5c9672e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Received unexpected event network-vif-plugged-05e93c0e-0ca7-4152-9b30-cb802b90de1f for instance with vm_state active and task_state None.
Oct 11 04:08:12 compute-0 nova_compute[259850]: 2025-10-11 04:08:12.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:12 compute-0 ceph-mon[74273]: pgmap v1165: 305 pgs: 305 active+clean; 134 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 3.2 MiB/s wr, 89 op/s
Oct 11 04:08:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:08:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4131269850' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:08:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:08:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4131269850' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:08:13 compute-0 nova_compute[259850]: 2025-10-11 04:08:13.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:13 compute-0 podman[277325]: 2025-10-11 04:08:13.435666904 +0000 UTC m=+0.136141262 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 11 04:08:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 245 op/s
Oct 11 04:08:13 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4131269850' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:08:13 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4131269850' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:08:14 compute-0 nova_compute[259850]: 2025-10-11 04:08:14.022 2 DEBUG nova.compute.manager [req-26ab4a35-a3a9-45ba-8559-613cb711ae59 req-db6dd39d-8ed0-41a2-a041-9a329b802399 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Received event network-changed-05e93c0e-0ca7-4152-9b30-cb802b90de1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:08:14 compute-0 nova_compute[259850]: 2025-10-11 04:08:14.023 2 DEBUG nova.compute.manager [req-26ab4a35-a3a9-45ba-8559-613cb711ae59 req-db6dd39d-8ed0-41a2-a041-9a329b802399 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Refreshing instance network info cache due to event network-changed-05e93c0e-0ca7-4152-9b30-cb802b90de1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:08:14 compute-0 nova_compute[259850]: 2025-10-11 04:08:14.023 2 DEBUG oslo_concurrency.lockutils [req-26ab4a35-a3a9-45ba-8559-613cb711ae59 req-db6dd39d-8ed0-41a2-a041-9a329b802399 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-5814e0c3-8afc-4d2d-98eb-6da773bfb7c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:08:14 compute-0 nova_compute[259850]: 2025-10-11 04:08:14.023 2 DEBUG oslo_concurrency.lockutils [req-26ab4a35-a3a9-45ba-8559-613cb711ae59 req-db6dd39d-8ed0-41a2-a041-9a329b802399 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-5814e0c3-8afc-4d2d-98eb-6da773bfb7c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:08:14 compute-0 nova_compute[259850]: 2025-10-11 04:08:14.024 2 DEBUG nova.network.neutron [req-26ab4a35-a3a9-45ba-8559-613cb711ae59 req-db6dd39d-8ed0-41a2-a041-9a329b802399 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Refreshing network info cache for port 05e93c0e-0ca7-4152-9b30-cb802b90de1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:08:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:08:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Oct 11 04:08:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Oct 11 04:08:14 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Oct 11 04:08:14 compute-0 ceph-mon[74273]: pgmap v1166: 305 pgs: 305 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 245 op/s
Oct 11 04:08:14 compute-0 ceph-mon[74273]: osdmap e220: 3 total, 3 up, 3 in
Oct 11 04:08:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 245 op/s
Oct 11 04:08:15 compute-0 nova_compute[259850]: 2025-10-11 04:08:15.547 2 DEBUG nova.network.neutron [req-26ab4a35-a3a9-45ba-8559-613cb711ae59 req-db6dd39d-8ed0-41a2-a041-9a329b802399 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Updated VIF entry in instance network info cache for port 05e93c0e-0ca7-4152-9b30-cb802b90de1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:08:15 compute-0 nova_compute[259850]: 2025-10-11 04:08:15.548 2 DEBUG nova.network.neutron [req-26ab4a35-a3a9-45ba-8559-613cb711ae59 req-db6dd39d-8ed0-41a2-a041-9a329b802399 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Updating instance_info_cache with network_info: [{"id": "05e93c0e-0ca7-4152-9b30-cb802b90de1f", "address": "fa:16:3e:fc:44:96", "network": {"id": "8cb72c94-41d7-40be-8ef7-9351e1b06d48", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1596968619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c04e56df694d49fdbb22c39773dfc036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e93c0e-0c", "ovs_interfaceid": "05e93c0e-0ca7-4152-9b30-cb802b90de1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:08:15 compute-0 nova_compute[259850]: 2025-10-11 04:08:15.585 2 DEBUG oslo_concurrency.lockutils [req-26ab4a35-a3a9-45ba-8559-613cb711ae59 req-db6dd39d-8ed0-41a2-a041-9a329b802399 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-5814e0c3-8afc-4d2d-98eb-6da773bfb7c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:08:15 compute-0 nova_compute[259850]: 2025-10-11 04:08:15.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:16 compute-0 ceph-mon[74273]: pgmap v1168: 305 pgs: 305 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 245 op/s
Oct 11 04:08:17 compute-0 nova_compute[259850]: 2025-10-11 04:08:17.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:08:17 compute-0 nova_compute[259850]: 2025-10-11 04:08:17.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:08:17 compute-0 nova_compute[259850]: 2025-10-11 04:08:17.061 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:08:17 compute-0 nova_compute[259850]: 2025-10-11 04:08:17.087 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:17 compute-0 nova_compute[259850]: 2025-10-11 04:08:17.088 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:17 compute-0 nova_compute[259850]: 2025-10-11 04:08:17.088 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:17 compute-0 nova_compute[259850]: 2025-10-11 04:08:17.089 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:08:17 compute-0 nova_compute[259850]: 2025-10-11 04:08:17.090 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:08:17 compute-0 nova_compute[259850]: 2025-10-11 04:08:17.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:08:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3181994744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:08:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 170 op/s
Oct 11 04:08:17 compute-0 nova_compute[259850]: 2025-10-11 04:08:17.536 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:08:17 compute-0 nova_compute[259850]: 2025-10-11 04:08:17.624 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:08:17 compute-0 nova_compute[259850]: 2025-10-11 04:08:17.626 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:08:17 compute-0 nova_compute[259850]: 2025-10-11 04:08:17.823 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:08:17 compute-0 nova_compute[259850]: 2025-10-11 04:08:17.825 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4492MB free_disk=59.96735763549805GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:08:17 compute-0 nova_compute[259850]: 2025-10-11 04:08:17.825 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:17 compute-0 nova_compute[259850]: 2025-10-11 04:08:17.826 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3181994744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:08:17 compute-0 nova_compute[259850]: 2025-10-11 04:08:17.981 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Instance 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 04:08:17 compute-0 nova_compute[259850]: 2025-10-11 04:08:17.983 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:08:17 compute-0 nova_compute[259850]: 2025-10-11 04:08:17.983 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:08:18 compute-0 nova_compute[259850]: 2025-10-11 04:08:18.018 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:08:18 compute-0 nova_compute[259850]: 2025-10-11 04:08:18.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:08:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1546648429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:08:18 compute-0 nova_compute[259850]: 2025-10-11 04:08:18.469 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:08:18 compute-0 nova_compute[259850]: 2025-10-11 04:08:18.477 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:08:18 compute-0 nova_compute[259850]: 2025-10-11 04:08:18.492 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:08:18 compute-0 nova_compute[259850]: 2025-10-11 04:08:18.530 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:08:18 compute-0 nova_compute[259850]: 2025-10-11 04:08:18.531 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:18 compute-0 ovn_controller[152025]: 2025-10-11T04:08:18Z|00101|binding|INFO|Releasing lport 34d69504-322d-456b-93e7-c4c1d52774df from this chassis (sb_readonly=0)
Oct 11 04:08:18 compute-0 nova_compute[259850]: 2025-10-11 04:08:18.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:18 compute-0 ceph-mon[74273]: pgmap v1169: 305 pgs: 305 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 170 op/s
Oct 11 04:08:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1546648429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:08:19 compute-0 podman[277394]: 2025-10-11 04:08:19.375615797 +0000 UTC m=+0.080338501 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:08:19 compute-0 nova_compute[259850]: 2025-10-11 04:08:19.532 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:08:19 compute-0 nova_compute[259850]: 2025-10-11 04:08:19.533 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:08:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 22 KiB/s wr, 175 op/s
Oct 11 04:08:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:08:19.785667) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155699785691, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1064, "num_deletes": 269, "total_data_size": 1244451, "memory_usage": 1267504, "flush_reason": "Manual Compaction"}
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155699794975, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1227540, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23501, "largest_seqno": 24564, "table_properties": {"data_size": 1222364, "index_size": 2636, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11688, "raw_average_key_size": 19, "raw_value_size": 1211640, "raw_average_value_size": 2060, "num_data_blocks": 116, "num_entries": 588, "num_filter_entries": 588, "num_deletions": 269, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760155641, "oldest_key_time": 1760155641, "file_creation_time": 1760155699, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 9341 microseconds, and 3899 cpu microseconds.
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:08:19.795006) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1227540 bytes OK
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:08:19.795023) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:08:19.796321) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:08:19.796335) EVENT_LOG_v1 {"time_micros": 1760155699796330, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:08:19.796351) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1239242, prev total WAL file size 1239242, number of live WAL files 2.
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:08:19.796925) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353030' seq:72057594037927935, type:22 .. '6C6F676D00373534' seq:0, type:0; will stop at (end)
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1198KB)], [53(8742KB)]
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155699796974, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10179387, "oldest_snapshot_seqno": -1}
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5048 keys, 10083449 bytes, temperature: kUnknown
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155699855778, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 10083449, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10044404, "index_size": 25303, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12677, "raw_key_size": 125505, "raw_average_key_size": 24, "raw_value_size": 9948184, "raw_average_value_size": 1970, "num_data_blocks": 1050, "num_entries": 5048, "num_filter_entries": 5048, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153731, "oldest_key_time": 0, "file_creation_time": 1760155699, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:08:19.856120) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 10083449 bytes
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:08:19.857647) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.7 rd, 171.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 8.5 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(16.5) write-amplify(8.2) OK, records in: 5592, records dropped: 544 output_compression: NoCompression
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:08:19.857677) EVENT_LOG_v1 {"time_micros": 1760155699857663, "job": 28, "event": "compaction_finished", "compaction_time_micros": 58930, "compaction_time_cpu_micros": 39618, "output_level": 6, "num_output_files": 1, "total_output_size": 10083449, "num_input_records": 5592, "num_output_records": 5048, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155699858252, "job": 28, "event": "table_file_deletion", "file_number": 55}
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155699860842, "job": 28, "event": "table_file_deletion", "file_number": 53}
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:08:19.796849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:08:19.860897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:08:19.860904) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:08:19.860908) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:08:19.860912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:08:19 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:08:19.860916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:08:20 compute-0 nova_compute[259850]: 2025-10-11 04:08:20.055 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:08:20 compute-0 nova_compute[259850]: 2025-10-11 04:08:20.058 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:08:20 compute-0 nova_compute[259850]: 2025-10-11 04:08:20.058 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:08:20 compute-0 nova_compute[259850]: 2025-10-11 04:08:20.105 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:08:20 compute-0 nova_compute[259850]: 2025-10-11 04:08:20.106 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:08:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:08:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:08:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:08:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:08:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:08:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:08:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:08:20
Oct 11 04:08:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:08:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:08:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'images', 'volumes', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'vms']
Oct 11 04:08:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:08:20 compute-0 ceph-mon[74273]: pgmap v1170: 305 pgs: 305 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 22 KiB/s wr, 175 op/s
Oct 11 04:08:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:08:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:08:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:08:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:08:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:08:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:08:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:08:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:08:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:08:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:08:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 152 op/s
Oct 11 04:08:21 compute-0 sudo[277414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:08:21 compute-0 sudo[277414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:08:21 compute-0 sudo[277414]: pam_unix(sudo:session): session closed for user root
Oct 11 04:08:21 compute-0 sudo[277439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:08:21 compute-0 sudo[277439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:08:21 compute-0 sudo[277439]: pam_unix(sudo:session): session closed for user root
Oct 11 04:08:21 compute-0 sudo[277464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:08:21 compute-0 sudo[277464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:08:21 compute-0 sudo[277464]: pam_unix(sudo:session): session closed for user root
Oct 11 04:08:22 compute-0 nova_compute[259850]: 2025-10-11 04:08:22.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:08:22 compute-0 sudo[277489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 11 04:08:22 compute-0 sudo[277489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:08:22 compute-0 nova_compute[259850]: 2025-10-11 04:08:22.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:22 compute-0 sudo[277489]: pam_unix(sudo:session): session closed for user root
Oct 11 04:08:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:08:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:08:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:08:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:08:22 compute-0 sudo[277534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:08:22 compute-0 sudo[277534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:08:22 compute-0 sudo[277534]: pam_unix(sudo:session): session closed for user root
Oct 11 04:08:22 compute-0 sudo[277559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:08:22 compute-0 sudo[277559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:08:22 compute-0 sudo[277559]: pam_unix(sudo:session): session closed for user root
Oct 11 04:08:22 compute-0 sudo[277584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:08:22 compute-0 sudo[277584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:08:22 compute-0 sudo[277584]: pam_unix(sudo:session): session closed for user root
Oct 11 04:08:22 compute-0 sudo[277609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:08:22 compute-0 sudo[277609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:08:22 compute-0 ceph-mon[74273]: pgmap v1171: 305 pgs: 305 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 152 op/s
Oct 11 04:08:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:08:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:08:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:22.957 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:22.958 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:22.958 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:23 compute-0 nova_compute[259850]: 2025-10-11 04:08:23.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:08:23 compute-0 ovn_controller[152025]: 2025-10-11T04:08:23Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fc:44:96 10.100.0.11
Oct 11 04:08:23 compute-0 ovn_controller[152025]: 2025-10-11T04:08:23Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fc:44:96 10.100.0.11
Oct 11 04:08:23 compute-0 sudo[277609]: pam_unix(sudo:session): session closed for user root
Oct 11 04:08:23 compute-0 nova_compute[259850]: 2025-10-11 04:08:23.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:23 compute-0 sudo[277665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:08:23 compute-0 sudo[277665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:08:23 compute-0 sudo[277665]: pam_unix(sudo:session): session closed for user root
Oct 11 04:08:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 167 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.6 MiB/s wr, 121 op/s
Oct 11 04:08:23 compute-0 sudo[277690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:08:23 compute-0 sudo[277690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:08:23 compute-0 sudo[277690]: pam_unix(sudo:session): session closed for user root
Oct 11 04:08:23 compute-0 sudo[277715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:08:23 compute-0 sudo[277715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:08:23 compute-0 sudo[277715]: pam_unix(sudo:session): session closed for user root
Oct 11 04:08:23 compute-0 sudo[277740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- inventory --format=json-pretty --filter-for-batch
Oct 11 04:08:23 compute-0 sudo[277740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:08:24 compute-0 podman[277804]: 2025-10-11 04:08:24.140877028 +0000 UTC m=+0.042226599 container create f7d849298cb4f86f247c5b83f8bb82f5dcfd74fcbcbd9bd2cac104e1a5183052 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sammet, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:08:24 compute-0 systemd[1]: Started libpod-conmon-f7d849298cb4f86f247c5b83f8bb82f5dcfd74fcbcbd9bd2cac104e1a5183052.scope.
Oct 11 04:08:24 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:08:24 compute-0 podman[277804]: 2025-10-11 04:08:24.119785385 +0000 UTC m=+0.021134966 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:08:24 compute-0 podman[277804]: 2025-10-11 04:08:24.232545447 +0000 UTC m=+0.133895018 container init f7d849298cb4f86f247c5b83f8bb82f5dcfd74fcbcbd9bd2cac104e1a5183052 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sammet, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 04:08:24 compute-0 podman[277804]: 2025-10-11 04:08:24.247893169 +0000 UTC m=+0.149242740 container start f7d849298cb4f86f247c5b83f8bb82f5dcfd74fcbcbd9bd2cac104e1a5183052 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sammet, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:08:24 compute-0 podman[277804]: 2025-10-11 04:08:24.252691694 +0000 UTC m=+0.154041305 container attach f7d849298cb4f86f247c5b83f8bb82f5dcfd74fcbcbd9bd2cac104e1a5183052 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sammet, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:08:24 compute-0 vigilant_sammet[277820]: 167 167
Oct 11 04:08:24 compute-0 podman[277804]: 2025-10-11 04:08:24.255627607 +0000 UTC m=+0.156977198 container died f7d849298cb4f86f247c5b83f8bb82f5dcfd74fcbcbd9bd2cac104e1a5183052 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 04:08:24 compute-0 systemd[1]: libpod-f7d849298cb4f86f247c5b83f8bb82f5dcfd74fcbcbd9bd2cac104e1a5183052.scope: Deactivated successfully.
Oct 11 04:08:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ef899c765102e029719c6337caec6af58bb9563c3467e55ac22ae51e5de2508-merged.mount: Deactivated successfully.
Oct 11 04:08:24 compute-0 podman[277804]: 2025-10-11 04:08:24.302064703 +0000 UTC m=+0.203414274 container remove f7d849298cb4f86f247c5b83f8bb82f5dcfd74fcbcbd9bd2cac104e1a5183052 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sammet, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:08:24 compute-0 systemd[1]: libpod-conmon-f7d849298cb4f86f247c5b83f8bb82f5dcfd74fcbcbd9bd2cac104e1a5183052.scope: Deactivated successfully.
Oct 11 04:08:24 compute-0 podman[277841]: 2025-10-11 04:08:24.528877565 +0000 UTC m=+0.076166774 container create 55f5910edae7e99f4e1c65f28c0bd40ff33af0680a7c704b9bc541ec92851191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_williamson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:08:24 compute-0 systemd[1]: Started libpod-conmon-55f5910edae7e99f4e1c65f28c0bd40ff33af0680a7c704b9bc541ec92851191.scope.
Oct 11 04:08:24 compute-0 podman[277841]: 2025-10-11 04:08:24.498077508 +0000 UTC m=+0.045366777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:08:24 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:08:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f1c6f20de727f60475ecc28e4aca7384449342f6afcce6ce461492b71d754a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:08:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f1c6f20de727f60475ecc28e4aca7384449342f6afcce6ce461492b71d754a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:08:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f1c6f20de727f60475ecc28e4aca7384449342f6afcce6ce461492b71d754a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:08:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f1c6f20de727f60475ecc28e4aca7384449342f6afcce6ce461492b71d754a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:08:24 compute-0 podman[277841]: 2025-10-11 04:08:24.638105628 +0000 UTC m=+0.185394827 container init 55f5910edae7e99f4e1c65f28c0bd40ff33af0680a7c704b9bc541ec92851191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:08:24 compute-0 podman[277841]: 2025-10-11 04:08:24.656385782 +0000 UTC m=+0.203674991 container start 55f5910edae7e99f4e1c65f28c0bd40ff33af0680a7c704b9bc541ec92851191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_williamson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 04:08:24 compute-0 podman[277841]: 2025-10-11 04:08:24.661431234 +0000 UTC m=+0.208720513 container attach 55f5910edae7e99f4e1c65f28c0bd40ff33af0680a7c704b9bc541ec92851191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_williamson, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:08:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:08:24 compute-0 ceph-mon[74273]: pgmap v1172: 305 pgs: 305 active+clean; 167 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.6 MiB/s wr, 121 op/s
Oct 11 04:08:25 compute-0 nova_compute[259850]: 2025-10-11 04:08:25.054 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:08:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 167 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.4 MiB/s wr, 113 op/s
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]: [
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:     {
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:         "available": false,
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:         "ceph_device": false,
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:         "lsm_data": {},
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:         "lvs": [],
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:         "path": "/dev/sr0",
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:         "rejected_reasons": [
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:             "Has a FileSystem",
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:             "Insufficient space (<5GB)"
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:         ],
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:         "sys_api": {
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:             "actuators": null,
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:             "device_nodes": "sr0",
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:             "devname": "sr0",
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:             "human_readable_size": "482.00 KB",
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:             "id_bus": "ata",
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:             "model": "QEMU DVD-ROM",
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:             "nr_requests": "2",
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:             "parent": "/dev/sr0",
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:             "partitions": {},
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:             "path": "/dev/sr0",
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:             "removable": "1",
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:             "rev": "2.5+",
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:             "ro": "0",
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:             "rotational": "0",
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:             "sas_address": "",
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:             "sas_device_handle": "",
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:             "scheduler_mode": "mq-deadline",
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:             "sectors": 0,
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:             "sectorsize": "2048",
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:             "size": 493568.0,
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:             "support_discard": "2048",
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:             "type": "disk",
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:             "vendor": "QEMU"
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:         }
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]:     }
Oct 11 04:08:26 compute-0 thirsty_williamson[277858]: ]
Oct 11 04:08:26 compute-0 systemd[1]: libpod-55f5910edae7e99f4e1c65f28c0bd40ff33af0680a7c704b9bc541ec92851191.scope: Deactivated successfully.
Oct 11 04:08:26 compute-0 systemd[1]: libpod-55f5910edae7e99f4e1c65f28c0bd40ff33af0680a7c704b9bc541ec92851191.scope: Consumed 1.702s CPU time.
Oct 11 04:08:26 compute-0 podman[280146]: 2025-10-11 04:08:26.340797314 +0000 UTC m=+0.021814505 container died 55f5910edae7e99f4e1c65f28c0bd40ff33af0680a7c704b9bc541ec92851191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:08:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f1c6f20de727f60475ecc28e4aca7384449342f6afcce6ce461492b71d754a1-merged.mount: Deactivated successfully.
Oct 11 04:08:26 compute-0 podman[280146]: 2025-10-11 04:08:26.386731737 +0000 UTC m=+0.067748908 container remove 55f5910edae7e99f4e1c65f28c0bd40ff33af0680a7c704b9bc541ec92851191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:08:26 compute-0 systemd[1]: libpod-conmon-55f5910edae7e99f4e1c65f28c0bd40ff33af0680a7c704b9bc541ec92851191.scope: Deactivated successfully.
Oct 11 04:08:26 compute-0 sudo[277740]: pam_unix(sudo:session): session closed for user root
Oct 11 04:08:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:08:26 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:08:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:08:26 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:08:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 04:08:26 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 04:08:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:08:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:08:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:08:26 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:08:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:08:26 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:08:26 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 95621973-4a51-44d0-ae6b-6f1da392fd17 does not exist
Oct 11 04:08:26 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev e2433e15-0a9c-4e86-8b24-085305b950e0 does not exist
Oct 11 04:08:26 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 70e817f0-d2ac-4d21-936c-bb2e0e845367 does not exist
Oct 11 04:08:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:08:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:08:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:08:26 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:08:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:08:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:08:26 compute-0 sudo[280161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:08:26 compute-0 sudo[280161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:08:26 compute-0 sudo[280161]: pam_unix(sudo:session): session closed for user root
Oct 11 04:08:26 compute-0 sudo[280186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:08:26 compute-0 sudo[280186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:08:26 compute-0 sudo[280186]: pam_unix(sudo:session): session closed for user root
Oct 11 04:08:26 compute-0 sudo[280211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:08:26 compute-0 sudo[280211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:08:26 compute-0 sudo[280211]: pam_unix(sudo:session): session closed for user root
Oct 11 04:08:26 compute-0 sudo[280236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:08:26 compute-0 sudo[280236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:08:26 compute-0 ceph-mon[74273]: pgmap v1173: 305 pgs: 305 active+clean; 167 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.4 MiB/s wr, 113 op/s
Oct 11 04:08:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:08:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:08:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 04:08:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:08:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:08:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:08:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:08:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:08:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:08:27 compute-0 podman[280303]: 2025-10-11 04:08:27.209254828 +0000 UTC m=+0.061411019 container create 6274eb1004d51e04090b919dff807b9d8020791e394623ca6d92f985290901bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 04:08:27 compute-0 systemd[1]: Started libpod-conmon-6274eb1004d51e04090b919dff807b9d8020791e394623ca6d92f985290901bb.scope.
Oct 11 04:08:27 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:08:27 compute-0 podman[280303]: 2025-10-11 04:08:27.190282994 +0000 UTC m=+0.042439155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:08:27 compute-0 podman[280303]: 2025-10-11 04:08:27.298824988 +0000 UTC m=+0.150981239 container init 6274eb1004d51e04090b919dff807b9d8020791e394623ca6d92f985290901bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 04:08:27 compute-0 podman[280303]: 2025-10-11 04:08:27.309774476 +0000 UTC m=+0.161930617 container start 6274eb1004d51e04090b919dff807b9d8020791e394623ca6d92f985290901bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_greider, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 04:08:27 compute-0 podman[280303]: 2025-10-11 04:08:27.31598602 +0000 UTC m=+0.168142231 container attach 6274eb1004d51e04090b919dff807b9d8020791e394623ca6d92f985290901bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_greider, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:08:27 compute-0 blissful_greider[280320]: 167 167
Oct 11 04:08:27 compute-0 systemd[1]: libpod-6274eb1004d51e04090b919dff807b9d8020791e394623ca6d92f985290901bb.scope: Deactivated successfully.
Oct 11 04:08:27 compute-0 podman[280303]: 2025-10-11 04:08:27.318052089 +0000 UTC m=+0.170208270 container died 6274eb1004d51e04090b919dff807b9d8020791e394623ca6d92f985290901bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:08:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-0650faa1de9de53fbaac542637b3dc6c82884f48cf7f946749e1c96bbc20a2ff-merged.mount: Deactivated successfully.
Oct 11 04:08:27 compute-0 podman[280303]: 2025-10-11 04:08:27.371319167 +0000 UTC m=+0.223475338 container remove 6274eb1004d51e04090b919dff807b9d8020791e394623ca6d92f985290901bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_greider, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:08:27 compute-0 nova_compute[259850]: 2025-10-11 04:08:27.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:27 compute-0 systemd[1]: libpod-conmon-6274eb1004d51e04090b919dff807b9d8020791e394623ca6d92f985290901bb.scope: Deactivated successfully.
Oct 11 04:08:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 167 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Oct 11 04:08:27 compute-0 podman[280343]: 2025-10-11 04:08:27.540858687 +0000 UTC m=+0.041780586 container create 9b23031ea02d5ab187fa63040cc8c28fc9e2ce586e79f65725a8e73155315133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:08:27 compute-0 systemd[1]: Started libpod-conmon-9b23031ea02d5ab187fa63040cc8c28fc9e2ce586e79f65725a8e73155315133.scope.
Oct 11 04:08:27 compute-0 podman[280343]: 2025-10-11 04:08:27.522139191 +0000 UTC m=+0.023061140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:08:27 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:08:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40f11098308907b0e4a5a0f1362ab2564badc9cdfa2537242157d55aaf2f80e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:08:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40f11098308907b0e4a5a0f1362ab2564badc9cdfa2537242157d55aaf2f80e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:08:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40f11098308907b0e4a5a0f1362ab2564badc9cdfa2537242157d55aaf2f80e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:08:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40f11098308907b0e4a5a0f1362ab2564badc9cdfa2537242157d55aaf2f80e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:08:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40f11098308907b0e4a5a0f1362ab2564badc9cdfa2537242157d55aaf2f80e2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:08:27 compute-0 podman[280343]: 2025-10-11 04:08:27.645595424 +0000 UTC m=+0.146517353 container init 9b23031ea02d5ab187fa63040cc8c28fc9e2ce586e79f65725a8e73155315133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keller, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 04:08:27 compute-0 podman[280343]: 2025-10-11 04:08:27.658665742 +0000 UTC m=+0.159587671 container start 9b23031ea02d5ab187fa63040cc8c28fc9e2ce586e79f65725a8e73155315133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keller, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 04:08:27 compute-0 podman[280343]: 2025-10-11 04:08:27.663062556 +0000 UTC m=+0.163984495 container attach 9b23031ea02d5ab187fa63040cc8c28fc9e2ce586e79f65725a8e73155315133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keller, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct 11 04:08:27 compute-0 nova_compute[259850]: 2025-10-11 04:08:27.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:27.844 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:61:6f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '92:f1:b6:e4:f1:16'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:08:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:27.848 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:08:28 compute-0 nova_compute[259850]: 2025-10-11 04:08:28.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:08:28 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/162251364' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:08:28 compute-0 gallant_keller[280360]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:08:28 compute-0 gallant_keller[280360]: --> relative data size: 1.0
Oct 11 04:08:28 compute-0 gallant_keller[280360]: --> All data devices are unavailable
Oct 11 04:08:28 compute-0 systemd[1]: libpod-9b23031ea02d5ab187fa63040cc8c28fc9e2ce586e79f65725a8e73155315133.scope: Deactivated successfully.
Oct 11 04:08:28 compute-0 systemd[1]: libpod-9b23031ea02d5ab187fa63040cc8c28fc9e2ce586e79f65725a8e73155315133.scope: Consumed 1.115s CPU time.
Oct 11 04:08:28 compute-0 podman[280389]: 2025-10-11 04:08:28.90113959 +0000 UTC m=+0.037671211 container died 9b23031ea02d5ab187fa63040cc8c28fc9e2ce586e79f65725a8e73155315133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:08:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-40f11098308907b0e4a5a0f1362ab2564badc9cdfa2537242157d55aaf2f80e2-merged.mount: Deactivated successfully.
Oct 11 04:08:28 compute-0 podman[280389]: 2025-10-11 04:08:28.964900264 +0000 UTC m=+0.101431885 container remove 9b23031ea02d5ab187fa63040cc8c28fc9e2ce586e79f65725a8e73155315133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keller, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:08:28 compute-0 systemd[1]: libpod-conmon-9b23031ea02d5ab187fa63040cc8c28fc9e2ce586e79f65725a8e73155315133.scope: Deactivated successfully.
Oct 11 04:08:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Oct 11 04:08:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Oct 11 04:08:28 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Oct 11 04:08:28 compute-0 ceph-mon[74273]: pgmap v1174: 305 pgs: 305 active+clean; 167 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Oct 11 04:08:28 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/162251364' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:08:29 compute-0 sudo[280236]: pam_unix(sudo:session): session closed for user root
Oct 11 04:08:29 compute-0 sudo[280405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:08:29 compute-0 sudo[280405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:08:29 compute-0 sudo[280405]: pam_unix(sudo:session): session closed for user root
Oct 11 04:08:29 compute-0 sudo[280430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:08:29 compute-0 sudo[280430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:08:29 compute-0 sudo[280430]: pam_unix(sudo:session): session closed for user root
Oct 11 04:08:29 compute-0 sudo[280455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:08:29 compute-0 sudo[280455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:08:29 compute-0 sudo[280455]: pam_unix(sudo:session): session closed for user root
Oct 11 04:08:29 compute-0 sudo[280480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 04:08:29 compute-0 sudo[280480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:08:29 compute-0 nova_compute[259850]: 2025-10-11 04:08:29.497 2 DEBUG oslo_concurrency.lockutils [None req-b30db6ef-5faf-40b2-a588-caa8bb7a5cf2 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Acquiring lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:29 compute-0 nova_compute[259850]: 2025-10-11 04:08:29.498 2 DEBUG oslo_concurrency.lockutils [None req-b30db6ef-5faf-40b2-a588-caa8bb7a5cf2 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:29 compute-0 nova_compute[259850]: 2025-10-11 04:08:29.524 2 DEBUG nova.objects.instance [None req-b30db6ef-5faf-40b2-a588-caa8bb7a5cf2 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lazy-loading 'flavor' on Instance uuid 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:08:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 213 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.7 MiB/s wr, 198 op/s
Oct 11 04:08:29 compute-0 nova_compute[259850]: 2025-10-11 04:08:29.547 2 INFO nova.virt.libvirt.driver [None req-b30db6ef-5faf-40b2-a588-caa8bb7a5cf2 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Ignoring supplied device name: /dev/vdb
Oct 11 04:08:29 compute-0 nova_compute[259850]: 2025-10-11 04:08:29.563 2 DEBUG oslo_concurrency.lockutils [None req-b30db6ef-5faf-40b2-a588-caa8bb7a5cf2 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:29 compute-0 nova_compute[259850]: 2025-10-11 04:08:29.779 2 DEBUG oslo_concurrency.lockutils [None req-b30db6ef-5faf-40b2-a588-caa8bb7a5cf2 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Acquiring lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:29 compute-0 nova_compute[259850]: 2025-10-11 04:08:29.779 2 DEBUG oslo_concurrency.lockutils [None req-b30db6ef-5faf-40b2-a588-caa8bb7a5cf2 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:29 compute-0 nova_compute[259850]: 2025-10-11 04:08:29.780 2 INFO nova.compute.manager [None req-b30db6ef-5faf-40b2-a588-caa8bb7a5cf2 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Attaching volume 9afcb9af-1562-4190-88be-e79d9bae4aa8 to /dev/vdb
Oct 11 04:08:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:08:29 compute-0 podman[280546]: 2025-10-11 04:08:29.797621273 +0000 UTC m=+0.043233898 container create ec4e361001879960564965f237a4802f8c677bc3d35b0276ba207f5ffbab4b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wiles, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Oct 11 04:08:29 compute-0 systemd[1]: Started libpod-conmon-ec4e361001879960564965f237a4802f8c677bc3d35b0276ba207f5ffbab4b3c.scope.
Oct 11 04:08:29 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:08:29 compute-0 podman[280546]: 2025-10-11 04:08:29.866339806 +0000 UTC m=+0.111952451 container init ec4e361001879960564965f237a4802f8c677bc3d35b0276ba207f5ffbab4b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wiles, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 04:08:29 compute-0 podman[280546]: 2025-10-11 04:08:29.872072687 +0000 UTC m=+0.117685312 container start ec4e361001879960564965f237a4802f8c677bc3d35b0276ba207f5ffbab4b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wiles, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Oct 11 04:08:29 compute-0 podman[280546]: 2025-10-11 04:08:29.777552828 +0000 UTC m=+0.023165493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:08:29 compute-0 podman[280546]: 2025-10-11 04:08:29.874739902 +0000 UTC m=+0.120352547 container attach ec4e361001879960564965f237a4802f8c677bc3d35b0276ba207f5ffbab4b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wiles, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 04:08:29 compute-0 stoic_wiles[280562]: 167 167
Oct 11 04:08:29 compute-0 systemd[1]: libpod-ec4e361001879960564965f237a4802f8c677bc3d35b0276ba207f5ffbab4b3c.scope: Deactivated successfully.
Oct 11 04:08:29 compute-0 podman[280546]: 2025-10-11 04:08:29.87713831 +0000 UTC m=+0.122750935 container died ec4e361001879960564965f237a4802f8c677bc3d35b0276ba207f5ffbab4b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 04:08:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e27f6bb849ab8575823a70a6be99872dc0ed2b5b5755a3978d1d174a43da024-merged.mount: Deactivated successfully.
Oct 11 04:08:29 compute-0 podman[280546]: 2025-10-11 04:08:29.904822069 +0000 UTC m=+0.150434694 container remove ec4e361001879960564965f237a4802f8c677bc3d35b0276ba207f5ffbab4b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 04:08:29 compute-0 systemd[1]: libpod-conmon-ec4e361001879960564965f237a4802f8c677bc3d35b0276ba207f5ffbab4b3c.scope: Deactivated successfully.
Oct 11 04:08:29 compute-0 nova_compute[259850]: 2025-10-11 04:08:29.932 2 DEBUG os_brick.utils [None req-b30db6ef-5faf-40b2-a588-caa8bb7a5cf2 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:08:29 compute-0 nova_compute[259850]: 2025-10-11 04:08:29.935 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:08:29 compute-0 nova_compute[259850]: 2025-10-11 04:08:29.984 675 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:08:29 compute-0 nova_compute[259850]: 2025-10-11 04:08:29.984 675 DEBUG oslo.privsep.daemon [-] privsep: reply[f64ceeba-d3c2-4270-a67c-8c1945cb9cc3]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:29 compute-0 nova_compute[259850]: 2025-10-11 04:08:29.985 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:08:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Oct 11 04:08:29 compute-0 nova_compute[259850]: 2025-10-11 04:08:29.995 675 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:08:29 compute-0 nova_compute[259850]: 2025-10-11 04:08:29.995 675 DEBUG oslo.privsep.daemon [-] privsep: reply[e0ffdb9f-33e3-4707-b889-d0821a36b160]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e727c2bd432c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:29 compute-0 nova_compute[259850]: 2025-10-11 04:08:29.996 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:08:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Oct 11 04:08:30 compute-0 ceph-mon[74273]: osdmap e221: 3 total, 3 up, 3 in
Oct 11 04:08:30 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Oct 11 04:08:30 compute-0 nova_compute[259850]: 2025-10-11 04:08:30.009 675 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:08:30 compute-0 nova_compute[259850]: 2025-10-11 04:08:30.010 675 DEBUG oslo.privsep.daemon [-] privsep: reply[a050c986-5dc7-49b9-90df-747ffaa42c3b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:30 compute-0 nova_compute[259850]: 2025-10-11 04:08:30.010 675 DEBUG oslo.privsep.daemon [-] privsep: reply[2f1c6fcd-cbf3-4170-a48d-1aa07aa9a3dd]: (4, 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:30 compute-0 nova_compute[259850]: 2025-10-11 04:08:30.011 2 DEBUG oslo_concurrency.processutils [None req-b30db6ef-5faf-40b2-a588-caa8bb7a5cf2 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:08:30 compute-0 nova_compute[259850]: 2025-10-11 04:08:30.035 2 DEBUG oslo_concurrency.processutils [None req-b30db6ef-5faf-40b2-a588-caa8bb7a5cf2 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:08:30 compute-0 nova_compute[259850]: 2025-10-11 04:08:30.037 2 DEBUG os_brick.initiator.connectors.lightos [None req-b30db6ef-5faf-40b2-a588-caa8bb7a5cf2 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:08:30 compute-0 nova_compute[259850]: 2025-10-11 04:08:30.037 2 DEBUG os_brick.initiator.connectors.lightos [None req-b30db6ef-5faf-40b2-a588-caa8bb7a5cf2 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:08:30 compute-0 nova_compute[259850]: 2025-10-11 04:08:30.037 2 DEBUG os_brick.initiator.connectors.lightos [None req-b30db6ef-5faf-40b2-a588-caa8bb7a5cf2 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:08:30 compute-0 nova_compute[259850]: 2025-10-11 04:08:30.038 2 DEBUG os_brick.utils [None req-b30db6ef-5faf-40b2-a588-caa8bb7a5cf2 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] <== get_connector_properties: return (102ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e727c2bd432c', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:08:30 compute-0 nova_compute[259850]: 2025-10-11 04:08:30.038 2 DEBUG nova.virt.block_device [None req-b30db6ef-5faf-40b2-a588-caa8bb7a5cf2 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Updating existing volume attachment record: 7cada880-ebbb-4960-af1a-7cd1e82f4b8b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:08:30 compute-0 podman[280592]: 2025-10-11 04:08:30.098477688 +0000 UTC m=+0.049295908 container create 12f46542f20f8ceda986c0da4a8adcf6656e3714fa0600542cec470399f29018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:08:30 compute-0 systemd[1]: Started libpod-conmon-12f46542f20f8ceda986c0da4a8adcf6656e3714fa0600542cec470399f29018.scope.
Oct 11 04:08:30 compute-0 podman[280592]: 2025-10-11 04:08:30.079337509 +0000 UTC m=+0.030155729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:08:30 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:08:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79878ee897e1dda1bf223d5a13f4b7a52f520247a47f22f49def32cf74addd2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:08:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79878ee897e1dda1bf223d5a13f4b7a52f520247a47f22f49def32cf74addd2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:08:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79878ee897e1dda1bf223d5a13f4b7a52f520247a47f22f49def32cf74addd2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:08:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79878ee897e1dda1bf223d5a13f4b7a52f520247a47f22f49def32cf74addd2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:08:30 compute-0 podman[280592]: 2025-10-11 04:08:30.206126356 +0000 UTC m=+0.156944546 container init 12f46542f20f8ceda986c0da4a8adcf6656e3714fa0600542cec470399f29018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:08:30 compute-0 podman[280592]: 2025-10-11 04:08:30.218677489 +0000 UTC m=+0.169495679 container start 12f46542f20f8ceda986c0da4a8adcf6656e3714fa0600542cec470399f29018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackburn, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 04:08:30 compute-0 podman[280592]: 2025-10-11 04:08:30.222208859 +0000 UTC m=+0.173027129 container attach 12f46542f20f8ceda986c0da4a8adcf6656e3714fa0600542cec470399f29018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 04:08:30 compute-0 nova_compute[259850]: 2025-10-11 04:08:30.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:08:30 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/577327327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:08:30 compute-0 nova_compute[259850]: 2025-10-11 04:08:30.771 2 DEBUG nova.objects.instance [None req-b30db6ef-5faf-40b2-a588-caa8bb7a5cf2 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lazy-loading 'flavor' on Instance uuid 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:08:30 compute-0 nova_compute[259850]: 2025-10-11 04:08:30.797 2 DEBUG nova.virt.libvirt.driver [None req-b30db6ef-5faf-40b2-a588-caa8bb7a5cf2 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Attempting to attach volume 9afcb9af-1562-4190-88be-e79d9bae4aa8 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 11 04:08:30 compute-0 nova_compute[259850]: 2025-10-11 04:08:30.801 2 DEBUG nova.virt.libvirt.guest [None req-b30db6ef-5faf-40b2-a588-caa8bb7a5cf2 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] attach device xml: <disk type="network" device="disk">
Oct 11 04:08:30 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:08:30 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-9afcb9af-1562-4190-88be-e79d9bae4aa8">
Oct 11 04:08:30 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:08:30 compute-0 nova_compute[259850]:   </source>
Oct 11 04:08:30 compute-0 nova_compute[259850]:   <auth username="openstack">
Oct 11 04:08:30 compute-0 nova_compute[259850]:     <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:08:30 compute-0 nova_compute[259850]:   </auth>
Oct 11 04:08:30 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:08:30 compute-0 nova_compute[259850]:   <serial>9afcb9af-1562-4190-88be-e79d9bae4aa8</serial>
Oct 11 04:08:30 compute-0 nova_compute[259850]: </disk>
Oct 11 04:08:30 compute-0 nova_compute[259850]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 11 04:08:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:30.850 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8a473e03-2208-47ae-afcd-05ad744a5969, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:08:30 compute-0 nova_compute[259850]: 2025-10-11 04:08:30.910 2 DEBUG nova.virt.libvirt.driver [None req-b30db6ef-5faf-40b2-a588-caa8bb7a5cf2 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:08:30 compute-0 nova_compute[259850]: 2025-10-11 04:08:30.910 2 DEBUG nova.virt.libvirt.driver [None req-b30db6ef-5faf-40b2-a588-caa8bb7a5cf2 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:08:30 compute-0 nova_compute[259850]: 2025-10-11 04:08:30.911 2 DEBUG nova.virt.libvirt.driver [None req-b30db6ef-5faf-40b2-a588-caa8bb7a5cf2 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:08:30 compute-0 nova_compute[259850]: 2025-10-11 04:08:30.911 2 DEBUG nova.virt.libvirt.driver [None req-b30db6ef-5faf-40b2-a588-caa8bb7a5cf2 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] No VIF found with MAC fa:16:3e:fc:44:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]: {
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:     "0": [
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:         {
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "devices": [
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "/dev/loop3"
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             ],
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "lv_name": "ceph_lv0",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "lv_size": "21470642176",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "name": "ceph_lv0",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "tags": {
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.cluster_name": "ceph",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.crush_device_class": "",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.encrypted": "0",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.osd_id": "0",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.type": "block",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.vdo": "0"
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             },
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "type": "block",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "vg_name": "ceph_vg0"
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:         }
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:     ],
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:     "1": [
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:         {
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "devices": [
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "/dev/loop4"
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             ],
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "lv_name": "ceph_lv1",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "lv_size": "21470642176",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "name": "ceph_lv1",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "tags": {
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.cluster_name": "ceph",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.crush_device_class": "",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.encrypted": "0",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.osd_id": "1",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.type": "block",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.vdo": "0"
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             },
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "type": "block",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "vg_name": "ceph_vg1"
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:         }
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:     ],
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:     "2": [
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:         {
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "devices": [
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "/dev/loop5"
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             ],
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "lv_name": "ceph_lv2",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "lv_size": "21470642176",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "name": "ceph_lv2",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "tags": {
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.cluster_name": "ceph",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.crush_device_class": "",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.encrypted": "0",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.osd_id": "2",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.type": "block",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:                 "ceph.vdo": "0"
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             },
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "type": "block",
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:             "vg_name": "ceph_vg2"
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:         }
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]:     ]
Oct 11 04:08:30 compute-0 elegant_blackburn[280609]: }
Oct 11 04:08:31 compute-0 ceph-mon[74273]: pgmap v1176: 305 pgs: 305 active+clean; 213 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.7 MiB/s wr, 198 op/s
Oct 11 04:08:31 compute-0 ceph-mon[74273]: osdmap e222: 3 total, 3 up, 3 in
Oct 11 04:08:31 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/577327327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:08:31 compute-0 systemd[1]: libpod-12f46542f20f8ceda986c0da4a8adcf6656e3714fa0600542cec470399f29018.scope: Deactivated successfully.
Oct 11 04:08:31 compute-0 podman[280592]: 2025-10-11 04:08:31.018762959 +0000 UTC m=+0.969581189 container died 12f46542f20f8ceda986c0da4a8adcf6656e3714fa0600542cec470399f29018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 04:08:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-79878ee897e1dda1bf223d5a13f4b7a52f520247a47f22f49def32cf74addd2d-merged.mount: Deactivated successfully.
Oct 11 04:08:31 compute-0 podman[280592]: 2025-10-11 04:08:31.077285536 +0000 UTC m=+1.028103726 container remove 12f46542f20f8ceda986c0da4a8adcf6656e3714fa0600542cec470399f29018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackburn, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:08:31 compute-0 systemd[1]: libpod-conmon-12f46542f20f8ceda986c0da4a8adcf6656e3714fa0600542cec470399f29018.scope: Deactivated successfully.
Oct 11 04:08:31 compute-0 nova_compute[259850]: 2025-10-11 04:08:31.100 2 DEBUG oslo_concurrency.lockutils [None req-b30db6ef-5faf-40b2-a588-caa8bb7a5cf2 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.321s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:31 compute-0 sudo[280480]: pam_unix(sudo:session): session closed for user root
Oct 11 04:08:31 compute-0 sudo[280651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:08:31 compute-0 sudo[280651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:08:31 compute-0 sudo[280651]: pam_unix(sudo:session): session closed for user root
Oct 11 04:08:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:08:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:08:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:08:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:08:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007573653301426059 of space, bias 1.0, pg target 0.2272095990427818 quantized to 32 (current 32)
Oct 11 04:08:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:08:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0006926935802891189 of space, bias 1.0, pg target 0.20780807408673568 quantized to 32 (current 32)
Oct 11 04:08:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:08:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:08:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:08:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 11 04:08:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:08:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:08:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:08:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:08:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:08:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:08:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:08:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:08:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:08:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:08:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:08:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:08:31 compute-0 sudo[280676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:08:31 compute-0 sudo[280676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:08:31 compute-0 sudo[280676]: pam_unix(sudo:session): session closed for user root
Oct 11 04:08:31 compute-0 sudo[280701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:08:31 compute-0 sudo[280701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:08:31 compute-0 sudo[280701]: pam_unix(sudo:session): session closed for user root
Oct 11 04:08:31 compute-0 sudo[280726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 04:08:31 compute-0 sudo[280726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:08:31 compute-0 podman[280750]: 2025-10-11 04:08:31.481947451 +0000 UTC m=+0.054450793 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 04:08:31 compute-0 podman[280751]: 2025-10-11 04:08:31.511032389 +0000 UTC m=+0.083569762 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.schema-version=1.0)
Oct 11 04:08:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 213 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 2.7 MiB/s wr, 116 op/s
Oct 11 04:08:31 compute-0 podman[280835]: 2025-10-11 04:08:31.734163827 +0000 UTC m=+0.036027534 container create f867a016be723589f90898b0443cd16787fac0ca75e747dc77335f0b76ed3fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_chatelet, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 04:08:31 compute-0 systemd[1]: Started libpod-conmon-f867a016be723589f90898b0443cd16787fac0ca75e747dc77335f0b76ed3fb7.scope.
Oct 11 04:08:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:08:31 compute-0 podman[280835]: 2025-10-11 04:08:31.718961559 +0000 UTC m=+0.020825286 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:08:31 compute-0 podman[280835]: 2025-10-11 04:08:31.820363673 +0000 UTC m=+0.122227400 container init f867a016be723589f90898b0443cd16787fac0ca75e747dc77335f0b76ed3fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 04:08:31 compute-0 podman[280835]: 2025-10-11 04:08:31.82810734 +0000 UTC m=+0.129971047 container start f867a016be723589f90898b0443cd16787fac0ca75e747dc77335f0b76ed3fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_chatelet, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 04:08:31 compute-0 podman[280835]: 2025-10-11 04:08:31.83094369 +0000 UTC m=+0.132807417 container attach f867a016be723589f90898b0443cd16787fac0ca75e747dc77335f0b76ed3fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 04:08:31 compute-0 eloquent_chatelet[280851]: 167 167
Oct 11 04:08:31 compute-0 systemd[1]: libpod-f867a016be723589f90898b0443cd16787fac0ca75e747dc77335f0b76ed3fb7.scope: Deactivated successfully.
Oct 11 04:08:31 compute-0 podman[280835]: 2025-10-11 04:08:31.833403219 +0000 UTC m=+0.135266936 container died f867a016be723589f90898b0443cd16787fac0ca75e747dc77335f0b76ed3fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_chatelet, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:08:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6ba5f1121d0db75f59006853ff846f6fe3702a8b05362549d936694e9ba339f-merged.mount: Deactivated successfully.
Oct 11 04:08:31 compute-0 podman[280835]: 2025-10-11 04:08:31.869498685 +0000 UTC m=+0.171362392 container remove f867a016be723589f90898b0443cd16787fac0ca75e747dc77335f0b76ed3fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_chatelet, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:08:31 compute-0 systemd[1]: libpod-conmon-f867a016be723589f90898b0443cd16787fac0ca75e747dc77335f0b76ed3fb7.scope: Deactivated successfully.
Oct 11 04:08:32 compute-0 podman[280876]: 2025-10-11 04:08:32.043926073 +0000 UTC m=+0.043962458 container create 886ce59390fcd87a98fe58669f9b9e890f0df8064e170cf5ec431b42f96ac0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 04:08:32 compute-0 systemd[1]: Started libpod-conmon-886ce59390fcd87a98fe58669f9b9e890f0df8064e170cf5ec431b42f96ac0e4.scope.
Oct 11 04:08:32 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:08:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88ca9b2dd71d017539539554ecb4fef2bb0d5e5c31cf9f38afb080078228dd2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:08:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88ca9b2dd71d017539539554ecb4fef2bb0d5e5c31cf9f38afb080078228dd2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:08:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88ca9b2dd71d017539539554ecb4fef2bb0d5e5c31cf9f38afb080078228dd2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:08:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88ca9b2dd71d017539539554ecb4fef2bb0d5e5c31cf9f38afb080078228dd2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:08:32 compute-0 podman[280876]: 2025-10-11 04:08:32.118010857 +0000 UTC m=+0.118047242 container init 886ce59390fcd87a98fe58669f9b9e890f0df8064e170cf5ec431b42f96ac0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:08:32 compute-0 podman[280876]: 2025-10-11 04:08:32.025918456 +0000 UTC m=+0.025954871 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:08:32 compute-0 podman[280876]: 2025-10-11 04:08:32.130662103 +0000 UTC m=+0.130698518 container start 886ce59390fcd87a98fe58669f9b9e890f0df8064e170cf5ec431b42f96ac0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_colden, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 04:08:32 compute-0 podman[280876]: 2025-10-11 04:08:32.134138021 +0000 UTC m=+0.134174406 container attach 886ce59390fcd87a98fe58669f9b9e890f0df8064e170cf5ec431b42f96ac0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:08:32 compute-0 nova_compute[259850]: 2025-10-11 04:08:32.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:33 compute-0 ceph-mon[74273]: pgmap v1178: 305 pgs: 305 active+clean; 213 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 2.7 MiB/s wr, 116 op/s
Oct 11 04:08:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:08:33 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1692373382' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:08:33 compute-0 competent_colden[280893]: {
Oct 11 04:08:33 compute-0 competent_colden[280893]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 04:08:33 compute-0 competent_colden[280893]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:08:33 compute-0 competent_colden[280893]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:08:33 compute-0 competent_colden[280893]:         "osd_id": 1,
Oct 11 04:08:33 compute-0 competent_colden[280893]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:08:33 compute-0 competent_colden[280893]:         "type": "bluestore"
Oct 11 04:08:33 compute-0 competent_colden[280893]:     },
Oct 11 04:08:33 compute-0 competent_colden[280893]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 04:08:33 compute-0 competent_colden[280893]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:08:33 compute-0 competent_colden[280893]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:08:33 compute-0 competent_colden[280893]:         "osd_id": 2,
Oct 11 04:08:33 compute-0 competent_colden[280893]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:08:33 compute-0 competent_colden[280893]:         "type": "bluestore"
Oct 11 04:08:33 compute-0 competent_colden[280893]:     },
Oct 11 04:08:33 compute-0 competent_colden[280893]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 04:08:33 compute-0 competent_colden[280893]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:08:33 compute-0 competent_colden[280893]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:08:33 compute-0 competent_colden[280893]:         "osd_id": 0,
Oct 11 04:08:33 compute-0 competent_colden[280893]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:08:33 compute-0 competent_colden[280893]:         "type": "bluestore"
Oct 11 04:08:33 compute-0 competent_colden[280893]:     }
Oct 11 04:08:33 compute-0 competent_colden[280893]: }
Oct 11 04:08:33 compute-0 systemd[1]: libpod-886ce59390fcd87a98fe58669f9b9e890f0df8064e170cf5ec431b42f96ac0e4.scope: Deactivated successfully.
Oct 11 04:08:33 compute-0 podman[280876]: 2025-10-11 04:08:33.123941279 +0000 UTC m=+1.123977704 container died 886ce59390fcd87a98fe58669f9b9e890f0df8064e170cf5ec431b42f96ac0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:08:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-e88ca9b2dd71d017539539554ecb4fef2bb0d5e5c31cf9f38afb080078228dd2-merged.mount: Deactivated successfully.
Oct 11 04:08:33 compute-0 podman[280876]: 2025-10-11 04:08:33.206374509 +0000 UTC m=+1.206410904 container remove 886ce59390fcd87a98fe58669f9b9e890f0df8064e170cf5ec431b42f96ac0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_colden, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 04:08:33 compute-0 systemd[1]: libpod-conmon-886ce59390fcd87a98fe58669f9b9e890f0df8064e170cf5ec431b42f96ac0e4.scope: Deactivated successfully.
Oct 11 04:08:33 compute-0 sudo[280726]: pam_unix(sudo:session): session closed for user root
Oct 11 04:08:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:08:33 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:08:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:08:33 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:08:33 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev a51f8ae1-9f53-447b-89aa-00d0f5dd4688 does not exist
Oct 11 04:08:33 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev d51b2fc8-3638-4b5c-993e-f7bd806a3497 does not exist
Oct 11 04:08:33 compute-0 nova_compute[259850]: 2025-10-11 04:08:33.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:33 compute-0 sudo[280937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:08:33 compute-0 sudo[280937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:08:33 compute-0 sudo[280937]: pam_unix(sudo:session): session closed for user root
Oct 11 04:08:33 compute-0 sudo[280962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:08:33 compute-0 sudo[280962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:08:33 compute-0 sudo[280962]: pam_unix(sudo:session): session closed for user root
Oct 11 04:08:33 compute-0 nova_compute[259850]: 2025-10-11 04:08:33.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:08:33 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1169566769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:08:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 260 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 5.3 MiB/s wr, 166 op/s
Oct 11 04:08:34 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1692373382' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:08:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:08:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:08:34 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1169566769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:08:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Oct 11 04:08:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Oct 11 04:08:34 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Oct 11 04:08:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:08:35 compute-0 ceph-mon[74273]: pgmap v1179: 305 pgs: 305 active+clean; 260 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 5.3 MiB/s wr, 166 op/s
Oct 11 04:08:35 compute-0 ceph-mon[74273]: osdmap e223: 3 total, 3 up, 3 in
Oct 11 04:08:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Oct 11 04:08:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Oct 11 04:08:35 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Oct 11 04:08:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 260 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 66 op/s
Oct 11 04:08:36 compute-0 ceph-mon[74273]: osdmap e224: 3 total, 3 up, 3 in
Oct 11 04:08:36 compute-0 ceph-mon[74273]: pgmap v1182: 305 pgs: 305 active+clean; 260 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 66 op/s
Oct 11 04:08:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Oct 11 04:08:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Oct 11 04:08:37 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Oct 11 04:08:37 compute-0 nova_compute[259850]: 2025-10-11 04:08:37.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 260 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 66 op/s
Oct 11 04:08:38 compute-0 ceph-mon[74273]: osdmap e225: 3 total, 3 up, 3 in
Oct 11 04:08:38 compute-0 ceph-mon[74273]: pgmap v1184: 305 pgs: 305 active+clean; 260 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 66 op/s
Oct 11 04:08:38 compute-0 nova_compute[259850]: 2025-10-11 04:08:38.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:38 compute-0 nova_compute[259850]: 2025-10-11 04:08:38.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:39 compute-0 nova_compute[259850]: 2025-10-11 04:08:39.109 2 DEBUG oslo_concurrency.lockutils [None req-499644ae-a54b-4431-922c-c706bac111d3 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Acquiring lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:39 compute-0 nova_compute[259850]: 2025-10-11 04:08:39.110 2 DEBUG oslo_concurrency.lockutils [None req-499644ae-a54b-4431-922c-c706bac111d3 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:39 compute-0 nova_compute[259850]: 2025-10-11 04:08:39.123 2 INFO nova.compute.manager [None req-499644ae-a54b-4431-922c-c706bac111d3 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Detaching volume 9afcb9af-1562-4190-88be-e79d9bae4aa8
Oct 11 04:08:39 compute-0 nova_compute[259850]: 2025-10-11 04:08:39.269 2 INFO nova.virt.block_device [None req-499644ae-a54b-4431-922c-c706bac111d3 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Attempting to driver detach volume 9afcb9af-1562-4190-88be-e79d9bae4aa8 from mountpoint /dev/vdb
Oct 11 04:08:39 compute-0 nova_compute[259850]: 2025-10-11 04:08:39.284 2 DEBUG nova.virt.libvirt.driver [None req-499644ae-a54b-4431-922c-c706bac111d3 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Attempting to detach device vdb from instance 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 11 04:08:39 compute-0 nova_compute[259850]: 2025-10-11 04:08:39.285 2 DEBUG nova.virt.libvirt.guest [None req-499644ae-a54b-4431-922c-c706bac111d3 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 04:08:39 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:08:39 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-9afcb9af-1562-4190-88be-e79d9bae4aa8">
Oct 11 04:08:39 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:08:39 compute-0 nova_compute[259850]:   </source>
Oct 11 04:08:39 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:08:39 compute-0 nova_compute[259850]:   <serial>9afcb9af-1562-4190-88be-e79d9bae4aa8</serial>
Oct 11 04:08:39 compute-0 nova_compute[259850]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 04:08:39 compute-0 nova_compute[259850]: </disk>
Oct 11 04:08:39 compute-0 nova_compute[259850]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:08:39 compute-0 nova_compute[259850]: 2025-10-11 04:08:39.296 2 INFO nova.virt.libvirt.driver [None req-499644ae-a54b-4431-922c-c706bac111d3 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Successfully detached device vdb from instance 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7 from the persistent domain config.
Oct 11 04:08:39 compute-0 nova_compute[259850]: 2025-10-11 04:08:39.297 2 DEBUG nova.virt.libvirt.driver [None req-499644ae-a54b-4431-922c-c706bac111d3 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 11 04:08:39 compute-0 nova_compute[259850]: 2025-10-11 04:08:39.298 2 DEBUG nova.virt.libvirt.guest [None req-499644ae-a54b-4431-922c-c706bac111d3 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 04:08:39 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:08:39 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-9afcb9af-1562-4190-88be-e79d9bae4aa8">
Oct 11 04:08:39 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:08:39 compute-0 nova_compute[259850]:   </source>
Oct 11 04:08:39 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:08:39 compute-0 nova_compute[259850]:   <serial>9afcb9af-1562-4190-88be-e79d9bae4aa8</serial>
Oct 11 04:08:39 compute-0 nova_compute[259850]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 04:08:39 compute-0 nova_compute[259850]: </disk>
Oct 11 04:08:39 compute-0 nova_compute[259850]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:08:39 compute-0 nova_compute[259850]: 2025-10-11 04:08:39.421 2 DEBUG nova.virt.libvirt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Received event <DeviceRemovedEvent: 1760155719.4213274, 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 11 04:08:39 compute-0 nova_compute[259850]: 2025-10-11 04:08:39.425 2 DEBUG nova.virt.libvirt.driver [None req-499644ae-a54b-4431-922c-c706bac111d3 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 11 04:08:39 compute-0 nova_compute[259850]: 2025-10-11 04:08:39.429 2 INFO nova.virt.libvirt.driver [None req-499644ae-a54b-4431-922c-c706bac111d3 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Successfully detached device vdb from instance 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7 from the live domain config.
Oct 11 04:08:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 306 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 138 op/s
Oct 11 04:08:39 compute-0 nova_compute[259850]: 2025-10-11 04:08:39.595 2 DEBUG nova.objects.instance [None req-499644ae-a54b-4431-922c-c706bac111d3 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lazy-loading 'flavor' on Instance uuid 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:08:39 compute-0 nova_compute[259850]: 2025-10-11 04:08:39.645 2 DEBUG oslo_concurrency.lockutils [None req-499644ae-a54b-4431-922c-c706bac111d3 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:08:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Oct 11 04:08:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Oct 11 04:08:39 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Oct 11 04:08:40 compute-0 nova_compute[259850]: 2025-10-11 04:08:40.677 2 DEBUG oslo_concurrency.lockutils [None req-17417bbe-35ff-458e-8273-0693f85341eb fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Acquiring lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:40 compute-0 nova_compute[259850]: 2025-10-11 04:08:40.678 2 DEBUG oslo_concurrency.lockutils [None req-17417bbe-35ff-458e-8273-0693f85341eb fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:40 compute-0 nova_compute[259850]: 2025-10-11 04:08:40.678 2 DEBUG oslo_concurrency.lockutils [None req-17417bbe-35ff-458e-8273-0693f85341eb fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Acquiring lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:40 compute-0 nova_compute[259850]: 2025-10-11 04:08:40.679 2 DEBUG oslo_concurrency.lockutils [None req-17417bbe-35ff-458e-8273-0693f85341eb fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:40 compute-0 nova_compute[259850]: 2025-10-11 04:08:40.679 2 DEBUG oslo_concurrency.lockutils [None req-17417bbe-35ff-458e-8273-0693f85341eb fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:40 compute-0 nova_compute[259850]: 2025-10-11 04:08:40.681 2 INFO nova.compute.manager [None req-17417bbe-35ff-458e-8273-0693f85341eb fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Terminating instance
Oct 11 04:08:40 compute-0 nova_compute[259850]: 2025-10-11 04:08:40.683 2 DEBUG nova.compute.manager [None req-17417bbe-35ff-458e-8273-0693f85341eb fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:08:40 compute-0 kernel: tap05e93c0e-0c (unregistering): left promiscuous mode
Oct 11 04:08:40 compute-0 NetworkManager[44920]: <info>  [1760155720.7325] device (tap05e93c0e-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:08:40 compute-0 nova_compute[259850]: 2025-10-11 04:08:40.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:40 compute-0 ovn_controller[152025]: 2025-10-11T04:08:40Z|00102|binding|INFO|Releasing lport 05e93c0e-0ca7-4152-9b30-cb802b90de1f from this chassis (sb_readonly=0)
Oct 11 04:08:40 compute-0 ovn_controller[152025]: 2025-10-11T04:08:40Z|00103|binding|INFO|Setting lport 05e93c0e-0ca7-4152-9b30-cb802b90de1f down in Southbound
Oct 11 04:08:40 compute-0 ovn_controller[152025]: 2025-10-11T04:08:40Z|00104|binding|INFO|Removing iface tap05e93c0e-0c ovn-installed in OVS
Oct 11 04:08:40 compute-0 nova_compute[259850]: 2025-10-11 04:08:40.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:40 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:40.755 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:44:96 10.100.0.11'], port_security=['fa:16:3e:fc:44:96 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '5814e0c3-8afc-4d2d-98eb-6da773bfb7c7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8cb72c94-41d7-40be-8ef7-9351e1b06d48', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c04e56df694d49fdbb22c39773dfc036', 'neutron:revision_number': '4', 'neutron:security_group_ids': '19f99e73-7b96-4627-a9e6-b29b26da7418', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3458ebb-1a6a-4cc8-a158-43868faee92e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=05e93c0e-0ca7-4152-9b30-cb802b90de1f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:08:40 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:40.757 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 05e93c0e-0ca7-4152-9b30-cb802b90de1f in datapath 8cb72c94-41d7-40be-8ef7-9351e1b06d48 unbound from our chassis
Oct 11 04:08:40 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:40.760 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8cb72c94-41d7-40be-8ef7-9351e1b06d48, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:08:40 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:40.761 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[bac6abf0-0367-4b8b-a789-7434e280c5b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:40 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:40.762 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48 namespace which is not needed anymore
Oct 11 04:08:40 compute-0 nova_compute[259850]: 2025-10-11 04:08:40.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:40 compute-0 ceph-mon[74273]: pgmap v1185: 305 pgs: 305 active+clean; 306 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 138 op/s
Oct 11 04:08:40 compute-0 ceph-mon[74273]: osdmap e226: 3 total, 3 up, 3 in
Oct 11 04:08:40 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Oct 11 04:08:40 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 13.708s CPU time.
Oct 11 04:08:40 compute-0 systemd-machined[214869]: Machine qemu-9-instance-00000009 terminated.
Oct 11 04:08:40 compute-0 nova_compute[259850]: 2025-10-11 04:08:40.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:40 compute-0 nova_compute[259850]: 2025-10-11 04:08:40.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:40 compute-0 nova_compute[259850]: 2025-10-11 04:08:40.925 2 INFO nova.virt.libvirt.driver [-] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Instance destroyed successfully.
Oct 11 04:08:40 compute-0 nova_compute[259850]: 2025-10-11 04:08:40.926 2 DEBUG nova.objects.instance [None req-17417bbe-35ff-458e-8273-0693f85341eb fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lazy-loading 'resources' on Instance uuid 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:08:40 compute-0 nova_compute[259850]: 2025-10-11 04:08:40.952 2 DEBUG nova.virt.libvirt.vif [None req-17417bbe-35ff-458e-8273-0693f85341eb fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:08:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1031640988',display_name='tempest-VolumesBackupsTest-instance-1031640988',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1031640988',id=9,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNVqTNHLi0WBIbfKKzHgX7uX1c7Db4lpCGYQPDzDbX5PeMXwJgA86ENR9AHoUIPJm52kGc03LyhHVLcWEZvMPuNYEOXd0aovsRUC5Fu4Wy9sztYBoemBH/MUmHd01HKxGw==',key_name='tempest-keypair-1402827230',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:08:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c04e56df694d49fdbb22c39773dfc036',ramdisk_id='',reservation_id='r-glbrhlmo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-722883341',owner_user_name='tempest-VolumesBackupsTest-722883341-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:08:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fc44058c9b8d47d1907c195c404898c8',uuid=5814e0c3-8afc-4d2d-98eb-6da773bfb7c7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05e93c0e-0ca7-4152-9b30-cb802b90de1f", "address": "fa:16:3e:fc:44:96", "network": {"id": "8cb72c94-41d7-40be-8ef7-9351e1b06d48", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1596968619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c04e56df694d49fdbb22c39773dfc036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e93c0e-0c", "ovs_interfaceid": "05e93c0e-0ca7-4152-9b30-cb802b90de1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:08:40 compute-0 nova_compute[259850]: 2025-10-11 04:08:40.952 2 DEBUG nova.network.os_vif_util [None req-17417bbe-35ff-458e-8273-0693f85341eb fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Converting VIF {"id": "05e93c0e-0ca7-4152-9b30-cb802b90de1f", "address": "fa:16:3e:fc:44:96", "network": {"id": "8cb72c94-41d7-40be-8ef7-9351e1b06d48", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1596968619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c04e56df694d49fdbb22c39773dfc036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e93c0e-0c", "ovs_interfaceid": "05e93c0e-0ca7-4152-9b30-cb802b90de1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:08:40 compute-0 nova_compute[259850]: 2025-10-11 04:08:40.953 2 DEBUG nova.network.os_vif_util [None req-17417bbe-35ff-458e-8273-0693f85341eb fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fc:44:96,bridge_name='br-int',has_traffic_filtering=True,id=05e93c0e-0ca7-4152-9b30-cb802b90de1f,network=Network(8cb72c94-41d7-40be-8ef7-9351e1b06d48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05e93c0e-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:08:40 compute-0 nova_compute[259850]: 2025-10-11 04:08:40.954 2 DEBUG os_vif [None req-17417bbe-35ff-458e-8273-0693f85341eb fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:44:96,bridge_name='br-int',has_traffic_filtering=True,id=05e93c0e-0ca7-4152-9b30-cb802b90de1f,network=Network(8cb72c94-41d7-40be-8ef7-9351e1b06d48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05e93c0e-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:08:40 compute-0 neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48[277309]: [NOTICE]   (277313) : haproxy version is 2.8.14-c23fe91
Oct 11 04:08:40 compute-0 neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48[277309]: [NOTICE]   (277313) : path to executable is /usr/sbin/haproxy
Oct 11 04:08:40 compute-0 neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48[277309]: [WARNING]  (277313) : Exiting Master process...
Oct 11 04:08:40 compute-0 neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48[277309]: [WARNING]  (277313) : Exiting Master process...
Oct 11 04:08:40 compute-0 neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48[277309]: [ALERT]    (277313) : Current worker (277315) exited with code 143 (Terminated)
Oct 11 04:08:40 compute-0 neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48[277309]: [WARNING]  (277313) : All workers exited. Exiting... (0)
Oct 11 04:08:40 compute-0 nova_compute[259850]: 2025-10-11 04:08:40.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:40 compute-0 nova_compute[259850]: 2025-10-11 04:08:40.957 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05e93c0e-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:08:40 compute-0 nova_compute[259850]: 2025-10-11 04:08:40.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:40 compute-0 nova_compute[259850]: 2025-10-11 04:08:40.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:40 compute-0 systemd[1]: libpod-415a72eaefed018b58afb9f793f2c0b92e16bad6cea422b96f2721d49a00689f.scope: Deactivated successfully.
Oct 11 04:08:40 compute-0 podman[281013]: 2025-10-11 04:08:40.966972895 +0000 UTC m=+0.071989126 container died 415a72eaefed018b58afb9f793f2c0b92e16bad6cea422b96f2721d49a00689f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 04:08:40 compute-0 nova_compute[259850]: 2025-10-11 04:08:40.966 2 INFO os_vif [None req-17417bbe-35ff-458e-8273-0693f85341eb fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:44:96,bridge_name='br-int',has_traffic_filtering=True,id=05e93c0e-0ca7-4152-9b30-cb802b90de1f,network=Network(8cb72c94-41d7-40be-8ef7-9351e1b06d48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05e93c0e-0c')
Oct 11 04:08:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-306b951c3e253160853bcdc8a8338aea333d3aad898bcaa4a62724d51fb3f167-merged.mount: Deactivated successfully.
Oct 11 04:08:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-415a72eaefed018b58afb9f793f2c0b92e16bad6cea422b96f2721d49a00689f-userdata-shm.mount: Deactivated successfully.
Oct 11 04:08:41 compute-0 podman[281013]: 2025-10-11 04:08:41.00518493 +0000 UTC m=+0.110201171 container cleanup 415a72eaefed018b58afb9f793f2c0b92e16bad6cea422b96f2721d49a00689f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 04:08:41 compute-0 systemd[1]: libpod-conmon-415a72eaefed018b58afb9f793f2c0b92e16bad6cea422b96f2721d49a00689f.scope: Deactivated successfully.
Oct 11 04:08:41 compute-0 podman[281066]: 2025-10-11 04:08:41.100759859 +0000 UTC m=+0.055552744 container remove 415a72eaefed018b58afb9f793f2c0b92e16bad6cea422b96f2721d49a00689f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 11 04:08:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:41.107 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[d79b2c34-1c14-415a-8cf7-c498bbac169a]: (4, ('Sat Oct 11 04:08:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48 (415a72eaefed018b58afb9f793f2c0b92e16bad6cea422b96f2721d49a00689f)\n415a72eaefed018b58afb9f793f2c0b92e16bad6cea422b96f2721d49a00689f\nSat Oct 11 04:08:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48 (415a72eaefed018b58afb9f793f2c0b92e16bad6cea422b96f2721d49a00689f)\n415a72eaefed018b58afb9f793f2c0b92e16bad6cea422b96f2721d49a00689f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:41.108 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[9ae3f6e7-8045-45a4-8f4e-92d8ddcf1182]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:41.109 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8cb72c94-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:08:41 compute-0 kernel: tap8cb72c94-40: left promiscuous mode
Oct 11 04:08:41 compute-0 nova_compute[259850]: 2025-10-11 04:08:41.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:41.117 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[e959686b-5cde-4ed2-b8c7-6fefd24ee1e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:41 compute-0 nova_compute[259850]: 2025-10-11 04:08:41.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:41.143 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[1cb006a1-6869-41de-88d4-4ddcb6e408b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:41.145 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[85d04d0a-7150-4db9-acf4-2d8519dd26b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:41.165 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[52140381-5af7-4f1e-86ae-202b7c8c691d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407461, 'reachable_time': 25740, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281086, 'error': None, 'target': 'ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:41.168 162015 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 04:08:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:41.168 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[06fad5da-c1e5-4952-91f1-495b240abf27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:41 compute-0 systemd[1]: run-netns-ovnmeta\x2d8cb72c94\x2d41d7\x2d40be\x2d8ef7\x2d9351e1b06d48.mount: Deactivated successfully.
Oct 11 04:08:41 compute-0 nova_compute[259850]: 2025-10-11 04:08:41.375 2 INFO nova.virt.libvirt.driver [None req-17417bbe-35ff-458e-8273-0693f85341eb fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Deleting instance files /var/lib/nova/instances/5814e0c3-8afc-4d2d-98eb-6da773bfb7c7_del
Oct 11 04:08:41 compute-0 nova_compute[259850]: 2025-10-11 04:08:41.376 2 INFO nova.virt.libvirt.driver [None req-17417bbe-35ff-458e-8273-0693f85341eb fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Deletion of /var/lib/nova/instances/5814e0c3-8afc-4d2d-98eb-6da773bfb7c7_del complete
Oct 11 04:08:41 compute-0 nova_compute[259850]: 2025-10-11 04:08:41.429 2 DEBUG nova.compute.manager [req-5700eea0-1372-4ea2-8d04-3079976ae464 req-6c5074c2-c055-4879-b547-38d437ab756d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Received event network-vif-unplugged-05e93c0e-0ca7-4152-9b30-cb802b90de1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:08:41 compute-0 nova_compute[259850]: 2025-10-11 04:08:41.429 2 DEBUG oslo_concurrency.lockutils [req-5700eea0-1372-4ea2-8d04-3079976ae464 req-6c5074c2-c055-4879-b547-38d437ab756d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:41 compute-0 nova_compute[259850]: 2025-10-11 04:08:41.429 2 DEBUG oslo_concurrency.lockutils [req-5700eea0-1372-4ea2-8d04-3079976ae464 req-6c5074c2-c055-4879-b547-38d437ab756d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:41 compute-0 nova_compute[259850]: 2025-10-11 04:08:41.429 2 DEBUG oslo_concurrency.lockutils [req-5700eea0-1372-4ea2-8d04-3079976ae464 req-6c5074c2-c055-4879-b547-38d437ab756d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:41 compute-0 nova_compute[259850]: 2025-10-11 04:08:41.429 2 DEBUG nova.compute.manager [req-5700eea0-1372-4ea2-8d04-3079976ae464 req-6c5074c2-c055-4879-b547-38d437ab756d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] No waiting events found dispatching network-vif-unplugged-05e93c0e-0ca7-4152-9b30-cb802b90de1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:08:41 compute-0 nova_compute[259850]: 2025-10-11 04:08:41.429 2 DEBUG nova.compute.manager [req-5700eea0-1372-4ea2-8d04-3079976ae464 req-6c5074c2-c055-4879-b547-38d437ab756d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Received event network-vif-unplugged-05e93c0e-0ca7-4152-9b30-cb802b90de1f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 04:08:41 compute-0 nova_compute[259850]: 2025-10-11 04:08:41.444 2 INFO nova.compute.manager [None req-17417bbe-35ff-458e-8273-0693f85341eb fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Took 0.76 seconds to destroy the instance on the hypervisor.
Oct 11 04:08:41 compute-0 nova_compute[259850]: 2025-10-11 04:08:41.444 2 DEBUG oslo.service.loopingcall [None req-17417bbe-35ff-458e-8273-0693f85341eb fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:08:41 compute-0 nova_compute[259850]: 2025-10-11 04:08:41.446 2 DEBUG nova.compute.manager [-] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:08:41 compute-0 nova_compute[259850]: 2025-10-11 04:08:41.446 2 DEBUG nova.network.neutron [-] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:08:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 306 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.4 MiB/s wr, 132 op/s
Oct 11 04:08:42 compute-0 nova_compute[259850]: 2025-10-11 04:08:42.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:42 compute-0 nova_compute[259850]: 2025-10-11 04:08:42.055 2 DEBUG oslo_concurrency.lockutils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Acquiring lock "8010953a-e520-477e-a4ba-ceb34db48982" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:42 compute-0 nova_compute[259850]: 2025-10-11 04:08:42.056 2 DEBUG oslo_concurrency.lockutils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Lock "8010953a-e520-477e-a4ba-ceb34db48982" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:42 compute-0 nova_compute[259850]: 2025-10-11 04:08:42.077 2 DEBUG nova.compute.manager [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:08:42 compute-0 nova_compute[259850]: 2025-10-11 04:08:42.148 2 DEBUG oslo_concurrency.lockutils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:42 compute-0 nova_compute[259850]: 2025-10-11 04:08:42.149 2 DEBUG oslo_concurrency.lockutils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:42 compute-0 nova_compute[259850]: 2025-10-11 04:08:42.160 2 DEBUG nova.virt.hardware [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:08:42 compute-0 nova_compute[259850]: 2025-10-11 04:08:42.160 2 INFO nova.compute.claims [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:08:42 compute-0 nova_compute[259850]: 2025-10-11 04:08:42.245 2 DEBUG nova.scheduler.client.report [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Refreshing inventories for resource provider 108a560b-89c0-4926-a2fc-cb749a6f8386 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 04:08:42 compute-0 nova_compute[259850]: 2025-10-11 04:08:42.267 2 DEBUG nova.scheduler.client.report [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Updating ProviderTree inventory for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 04:08:42 compute-0 nova_compute[259850]: 2025-10-11 04:08:42.268 2 DEBUG nova.compute.provider_tree [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Updating inventory in ProviderTree for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 04:08:42 compute-0 nova_compute[259850]: 2025-10-11 04:08:42.289 2 DEBUG nova.scheduler.client.report [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Refreshing aggregate associations for resource provider 108a560b-89c0-4926-a2fc-cb749a6f8386, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 04:08:42 compute-0 nova_compute[259850]: 2025-10-11 04:08:42.312 2 DEBUG nova.scheduler.client.report [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Refreshing trait associations for resource provider 108a560b-89c0-4926-a2fc-cb749a6f8386, traits: COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE41,COMPUTE_NODE,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_BMI2,HW_CPU_X86_MMX,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_ABM,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 04:08:42 compute-0 nova_compute[259850]: 2025-10-11 04:08:42.387 2 DEBUG oslo_concurrency.processutils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:08:42 compute-0 nova_compute[259850]: 2025-10-11 04:08:42.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:42 compute-0 nova_compute[259850]: 2025-10-11 04:08:42.423 2 DEBUG nova.network.neutron [-] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:08:42 compute-0 nova_compute[259850]: 2025-10-11 04:08:42.441 2 INFO nova.compute.manager [-] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Took 0.99 seconds to deallocate network for instance.
Oct 11 04:08:42 compute-0 nova_compute[259850]: 2025-10-11 04:08:42.493 2 DEBUG oslo_concurrency.lockutils [None req-17417bbe-35ff-458e-8273-0693f85341eb fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:42 compute-0 ceph-mon[74273]: pgmap v1187: 305 pgs: 305 active+clean; 306 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.4 MiB/s wr, 132 op/s
Oct 11 04:08:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:08:42 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1809001123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:08:42 compute-0 nova_compute[259850]: 2025-10-11 04:08:42.849 2 DEBUG oslo_concurrency.processutils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:08:42 compute-0 nova_compute[259850]: 2025-10-11 04:08:42.858 2 DEBUG nova.compute.provider_tree [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:08:42 compute-0 nova_compute[259850]: 2025-10-11 04:08:42.882 2 DEBUG nova.scheduler.client.report [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:08:42 compute-0 nova_compute[259850]: 2025-10-11 04:08:42.912 2 DEBUG oslo_concurrency.lockutils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:42 compute-0 nova_compute[259850]: 2025-10-11 04:08:42.912 2 DEBUG nova.compute.manager [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:08:42 compute-0 nova_compute[259850]: 2025-10-11 04:08:42.914 2 DEBUG oslo_concurrency.lockutils [None req-17417bbe-35ff-458e-8273-0693f85341eb fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.422s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:42 compute-0 nova_compute[259850]: 2025-10-11 04:08:42.972 2 DEBUG oslo_concurrency.processutils [None req-17417bbe-35ff-458e-8273-0693f85341eb fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.003 2 DEBUG nova.compute.manager [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.003 2 DEBUG nova.network.neutron [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.030 2 INFO nova.virt.libvirt.driver [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.054 2 DEBUG nova.compute.manager [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.126 2 INFO nova.virt.block_device [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Booting with volume 6efc2072-594e-47eb-9f95-652902703cf7 at /dev/vda
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.213 2 DEBUG nova.policy [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8bb2149cfdfe44b2a94076ed5e55fbaf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '38b79203307d4f1caa56e7e44b103572', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.262 2 DEBUG os_brick.utils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.264 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.277 675 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.278 675 DEBUG oslo.privsep.daemon [-] privsep: reply[2db0bf66-0278-4e50-a60d-cf0e3515b291]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.280 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.289 675 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.289 675 DEBUG oslo.privsep.daemon [-] privsep: reply[642c56a0-a7e0-43b1-ad57-00c81ab4d875]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e727c2bd432c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.291 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.298 675 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.299 675 DEBUG oslo.privsep.daemon [-] privsep: reply[92148b39-74ff-48fa-a78d-34161b0d3f4f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.301 675 DEBUG oslo.privsep.daemon [-] privsep: reply[22e1b123-9c53-4712-8dc3-672c299b2a22]: (4, 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.301 2 DEBUG oslo_concurrency.processutils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.330 2 DEBUG oslo_concurrency.processutils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.340 2 DEBUG os_brick.initiator.connectors.lightos [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.341 2 DEBUG os_brick.initiator.connectors.lightos [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.341 2 DEBUG os_brick.initiator.connectors.lightos [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.342 2 DEBUG os_brick.utils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] <== get_connector_properties: return (78ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e727c2bd432c', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.343 2 DEBUG nova.virt.block_device [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Updating existing volume attachment record: c162ad46-c17c-4e5c-aee6-0c2d9befe674 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:08:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:08:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/684889823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.422 2 DEBUG oslo_concurrency.processutils [None req-17417bbe-35ff-458e-8273-0693f85341eb fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.428 2 DEBUG nova.compute.provider_tree [None req-17417bbe-35ff-458e-8273-0693f85341eb fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.450 2 DEBUG nova.scheduler.client.report [None req-17417bbe-35ff-458e-8273-0693f85341eb fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.484 2 DEBUG oslo_concurrency.lockutils [None req-17417bbe-35ff-458e-8273-0693f85341eb fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.519 2 INFO nova.scheduler.client.report [None req-17417bbe-35ff-458e-8273-0693f85341eb fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Deleted allocations for instance 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.524 2 DEBUG nova.compute.manager [req-ea608e9e-1abb-4e59-9a6d-43fd19254a58 req-91ec0ef9-9443-481c-9bda-f2749888b946 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Received event network-vif-plugged-05e93c0e-0ca7-4152-9b30-cb802b90de1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.524 2 DEBUG oslo_concurrency.lockutils [req-ea608e9e-1abb-4e59-9a6d-43fd19254a58 req-91ec0ef9-9443-481c-9bda-f2749888b946 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.524 2 DEBUG oslo_concurrency.lockutils [req-ea608e9e-1abb-4e59-9a6d-43fd19254a58 req-91ec0ef9-9443-481c-9bda-f2749888b946 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.525 2 DEBUG oslo_concurrency.lockutils [req-ea608e9e-1abb-4e59-9a6d-43fd19254a58 req-91ec0ef9-9443-481c-9bda-f2749888b946 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.525 2 DEBUG nova.compute.manager [req-ea608e9e-1abb-4e59-9a6d-43fd19254a58 req-91ec0ef9-9443-481c-9bda-f2749888b946 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] No waiting events found dispatching network-vif-plugged-05e93c0e-0ca7-4152-9b30-cb802b90de1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.525 2 WARNING nova.compute.manager [req-ea608e9e-1abb-4e59-9a6d-43fd19254a58 req-91ec0ef9-9443-481c-9bda-f2749888b946 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Received unexpected event network-vif-plugged-05e93c0e-0ca7-4152-9b30-cb802b90de1f for instance with vm_state deleted and task_state None.
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.526 2 DEBUG nova.compute.manager [req-ea608e9e-1abb-4e59-9a6d-43fd19254a58 req-91ec0ef9-9443-481c-9bda-f2749888b946 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Received event network-vif-deleted-05e93c0e-0ca7-4152-9b30-cb802b90de1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:08:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 226 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 145 op/s
Oct 11 04:08:43 compute-0 nova_compute[259850]: 2025-10-11 04:08:43.591 2 DEBUG oslo_concurrency.lockutils [None req-17417bbe-35ff-458e-8273-0693f85341eb fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "5814e0c3-8afc-4d2d-98eb-6da773bfb7c7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:43 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1809001123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:08:43 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/684889823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:08:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:08:44 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2413988528' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:08:44 compute-0 nova_compute[259850]: 2025-10-11 04:08:44.195 2 DEBUG nova.network.neutron [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Successfully created port: 34c86870-ee92-41f3-909b-1b576896b9cc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:08:44 compute-0 podman[281139]: 2025-10-11 04:08:44.436302056 +0000 UTC m=+0.132029376 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:08:44 compute-0 nova_compute[259850]: 2025-10-11 04:08:44.489 2 DEBUG nova.compute.manager [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:08:44 compute-0 nova_compute[259850]: 2025-10-11 04:08:44.492 2 DEBUG nova.virt.libvirt.driver [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:08:44 compute-0 nova_compute[259850]: 2025-10-11 04:08:44.492 2 INFO nova.virt.libvirt.driver [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Creating image(s)
Oct 11 04:08:44 compute-0 nova_compute[259850]: 2025-10-11 04:08:44.493 2 DEBUG nova.virt.libvirt.driver [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 11 04:08:44 compute-0 nova_compute[259850]: 2025-10-11 04:08:44.494 2 DEBUG nova.virt.libvirt.driver [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Ensure instance console log exists: /var/lib/nova/instances/8010953a-e520-477e-a4ba-ceb34db48982/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:08:44 compute-0 nova_compute[259850]: 2025-10-11 04:08:44.495 2 DEBUG oslo_concurrency.lockutils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:44 compute-0 nova_compute[259850]: 2025-10-11 04:08:44.495 2 DEBUG oslo_concurrency.lockutils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:44 compute-0 nova_compute[259850]: 2025-10-11 04:08:44.496 2 DEBUG oslo_concurrency.lockutils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:08:44 compute-0 ceph-mon[74273]: pgmap v1188: 305 pgs: 305 active+clean; 226 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 145 op/s
Oct 11 04:08:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2413988528' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:08:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 226 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 141 op/s
Oct 11 04:08:45 compute-0 nova_compute[259850]: 2025-10-11 04:08:45.590 2 DEBUG nova.network.neutron [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Successfully updated port: 34c86870-ee92-41f3-909b-1b576896b9cc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:08:45 compute-0 nova_compute[259850]: 2025-10-11 04:08:45.610 2 DEBUG oslo_concurrency.lockutils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Acquiring lock "refresh_cache-8010953a-e520-477e-a4ba-ceb34db48982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:08:45 compute-0 nova_compute[259850]: 2025-10-11 04:08:45.611 2 DEBUG oslo_concurrency.lockutils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Acquired lock "refresh_cache-8010953a-e520-477e-a4ba-ceb34db48982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:08:45 compute-0 nova_compute[259850]: 2025-10-11 04:08:45.611 2 DEBUG nova.network.neutron [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:08:45 compute-0 nova_compute[259850]: 2025-10-11 04:08:45.760 2 DEBUG nova.compute.manager [req-04585ad0-c6cb-45f0-a669-a74622e06133 req-9afc7341-1f3b-4da2-8024-fb8677533bfc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Received event network-changed-34c86870-ee92-41f3-909b-1b576896b9cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:08:45 compute-0 nova_compute[259850]: 2025-10-11 04:08:45.761 2 DEBUG nova.compute.manager [req-04585ad0-c6cb-45f0-a669-a74622e06133 req-9afc7341-1f3b-4da2-8024-fb8677533bfc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Refreshing instance network info cache due to event network-changed-34c86870-ee92-41f3-909b-1b576896b9cc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:08:45 compute-0 nova_compute[259850]: 2025-10-11 04:08:45.761 2 DEBUG oslo_concurrency.lockutils [req-04585ad0-c6cb-45f0-a669-a74622e06133 req-9afc7341-1f3b-4da2-8024-fb8677533bfc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-8010953a-e520-477e-a4ba-ceb34db48982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:08:45 compute-0 nova_compute[259850]: 2025-10-11 04:08:45.896 2 DEBUG nova.network.neutron [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:08:45 compute-0 nova_compute[259850]: 2025-10-11 04:08:45.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Oct 11 04:08:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Oct 11 04:08:46 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Oct 11 04:08:46 compute-0 ceph-mon[74273]: pgmap v1189: 305 pgs: 305 active+clean; 226 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 141 op/s
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.146 2 DEBUG nova.network.neutron [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Updating instance_info_cache with network_info: [{"id": "34c86870-ee92-41f3-909b-1b576896b9cc", "address": "fa:16:3e:d4:0e:80", "network": {"id": "5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-789712542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38b79203307d4f1caa56e7e44b103572", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c86870-ee", "ovs_interfaceid": "34c86870-ee92-41f3-909b-1b576896b9cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.190 2 DEBUG oslo_concurrency.lockutils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Releasing lock "refresh_cache-8010953a-e520-477e-a4ba-ceb34db48982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.190 2 DEBUG nova.compute.manager [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Instance network_info: |[{"id": "34c86870-ee92-41f3-909b-1b576896b9cc", "address": "fa:16:3e:d4:0e:80", "network": {"id": "5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-789712542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38b79203307d4f1caa56e7e44b103572", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c86870-ee", "ovs_interfaceid": "34c86870-ee92-41f3-909b-1b576896b9cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.191 2 DEBUG oslo_concurrency.lockutils [req-04585ad0-c6cb-45f0-a669-a74622e06133 req-9afc7341-1f3b-4da2-8024-fb8677533bfc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-8010953a-e520-477e-a4ba-ceb34db48982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.191 2 DEBUG nova.network.neutron [req-04585ad0-c6cb-45f0-a669-a74622e06133 req-9afc7341-1f3b-4da2-8024-fb8677533bfc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Refreshing network info cache for port 34c86870-ee92-41f3-909b-1b576896b9cc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.197 2 DEBUG nova.virt.libvirt.driver [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Start _get_guest_xml network_info=[{"id": "34c86870-ee92-41f3-909b-1b576896b9cc", "address": "fa:16:3e:d4:0e:80", "network": {"id": "5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-789712542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38b79203307d4f1caa56e7e44b103572", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c86870-ee", "ovs_interfaceid": "34c86870-ee92-41f3-909b-1b576896b9cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-6efc2072-594e-47eb-9f95-652902703cf7', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '6efc2072-594e-47eb-9f95-652902703cf7', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '8010953a-e520-477e-a4ba-ceb34db48982', 'attached_at': '', 'detached_at': '', 'volume_id': '6efc2072-594e-47eb-9f95-652902703cf7', 'serial': '6efc2072-594e-47eb-9f95-652902703cf7'}, 'boot_index': 0, 'guest_format': None, 'attachment_id': 'c162ad46-c17c-4e5c-aee6-0c2d9befe674', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.204 2 WARNING nova.virt.libvirt.driver [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.211 2 DEBUG nova.virt.libvirt.host [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.212 2 DEBUG nova.virt.libvirt.host [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.222 2 DEBUG nova.virt.libvirt.host [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.223 2 DEBUG nova.virt.libvirt.host [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.224 2 DEBUG nova.virt.libvirt.driver [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.224 2 DEBUG nova.virt.hardware [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.225 2 DEBUG nova.virt.hardware [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.226 2 DEBUG nova.virt.hardware [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.226 2 DEBUG nova.virt.hardware [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.226 2 DEBUG nova.virt.hardware [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.227 2 DEBUG nova.virt.hardware [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.227 2 DEBUG nova.virt.hardware [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.228 2 DEBUG nova.virt.hardware [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.228 2 DEBUG nova.virt.hardware [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.228 2 DEBUG nova.virt.hardware [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.229 2 DEBUG nova.virt.hardware [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:08:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:08:47 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/154964815' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:08:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:08:47 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/154964815' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.264 2 DEBUG nova.storage.rbd_utils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] rbd image 8010953a-e520-477e-a4ba-ceb34db48982_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.268 2 DEBUG oslo_concurrency.processutils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 226 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 3.2 KiB/s wr, 41 op/s
Oct 11 04:08:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:08:47 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2709424185' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.712 2 DEBUG oslo_concurrency.processutils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.740 2 DEBUG nova.virt.libvirt.vif [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:08:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1706545763',display_name='tempest-TestVolumeBackupRestore-server-1706545763',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1706545763',id=10,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH17W5kE8c8jHNH5W8L2W3jo0pvpW5izjZjtF23lj8QmsAX/ee5wMJLpdvzZ/zFNqur9tp2txrryU7QgSH4v5UjD/oKvsJwNRUCrfO426+mL3v0B2OhIPlcgoKavmlsW+g==',key_name='tempest-TestVolumeBackupRestore-317158668',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='38b79203307d4f1caa56e7e44b103572',ramdisk_id='',reservation_id='r-0h0jc2ph',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-949077181',owner_user_name='tempest-TestVolumeBackupRestore-949077181-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:08:43Z,user_data=None,user_id='8bb2149cfdfe44b2a94076ed5e55fbaf',uuid=8010953a-e520-477e-a4ba-ceb34db48982,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "34c86870-ee92-41f3-909b-1b576896b9cc", "address": "fa:16:3e:d4:0e:80", "network": {"id": "5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-789712542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"38b79203307d4f1caa56e7e44b103572", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c86870-ee", "ovs_interfaceid": "34c86870-ee92-41f3-909b-1b576896b9cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.741 2 DEBUG nova.network.os_vif_util [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Converting VIF {"id": "34c86870-ee92-41f3-909b-1b576896b9cc", "address": "fa:16:3e:d4:0e:80", "network": {"id": "5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-789712542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38b79203307d4f1caa56e7e44b103572", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c86870-ee", "ovs_interfaceid": "34c86870-ee92-41f3-909b-1b576896b9cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.742 2 DEBUG nova.network.os_vif_util [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:0e:80,bridge_name='br-int',has_traffic_filtering=True,id=34c86870-ee92-41f3-909b-1b576896b9cc,network=Network(5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34c86870-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.744 2 DEBUG nova.objects.instance [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8010953a-e520-477e-a4ba-ceb34db48982 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.759 2 DEBUG nova.virt.libvirt.driver [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:08:47 compute-0 nova_compute[259850]:   <uuid>8010953a-e520-477e-a4ba-ceb34db48982</uuid>
Oct 11 04:08:47 compute-0 nova_compute[259850]:   <name>instance-0000000a</name>
Oct 11 04:08:47 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:08:47 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:08:47 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <nova:name>tempest-TestVolumeBackupRestore-server-1706545763</nova:name>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:08:47</nova:creationTime>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:08:47 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:08:47 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:08:47 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:08:47 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:08:47 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:08:47 compute-0 nova_compute[259850]:         <nova:user uuid="8bb2149cfdfe44b2a94076ed5e55fbaf">tempest-TestVolumeBackupRestore-949077181-project-member</nova:user>
Oct 11 04:08:47 compute-0 nova_compute[259850]:         <nova:project uuid="38b79203307d4f1caa56e7e44b103572">tempest-TestVolumeBackupRestore-949077181</nova:project>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <nova:ports>
Oct 11 04:08:47 compute-0 nova_compute[259850]:         <nova:port uuid="34c86870-ee92-41f3-909b-1b576896b9cc">
Oct 11 04:08:47 compute-0 nova_compute[259850]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:         </nova:port>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       </nova:ports>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:08:47 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:08:47 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <system>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <entry name="serial">8010953a-e520-477e-a4ba-ceb34db48982</entry>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <entry name="uuid">8010953a-e520-477e-a4ba-ceb34db48982</entry>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     </system>
Oct 11 04:08:47 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:08:47 compute-0 nova_compute[259850]:   <os>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:   </os>
Oct 11 04:08:47 compute-0 nova_compute[259850]:   <features>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:   </features>
Oct 11 04:08:47 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:08:47 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:08:47 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/8010953a-e520-477e-a4ba-ceb34db48982_disk.config">
Oct 11 04:08:47 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       </source>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:08:47 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <source protocol="rbd" name="volumes/volume-6efc2072-594e-47eb-9f95-652902703cf7">
Oct 11 04:08:47 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       </source>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:08:47 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <serial>6efc2072-594e-47eb-9f95-652902703cf7</serial>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <interface type="ethernet">
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <mac address="fa:16:3e:d4:0e:80"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <mtu size="1442"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <target dev="tap34c86870-ee"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     </interface>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/8010953a-e520-477e-a4ba-ceb34db48982/console.log" append="off"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <video>
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     </video>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:08:47 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:08:47 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:08:47 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:08:47 compute-0 nova_compute[259850]: </domain>
Oct 11 04:08:47 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.761 2 DEBUG nova.compute.manager [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Preparing to wait for external event network-vif-plugged-34c86870-ee92-41f3-909b-1b576896b9cc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.762 2 DEBUG oslo_concurrency.lockutils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Acquiring lock "8010953a-e520-477e-a4ba-ceb34db48982-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.762 2 DEBUG oslo_concurrency.lockutils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Lock "8010953a-e520-477e-a4ba-ceb34db48982-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.763 2 DEBUG oslo_concurrency.lockutils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Lock "8010953a-e520-477e-a4ba-ceb34db48982-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.764 2 DEBUG nova.virt.libvirt.vif [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:08:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1706545763',display_name='tempest-TestVolumeBackupRestore-server-1706545763',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1706545763',id=10,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH17W5kE8c8jHNH5W8L2W3jo0pvpW5izjZjtF23lj8QmsAX/ee5wMJLpdvzZ/zFNqur9tp2txrryU7QgSH4v5UjD/oKvsJwNRUCrfO426+mL3v0B2OhIPlcgoKavmlsW+g==',key_name='tempest-TestVolumeBackupRestore-317158668',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='38b79203307d4f1caa56e7e44b103572',ramdisk_id='',reservation_id='r-0h0jc2ph',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-949077181',owner_user_name='tempest-TestVolumeBackupRestore-949077181-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:08:43Z,user_data=None,user_id='8bb2149cfdfe44b2a94076ed5e55fbaf',uuid=8010953a-e520-477e-a4ba-ceb34db48982,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "34c86870-ee92-41f3-909b-1b576896b9cc", "address": "fa:16:3e:d4:0e:80", "network": {"id": "5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-789712542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"38b79203307d4f1caa56e7e44b103572", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c86870-ee", "ovs_interfaceid": "34c86870-ee92-41f3-909b-1b576896b9cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.765 2 DEBUG nova.network.os_vif_util [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Converting VIF {"id": "34c86870-ee92-41f3-909b-1b576896b9cc", "address": "fa:16:3e:d4:0e:80", "network": {"id": "5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-789712542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38b79203307d4f1caa56e7e44b103572", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c86870-ee", "ovs_interfaceid": "34c86870-ee92-41f3-909b-1b576896b9cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.766 2 DEBUG nova.network.os_vif_util [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:0e:80,bridge_name='br-int',has_traffic_filtering=True,id=34c86870-ee92-41f3-909b-1b576896b9cc,network=Network(5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34c86870-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.767 2 DEBUG os_vif [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:0e:80,bridge_name='br-int',has_traffic_filtering=True,id=34c86870-ee92-41f3-909b-1b576896b9cc,network=Network(5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34c86870-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.768 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.769 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.774 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34c86870-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.775 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap34c86870-ee, col_values=(('external_ids', {'iface-id': '34c86870-ee92-41f3-909b-1b576896b9cc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d4:0e:80', 'vm-uuid': '8010953a-e520-477e-a4ba-ceb34db48982'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:47 compute-0 NetworkManager[44920]: <info>  [1760155727.7788] manager: (tap34c86870-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.786 2 INFO os_vif [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:0e:80,bridge_name='br-int',has_traffic_filtering=True,id=34c86870-ee92-41f3-909b-1b576896b9cc,network=Network(5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34c86870-ee')
Oct 11 04:08:47 compute-0 ceph-mon[74273]: osdmap e227: 3 total, 3 up, 3 in
Oct 11 04:08:47 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/154964815' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:08:47 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/154964815' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:08:47 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2709424185' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.866 2 DEBUG nova.virt.libvirt.driver [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.867 2 DEBUG nova.virt.libvirt.driver [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.867 2 DEBUG nova.virt.libvirt.driver [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] No VIF found with MAC fa:16:3e:d4:0e:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.868 2 INFO nova.virt.libvirt.driver [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Using config drive
Oct 11 04:08:47 compute-0 nova_compute[259850]: 2025-10-11 04:08:47.903 2 DEBUG nova.storage.rbd_utils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] rbd image 8010953a-e520-477e-a4ba-ceb34db48982_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:08:48 compute-0 nova_compute[259850]: 2025-10-11 04:08:48.260 2 INFO nova.virt.libvirt.driver [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Creating config drive at /var/lib/nova/instances/8010953a-e520-477e-a4ba-ceb34db48982/disk.config
Oct 11 04:08:48 compute-0 nova_compute[259850]: 2025-10-11 04:08:48.270 2 DEBUG oslo_concurrency.processutils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8010953a-e520-477e-a4ba-ceb34db48982/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp21ys00b8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:08:48 compute-0 nova_compute[259850]: 2025-10-11 04:08:48.415 2 DEBUG oslo_concurrency.processutils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8010953a-e520-477e-a4ba-ceb34db48982/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp21ys00b8" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:08:48 compute-0 nova_compute[259850]: 2025-10-11 04:08:48.455 2 DEBUG nova.storage.rbd_utils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] rbd image 8010953a-e520-477e-a4ba-ceb34db48982_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:08:48 compute-0 nova_compute[259850]: 2025-10-11 04:08:48.460 2 DEBUG oslo_concurrency.processutils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8010953a-e520-477e-a4ba-ceb34db48982/disk.config 8010953a-e520-477e-a4ba-ceb34db48982_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:08:48 compute-0 nova_compute[259850]: 2025-10-11 04:08:48.639 2 DEBUG oslo_concurrency.processutils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8010953a-e520-477e-a4ba-ceb34db48982/disk.config 8010953a-e520-477e-a4ba-ceb34db48982_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:08:48 compute-0 nova_compute[259850]: 2025-10-11 04:08:48.640 2 INFO nova.virt.libvirt.driver [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Deleting local config drive /var/lib/nova/instances/8010953a-e520-477e-a4ba-ceb34db48982/disk.config because it was imported into RBD.
Oct 11 04:08:48 compute-0 kernel: tap34c86870-ee: entered promiscuous mode
Oct 11 04:08:48 compute-0 NetworkManager[44920]: <info>  [1760155728.7186] manager: (tap34c86870-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/63)
Oct 11 04:08:48 compute-0 nova_compute[259850]: 2025-10-11 04:08:48.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:48 compute-0 systemd-udevd[281275]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:08:48 compute-0 ovn_controller[152025]: 2025-10-11T04:08:48Z|00105|binding|INFO|Claiming lport 34c86870-ee92-41f3-909b-1b576896b9cc for this chassis.
Oct 11 04:08:48 compute-0 ovn_controller[152025]: 2025-10-11T04:08:48Z|00106|binding|INFO|34c86870-ee92-41f3-909b-1b576896b9cc: Claiming fa:16:3e:d4:0e:80 10.100.0.11
Oct 11 04:08:48 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:48.773 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:0e:80 10.100.0.11'], port_security=['fa:16:3e:d4:0e:80 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8010953a-e520-477e-a4ba-ceb34db48982', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '38b79203307d4f1caa56e7e44b103572', 'neutron:revision_number': '2', 'neutron:security_group_ids': '912fb5b2-0956-49ad-b895-ceb40eae61c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef7f882e-b41a-4919-9a27-1f63862813fb, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=34c86870-ee92-41f3-909b-1b576896b9cc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:08:48 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:48.775 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 34c86870-ee92-41f3-909b-1b576896b9cc in datapath 5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1 bound to our chassis
Oct 11 04:08:48 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:48.777 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1
Oct 11 04:08:48 compute-0 NetworkManager[44920]: <info>  [1760155728.7832] device (tap34c86870-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:08:48 compute-0 NetworkManager[44920]: <info>  [1760155728.7868] device (tap34c86870-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:08:48 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:48.792 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[ca0d1f2d-e108-4ff4-af1f-b54e45e17012]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:48 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:48.793 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5dfb34b3-41 in ovnmeta-5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 04:08:48 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:48.797 267637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5dfb34b3-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 04:08:48 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:48.797 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[c19b5f3f-c634-4f93-b562-9941f483e5bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:48 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:48.798 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[6d1e313d-3354-42ae-a241-8d18d43458f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:48 compute-0 systemd-machined[214869]: New machine qemu-10-instance-0000000a.
Oct 11 04:08:48 compute-0 ovn_controller[152025]: 2025-10-11T04:08:48Z|00107|binding|INFO|Setting lport 34c86870-ee92-41f3-909b-1b576896b9cc ovn-installed in OVS
Oct 11 04:08:48 compute-0 ovn_controller[152025]: 2025-10-11T04:08:48Z|00108|binding|INFO|Setting lport 34c86870-ee92-41f3-909b-1b576896b9cc up in Southbound
Oct 11 04:08:48 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:48.811 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[9854a7df-9c29-4886-9db5-3b963d486cf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:48 compute-0 nova_compute[259850]: 2025-10-11 04:08:48.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:48 compute-0 nova_compute[259850]: 2025-10-11 04:08:48.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:48 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Oct 11 04:08:48 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:48.825 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[2dd314de-7322-48b5-a067-37fb3a4734d9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:48 compute-0 ceph-mon[74273]: pgmap v1191: 305 pgs: 305 active+clean; 226 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 3.2 KiB/s wr, 41 op/s
Oct 11 04:08:48 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:48.856 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[9fc8347f-bf91-4f31-aa29-cb5df248de40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:48 compute-0 NetworkManager[44920]: <info>  [1760155728.8640] manager: (tap5dfb34b3-40): new Veth device (/org/freedesktop/NetworkManager/Devices/64)
Oct 11 04:08:48 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:48.863 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[d91d7fcd-ddff-426e-881c-ce2367d57568]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:48 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:48.899 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[56c73add-a6c0-474a-ab1b-57437de6ec6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:48 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:48.902 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[8321470a-fe32-4f6b-93d8-5e026716f9dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:48 compute-0 NetworkManager[44920]: <info>  [1760155728.9339] device (tap5dfb34b3-40): carrier: link connected
Oct 11 04:08:48 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:48.942 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[9005f0e1-5f79-4d35-bcd2-3ce6d58a0a0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:48 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:48.964 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[801944ce-7e86-4a73-bd7f-faad137e1521]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5dfb34b3-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:ae:52'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411397, 'reachable_time': 20442, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281311, 'error': None, 'target': 'ovnmeta-5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:48 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:48.989 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[71c16783-298c-4262-8247-23cdb0cf4b47]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fead:ae52'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411397, 'tstamp': 411397}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281312, 'error': None, 'target': 'ovnmeta-5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:49.015 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[59993349-e985-46eb-aa9a-3bb36f24327a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5dfb34b3-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:ae:52'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411397, 'reachable_time': 20442, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281313, 'error': None, 'target': 'ovnmeta-5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:49.064 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[868a214d-6d71-4d37-8993-bdbcdb21ef02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:49.125 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[12d25f6a-fcbd-415c-94e3-f36319474788]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:49.126 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5dfb34b3-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:49.127 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:49.128 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5dfb34b3-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:08:49 compute-0 NetworkManager[44920]: <info>  [1760155729.1307] manager: (tap5dfb34b3-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Oct 11 04:08:49 compute-0 kernel: tap5dfb34b3-40: entered promiscuous mode
Oct 11 04:08:49 compute-0 nova_compute[259850]: 2025-10-11 04:08:49.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:49.134 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5dfb34b3-40, col_values=(('external_ids', {'iface-id': '1d722b8d-6273-437d-91b8-d3f6b817cebf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:08:49 compute-0 nova_compute[259850]: 2025-10-11 04:08:49.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:49 compute-0 ovn_controller[152025]: 2025-10-11T04:08:49Z|00109|binding|INFO|Releasing lport 1d722b8d-6273-437d-91b8-d3f6b817cebf from this chassis (sb_readonly=0)
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:49.137 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:49.138 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[adec0115-7de0-4267-9de0-a1d9310959d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:49.139 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]: global
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]:     log         /dev/log local0 debug
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]:     log-tag     haproxy-metadata-proxy-5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]:     user        root
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]:     group       root
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]:     maxconn     1024
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]:     pidfile     /var/lib/neutron/external/pids/5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1.pid.haproxy
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]:     daemon
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]: defaults
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]:     log global
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]:     mode http
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]:     option httplog
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]:     option dontlognull
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]:     option http-server-close
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]:     option forwardfor
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]:     retries                 3
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]:     timeout http-request    30s
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]:     timeout connect         30s
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]:     timeout client          32s
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]:     timeout server          32s
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]:     timeout http-keep-alive 30s
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]: listen listener
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]:     bind 169.254.169.254:80
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]:     http-request add-header X-OVN-Network-ID 5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 04:08:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:49.140 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1', 'env', 'PROCESS_TAG=haproxy-5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 04:08:49 compute-0 nova_compute[259850]: 2025-10-11 04:08:49.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 226 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 3.4 KiB/s wr, 56 op/s
Oct 11 04:08:49 compute-0 podman[281345]: 2025-10-11 04:08:49.55780967 +0000 UTC m=+0.069319062 container create d1a26a6087ccbe069490574ff1056c999e0758b2317bb5274473173217e7e28a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 11 04:08:49 compute-0 podman[281345]: 2025-10-11 04:08:49.518688269 +0000 UTC m=+0.030197751 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 04:08:49 compute-0 systemd[1]: Started libpod-conmon-d1a26a6087ccbe069490574ff1056c999e0758b2317bb5274473173217e7e28a.scope.
Oct 11 04:08:49 compute-0 nova_compute[259850]: 2025-10-11 04:08:49.637 2 DEBUG nova.network.neutron [req-04585ad0-c6cb-45f0-a669-a74622e06133 req-9afc7341-1f3b-4da2-8024-fb8677533bfc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Updated VIF entry in instance network info cache for port 34c86870-ee92-41f3-909b-1b576896b9cc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:08:49 compute-0 nova_compute[259850]: 2025-10-11 04:08:49.639 2 DEBUG nova.network.neutron [req-04585ad0-c6cb-45f0-a669-a74622e06133 req-9afc7341-1f3b-4da2-8024-fb8677533bfc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Updating instance_info_cache with network_info: [{"id": "34c86870-ee92-41f3-909b-1b576896b9cc", "address": "fa:16:3e:d4:0e:80", "network": {"id": "5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-789712542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38b79203307d4f1caa56e7e44b103572", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c86870-ee", "ovs_interfaceid": "34c86870-ee92-41f3-909b-1b576896b9cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:08:49 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:08:49 compute-0 nova_compute[259850]: 2025-10-11 04:08:49.654 2 DEBUG oslo_concurrency.lockutils [req-04585ad0-c6cb-45f0-a669-a74622e06133 req-9afc7341-1f3b-4da2-8024-fb8677533bfc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-8010953a-e520-477e-a4ba-ceb34db48982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:08:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bfc0d88dd5722f98c887d247e1d2bd8e16b12b2ad72754eff5d000d647b135c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:08:49 compute-0 podman[281345]: 2025-10-11 04:08:49.682302762 +0000 UTC m=+0.193812184 container init d1a26a6087ccbe069490574ff1056c999e0758b2317bb5274473173217e7e28a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:08:49 compute-0 podman[281358]: 2025-10-11 04:08:49.689608568 +0000 UTC m=+0.089801838 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 11 04:08:49 compute-0 podman[281345]: 2025-10-11 04:08:49.691237254 +0000 UTC m=+0.202746646 container start d1a26a6087ccbe069490574ff1056c999e0758b2317bb5274473173217e7e28a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:08:49 compute-0 neutron-haproxy-ovnmeta-5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1[281371]: [NOTICE]   (281401) : New worker (281417) forked
Oct 11 04:08:49 compute-0 neutron-haproxy-ovnmeta-5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1[281371]: [NOTICE]   (281401) : Loading success.
Oct 11 04:08:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:08:49 compute-0 nova_compute[259850]: 2025-10-11 04:08:49.824 2 DEBUG nova.compute.manager [req-4e443085-9b93-44a4-9847-cc033e7ce632 req-399ec316-dc1e-40b6-b778-16bc9a21e1fe f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Received event network-vif-plugged-34c86870-ee92-41f3-909b-1b576896b9cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:08:49 compute-0 nova_compute[259850]: 2025-10-11 04:08:49.825 2 DEBUG oslo_concurrency.lockutils [req-4e443085-9b93-44a4-9847-cc033e7ce632 req-399ec316-dc1e-40b6-b778-16bc9a21e1fe f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "8010953a-e520-477e-a4ba-ceb34db48982-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:49 compute-0 nova_compute[259850]: 2025-10-11 04:08:49.826 2 DEBUG oslo_concurrency.lockutils [req-4e443085-9b93-44a4-9847-cc033e7ce632 req-399ec316-dc1e-40b6-b778-16bc9a21e1fe f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "8010953a-e520-477e-a4ba-ceb34db48982-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:49 compute-0 nova_compute[259850]: 2025-10-11 04:08:49.826 2 DEBUG oslo_concurrency.lockutils [req-4e443085-9b93-44a4-9847-cc033e7ce632 req-399ec316-dc1e-40b6-b778-16bc9a21e1fe f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "8010953a-e520-477e-a4ba-ceb34db48982-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:49 compute-0 nova_compute[259850]: 2025-10-11 04:08:49.827 2 DEBUG nova.compute.manager [req-4e443085-9b93-44a4-9847-cc033e7ce632 req-399ec316-dc1e-40b6-b778-16bc9a21e1fe f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Processing event network-vif-plugged-34c86870-ee92-41f3-909b-1b576896b9cc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.230 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155730.229777, 8010953a-e520-477e-a4ba-ceb34db48982 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.230 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] VM Started (Lifecycle Event)
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.233 2 DEBUG nova.compute.manager [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.236 2 DEBUG nova.virt.libvirt.driver [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.240 2 INFO nova.virt.libvirt.driver [-] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Instance spawned successfully.
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.241 2 DEBUG nova.virt.libvirt.driver [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.265 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.274 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.279 2 DEBUG nova.virt.libvirt.driver [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.280 2 DEBUG nova.virt.libvirt.driver [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.280 2 DEBUG nova.virt.libvirt.driver [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.281 2 DEBUG nova.virt.libvirt.driver [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.282 2 DEBUG nova.virt.libvirt.driver [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.283 2 DEBUG nova.virt.libvirt.driver [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.307 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.308 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155730.2301302, 8010953a-e520-477e-a4ba-ceb34db48982 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.308 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] VM Paused (Lifecycle Event)
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.334 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.338 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155730.237045, 8010953a-e520-477e-a4ba-ceb34db48982 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.339 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] VM Resumed (Lifecycle Event)
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.347 2 INFO nova.compute.manager [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Took 5.86 seconds to spawn the instance on the hypervisor.
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.347 2 DEBUG nova.compute.manager [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.361 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.365 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.391 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.414 2 INFO nova.compute.manager [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Took 8.30 seconds to build instance.
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.437 2 DEBUG oslo_concurrency.lockutils [None req-6cb29daa-1624-4ef6-9260-77ed05213d92 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Lock "8010953a-e520-477e-a4ba-ceb34db48982" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.381s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.511 2 DEBUG oslo_concurrency.lockutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Acquiring lock "e879a322-2581-43da-916b-423a94821ed0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.511 2 DEBUG oslo_concurrency.lockutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Lock "e879a322-2581-43da-916b-423a94821ed0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.557 2 DEBUG nova.compute.manager [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.634 2 DEBUG oslo_concurrency.lockutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.635 2 DEBUG oslo_concurrency.lockutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.646 2 DEBUG nova.virt.hardware [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.646 2 INFO nova.compute.claims [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:08:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:08:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:08:50 compute-0 nova_compute[259850]: 2025-10-11 04:08:50.774 2 DEBUG oslo_concurrency.processutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:08:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:08:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:08:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:08:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:08:50 compute-0 ceph-mon[74273]: pgmap v1192: 305 pgs: 305 active+clean; 226 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 3.4 KiB/s wr, 56 op/s
Oct 11 04:08:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:08:51 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2540897313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.277 2 DEBUG oslo_concurrency.processutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.283 2 DEBUG nova.compute.provider_tree [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.347 2 DEBUG nova.scheduler.client.report [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.377 2 DEBUG oslo_concurrency.lockutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.378 2 DEBUG nova.compute.manager [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.432 2 DEBUG nova.compute.manager [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.432 2 DEBUG nova.network.neutron [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.453 2 INFO nova.virt.libvirt.driver [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.470 2 DEBUG nova.compute.manager [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:08:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 226 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.558 2 DEBUG nova.compute.manager [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.560 2 DEBUG nova.virt.libvirt.driver [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.560 2 INFO nova.virt.libvirt.driver [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Creating image(s)
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.581 2 DEBUG nova.storage.rbd_utils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] rbd image e879a322-2581-43da-916b-423a94821ed0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.601 2 DEBUG nova.storage.rbd_utils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] rbd image e879a322-2581-43da-916b-423a94821ed0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.619 2 DEBUG nova.storage.rbd_utils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] rbd image e879a322-2581-43da-916b-423a94821ed0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.622 2 DEBUG oslo_concurrency.processutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.639 2 DEBUG nova.policy [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f301bb3cf7f94411bff904828db8c555', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3c9fe3215f964559830df6c94dd6a581', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.676 2 DEBUG oslo_concurrency.processutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.677 2 DEBUG oslo_concurrency.lockutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Acquiring lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.677 2 DEBUG oslo_concurrency.lockutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.677 2 DEBUG oslo_concurrency.lockutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.694 2 DEBUG nova.storage.rbd_utils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] rbd image e879a322-2581-43da-916b-423a94821ed0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.697 2 DEBUG oslo_concurrency.processutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac e879a322-2581-43da-916b-423a94821ed0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:08:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2540897313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.941 2 DEBUG nova.compute.manager [req-3361de60-71d6-475c-a0cc-a3e5e532e3fd req-3195ca97-bfbd-4270-8f02-061d546559b8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Received event network-vif-plugged-34c86870-ee92-41f3-909b-1b576896b9cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.942 2 DEBUG oslo_concurrency.lockutils [req-3361de60-71d6-475c-a0cc-a3e5e532e3fd req-3195ca97-bfbd-4270-8f02-061d546559b8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "8010953a-e520-477e-a4ba-ceb34db48982-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.942 2 DEBUG oslo_concurrency.lockutils [req-3361de60-71d6-475c-a0cc-a3e5e532e3fd req-3195ca97-bfbd-4270-8f02-061d546559b8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "8010953a-e520-477e-a4ba-ceb34db48982-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.943 2 DEBUG oslo_concurrency.lockutils [req-3361de60-71d6-475c-a0cc-a3e5e532e3fd req-3195ca97-bfbd-4270-8f02-061d546559b8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "8010953a-e520-477e-a4ba-ceb34db48982-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.943 2 DEBUG nova.compute.manager [req-3361de60-71d6-475c-a0cc-a3e5e532e3fd req-3195ca97-bfbd-4270-8f02-061d546559b8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] No waiting events found dispatching network-vif-plugged-34c86870-ee92-41f3-909b-1b576896b9cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.944 2 WARNING nova.compute.manager [req-3361de60-71d6-475c-a0cc-a3e5e532e3fd req-3195ca97-bfbd-4270-8f02-061d546559b8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Received unexpected event network-vif-plugged-34c86870-ee92-41f3-909b-1b576896b9cc for instance with vm_state active and task_state None.
Oct 11 04:08:51 compute-0 nova_compute[259850]: 2025-10-11 04:08:51.948 2 DEBUG oslo_concurrency.processutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac e879a322-2581-43da-916b-423a94821ed0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:08:52 compute-0 nova_compute[259850]: 2025-10-11 04:08:52.029 2 DEBUG nova.storage.rbd_utils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] resizing rbd image e879a322-2581-43da-916b-423a94821ed0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:08:52 compute-0 nova_compute[259850]: 2025-10-11 04:08:52.150 2 DEBUG nova.objects.instance [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Lazy-loading 'migration_context' on Instance uuid e879a322-2581-43da-916b-423a94821ed0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:08:52 compute-0 nova_compute[259850]: 2025-10-11 04:08:52.167 2 DEBUG nova.virt.libvirt.driver [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:08:52 compute-0 nova_compute[259850]: 2025-10-11 04:08:52.168 2 DEBUG nova.virt.libvirt.driver [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Ensure instance console log exists: /var/lib/nova/instances/e879a322-2581-43da-916b-423a94821ed0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:08:52 compute-0 nova_compute[259850]: 2025-10-11 04:08:52.168 2 DEBUG oslo_concurrency.lockutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:52 compute-0 nova_compute[259850]: 2025-10-11 04:08:52.169 2 DEBUG oslo_concurrency.lockutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:52 compute-0 nova_compute[259850]: 2025-10-11 04:08:52.169 2 DEBUG oslo_concurrency.lockutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:52 compute-0 nova_compute[259850]: 2025-10-11 04:08:52.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:52 compute-0 nova_compute[259850]: 2025-10-11 04:08:52.684 2 DEBUG nova.network.neutron [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Successfully created port: cc7a934c-f273-4dde-b492-d37feef39f58 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:08:52 compute-0 nova_compute[259850]: 2025-10-11 04:08:52.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:52 compute-0 ceph-mon[74273]: pgmap v1193: 305 pgs: 305 active+clean; 226 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Oct 11 04:08:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 273 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.2 MiB/s wr, 158 op/s
Oct 11 04:08:54 compute-0 nova_compute[259850]: 2025-10-11 04:08:54.019 2 DEBUG nova.network.neutron [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Successfully updated port: cc7a934c-f273-4dde-b492-d37feef39f58 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:08:54 compute-0 nova_compute[259850]: 2025-10-11 04:08:54.037 2 DEBUG oslo_concurrency.lockutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Acquiring lock "refresh_cache-e879a322-2581-43da-916b-423a94821ed0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:08:54 compute-0 nova_compute[259850]: 2025-10-11 04:08:54.037 2 DEBUG oslo_concurrency.lockutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Acquired lock "refresh_cache-e879a322-2581-43da-916b-423a94821ed0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:08:54 compute-0 nova_compute[259850]: 2025-10-11 04:08:54.037 2 DEBUG nova.network.neutron [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:08:54 compute-0 nova_compute[259850]: 2025-10-11 04:08:54.112 2 DEBUG nova.compute.manager [req-a3c48fd8-681a-4831-9b28-11ea96a24518 req-4d702458-759d-4158-ba81-15e652bbb0f8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Received event network-changed-cc7a934c-f273-4dde-b492-d37feef39f58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:08:54 compute-0 nova_compute[259850]: 2025-10-11 04:08:54.112 2 DEBUG nova.compute.manager [req-a3c48fd8-681a-4831-9b28-11ea96a24518 req-4d702458-759d-4158-ba81-15e652bbb0f8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Refreshing instance network info cache due to event network-changed-cc7a934c-f273-4dde-b492-d37feef39f58. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:08:54 compute-0 nova_compute[259850]: 2025-10-11 04:08:54.113 2 DEBUG oslo_concurrency.lockutils [req-a3c48fd8-681a-4831-9b28-11ea96a24518 req-4d702458-759d-4158-ba81-15e652bbb0f8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-e879a322-2581-43da-916b-423a94821ed0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:08:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:08:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Oct 11 04:08:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Oct 11 04:08:54 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Oct 11 04:08:54 compute-0 nova_compute[259850]: 2025-10-11 04:08:54.841 2 DEBUG nova.network.neutron [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:08:54 compute-0 ceph-mon[74273]: pgmap v1194: 305 pgs: 305 active+clean; 273 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.2 MiB/s wr, 158 op/s
Oct 11 04:08:54 compute-0 ceph-mon[74273]: osdmap e228: 3 total, 3 up, 3 in
Oct 11 04:08:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 273 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.5 MiB/s wr, 181 op/s
Oct 11 04:08:55 compute-0 nova_compute[259850]: 2025-10-11 04:08:55.882 2 DEBUG nova.network.neutron [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Updating instance_info_cache with network_info: [{"id": "cc7a934c-f273-4dde-b492-d37feef39f58", "address": "fa:16:3e:a8:a4:b5", "network": {"id": "bc525eaa-e13d-45ff-a473-c699abd60e90", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-452300963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c9fe3215f964559830df6c94dd6a581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc7a934c-f2", "ovs_interfaceid": "cc7a934c-f273-4dde-b492-d37feef39f58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:08:55 compute-0 nova_compute[259850]: 2025-10-11 04:08:55.905 2 DEBUG oslo_concurrency.lockutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Releasing lock "refresh_cache-e879a322-2581-43da-916b-423a94821ed0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:08:55 compute-0 nova_compute[259850]: 2025-10-11 04:08:55.905 2 DEBUG nova.compute.manager [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Instance network_info: |[{"id": "cc7a934c-f273-4dde-b492-d37feef39f58", "address": "fa:16:3e:a8:a4:b5", "network": {"id": "bc525eaa-e13d-45ff-a473-c699abd60e90", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-452300963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c9fe3215f964559830df6c94dd6a581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc7a934c-f2", "ovs_interfaceid": "cc7a934c-f273-4dde-b492-d37feef39f58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:08:55 compute-0 nova_compute[259850]: 2025-10-11 04:08:55.906 2 DEBUG oslo_concurrency.lockutils [req-a3c48fd8-681a-4831-9b28-11ea96a24518 req-4d702458-759d-4158-ba81-15e652bbb0f8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-e879a322-2581-43da-916b-423a94821ed0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:08:55 compute-0 nova_compute[259850]: 2025-10-11 04:08:55.907 2 DEBUG nova.network.neutron [req-a3c48fd8-681a-4831-9b28-11ea96a24518 req-4d702458-759d-4158-ba81-15e652bbb0f8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Refreshing network info cache for port cc7a934c-f273-4dde-b492-d37feef39f58 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:08:55 compute-0 nova_compute[259850]: 2025-10-11 04:08:55.918 2 DEBUG nova.virt.libvirt.driver [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Start _get_guest_xml network_info=[{"id": "cc7a934c-f273-4dde-b492-d37feef39f58", "address": "fa:16:3e:a8:a4:b5", "network": {"id": "bc525eaa-e13d-45ff-a473-c699abd60e90", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-452300963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c9fe3215f964559830df6c94dd6a581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc7a934c-f2", "ovs_interfaceid": "cc7a934c-f273-4dde-b492-d37feef39f58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'image_id': '1a107e2f-1a9d-4b6f-861d-e64bee7d56be'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:08:55 compute-0 nova_compute[259850]: 2025-10-11 04:08:55.921 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760155720.91911, 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:08:55 compute-0 nova_compute[259850]: 2025-10-11 04:08:55.921 2 INFO nova.compute.manager [-] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] VM Stopped (Lifecycle Event)
Oct 11 04:08:55 compute-0 nova_compute[259850]: 2025-10-11 04:08:55.928 2 WARNING nova.virt.libvirt.driver [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:08:55 compute-0 nova_compute[259850]: 2025-10-11 04:08:55.938 2 DEBUG nova.virt.libvirt.host [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:08:55 compute-0 nova_compute[259850]: 2025-10-11 04:08:55.939 2 DEBUG nova.virt.libvirt.host [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:08:55 compute-0 nova_compute[259850]: 2025-10-11 04:08:55.944 2 DEBUG nova.virt.libvirt.host [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:08:55 compute-0 nova_compute[259850]: 2025-10-11 04:08:55.945 2 DEBUG nova.virt.libvirt.host [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:08:55 compute-0 nova_compute[259850]: 2025-10-11 04:08:55.946 2 DEBUG nova.virt.libvirt.driver [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:08:55 compute-0 nova_compute[259850]: 2025-10-11 04:08:55.946 2 DEBUG nova.virt.hardware [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:08:55 compute-0 nova_compute[259850]: 2025-10-11 04:08:55.947 2 DEBUG nova.virt.hardware [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:08:55 compute-0 nova_compute[259850]: 2025-10-11 04:08:55.948 2 DEBUG nova.virt.hardware [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:08:55 compute-0 nova_compute[259850]: 2025-10-11 04:08:55.948 2 DEBUG nova.virt.hardware [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:08:55 compute-0 nova_compute[259850]: 2025-10-11 04:08:55.949 2 DEBUG nova.virt.hardware [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:08:55 compute-0 nova_compute[259850]: 2025-10-11 04:08:55.950 2 DEBUG nova.virt.hardware [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:08:55 compute-0 nova_compute[259850]: 2025-10-11 04:08:55.950 2 DEBUG nova.virt.hardware [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:08:55 compute-0 nova_compute[259850]: 2025-10-11 04:08:55.951 2 DEBUG nova.virt.hardware [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:08:55 compute-0 nova_compute[259850]: 2025-10-11 04:08:55.951 2 DEBUG nova.virt.hardware [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:08:55 compute-0 nova_compute[259850]: 2025-10-11 04:08:55.952 2 DEBUG nova.virt.hardware [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:08:55 compute-0 nova_compute[259850]: 2025-10-11 04:08:55.952 2 DEBUG nova.virt.hardware [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:08:55 compute-0 nova_compute[259850]: 2025-10-11 04:08:55.958 2 DEBUG oslo_concurrency.processutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:08:55 compute-0 nova_compute[259850]: 2025-10-11 04:08:55.984 2 DEBUG nova.compute.manager [None req-d2ecff9d-95c3-44ee-a3a4-c98370e12c73 - - - - - -] [instance: 5814e0c3-8afc-4d2d-98eb-6da773bfb7c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.233 2 DEBUG nova.compute.manager [req-aecc8974-aeb2-420c-9296-66037eefe177 req-674e025c-c55b-4487-8b6c-bf00d03a4b60 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Received event network-changed-34c86870-ee92-41f3-909b-1b576896b9cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.234 2 DEBUG nova.compute.manager [req-aecc8974-aeb2-420c-9296-66037eefe177 req-674e025c-c55b-4487-8b6c-bf00d03a4b60 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Refreshing instance network info cache due to event network-changed-34c86870-ee92-41f3-909b-1b576896b9cc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.234 2 DEBUG oslo_concurrency.lockutils [req-aecc8974-aeb2-420c-9296-66037eefe177 req-674e025c-c55b-4487-8b6c-bf00d03a4b60 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-8010953a-e520-477e-a4ba-ceb34db48982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.235 2 DEBUG oslo_concurrency.lockutils [req-aecc8974-aeb2-420c-9296-66037eefe177 req-674e025c-c55b-4487-8b6c-bf00d03a4b60 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-8010953a-e520-477e-a4ba-ceb34db48982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.235 2 DEBUG nova.network.neutron [req-aecc8974-aeb2-420c-9296-66037eefe177 req-674e025c-c55b-4487-8b6c-bf00d03a4b60 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Refreshing network info cache for port 34c86870-ee92-41f3-909b-1b576896b9cc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:08:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:08:56 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/236967628' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.408 2 DEBUG oslo_concurrency.processutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.441 2 DEBUG nova.storage.rbd_utils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] rbd image e879a322-2581-43da-916b-423a94821ed0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.446 2 DEBUG oslo_concurrency.processutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:08:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:08:56 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1831956147' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.886 2 DEBUG oslo_concurrency.processutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.888 2 DEBUG nova.virt.libvirt.vif [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:08:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1632845143',display_name='tempest-TestEncryptedCinderVolumes-server-1632845143',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1632845143',id=11,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBaVbxNnaD5q0XmYwovrzExmbbVMAXd2YcdP8HyN3xmFYmLrUV4WQODRkW4d2lIUauD7nrrJ4pDUAC7Sn3o+1xphApTEfJBl9skNeWXh4VYjPGwBwFYiqPpdaiLhMeR/Rw==',key_name='tempest-keypair-733620179',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3c9fe3215f964559830df6c94dd6a581',ramdisk_id='',reservation_id='r-2bscv5uk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1907206765',owner_user_name='tempest-TestEncryptedCinderVolumes-1907206765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:08:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f301bb3cf7f94411bff904828db8c555',uuid=e879a322-2581-43da-916b-423a94821ed0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cc7a934c-f273-4dde-b492-d37feef39f58", "address": "fa:16:3e:a8:a4:b5", "network": {"id": "bc525eaa-e13d-45ff-a473-c699abd60e90", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-452300963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c9fe3215f964559830df6c94dd6a581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc7a934c-f2", "ovs_interfaceid": "cc7a934c-f273-4dde-b492-d37feef39f58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.889 2 DEBUG nova.network.os_vif_util [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Converting VIF {"id": "cc7a934c-f273-4dde-b492-d37feef39f58", "address": "fa:16:3e:a8:a4:b5", "network": {"id": "bc525eaa-e13d-45ff-a473-c699abd60e90", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-452300963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c9fe3215f964559830df6c94dd6a581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc7a934c-f2", "ovs_interfaceid": "cc7a934c-f273-4dde-b492-d37feef39f58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.889 2 DEBUG nova.network.os_vif_util [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:a4:b5,bridge_name='br-int',has_traffic_filtering=True,id=cc7a934c-f273-4dde-b492-d37feef39f58,network=Network(bc525eaa-e13d-45ff-a473-c699abd60e90),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc7a934c-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.890 2 DEBUG nova.objects.instance [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Lazy-loading 'pci_devices' on Instance uuid e879a322-2581-43da-916b-423a94821ed0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.908 2 DEBUG nova.virt.libvirt.driver [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:08:56 compute-0 nova_compute[259850]:   <uuid>e879a322-2581-43da-916b-423a94821ed0</uuid>
Oct 11 04:08:56 compute-0 nova_compute[259850]:   <name>instance-0000000b</name>
Oct 11 04:08:56 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:08:56 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:08:56 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <nova:name>tempest-TestEncryptedCinderVolumes-server-1632845143</nova:name>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:08:55</nova:creationTime>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:08:56 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:08:56 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:08:56 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:08:56 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:08:56 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:08:56 compute-0 nova_compute[259850]:         <nova:user uuid="f301bb3cf7f94411bff904828db8c555">tempest-TestEncryptedCinderVolumes-1907206765-project-member</nova:user>
Oct 11 04:08:56 compute-0 nova_compute[259850]:         <nova:project uuid="3c9fe3215f964559830df6c94dd6a581">tempest-TestEncryptedCinderVolumes-1907206765</nova:project>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <nova:root type="image" uuid="1a107e2f-1a9d-4b6f-861d-e64bee7d56be"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <nova:ports>
Oct 11 04:08:56 compute-0 nova_compute[259850]:         <nova:port uuid="cc7a934c-f273-4dde-b492-d37feef39f58">
Oct 11 04:08:56 compute-0 nova_compute[259850]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:         </nova:port>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       </nova:ports>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:08:56 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:08:56 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <system>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <entry name="serial">e879a322-2581-43da-916b-423a94821ed0</entry>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <entry name="uuid">e879a322-2581-43da-916b-423a94821ed0</entry>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     </system>
Oct 11 04:08:56 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:08:56 compute-0 nova_compute[259850]:   <os>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:   </os>
Oct 11 04:08:56 compute-0 nova_compute[259850]:   <features>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:   </features>
Oct 11 04:08:56 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:08:56 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:08:56 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/e879a322-2581-43da-916b-423a94821ed0_disk">
Oct 11 04:08:56 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       </source>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:08:56 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/e879a322-2581-43da-916b-423a94821ed0_disk.config">
Oct 11 04:08:56 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       </source>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:08:56 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <interface type="ethernet">
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <mac address="fa:16:3e:a8:a4:b5"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <mtu size="1442"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <target dev="tapcc7a934c-f2"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     </interface>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/e879a322-2581-43da-916b-423a94821ed0/console.log" append="off"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <video>
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     </video>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:08:56 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:08:56 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:08:56 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:08:56 compute-0 nova_compute[259850]: </domain>
Oct 11 04:08:56 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.910 2 DEBUG nova.compute.manager [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Preparing to wait for external event network-vif-plugged-cc7a934c-f273-4dde-b492-d37feef39f58 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.910 2 DEBUG oslo_concurrency.lockutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Acquiring lock "e879a322-2581-43da-916b-423a94821ed0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.911 2 DEBUG oslo_concurrency.lockutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Lock "e879a322-2581-43da-916b-423a94821ed0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.911 2 DEBUG oslo_concurrency.lockutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Lock "e879a322-2581-43da-916b-423a94821ed0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.912 2 DEBUG nova.virt.libvirt.vif [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:08:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1632845143',display_name='tempest-TestEncryptedCinderVolumes-server-1632845143',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1632845143',id=11,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBaVbxNnaD5q0XmYwovrzExmbbVMAXd2YcdP8HyN3xmFYmLrUV4WQODRkW4d2lIUauD7nrrJ4pDUAC7Sn3o+1xphApTEfJBl9skNeWXh4VYjPGwBwFYiqPpdaiLhMeR/Rw==',key_name='tempest-keypair-733620179',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3c9fe3215f964559830df6c94dd6a581',ramdisk_id='',reservation_id='r-2bscv5uk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1907206765',owner_user_name='tempest-TestEncryptedCinderVolumes-1907206765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:08:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f301bb3cf7f94411bff904828db8c555',uuid=e879a322-2581-43da-916b-423a94821ed0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cc7a934c-f273-4dde-b492-d37feef39f58", "address": "fa:16:3e:a8:a4:b5", "network": {"id": "bc525eaa-e13d-45ff-a473-c699abd60e90", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-452300963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c9fe3215f964559830df6c94dd6a581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc7a934c-f2", "ovs_interfaceid": "cc7a934c-f273-4dde-b492-d37feef39f58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:08:56 compute-0 ceph-mon[74273]: pgmap v1196: 305 pgs: 305 active+clean; 273 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.5 MiB/s wr, 181 op/s
Oct 11 04:08:56 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/236967628' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:08:56 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1831956147' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.914 2 DEBUG nova.network.os_vif_util [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Converting VIF {"id": "cc7a934c-f273-4dde-b492-d37feef39f58", "address": "fa:16:3e:a8:a4:b5", "network": {"id": "bc525eaa-e13d-45ff-a473-c699abd60e90", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-452300963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c9fe3215f964559830df6c94dd6a581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc7a934c-f2", "ovs_interfaceid": "cc7a934c-f273-4dde-b492-d37feef39f58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.916 2 DEBUG nova.network.os_vif_util [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:a4:b5,bridge_name='br-int',has_traffic_filtering=True,id=cc7a934c-f273-4dde-b492-d37feef39f58,network=Network(bc525eaa-e13d-45ff-a473-c699abd60e90),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc7a934c-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.916 2 DEBUG os_vif [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:a4:b5,bridge_name='br-int',has_traffic_filtering=True,id=cc7a934c-f273-4dde-b492-d37feef39f58,network=Network(bc525eaa-e13d-45ff-a473-c699abd60e90),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc7a934c-f2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.918 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.921 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.925 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcc7a934c-f2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.926 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcc7a934c-f2, col_values=(('external_ids', {'iface-id': 'cc7a934c-f273-4dde-b492-d37feef39f58', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:a4:b5', 'vm-uuid': 'e879a322-2581-43da-916b-423a94821ed0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:56 compute-0 NetworkManager[44920]: <info>  [1760155736.9294] manager: (tapcc7a934c-f2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.935 2 INFO os_vif [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:a4:b5,bridge_name='br-int',has_traffic_filtering=True,id=cc7a934c-f273-4dde-b492-d37feef39f58,network=Network(bc525eaa-e13d-45ff-a473-c699abd60e90),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc7a934c-f2')
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.989 2 DEBUG nova.virt.libvirt.driver [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.990 2 DEBUG nova.virt.libvirt.driver [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.990 2 DEBUG nova.virt.libvirt.driver [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] No VIF found with MAC fa:16:3e:a8:a4:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:08:56 compute-0 nova_compute[259850]: 2025-10-11 04:08:56.991 2 INFO nova.virt.libvirt.driver [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Using config drive
Oct 11 04:08:57 compute-0 nova_compute[259850]: 2025-10-11 04:08:57.012 2 DEBUG nova.storage.rbd_utils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] rbd image e879a322-2581-43da-916b-423a94821ed0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:08:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:08:57 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/15421864' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:08:57 compute-0 nova_compute[259850]: 2025-10-11 04:08:57.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 273 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.2 MiB/s wr, 158 op/s
Oct 11 04:08:57 compute-0 nova_compute[259850]: 2025-10-11 04:08:57.668 2 INFO nova.virt.libvirt.driver [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Creating config drive at /var/lib/nova/instances/e879a322-2581-43da-916b-423a94821ed0/disk.config
Oct 11 04:08:57 compute-0 nova_compute[259850]: 2025-10-11 04:08:57.674 2 DEBUG oslo_concurrency.processutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e879a322-2581-43da-916b-423a94821ed0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp38ab69fe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:08:57 compute-0 nova_compute[259850]: 2025-10-11 04:08:57.804 2 DEBUG oslo_concurrency.processutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e879a322-2581-43da-916b-423a94821ed0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp38ab69fe" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:08:57 compute-0 nova_compute[259850]: 2025-10-11 04:08:57.840 2 DEBUG nova.storage.rbd_utils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] rbd image e879a322-2581-43da-916b-423a94821ed0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:08:57 compute-0 nova_compute[259850]: 2025-10-11 04:08:57.844 2 DEBUG oslo_concurrency.processutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e879a322-2581-43da-916b-423a94821ed0/disk.config e879a322-2581-43da-916b-423a94821ed0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:08:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Oct 11 04:08:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Oct 11 04:08:57 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Oct 11 04:08:57 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/15421864' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:08:58 compute-0 nova_compute[259850]: 2025-10-11 04:08:58.035 2 DEBUG oslo_concurrency.processutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e879a322-2581-43da-916b-423a94821ed0/disk.config e879a322-2581-43da-916b-423a94821ed0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:08:58 compute-0 nova_compute[259850]: 2025-10-11 04:08:58.036 2 INFO nova.virt.libvirt.driver [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Deleting local config drive /var/lib/nova/instances/e879a322-2581-43da-916b-423a94821ed0/disk.config because it was imported into RBD.
Oct 11 04:08:58 compute-0 kernel: tapcc7a934c-f2: entered promiscuous mode
Oct 11 04:08:58 compute-0 NetworkManager[44920]: <info>  [1760155738.1023] manager: (tapcc7a934c-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/67)
Oct 11 04:08:58 compute-0 ovn_controller[152025]: 2025-10-11T04:08:58Z|00110|binding|INFO|Claiming lport cc7a934c-f273-4dde-b492-d37feef39f58 for this chassis.
Oct 11 04:08:58 compute-0 ovn_controller[152025]: 2025-10-11T04:08:58Z|00111|binding|INFO|cc7a934c-f273-4dde-b492-d37feef39f58: Claiming fa:16:3e:a8:a4:b5 10.100.0.9
Oct 11 04:08:58 compute-0 nova_compute[259850]: 2025-10-11 04:08:58.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:58.122 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:a4:b5 10.100.0.9'], port_security=['fa:16:3e:a8:a4:b5 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e879a322-2581-43da-916b-423a94821ed0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bc525eaa-e13d-45ff-a473-c699abd60e90', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c9fe3215f964559830df6c94dd6a581', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4053e409-faa5-44f7-9062-ac885993198c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b776411c-ef6a-4c8c-89aa-a5baa905f9ce, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=cc7a934c-f273-4dde-b492-d37feef39f58) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:58.125 161902 INFO neutron.agent.ovn.metadata.agent [-] Port cc7a934c-f273-4dde-b492-d37feef39f58 in datapath bc525eaa-e13d-45ff-a473-c699abd60e90 bound to our chassis
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:58.128 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bc525eaa-e13d-45ff-a473-c699abd60e90
Oct 11 04:08:58 compute-0 ovn_controller[152025]: 2025-10-11T04:08:58Z|00112|binding|INFO|Setting lport cc7a934c-f273-4dde-b492-d37feef39f58 ovn-installed in OVS
Oct 11 04:08:58 compute-0 ovn_controller[152025]: 2025-10-11T04:08:58Z|00113|binding|INFO|Setting lport cc7a934c-f273-4dde-b492-d37feef39f58 up in Southbound
Oct 11 04:08:58 compute-0 nova_compute[259850]: 2025-10-11 04:08:58.132 2 DEBUG nova.network.neutron [req-aecc8974-aeb2-420c-9296-66037eefe177 req-674e025c-c55b-4487-8b6c-bf00d03a4b60 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Updated VIF entry in instance network info cache for port 34c86870-ee92-41f3-909b-1b576896b9cc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:08:58 compute-0 nova_compute[259850]: 2025-10-11 04:08:58.133 2 DEBUG nova.network.neutron [req-aecc8974-aeb2-420c-9296-66037eefe177 req-674e025c-c55b-4487-8b6c-bf00d03a4b60 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Updating instance_info_cache with network_info: [{"id": "34c86870-ee92-41f3-909b-1b576896b9cc", "address": "fa:16:3e:d4:0e:80", "network": {"id": "5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-789712542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38b79203307d4f1caa56e7e44b103572", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c86870-ee", "ovs_interfaceid": "34c86870-ee92-41f3-909b-1b576896b9cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:08:58 compute-0 systemd-udevd[281758]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:08:58 compute-0 nova_compute[259850]: 2025-10-11 04:08:58.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:58 compute-0 nova_compute[259850]: 2025-10-11 04:08:58.142 2 DEBUG nova.network.neutron [req-a3c48fd8-681a-4831-9b28-11ea96a24518 req-4d702458-759d-4158-ba81-15e652bbb0f8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Updated VIF entry in instance network info cache for port cc7a934c-f273-4dde-b492-d37feef39f58. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:08:58 compute-0 nova_compute[259850]: 2025-10-11 04:08:58.142 2 DEBUG nova.network.neutron [req-a3c48fd8-681a-4831-9b28-11ea96a24518 req-4d702458-759d-4158-ba81-15e652bbb0f8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Updating instance_info_cache with network_info: [{"id": "cc7a934c-f273-4dde-b492-d37feef39f58", "address": "fa:16:3e:a8:a4:b5", "network": {"id": "bc525eaa-e13d-45ff-a473-c699abd60e90", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-452300963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c9fe3215f964559830df6c94dd6a581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc7a934c-f2", "ovs_interfaceid": "cc7a934c-f273-4dde-b492-d37feef39f58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:58.144 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[29aa5441-5548-4951-afba-8e037819dff9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:58.145 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbc525eaa-e1 in ovnmeta-bc525eaa-e13d-45ff-a473-c699abd60e90 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:58.151 267637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbc525eaa-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:58.151 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[5ea55147-6ca8-4b92-b691-484a8c3ce94d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:58 compute-0 NetworkManager[44920]: <info>  [1760155738.1546] device (tapcc7a934c-f2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:58.154 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b0c619-f3cd-49a4-a911-dcbeff75d53d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:58 compute-0 NetworkManager[44920]: <info>  [1760155738.1562] device (tapcc7a934c-f2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:08:58 compute-0 nova_compute[259850]: 2025-10-11 04:08:58.162 2 DEBUG oslo_concurrency.lockutils [req-a3c48fd8-681a-4831-9b28-11ea96a24518 req-4d702458-759d-4158-ba81-15e652bbb0f8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-e879a322-2581-43da-916b-423a94821ed0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:08:58 compute-0 nova_compute[259850]: 2025-10-11 04:08:58.163 2 DEBUG oslo_concurrency.lockutils [req-aecc8974-aeb2-420c-9296-66037eefe177 req-674e025c-c55b-4487-8b6c-bf00d03a4b60 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-8010953a-e520-477e-a4ba-ceb34db48982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:08:58 compute-0 systemd-machined[214869]: New machine qemu-11-instance-0000000b.
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:58.171 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[ac8f5e02-9699-491d-9ebd-09b26d89413d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:58 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:58.204 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[61865c75-f580-4a07-92fd-892073ca5599]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:58.239 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[b9c32830-0b4b-4663-940a-1f4db2f83760]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:58 compute-0 NetworkManager[44920]: <info>  [1760155738.2480] manager: (tapbc525eaa-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/68)
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:58.246 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[eb0690a0-cb8f-4481-82b6-b6a391c83cc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:58.289 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[83caf938-b6ff-4687-9bf2-040430f1071e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:58.292 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[c097b56a-a343-4f83-9888-83a078db3242]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:58 compute-0 NetworkManager[44920]: <info>  [1760155738.3236] device (tapbc525eaa-e0): carrier: link connected
Oct 11 04:08:58 compute-0 nova_compute[259850]: 2025-10-11 04:08:58.322 2 DEBUG nova.compute.manager [req-9b6860ed-8e28-477f-b970-73fcb8158105 req-cd282ff0-c706-4f8a-98ed-257112834500 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Received event network-changed-34c86870-ee92-41f3-909b-1b576896b9cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:08:58 compute-0 nova_compute[259850]: 2025-10-11 04:08:58.323 2 DEBUG nova.compute.manager [req-9b6860ed-8e28-477f-b970-73fcb8158105 req-cd282ff0-c706-4f8a-98ed-257112834500 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Refreshing instance network info cache due to event network-changed-34c86870-ee92-41f3-909b-1b576896b9cc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:08:58 compute-0 nova_compute[259850]: 2025-10-11 04:08:58.323 2 DEBUG oslo_concurrency.lockutils [req-9b6860ed-8e28-477f-b970-73fcb8158105 req-cd282ff0-c706-4f8a-98ed-257112834500 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-8010953a-e520-477e-a4ba-ceb34db48982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:08:58 compute-0 nova_compute[259850]: 2025-10-11 04:08:58.323 2 DEBUG oslo_concurrency.lockutils [req-9b6860ed-8e28-477f-b970-73fcb8158105 req-cd282ff0-c706-4f8a-98ed-257112834500 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-8010953a-e520-477e-a4ba-ceb34db48982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:08:58 compute-0 nova_compute[259850]: 2025-10-11 04:08:58.323 2 DEBUG nova.network.neutron [req-9b6860ed-8e28-477f-b970-73fcb8158105 req-cd282ff0-c706-4f8a-98ed-257112834500 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Refreshing network info cache for port 34c86870-ee92-41f3-909b-1b576896b9cc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:58.327 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[ef307a34-ce33-4558-969b-0e2e6c6a4958]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:58.344 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[b31a747f-21ac-4b7b-9e1d-f3867908b40b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbc525eaa-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:a6:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 412335, 'reachable_time': 42052, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281794, 'error': None, 'target': 'ovnmeta-bc525eaa-e13d-45ff-a473-c699abd60e90', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:58.359 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[064558b2-77eb-42cb-a256-b1a244b35eec]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8b:a655'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 412335, 'tstamp': 412335}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281795, 'error': None, 'target': 'ovnmeta-bc525eaa-e13d-45ff-a473-c699abd60e90', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:58.374 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[8b6e4942-7e59-4cf7-bc8a-b4c730cb7597]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbc525eaa-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:a6:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 412335, 'reachable_time': 42052, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281796, 'error': None, 'target': 'ovnmeta-bc525eaa-e13d-45ff-a473-c699abd60e90', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:58.406 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[177bcc21-5a03-45bb-821c-6e845824bb04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:58.465 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[e264b192-479d-4e9d-be16-605f70127b58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:58.466 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc525eaa-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:58.466 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:58.467 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc525eaa-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:08:58 compute-0 nova_compute[259850]: 2025-10-11 04:08:58.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:58 compute-0 NetworkManager[44920]: <info>  [1760155738.4693] manager: (tapbc525eaa-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Oct 11 04:08:58 compute-0 kernel: tapbc525eaa-e0: entered promiscuous mode
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:58.471 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbc525eaa-e0, col_values=(('external_ids', {'iface-id': 'cdd2aeac-e6a8-47f4-bd20-3e943fcf66e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:08:58 compute-0 ovn_controller[152025]: 2025-10-11T04:08:58Z|00114|binding|INFO|Releasing lport cdd2aeac-e6a8-47f4-bd20-3e943fcf66e2 from this chassis (sb_readonly=0)
Oct 11 04:08:58 compute-0 nova_compute[259850]: 2025-10-11 04:08:58.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:58.474 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bc525eaa-e13d-45ff-a473-c699abd60e90.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bc525eaa-e13d-45ff-a473-c699abd60e90.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:58.474 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[97dba376-33a1-4fdf-968c-a8c549bf31a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:58.475 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: global
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]:     log         /dev/log local0 debug
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]:     log-tag     haproxy-metadata-proxy-bc525eaa-e13d-45ff-a473-c699abd60e90
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]:     user        root
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]:     group       root
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]:     maxconn     1024
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]:     pidfile     /var/lib/neutron/external/pids/bc525eaa-e13d-45ff-a473-c699abd60e90.pid.haproxy
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]:     daemon
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: defaults
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]:     log global
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]:     mode http
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]:     option httplog
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]:     option dontlognull
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]:     option http-server-close
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]:     option forwardfor
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]:     retries                 3
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]:     timeout http-request    30s
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]:     timeout connect         30s
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]:     timeout client          32s
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]:     timeout server          32s
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]:     timeout http-keep-alive 30s
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: listen listener
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]:     bind 169.254.169.254:80
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]:     http-request add-header X-OVN-Network-ID bc525eaa-e13d-45ff-a473-c699abd60e90
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 04:08:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:08:58.476 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bc525eaa-e13d-45ff-a473-c699abd60e90', 'env', 'PROCESS_TAG=haproxy-bc525eaa-e13d-45ff-a473-c699abd60e90', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bc525eaa-e13d-45ff-a473-c699abd60e90.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 04:08:58 compute-0 nova_compute[259850]: 2025-10-11 04:08:58.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:08:58 compute-0 podman[281870]: 2025-10-11 04:08:58.861469051 +0000 UTC m=+0.060672428 container create a463e9809ccedb1a9105dd493a0c99d9bbf1e234d96d7b81e791e62714312ac0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc525eaa-e13d-45ff-a473-c699abd60e90, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:08:58 compute-0 systemd[1]: Started libpod-conmon-a463e9809ccedb1a9105dd493a0c99d9bbf1e234d96d7b81e791e62714312ac0.scope.
Oct 11 04:08:58 compute-0 podman[281870]: 2025-10-11 04:08:58.821310591 +0000 UTC m=+0.020513988 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 04:08:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:08:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a114031a3c6f7de5c95f5b215cc97e178b7a0bbd850dd345fcf4bddbb022bc60/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:08:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Oct 11 04:08:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Oct 11 04:08:58 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Oct 11 04:08:58 compute-0 ceph-mon[74273]: pgmap v1197: 305 pgs: 305 active+clean; 273 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.2 MiB/s wr, 158 op/s
Oct 11 04:08:58 compute-0 ceph-mon[74273]: osdmap e229: 3 total, 3 up, 3 in
Oct 11 04:08:58 compute-0 podman[281870]: 2025-10-11 04:08:58.957927005 +0000 UTC m=+0.157130382 container init a463e9809ccedb1a9105dd493a0c99d9bbf1e234d96d7b81e791e62714312ac0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc525eaa-e13d-45ff-a473-c699abd60e90, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 11 04:08:58 compute-0 podman[281870]: 2025-10-11 04:08:58.96698206 +0000 UTC m=+0.166185427 container start a463e9809ccedb1a9105dd493a0c99d9bbf1e234d96d7b81e791e62714312ac0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc525eaa-e13d-45ff-a473-c699abd60e90, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0)
Oct 11 04:08:58 compute-0 neutron-haproxy-ovnmeta-bc525eaa-e13d-45ff-a473-c699abd60e90[281885]: [NOTICE]   (281889) : New worker (281891) forked
Oct 11 04:08:58 compute-0 neutron-haproxy-ovnmeta-bc525eaa-e13d-45ff-a473-c699abd60e90[281885]: [NOTICE]   (281889) : Loading success.
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.184 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155739.184216, e879a322-2581-43da-916b-423a94821ed0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.184 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e879a322-2581-43da-916b-423a94821ed0] VM Started (Lifecycle Event)
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.209 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e879a322-2581-43da-916b-423a94821ed0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.214 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155739.1843543, e879a322-2581-43da-916b-423a94821ed0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.214 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e879a322-2581-43da-916b-423a94821ed0] VM Paused (Lifecycle Event)
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.231 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e879a322-2581-43da-916b-423a94821ed0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.236 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e879a322-2581-43da-916b-423a94821ed0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.257 2 DEBUG nova.compute.manager [req-214b77ca-c6df-4b1d-8c2f-634e408d8113 req-8e31f078-53f3-4231-bcc7-07490934b687 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Received event network-vif-plugged-cc7a934c-f273-4dde-b492-d37feef39f58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.258 2 DEBUG oslo_concurrency.lockutils [req-214b77ca-c6df-4b1d-8c2f-634e408d8113 req-8e31f078-53f3-4231-bcc7-07490934b687 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "e879a322-2581-43da-916b-423a94821ed0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.258 2 DEBUG oslo_concurrency.lockutils [req-214b77ca-c6df-4b1d-8c2f-634e408d8113 req-8e31f078-53f3-4231-bcc7-07490934b687 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e879a322-2581-43da-916b-423a94821ed0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.259 2 DEBUG oslo_concurrency.lockutils [req-214b77ca-c6df-4b1d-8c2f-634e408d8113 req-8e31f078-53f3-4231-bcc7-07490934b687 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e879a322-2581-43da-916b-423a94821ed0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.259 2 DEBUG nova.compute.manager [req-214b77ca-c6df-4b1d-8c2f-634e408d8113 req-8e31f078-53f3-4231-bcc7-07490934b687 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Processing event network-vif-plugged-cc7a934c-f273-4dde-b492-d37feef39f58 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.260 2 DEBUG nova.compute.manager [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.261 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e879a322-2581-43da-916b-423a94821ed0] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.264 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155739.2639925, e879a322-2581-43da-916b-423a94821ed0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.264 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e879a322-2581-43da-916b-423a94821ed0] VM Resumed (Lifecycle Event)
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.269 2 DEBUG nova.virt.libvirt.driver [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.272 2 INFO nova.virt.libvirt.driver [-] [instance: e879a322-2581-43da-916b-423a94821ed0] Instance spawned successfully.
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.273 2 DEBUG nova.virt.libvirt.driver [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.287 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e879a322-2581-43da-916b-423a94821ed0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.296 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e879a322-2581-43da-916b-423a94821ed0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.299 2 DEBUG nova.virt.libvirt.driver [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.300 2 DEBUG nova.virt.libvirt.driver [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.300 2 DEBUG nova.virt.libvirt.driver [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.301 2 DEBUG nova.virt.libvirt.driver [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.302 2 DEBUG nova.virt.libvirt.driver [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.303 2 DEBUG nova.virt.libvirt.driver [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.330 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e879a322-2581-43da-916b-423a94821ed0] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.361 2 INFO nova.compute.manager [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Took 7.80 seconds to spawn the instance on the hypervisor.
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.361 2 DEBUG nova.compute.manager [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.430 2 INFO nova.compute.manager [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Took 8.83 seconds to build instance.
Oct 11 04:08:59 compute-0 nova_compute[259850]: 2025-10-11 04:08:59.451 2 DEBUG oslo_concurrency.lockutils [None req-5a7b44da-4186-418d-b11a-c8513c690b68 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Lock "e879a322-2581-43da-916b-423a94821ed0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:08:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 319 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 3.5 MiB/s wr, 74 op/s
Oct 11 04:08:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:08:59 compute-0 ceph-mon[74273]: osdmap e230: 3 total, 3 up, 3 in
Oct 11 04:09:00 compute-0 nova_compute[259850]: 2025-10-11 04:09:00.047 2 DEBUG nova.network.neutron [req-9b6860ed-8e28-477f-b970-73fcb8158105 req-cd282ff0-c706-4f8a-98ed-257112834500 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Updated VIF entry in instance network info cache for port 34c86870-ee92-41f3-909b-1b576896b9cc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:09:00 compute-0 nova_compute[259850]: 2025-10-11 04:09:00.048 2 DEBUG nova.network.neutron [req-9b6860ed-8e28-477f-b970-73fcb8158105 req-cd282ff0-c706-4f8a-98ed-257112834500 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Updating instance_info_cache with network_info: [{"id": "34c86870-ee92-41f3-909b-1b576896b9cc", "address": "fa:16:3e:d4:0e:80", "network": {"id": "5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-789712542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38b79203307d4f1caa56e7e44b103572", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c86870-ee", "ovs_interfaceid": "34c86870-ee92-41f3-909b-1b576896b9cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:09:00 compute-0 nova_compute[259850]: 2025-10-11 04:09:00.062 2 DEBUG oslo_concurrency.lockutils [req-9b6860ed-8e28-477f-b970-73fcb8158105 req-cd282ff0-c706-4f8a-98ed-257112834500 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-8010953a-e520-477e-a4ba-ceb34db48982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:09:00 compute-0 nova_compute[259850]: 2025-10-11 04:09:00.063 2 DEBUG nova.compute.manager [req-9b6860ed-8e28-477f-b970-73fcb8158105 req-cd282ff0-c706-4f8a-98ed-257112834500 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Received event network-changed-34c86870-ee92-41f3-909b-1b576896b9cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:09:00 compute-0 nova_compute[259850]: 2025-10-11 04:09:00.064 2 DEBUG nova.compute.manager [req-9b6860ed-8e28-477f-b970-73fcb8158105 req-cd282ff0-c706-4f8a-98ed-257112834500 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Refreshing instance network info cache due to event network-changed-34c86870-ee92-41f3-909b-1b576896b9cc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:09:00 compute-0 nova_compute[259850]: 2025-10-11 04:09:00.064 2 DEBUG oslo_concurrency.lockutils [req-9b6860ed-8e28-477f-b970-73fcb8158105 req-cd282ff0-c706-4f8a-98ed-257112834500 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-8010953a-e520-477e-a4ba-ceb34db48982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:09:00 compute-0 nova_compute[259850]: 2025-10-11 04:09:00.064 2 DEBUG oslo_concurrency.lockutils [req-9b6860ed-8e28-477f-b970-73fcb8158105 req-cd282ff0-c706-4f8a-98ed-257112834500 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-8010953a-e520-477e-a4ba-ceb34db48982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:09:00 compute-0 nova_compute[259850]: 2025-10-11 04:09:00.065 2 DEBUG nova.network.neutron [req-9b6860ed-8e28-477f-b970-73fcb8158105 req-cd282ff0-c706-4f8a-98ed-257112834500 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Refreshing network info cache for port 34c86870-ee92-41f3-909b-1b576896b9cc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:09:00 compute-0 ceph-mon[74273]: pgmap v1200: 305 pgs: 305 active+clean; 319 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 3.5 MiB/s wr, 74 op/s
Oct 11 04:09:01 compute-0 nova_compute[259850]: 2025-10-11 04:09:01.425 2 DEBUG nova.network.neutron [req-9b6860ed-8e28-477f-b970-73fcb8158105 req-cd282ff0-c706-4f8a-98ed-257112834500 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Updated VIF entry in instance network info cache for port 34c86870-ee92-41f3-909b-1b576896b9cc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:09:01 compute-0 nova_compute[259850]: 2025-10-11 04:09:01.425 2 DEBUG nova.network.neutron [req-9b6860ed-8e28-477f-b970-73fcb8158105 req-cd282ff0-c706-4f8a-98ed-257112834500 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Updating instance_info_cache with network_info: [{"id": "34c86870-ee92-41f3-909b-1b576896b9cc", "address": "fa:16:3e:d4:0e:80", "network": {"id": "5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-789712542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38b79203307d4f1caa56e7e44b103572", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c86870-ee", "ovs_interfaceid": "34c86870-ee92-41f3-909b-1b576896b9cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:09:01 compute-0 nova_compute[259850]: 2025-10-11 04:09:01.451 2 DEBUG oslo_concurrency.lockutils [req-9b6860ed-8e28-477f-b970-73fcb8158105 req-cd282ff0-c706-4f8a-98ed-257112834500 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-8010953a-e520-477e-a4ba-ceb34db48982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:09:01 compute-0 nova_compute[259850]: 2025-10-11 04:09:01.494 2 DEBUG nova.compute.manager [req-d76866ca-a4bb-47a0-bd2b-d2282753321b req-4a01c07b-5186-498e-84f1-bcab2a76daeb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Received event network-vif-plugged-cc7a934c-f273-4dde-b492-d37feef39f58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:09:01 compute-0 nova_compute[259850]: 2025-10-11 04:09:01.495 2 DEBUG oslo_concurrency.lockutils [req-d76866ca-a4bb-47a0-bd2b-d2282753321b req-4a01c07b-5186-498e-84f1-bcab2a76daeb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "e879a322-2581-43da-916b-423a94821ed0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:09:01 compute-0 nova_compute[259850]: 2025-10-11 04:09:01.495 2 DEBUG oslo_concurrency.lockutils [req-d76866ca-a4bb-47a0-bd2b-d2282753321b req-4a01c07b-5186-498e-84f1-bcab2a76daeb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e879a322-2581-43da-916b-423a94821ed0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:09:01 compute-0 nova_compute[259850]: 2025-10-11 04:09:01.495 2 DEBUG oslo_concurrency.lockutils [req-d76866ca-a4bb-47a0-bd2b-d2282753321b req-4a01c07b-5186-498e-84f1-bcab2a76daeb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e879a322-2581-43da-916b-423a94821ed0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:09:01 compute-0 nova_compute[259850]: 2025-10-11 04:09:01.495 2 DEBUG nova.compute.manager [req-d76866ca-a4bb-47a0-bd2b-d2282753321b req-4a01c07b-5186-498e-84f1-bcab2a76daeb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] No waiting events found dispatching network-vif-plugged-cc7a934c-f273-4dde-b492-d37feef39f58 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:09:01 compute-0 nova_compute[259850]: 2025-10-11 04:09:01.495 2 WARNING nova.compute.manager [req-d76866ca-a4bb-47a0-bd2b-d2282753321b req-4a01c07b-5186-498e-84f1-bcab2a76daeb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Received unexpected event network-vif-plugged-cc7a934c-f273-4dde-b492-d37feef39f58 for instance with vm_state active and task_state None.
Oct 11 04:09:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 319 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.2 MiB/s wr, 66 op/s
Oct 11 04:09:01 compute-0 ovn_controller[152025]: 2025-10-11T04:09:01Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d4:0e:80 10.100.0.11
Oct 11 04:09:01 compute-0 ovn_controller[152025]: 2025-10-11T04:09:01Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d4:0e:80 10.100.0.11
Oct 11 04:09:01 compute-0 nova_compute[259850]: 2025-10-11 04:09:01.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:09:02 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3558574396' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:09:02 compute-0 podman[281901]: 2025-10-11 04:09:02.357242865 +0000 UTC m=+0.065678619 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.vendor=CentOS)
Oct 11 04:09:02 compute-0 podman[281900]: 2025-10-11 04:09:02.38192866 +0000 UTC m=+0.093070740 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251009)
Oct 11 04:09:02 compute-0 nova_compute[259850]: 2025-10-11 04:09:02.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Oct 11 04:09:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Oct 11 04:09:02 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Oct 11 04:09:02 compute-0 ceph-mon[74273]: pgmap v1201: 305 pgs: 305 active+clean; 319 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.2 MiB/s wr, 66 op/s
Oct 11 04:09:02 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3558574396' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:09:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 441 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 12 MiB/s rd, 14 MiB/s wr, 469 op/s
Oct 11 04:09:03 compute-0 nova_compute[259850]: 2025-10-11 04:09:03.577 2 DEBUG nova.compute.manager [req-a981c3b6-55c1-4ceb-a716-de0090fd0f52 req-9de81cdd-3592-4acd-9539-f3892890c3a1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Received event network-changed-cc7a934c-f273-4dde-b492-d37feef39f58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:09:03 compute-0 nova_compute[259850]: 2025-10-11 04:09:03.577 2 DEBUG nova.compute.manager [req-a981c3b6-55c1-4ceb-a716-de0090fd0f52 req-9de81cdd-3592-4acd-9539-f3892890c3a1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Refreshing instance network info cache due to event network-changed-cc7a934c-f273-4dde-b492-d37feef39f58. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:09:03 compute-0 nova_compute[259850]: 2025-10-11 04:09:03.578 2 DEBUG oslo_concurrency.lockutils [req-a981c3b6-55c1-4ceb-a716-de0090fd0f52 req-9de81cdd-3592-4acd-9539-f3892890c3a1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-e879a322-2581-43da-916b-423a94821ed0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:09:03 compute-0 nova_compute[259850]: 2025-10-11 04:09:03.578 2 DEBUG oslo_concurrency.lockutils [req-a981c3b6-55c1-4ceb-a716-de0090fd0f52 req-9de81cdd-3592-4acd-9539-f3892890c3a1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-e879a322-2581-43da-916b-423a94821ed0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:09:03 compute-0 nova_compute[259850]: 2025-10-11 04:09:03.578 2 DEBUG nova.network.neutron [req-a981c3b6-55c1-4ceb-a716-de0090fd0f52 req-9de81cdd-3592-4acd-9539-f3892890c3a1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Refreshing network info cache for port cc7a934c-f273-4dde-b492-d37feef39f58 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:09:03 compute-0 ceph-mon[74273]: osdmap e231: 3 total, 3 up, 3 in
Oct 11 04:09:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:09:04 compute-0 nova_compute[259850]: 2025-10-11 04:09:04.903 2 DEBUG nova.network.neutron [req-a981c3b6-55c1-4ceb-a716-de0090fd0f52 req-9de81cdd-3592-4acd-9539-f3892890c3a1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Updated VIF entry in instance network info cache for port cc7a934c-f273-4dde-b492-d37feef39f58. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:09:04 compute-0 nova_compute[259850]: 2025-10-11 04:09:04.904 2 DEBUG nova.network.neutron [req-a981c3b6-55c1-4ceb-a716-de0090fd0f52 req-9de81cdd-3592-4acd-9539-f3892890c3a1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Updating instance_info_cache with network_info: [{"id": "cc7a934c-f273-4dde-b492-d37feef39f58", "address": "fa:16:3e:a8:a4:b5", "network": {"id": "bc525eaa-e13d-45ff-a473-c699abd60e90", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-452300963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c9fe3215f964559830df6c94dd6a581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc7a934c-f2", "ovs_interfaceid": "cc7a934c-f273-4dde-b492-d37feef39f58", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:09:04 compute-0 nova_compute[259850]: 2025-10-11 04:09:04.925 2 DEBUG oslo_concurrency.lockutils [req-a981c3b6-55c1-4ceb-a716-de0090fd0f52 req-9de81cdd-3592-4acd-9539-f3892890c3a1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-e879a322-2581-43da-916b-423a94821ed0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:09:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Oct 11 04:09:04 compute-0 ceph-mon[74273]: pgmap v1203: 305 pgs: 305 active+clean; 441 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 12 MiB/s rd, 14 MiB/s wr, 469 op/s
Oct 11 04:09:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Oct 11 04:09:05 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Oct 11 04:09:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:09:05 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1760446680' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:09:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:09:05 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1760446680' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:09:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 441 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 11 MiB/s rd, 9.4 MiB/s wr, 357 op/s
Oct 11 04:09:06 compute-0 ceph-mon[74273]: osdmap e232: 3 total, 3 up, 3 in
Oct 11 04:09:06 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1760446680' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:09:06 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1760446680' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:09:06 compute-0 nova_compute[259850]: 2025-10-11 04:09:06.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Oct 11 04:09:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Oct 11 04:09:07 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Oct 11 04:09:07 compute-0 ceph-mon[74273]: pgmap v1205: 305 pgs: 305 active+clean; 441 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 11 MiB/s rd, 9.4 MiB/s wr, 357 op/s
Oct 11 04:09:07 compute-0 nova_compute[259850]: 2025-10-11 04:09:07.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 441 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 12 MiB/s rd, 10 MiB/s wr, 394 op/s
Oct 11 04:09:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Oct 11 04:09:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Oct 11 04:09:08 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Oct 11 04:09:08 compute-0 ceph-mon[74273]: osdmap e233: 3 total, 3 up, 3 in
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.130 2 DEBUG nova.compute.manager [req-9ed3c2c4-d988-462d-a3f6-6ed45b21ad8d req-985e3421-3fa1-4e3c-97b0-df9caadc81af f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Received event network-changed-34c86870-ee92-41f3-909b-1b576896b9cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.130 2 DEBUG nova.compute.manager [req-9ed3c2c4-d988-462d-a3f6-6ed45b21ad8d req-985e3421-3fa1-4e3c-97b0-df9caadc81af f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Refreshing instance network info cache due to event network-changed-34c86870-ee92-41f3-909b-1b576896b9cc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.131 2 DEBUG oslo_concurrency.lockutils [req-9ed3c2c4-d988-462d-a3f6-6ed45b21ad8d req-985e3421-3fa1-4e3c-97b0-df9caadc81af f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-8010953a-e520-477e-a4ba-ceb34db48982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.131 2 DEBUG oslo_concurrency.lockutils [req-9ed3c2c4-d988-462d-a3f6-6ed45b21ad8d req-985e3421-3fa1-4e3c-97b0-df9caadc81af f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-8010953a-e520-477e-a4ba-ceb34db48982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.132 2 DEBUG nova.network.neutron [req-9ed3c2c4-d988-462d-a3f6-6ed45b21ad8d req-985e3421-3fa1-4e3c-97b0-df9caadc81af f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Refreshing network info cache for port 34c86870-ee92-41f3-909b-1b576896b9cc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.232 2 DEBUG oslo_concurrency.lockutils [None req-0e8da413-cb5c-4e92-a9e8-820e499f52fa 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Acquiring lock "8010953a-e520-477e-a4ba-ceb34db48982" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.233 2 DEBUG oslo_concurrency.lockutils [None req-0e8da413-cb5c-4e92-a9e8-820e499f52fa 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Lock "8010953a-e520-477e-a4ba-ceb34db48982" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.233 2 DEBUG oslo_concurrency.lockutils [None req-0e8da413-cb5c-4e92-a9e8-820e499f52fa 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Acquiring lock "8010953a-e520-477e-a4ba-ceb34db48982-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.234 2 DEBUG oslo_concurrency.lockutils [None req-0e8da413-cb5c-4e92-a9e8-820e499f52fa 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Lock "8010953a-e520-477e-a4ba-ceb34db48982-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.234 2 DEBUG oslo_concurrency.lockutils [None req-0e8da413-cb5c-4e92-a9e8-820e499f52fa 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Lock "8010953a-e520-477e-a4ba-ceb34db48982-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.236 2 INFO nova.compute.manager [None req-0e8da413-cb5c-4e92-a9e8-820e499f52fa 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Terminating instance
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.238 2 DEBUG nova.compute.manager [None req-0e8da413-cb5c-4e92-a9e8-820e499f52fa 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:09:08 compute-0 kernel: tap34c86870-ee (unregistering): left promiscuous mode
Oct 11 04:09:08 compute-0 NetworkManager[44920]: <info>  [1760155748.2988] device (tap34c86870-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:09:08 compute-0 ovn_controller[152025]: 2025-10-11T04:09:08Z|00115|binding|INFO|Releasing lport 34c86870-ee92-41f3-909b-1b576896b9cc from this chassis (sb_readonly=0)
Oct 11 04:09:08 compute-0 ovn_controller[152025]: 2025-10-11T04:09:08Z|00116|binding|INFO|Setting lport 34c86870-ee92-41f3-909b-1b576896b9cc down in Southbound
Oct 11 04:09:08 compute-0 ovn_controller[152025]: 2025-10-11T04:09:08Z|00117|binding|INFO|Removing iface tap34c86870-ee ovn-installed in OVS
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:08 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:08.318 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:0e:80 10.100.0.11'], port_security=['fa:16:3e:d4:0e:80 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8010953a-e520-477e-a4ba-ceb34db48982', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '38b79203307d4f1caa56e7e44b103572', 'neutron:revision_number': '4', 'neutron:security_group_ids': '912fb5b2-0956-49ad-b895-ceb40eae61c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef7f882e-b41a-4919-9a27-1f63862813fb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=34c86870-ee92-41f3-909b-1b576896b9cc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:09:08 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:08.319 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 34c86870-ee92-41f3-909b-1b576896b9cc in datapath 5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1 unbound from our chassis
Oct 11 04:09:08 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:08.320 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:09:08 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:08.321 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[53659c46-df1a-4f89-a957-35d8069a0443]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:08 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:08.322 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1 namespace which is not needed anymore
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:08 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Oct 11 04:09:08 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 12.845s CPU time.
Oct 11 04:09:08 compute-0 systemd-machined[214869]: Machine qemu-10-instance-0000000a terminated.
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:08 compute-0 neutron-haproxy-ovnmeta-5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1[281371]: [NOTICE]   (281401) : haproxy version is 2.8.14-c23fe91
Oct 11 04:09:08 compute-0 neutron-haproxy-ovnmeta-5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1[281371]: [NOTICE]   (281401) : path to executable is /usr/sbin/haproxy
Oct 11 04:09:08 compute-0 neutron-haproxy-ovnmeta-5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1[281371]: [ALERT]    (281401) : Current worker (281417) exited with code 143 (Terminated)
Oct 11 04:09:08 compute-0 neutron-haproxy-ovnmeta-5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1[281371]: [WARNING]  (281401) : All workers exited. Exiting... (0)
Oct 11 04:09:08 compute-0 systemd[1]: libpod-d1a26a6087ccbe069490574ff1056c999e0758b2317bb5274473173217e7e28a.scope: Deactivated successfully.
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.476 2 INFO nova.virt.libvirt.driver [-] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Instance destroyed successfully.
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.476 2 DEBUG nova.objects.instance [None req-0e8da413-cb5c-4e92-a9e8-820e499f52fa 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Lazy-loading 'resources' on Instance uuid 8010953a-e520-477e-a4ba-ceb34db48982 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:09:08 compute-0 podman[281961]: 2025-10-11 04:09:08.484644651 +0000 UTC m=+0.055673487 container died d1a26a6087ccbe069490574ff1056c999e0758b2317bb5274473173217e7e28a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.490 2 DEBUG nova.virt.libvirt.vif [None req-0e8da413-cb5c-4e92-a9e8-820e499f52fa 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:08:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1706545763',display_name='tempest-TestVolumeBackupRestore-server-1706545763',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1706545763',id=10,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH17W5kE8c8jHNH5W8L2W3jo0pvpW5izjZjtF23lj8QmsAX/ee5wMJLpdvzZ/zFNqur9tp2txrryU7QgSH4v5UjD/oKvsJwNRUCrfO426+mL3v0B2OhIPlcgoKavmlsW+g==',key_name='tempest-TestVolumeBackupRestore-317158668',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:08:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='38b79203307d4f1caa56e7e44b103572',ramdisk_id='',reservation_id='r-0h0jc2ph',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBackupRestore-949077181',owner_user_name='tempest-TestVolumeBackupRestore-949077181-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:08:50Z,user_data=None,user_id='8bb2149cfdfe44b2a94076ed5e55fbaf',uuid=8010953a-e520-477e-a4ba-ceb34db48982,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "34c86870-ee92-41f3-909b-1b576896b9cc", "address": "fa:16:3e:d4:0e:80", "network": {"id": "5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-789712542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38b79203307d4f1caa56e7e44b103572", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c86870-ee", "ovs_interfaceid": "34c86870-ee92-41f3-909b-1b576896b9cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.491 2 DEBUG nova.network.os_vif_util [None req-0e8da413-cb5c-4e92-a9e8-820e499f52fa 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Converting VIF {"id": "34c86870-ee92-41f3-909b-1b576896b9cc", "address": "fa:16:3e:d4:0e:80", "network": {"id": "5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-789712542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38b79203307d4f1caa56e7e44b103572", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c86870-ee", "ovs_interfaceid": "34c86870-ee92-41f3-909b-1b576896b9cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.491 2 DEBUG nova.network.os_vif_util [None req-0e8da413-cb5c-4e92-a9e8-820e499f52fa 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d4:0e:80,bridge_name='br-int',has_traffic_filtering=True,id=34c86870-ee92-41f3-909b-1b576896b9cc,network=Network(5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34c86870-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.492 2 DEBUG os_vif [None req-0e8da413-cb5c-4e92-a9e8-820e499f52fa 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d4:0e:80,bridge_name='br-int',has_traffic_filtering=True,id=34c86870-ee92-41f3-909b-1b576896b9cc,network=Network(5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34c86870-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.494 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34c86870-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.549 2 INFO os_vif [None req-0e8da413-cb5c-4e92-a9e8-820e499f52fa 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d4:0e:80,bridge_name='br-int',has_traffic_filtering=True,id=34c86870-ee92-41f3-909b-1b576896b9cc,network=Network(5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34c86870-ee')
Oct 11 04:09:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d1a26a6087ccbe069490574ff1056c999e0758b2317bb5274473173217e7e28a-userdata-shm.mount: Deactivated successfully.
Oct 11 04:09:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bfc0d88dd5722f98c887d247e1d2bd8e16b12b2ad72754eff5d000d647b135c-merged.mount: Deactivated successfully.
Oct 11 04:09:08 compute-0 podman[281961]: 2025-10-11 04:09:08.578616375 +0000 UTC m=+0.149645191 container cleanup d1a26a6087ccbe069490574ff1056c999e0758b2317bb5274473173217e7e28a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 11 04:09:08 compute-0 systemd[1]: libpod-conmon-d1a26a6087ccbe069490574ff1056c999e0758b2317bb5274473173217e7e28a.scope: Deactivated successfully.
Oct 11 04:09:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:09:08 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1363333289' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:09:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:09:08 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1363333289' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:09:08 compute-0 podman[282015]: 2025-10-11 04:09:08.652852774 +0000 UTC m=+0.048803844 container remove d1a26a6087ccbe069490574ff1056c999e0758b2317bb5274473173217e7e28a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 11 04:09:08 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:08.659 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[2288dd22-794f-499c-89f0-ac82d33d7cde]: (4, ('Sat Oct 11 04:09:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1 (d1a26a6087ccbe069490574ff1056c999e0758b2317bb5274473173217e7e28a)\nd1a26a6087ccbe069490574ff1056c999e0758b2317bb5274473173217e7e28a\nSat Oct 11 04:09:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1 (d1a26a6087ccbe069490574ff1056c999e0758b2317bb5274473173217e7e28a)\nd1a26a6087ccbe069490574ff1056c999e0758b2317bb5274473173217e7e28a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:08 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:08.660 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[0a7317d3-7bd2-40ce-b243-d95fcf268a1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:08 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:08.661 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5dfb34b3-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:08 compute-0 kernel: tap5dfb34b3-40: left promiscuous mode
Oct 11 04:09:08 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:08.669 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[5c53e187-c3c3-40af-b37c-7457b534ae83]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.689 2 DEBUG nova.compute.manager [req-63b31286-b493-4503-86b0-3b7694876203 req-23c700a3-2082-457f-8aef-2d4713cdaa5a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Received event network-vif-unplugged-34c86870-ee92-41f3-909b-1b576896b9cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.690 2 DEBUG oslo_concurrency.lockutils [req-63b31286-b493-4503-86b0-3b7694876203 req-23c700a3-2082-457f-8aef-2d4713cdaa5a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "8010953a-e520-477e-a4ba-ceb34db48982-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.690 2 DEBUG oslo_concurrency.lockutils [req-63b31286-b493-4503-86b0-3b7694876203 req-23c700a3-2082-457f-8aef-2d4713cdaa5a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "8010953a-e520-477e-a4ba-ceb34db48982-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.690 2 DEBUG oslo_concurrency.lockutils [req-63b31286-b493-4503-86b0-3b7694876203 req-23c700a3-2082-457f-8aef-2d4713cdaa5a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "8010953a-e520-477e-a4ba-ceb34db48982-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.690 2 DEBUG nova.compute.manager [req-63b31286-b493-4503-86b0-3b7694876203 req-23c700a3-2082-457f-8aef-2d4713cdaa5a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] No waiting events found dispatching network-vif-unplugged-34c86870-ee92-41f3-909b-1b576896b9cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.690 2 DEBUG nova.compute.manager [req-63b31286-b493-4503-86b0-3b7694876203 req-23c700a3-2082-457f-8aef-2d4713cdaa5a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Received event network-vif-unplugged-34c86870-ee92-41f3-909b-1b576896b9cc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:08 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:08.701 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[1503fd17-e060-4370-9f27-a8a81c68d942]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:08 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:08.702 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[141dec90-adc1-4693-99ff-d72a39e5ba2a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:08 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:08.724 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[34294dc4-608c-4812-97f6-ebd8aee6ef95]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411388, 'reachable_time': 43623, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282032, 'error': None, 'target': 'ovnmeta-5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:08 compute-0 systemd[1]: run-netns-ovnmeta\x2d5dfb34b3\x2d4c6e\x2d41e7\x2da36e\x2dee8fb0fbb2e1.mount: Deactivated successfully.
Oct 11 04:09:08 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:08.726 162015 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 04:09:08 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:08.726 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[190e6380-0d8f-4276-b859-39ff529d1cbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.752 2 INFO nova.virt.libvirt.driver [None req-0e8da413-cb5c-4e92-a9e8-820e499f52fa 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Deleting instance files /var/lib/nova/instances/8010953a-e520-477e-a4ba-ceb34db48982_del
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.753 2 INFO nova.virt.libvirt.driver [None req-0e8da413-cb5c-4e92-a9e8-820e499f52fa 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Deletion of /var/lib/nova/instances/8010953a-e520-477e-a4ba-ceb34db48982_del complete
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.827 2 INFO nova.compute.manager [None req-0e8da413-cb5c-4e92-a9e8-820e499f52fa 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Took 0.59 seconds to destroy the instance on the hypervisor.
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.827 2 DEBUG oslo.service.loopingcall [None req-0e8da413-cb5c-4e92-a9e8-820e499f52fa 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.828 2 DEBUG nova.compute.manager [-] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:09:08 compute-0 nova_compute[259850]: 2025-10-11 04:09:08.828 2 DEBUG nova.network.neutron [-] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:09:09 compute-0 ceph-mon[74273]: pgmap v1207: 305 pgs: 305 active+clean; 441 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 12 MiB/s rd, 10 MiB/s wr, 394 op/s
Oct 11 04:09:09 compute-0 ceph-mon[74273]: osdmap e234: 3 total, 3 up, 3 in
Oct 11 04:09:09 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1363333289' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:09:09 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1363333289' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:09:09 compute-0 nova_compute[259850]: 2025-10-11 04:09:09.433 2 DEBUG nova.network.neutron [-] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:09:09 compute-0 nova_compute[259850]: 2025-10-11 04:09:09.445 2 DEBUG nova.network.neutron [req-9ed3c2c4-d988-462d-a3f6-6ed45b21ad8d req-985e3421-3fa1-4e3c-97b0-df9caadc81af f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Updated VIF entry in instance network info cache for port 34c86870-ee92-41f3-909b-1b576896b9cc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:09:09 compute-0 nova_compute[259850]: 2025-10-11 04:09:09.445 2 DEBUG nova.network.neutron [req-9ed3c2c4-d988-462d-a3f6-6ed45b21ad8d req-985e3421-3fa1-4e3c-97b0-df9caadc81af f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Updating instance_info_cache with network_info: [{"id": "34c86870-ee92-41f3-909b-1b576896b9cc", "address": "fa:16:3e:d4:0e:80", "network": {"id": "5dfb34b3-4c6e-41e7-a36e-ee8fb0fbb2e1", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-789712542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38b79203307d4f1caa56e7e44b103572", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c86870-ee", "ovs_interfaceid": "34c86870-ee92-41f3-909b-1b576896b9cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:09:09 compute-0 nova_compute[259850]: 2025-10-11 04:09:09.454 2 INFO nova.compute.manager [-] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Took 0.63 seconds to deallocate network for instance.
Oct 11 04:09:09 compute-0 nova_compute[259850]: 2025-10-11 04:09:09.464 2 DEBUG oslo_concurrency.lockutils [req-9ed3c2c4-d988-462d-a3f6-6ed45b21ad8d req-985e3421-3fa1-4e3c-97b0-df9caadc81af f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-8010953a-e520-477e-a4ba-ceb34db48982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:09:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 362 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 1.1 MiB/s wr, 105 op/s
Oct 11 04:09:09 compute-0 nova_compute[259850]: 2025-10-11 04:09:09.636 2 INFO nova.compute.manager [None req-0e8da413-cb5c-4e92-a9e8-820e499f52fa 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Took 0.18 seconds to detach 1 volumes for instance.
Oct 11 04:09:09 compute-0 nova_compute[259850]: 2025-10-11 04:09:09.698 2 DEBUG oslo_concurrency.lockutils [None req-0e8da413-cb5c-4e92-a9e8-820e499f52fa 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:09:09 compute-0 nova_compute[259850]: 2025-10-11 04:09:09.699 2 DEBUG oslo_concurrency.lockutils [None req-0e8da413-cb5c-4e92-a9e8-820e499f52fa 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:09:09 compute-0 nova_compute[259850]: 2025-10-11 04:09:09.778 2 DEBUG oslo_concurrency.processutils [None req-0e8da413-cb5c-4e92-a9e8-820e499f52fa 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:09:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:09:10 compute-0 nova_compute[259850]: 2025-10-11 04:09:10.220 2 DEBUG nova.compute.manager [req-bf9033f7-9572-4824-854a-4bb11a3773ce req-69cdfc4e-8be7-4b61-b7c2-5a324457c607 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Received event network-vif-deleted-34c86870-ee92-41f3-909b-1b576896b9cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:09:10 compute-0 nova_compute[259850]: 2025-10-11 04:09:10.221 2 INFO nova.compute.manager [req-bf9033f7-9572-4824-854a-4bb11a3773ce req-69cdfc4e-8be7-4b61-b7c2-5a324457c607 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Neutron deleted interface 34c86870-ee92-41f3-909b-1b576896b9cc; detaching it from the instance and deleting it from the info cache
Oct 11 04:09:10 compute-0 nova_compute[259850]: 2025-10-11 04:09:10.221 2 DEBUG nova.network.neutron [req-bf9033f7-9572-4824-854a-4bb11a3773ce req-69cdfc4e-8be7-4b61-b7c2-5a324457c607 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:09:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:09:10 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2052868463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:09:10 compute-0 nova_compute[259850]: 2025-10-11 04:09:10.250 2 DEBUG nova.compute.manager [req-bf9033f7-9572-4824-854a-4bb11a3773ce req-69cdfc4e-8be7-4b61-b7c2-5a324457c607 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Detach interface failed, port_id=34c86870-ee92-41f3-909b-1b576896b9cc, reason: Instance 8010953a-e520-477e-a4ba-ceb34db48982 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 11 04:09:10 compute-0 nova_compute[259850]: 2025-10-11 04:09:10.261 2 DEBUG oslo_concurrency.processutils [None req-0e8da413-cb5c-4e92-a9e8-820e499f52fa 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:09:10 compute-0 nova_compute[259850]: 2025-10-11 04:09:10.269 2 DEBUG nova.compute.provider_tree [None req-0e8da413-cb5c-4e92-a9e8-820e499f52fa 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:09:10 compute-0 nova_compute[259850]: 2025-10-11 04:09:10.284 2 DEBUG nova.scheduler.client.report [None req-0e8da413-cb5c-4e92-a9e8-820e499f52fa 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:09:10 compute-0 nova_compute[259850]: 2025-10-11 04:09:10.308 2 DEBUG oslo_concurrency.lockutils [None req-0e8da413-cb5c-4e92-a9e8-820e499f52fa 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:09:10 compute-0 nova_compute[259850]: 2025-10-11 04:09:10.341 2 INFO nova.scheduler.client.report [None req-0e8da413-cb5c-4e92-a9e8-820e499f52fa 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Deleted allocations for instance 8010953a-e520-477e-a4ba-ceb34db48982
Oct 11 04:09:10 compute-0 nova_compute[259850]: 2025-10-11 04:09:10.413 2 DEBUG oslo_concurrency.lockutils [None req-0e8da413-cb5c-4e92-a9e8-820e499f52fa 8bb2149cfdfe44b2a94076ed5e55fbaf 38b79203307d4f1caa56e7e44b103572 - - default default] Lock "8010953a-e520-477e-a4ba-ceb34db48982" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:09:10 compute-0 nova_compute[259850]: 2025-10-11 04:09:10.798 2 DEBUG nova.compute.manager [req-eb63d9f6-5948-4adb-8f5e-7142ddf67e40 req-1da8e5e7-5263-4bae-8fee-d24d2d6bed02 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Received event network-vif-plugged-34c86870-ee92-41f3-909b-1b576896b9cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:09:10 compute-0 nova_compute[259850]: 2025-10-11 04:09:10.799 2 DEBUG oslo_concurrency.lockutils [req-eb63d9f6-5948-4adb-8f5e-7142ddf67e40 req-1da8e5e7-5263-4bae-8fee-d24d2d6bed02 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "8010953a-e520-477e-a4ba-ceb34db48982-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:09:10 compute-0 nova_compute[259850]: 2025-10-11 04:09:10.801 2 DEBUG oslo_concurrency.lockutils [req-eb63d9f6-5948-4adb-8f5e-7142ddf67e40 req-1da8e5e7-5263-4bae-8fee-d24d2d6bed02 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "8010953a-e520-477e-a4ba-ceb34db48982-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:09:10 compute-0 nova_compute[259850]: 2025-10-11 04:09:10.801 2 DEBUG oslo_concurrency.lockutils [req-eb63d9f6-5948-4adb-8f5e-7142ddf67e40 req-1da8e5e7-5263-4bae-8fee-d24d2d6bed02 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "8010953a-e520-477e-a4ba-ceb34db48982-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:09:10 compute-0 nova_compute[259850]: 2025-10-11 04:09:10.802 2 DEBUG nova.compute.manager [req-eb63d9f6-5948-4adb-8f5e-7142ddf67e40 req-1da8e5e7-5263-4bae-8fee-d24d2d6bed02 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] No waiting events found dispatching network-vif-plugged-34c86870-ee92-41f3-909b-1b576896b9cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:09:10 compute-0 nova_compute[259850]: 2025-10-11 04:09:10.802 2 WARNING nova.compute.manager [req-eb63d9f6-5948-4adb-8f5e-7142ddf67e40 req-1da8e5e7-5263-4bae-8fee-d24d2d6bed02 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Received unexpected event network-vif-plugged-34c86870-ee92-41f3-909b-1b576896b9cc for instance with vm_state deleted and task_state None.
Oct 11 04:09:10 compute-0 ovn_controller[152025]: 2025-10-11T04:09:10Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a8:a4:b5 10.100.0.9
Oct 11 04:09:10 compute-0 ovn_controller[152025]: 2025-10-11T04:09:10Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a8:a4:b5 10.100.0.9
Oct 11 04:09:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:09:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/30241022' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:09:11 compute-0 ceph-mon[74273]: pgmap v1209: 305 pgs: 305 active+clean; 362 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 1.1 MiB/s wr, 105 op/s
Oct 11 04:09:11 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2052868463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:09:11 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/30241022' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:09:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 362 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 1018 KiB/s wr, 96 op/s
Oct 11 04:09:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Oct 11 04:09:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Oct 11 04:09:12 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Oct 11 04:09:12 compute-0 nova_compute[259850]: 2025-10-11 04:09:12.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:09:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3535200019' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:09:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:09:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3535200019' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:09:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:09:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3962740693' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:09:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:09:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3962740693' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:09:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Oct 11 04:09:13 compute-0 ceph-mon[74273]: pgmap v1210: 305 pgs: 305 active+clean; 362 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 1018 KiB/s wr, 96 op/s
Oct 11 04:09:13 compute-0 ceph-mon[74273]: osdmap e235: 3 total, 3 up, 3 in
Oct 11 04:09:13 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3535200019' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:09:13 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3535200019' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:09:13 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3962740693' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:09:13 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3962740693' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:09:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Oct 11 04:09:13 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Oct 11 04:09:13 compute-0 nova_compute[259850]: 2025-10-11 04:09:13.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 8 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 293 active+clean; 186 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 1008 KiB/s rd, 5.4 MiB/s wr, 456 op/s
Oct 11 04:09:14 compute-0 ceph-mon[74273]: osdmap e236: 3 total, 3 up, 3 in
Oct 11 04:09:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:09:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Oct 11 04:09:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Oct 11 04:09:14 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Oct 11 04:09:15 compute-0 ceph-mon[74273]: pgmap v1213: 305 pgs: 8 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 293 active+clean; 186 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 1008 KiB/s rd, 5.4 MiB/s wr, 456 op/s
Oct 11 04:09:15 compute-0 ceph-mon[74273]: osdmap e237: 3 total, 3 up, 3 in
Oct 11 04:09:15 compute-0 ovn_controller[152025]: 2025-10-11T04:09:15Z|00118|binding|INFO|Releasing lport cdd2aeac-e6a8-47f4-bd20-3e943fcf66e2 from this chassis (sb_readonly=0)
Oct 11 04:09:15 compute-0 nova_compute[259850]: 2025-10-11 04:09:15.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:15 compute-0 podman[282057]: 2025-10-11 04:09:15.484399501 +0000 UTC m=+0.181199179 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 11 04:09:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 8 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 293 active+clean; 186 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 934 KiB/s rd, 4.3 MiB/s wr, 351 op/s
Oct 11 04:09:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:09:15 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4075093116' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:09:16 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4075093116' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:09:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Oct 11 04:09:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Oct 11 04:09:17 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Oct 11 04:09:17 compute-0 ceph-mon[74273]: pgmap v1215: 305 pgs: 8 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 293 active+clean; 186 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 934 KiB/s rd, 4.3 MiB/s wr, 351 op/s
Oct 11 04:09:17 compute-0 nova_compute[259850]: 2025-10-11 04:09:17.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 8 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 293 active+clean; 186 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 1022 KiB/s rd, 4.7 MiB/s wr, 384 op/s
Oct 11 04:09:18 compute-0 nova_compute[259850]: 2025-10-11 04:09:18.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:09:18 compute-0 nova_compute[259850]: 2025-10-11 04:09:18.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:09:18 compute-0 nova_compute[259850]: 2025-10-11 04:09:18.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:09:18 compute-0 ceph-mon[74273]: osdmap e238: 3 total, 3 up, 3 in
Oct 11 04:09:18 compute-0 ceph-mon[74273]: pgmap v1217: 305 pgs: 8 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 293 active+clean; 186 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 1022 KiB/s rd, 4.7 MiB/s wr, 384 op/s
Oct 11 04:09:18 compute-0 nova_compute[259850]: 2025-10-11 04:09:18.174 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:09:18 compute-0 nova_compute[259850]: 2025-10-11 04:09:18.175 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:09:18 compute-0 nova_compute[259850]: 2025-10-11 04:09:18.175 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:09:18 compute-0 nova_compute[259850]: 2025-10-11 04:09:18.175 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:09:18 compute-0 nova_compute[259850]: 2025-10-11 04:09:18.175 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:09:18 compute-0 nova_compute[259850]: 2025-10-11 04:09:18.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:09:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4282235667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:09:18 compute-0 nova_compute[259850]: 2025-10-11 04:09:18.742 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:09:18 compute-0 nova_compute[259850]: 2025-10-11 04:09:18.856 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:09:18 compute-0 nova_compute[259850]: 2025-10-11 04:09:18.857 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:09:18 compute-0 ovn_controller[152025]: 2025-10-11T04:09:18Z|00119|binding|INFO|Releasing lport cdd2aeac-e6a8-47f4-bd20-3e943fcf66e2 from this chassis (sb_readonly=0)
Oct 11 04:09:19 compute-0 nova_compute[259850]: 2025-10-11 04:09:19.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:19 compute-0 nova_compute[259850]: 2025-10-11 04:09:19.124 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:09:19 compute-0 nova_compute[259850]: 2025-10-11 04:09:19.126 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4406MB free_disk=59.94285202026367GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:09:19 compute-0 nova_compute[259850]: 2025-10-11 04:09:19.127 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:09:19 compute-0 nova_compute[259850]: 2025-10-11 04:09:19.128 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:09:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Oct 11 04:09:19 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4282235667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:09:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Oct 11 04:09:19 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Oct 11 04:09:19 compute-0 nova_compute[259850]: 2025-10-11 04:09:19.243 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Instance e879a322-2581-43da-916b-423a94821ed0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 04:09:19 compute-0 nova_compute[259850]: 2025-10-11 04:09:19.244 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:09:19 compute-0 nova_compute[259850]: 2025-10-11 04:09:19.244 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:09:19 compute-0 nova_compute[259850]: 2025-10-11 04:09:19.300 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:09:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 27 KiB/s wr, 53 op/s
Oct 11 04:09:19 compute-0 nova_compute[259850]: 2025-10-11 04:09:19.632 2 DEBUG oslo_concurrency.lockutils [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Acquiring lock "e879a322-2581-43da-916b-423a94821ed0" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:09:19 compute-0 nova_compute[259850]: 2025-10-11 04:09:19.633 2 DEBUG oslo_concurrency.lockutils [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Lock "e879a322-2581-43da-916b-423a94821ed0" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:09:19 compute-0 nova_compute[259850]: 2025-10-11 04:09:19.669 2 DEBUG nova.objects.instance [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Lazy-loading 'flavor' on Instance uuid e879a322-2581-43da-916b-423a94821ed0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:09:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:09:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2351648324' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:09:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:09:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2351648324' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:09:19 compute-0 nova_compute[259850]: 2025-10-11 04:09:19.711 2 DEBUG oslo_concurrency.lockutils [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Lock "e879a322-2581-43da-916b-423a94821ed0" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.078s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:09:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:09:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1209235369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:09:19 compute-0 nova_compute[259850]: 2025-10-11 04:09:19.761 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:09:19 compute-0 nova_compute[259850]: 2025-10-11 04:09:19.767 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:09:19 compute-0 nova_compute[259850]: 2025-10-11 04:09:19.782 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:09:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:09:19 compute-0 nova_compute[259850]: 2025-10-11 04:09:19.808 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:09:19 compute-0 nova_compute[259850]: 2025-10-11 04:09:19.809 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:09:19 compute-0 nova_compute[259850]: 2025-10-11 04:09:19.991 2 DEBUG oslo_concurrency.lockutils [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Acquiring lock "e879a322-2581-43da-916b-423a94821ed0" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:09:19 compute-0 nova_compute[259850]: 2025-10-11 04:09:19.992 2 DEBUG oslo_concurrency.lockutils [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Lock "e879a322-2581-43da-916b-423a94821ed0" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:09:19 compute-0 nova_compute[259850]: 2025-10-11 04:09:19.992 2 INFO nova.compute.manager [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Attaching volume dd3540f3-c3a1-4406-8e2f-e8af43421d08 to /dev/vdb
Oct 11 04:09:20 compute-0 ceph-mon[74273]: osdmap e239: 3 total, 3 up, 3 in
Oct 11 04:09:20 compute-0 ceph-mon[74273]: pgmap v1219: 305 pgs: 305 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 27 KiB/s wr, 53 op/s
Oct 11 04:09:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2351648324' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:09:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2351648324' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:09:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1209235369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:09:20 compute-0 nova_compute[259850]: 2025-10-11 04:09:20.239 2 DEBUG os_brick.utils [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:09:20 compute-0 nova_compute[259850]: 2025-10-11 04:09:20.241 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:09:20 compute-0 nova_compute[259850]: 2025-10-11 04:09:20.263 675 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:09:20 compute-0 nova_compute[259850]: 2025-10-11 04:09:20.263 675 DEBUG oslo.privsep.daemon [-] privsep: reply[42b2a10a-caa7-4773-826f-4b10cff93acd]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:20 compute-0 nova_compute[259850]: 2025-10-11 04:09:20.265 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:09:20 compute-0 nova_compute[259850]: 2025-10-11 04:09:20.278 675 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:09:20 compute-0 nova_compute[259850]: 2025-10-11 04:09:20.278 675 DEBUG oslo.privsep.daemon [-] privsep: reply[f691ad4d-ee14-4c76-98ce-874a385a2c4e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e727c2bd432c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:20 compute-0 nova_compute[259850]: 2025-10-11 04:09:20.280 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:09:20 compute-0 nova_compute[259850]: 2025-10-11 04:09:20.293 675 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:09:20 compute-0 nova_compute[259850]: 2025-10-11 04:09:20.294 675 DEBUG oslo.privsep.daemon [-] privsep: reply[82cd4fd2-0b40-4f7a-b1f9-07a3365a9f8e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:20 compute-0 nova_compute[259850]: 2025-10-11 04:09:20.296 675 DEBUG oslo.privsep.daemon [-] privsep: reply[7cc3c242-7b3d-4c95-ac32-6d39ea877471]: (4, 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:20 compute-0 nova_compute[259850]: 2025-10-11 04:09:20.297 2 DEBUG oslo_concurrency.processutils [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:09:20 compute-0 nova_compute[259850]: 2025-10-11 04:09:20.331 2 DEBUG oslo_concurrency.processutils [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:09:20 compute-0 nova_compute[259850]: 2025-10-11 04:09:20.335 2 DEBUG os_brick.initiator.connectors.lightos [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:09:20 compute-0 nova_compute[259850]: 2025-10-11 04:09:20.335 2 DEBUG os_brick.initiator.connectors.lightos [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:09:20 compute-0 nova_compute[259850]: 2025-10-11 04:09:20.336 2 DEBUG os_brick.initiator.connectors.lightos [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:09:20 compute-0 nova_compute[259850]: 2025-10-11 04:09:20.336 2 DEBUG os_brick.utils [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] <== get_connector_properties: return (96ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e727c2bd432c', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:09:20 compute-0 nova_compute[259850]: 2025-10-11 04:09:20.336 2 DEBUG nova.virt.block_device [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Updating existing volume attachment record: 454a1d52-0758-4275-9b6c-539f5a43bf17 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:09:20 compute-0 podman[282131]: 2025-10-11 04:09:20.374929078 +0000 UTC m=+0.075052282 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:09:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:09:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:09:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:09:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:09:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:09:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:09:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:09:20
Oct 11 04:09:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:09:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:09:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'backups', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms']
Oct 11 04:09:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:09:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:09:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1249067358' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:09:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:09:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:09:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:09:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:09:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:09:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:09:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:09:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:09:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:09:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:09:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Oct 11 04:09:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Oct 11 04:09:21 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Oct 11 04:09:21 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1249067358' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.520 2 DEBUG os_brick.encryptors [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Using volume encryption metadata '{'encryption_key_id': 'd4dc2c1a-0733-43d9-a5d3-cfbb11069bed', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-dd3540f3-c3a1-4406-8e2f-e8af43421d08', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'dd3540f3-c3a1-4406-8e2f-e8af43421d08', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'e879a322-2581-43da-916b-423a94821ed0', 'attached_at': '', 'detached_at': '', 'volume_id': 'dd3540f3-c3a1-4406-8e2f-e8af43421d08', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.530 2 DEBUG barbicanclient.client [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.547 2 DEBUG barbicanclient.v1.secrets [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/d4dc2c1a-0733-43d9-a5d3-cfbb11069bed get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.548 2 INFO barbicanclient.base [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Calculated Secrets uuid ref: secrets/d4dc2c1a-0733-43d9-a5d3-cfbb11069bed
Oct 11 04:09:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 27 KiB/s wr, 53 op/s
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.573 2 DEBUG barbicanclient.client [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.574 2 INFO barbicanclient.base [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Calculated Secrets uuid ref: secrets/d4dc2c1a-0733-43d9-a5d3-cfbb11069bed
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.597 2 DEBUG barbicanclient.client [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.598 2 INFO barbicanclient.base [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Calculated Secrets uuid ref: secrets/d4dc2c1a-0733-43d9-a5d3-cfbb11069bed
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.619 2 DEBUG barbicanclient.client [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.619 2 INFO barbicanclient.base [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Calculated Secrets uuid ref: secrets/d4dc2c1a-0733-43d9-a5d3-cfbb11069bed
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.640 2 DEBUG barbicanclient.client [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.641 2 INFO barbicanclient.base [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Calculated Secrets uuid ref: secrets/d4dc2c1a-0733-43d9-a5d3-cfbb11069bed
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.664 2 DEBUG barbicanclient.client [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.665 2 INFO barbicanclient.base [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Calculated Secrets uuid ref: secrets/d4dc2c1a-0733-43d9-a5d3-cfbb11069bed
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.688 2 DEBUG barbicanclient.client [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.689 2 INFO barbicanclient.base [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Calculated Secrets uuid ref: secrets/d4dc2c1a-0733-43d9-a5d3-cfbb11069bed
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.711 2 DEBUG barbicanclient.client [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.712 2 INFO barbicanclient.base [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Calculated Secrets uuid ref: secrets/d4dc2c1a-0733-43d9-a5d3-cfbb11069bed
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.738 2 DEBUG barbicanclient.client [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.740 2 INFO barbicanclient.base [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Calculated Secrets uuid ref: secrets/d4dc2c1a-0733-43d9-a5d3-cfbb11069bed
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.760 2 DEBUG barbicanclient.client [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.762 2 INFO barbicanclient.base [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Calculated Secrets uuid ref: secrets/d4dc2c1a-0733-43d9-a5d3-cfbb11069bed
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.788 2 DEBUG barbicanclient.client [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.789 2 INFO barbicanclient.base [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Calculated Secrets uuid ref: secrets/d4dc2c1a-0733-43d9-a5d3-cfbb11069bed
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.808 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.809 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.810 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.819 2 DEBUG barbicanclient.client [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.820 2 INFO barbicanclient.base [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Calculated Secrets uuid ref: secrets/d4dc2c1a-0733-43d9-a5d3-cfbb11069bed
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.846 2 DEBUG barbicanclient.client [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.847 2 INFO barbicanclient.base [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Calculated Secrets uuid ref: secrets/d4dc2c1a-0733-43d9-a5d3-cfbb11069bed
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.871 2 DEBUG barbicanclient.client [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.872 2 INFO barbicanclient.base [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Calculated Secrets uuid ref: secrets/d4dc2c1a-0733-43d9-a5d3-cfbb11069bed
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.894 2 DEBUG barbicanclient.client [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.895 2 INFO barbicanclient.base [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Calculated Secrets uuid ref: secrets/d4dc2c1a-0733-43d9-a5d3-cfbb11069bed
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.927 2 DEBUG barbicanclient.client [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.928 2 DEBUG nova.virt.libvirt.host [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct 11 04:09:21 compute-0 nova_compute[259850]:   <usage type="volume">
Oct 11 04:09:21 compute-0 nova_compute[259850]:     <volume>dd3540f3-c3a1-4406-8e2f-e8af43421d08</volume>
Oct 11 04:09:21 compute-0 nova_compute[259850]:   </usage>
Oct 11 04:09:21 compute-0 nova_compute[259850]: </secret>
Oct 11 04:09:21 compute-0 nova_compute[259850]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.937 2 DEBUG nova.objects.instance [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Lazy-loading 'flavor' on Instance uuid e879a322-2581-43da-916b-423a94821ed0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.962 2 DEBUG nova.virt.libvirt.driver [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Attempting to attach volume dd3540f3-c3a1-4406-8e2f-e8af43421d08 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 11 04:09:21 compute-0 nova_compute[259850]: 2025-10-11 04:09:21.965 2 DEBUG nova.virt.libvirt.guest [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] attach device xml: <disk type="network" device="disk">
Oct 11 04:09:21 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:09:21 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-dd3540f3-c3a1-4406-8e2f-e8af43421d08">
Oct 11 04:09:21 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:09:21 compute-0 nova_compute[259850]:   </source>
Oct 11 04:09:21 compute-0 nova_compute[259850]:   <auth username="openstack">
Oct 11 04:09:21 compute-0 nova_compute[259850]:     <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:09:21 compute-0 nova_compute[259850]:   </auth>
Oct 11 04:09:21 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:09:21 compute-0 nova_compute[259850]:   <serial>dd3540f3-c3a1-4406-8e2f-e8af43421d08</serial>
Oct 11 04:09:21 compute-0 nova_compute[259850]:   <encryption format="luks">
Oct 11 04:09:21 compute-0 nova_compute[259850]:     <secret type="passphrase" uuid="a9e56145-4ab9-48b4-98c0-5100232c570c"/>
Oct 11 04:09:21 compute-0 nova_compute[259850]:   </encryption>
Oct 11 04:09:21 compute-0 nova_compute[259850]: </disk>
Oct 11 04:09:21 compute-0 nova_compute[259850]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 11 04:09:22 compute-0 nova_compute[259850]: 2025-10-11 04:09:22.126 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "refresh_cache-e879a322-2581-43da-916b-423a94821ed0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:09:22 compute-0 nova_compute[259850]: 2025-10-11 04:09:22.128 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquired lock "refresh_cache-e879a322-2581-43da-916b-423a94821ed0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:09:22 compute-0 nova_compute[259850]: 2025-10-11 04:09:22.128 2 DEBUG nova.network.neutron [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: e879a322-2581-43da-916b-423a94821ed0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 04:09:22 compute-0 nova_compute[259850]: 2025-10-11 04:09:22.129 2 DEBUG nova.objects.instance [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e879a322-2581-43da-916b-423a94821ed0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:09:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Oct 11 04:09:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Oct 11 04:09:22 compute-0 ceph-mon[74273]: osdmap e240: 3 total, 3 up, 3 in
Oct 11 04:09:22 compute-0 ceph-mon[74273]: pgmap v1221: 305 pgs: 305 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 27 KiB/s wr, 53 op/s
Oct 11 04:09:22 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Oct 11 04:09:22 compute-0 nova_compute[259850]: 2025-10-11 04:09:22.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:09:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3935261789' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:09:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:09:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3935261789' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:09:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:22.959 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:09:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:22.959 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:09:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:22.960 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:09:23 compute-0 ceph-mon[74273]: osdmap e241: 3 total, 3 up, 3 in
Oct 11 04:09:23 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3935261789' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:09:23 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3935261789' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:09:23 compute-0 nova_compute[259850]: 2025-10-11 04:09:23.471 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760155748.4707325, 8010953a-e520-477e-a4ba-ceb34db48982 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:09:23 compute-0 nova_compute[259850]: 2025-10-11 04:09:23.473 2 INFO nova.compute.manager [-] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] VM Stopped (Lifecycle Event)
Oct 11 04:09:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 31 KiB/s wr, 168 op/s
Oct 11 04:09:23 compute-0 nova_compute[259850]: 2025-10-11 04:09:23.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:23 compute-0 nova_compute[259850]: 2025-10-11 04:09:23.671 2 DEBUG nova.compute.manager [None req-a5c186a9-86e8-4b7a-8e05-dd6bddae456b - - - - - -] [instance: 8010953a-e520-477e-a4ba-ceb34db48982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:09:24 compute-0 nova_compute[259850]: 2025-10-11 04:09:24.105 2 DEBUG nova.network.neutron [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: e879a322-2581-43da-916b-423a94821ed0] Updating instance_info_cache with network_info: [{"id": "cc7a934c-f273-4dde-b492-d37feef39f58", "address": "fa:16:3e:a8:a4:b5", "network": {"id": "bc525eaa-e13d-45ff-a473-c699abd60e90", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-452300963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c9fe3215f964559830df6c94dd6a581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc7a934c-f2", "ovs_interfaceid": "cc7a934c-f273-4dde-b492-d37feef39f58", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:09:24 compute-0 nova_compute[259850]: 2025-10-11 04:09:24.124 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Releasing lock "refresh_cache-e879a322-2581-43da-916b-423a94821ed0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:09:24 compute-0 nova_compute[259850]: 2025-10-11 04:09:24.124 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: e879a322-2581-43da-916b-423a94821ed0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 04:09:24 compute-0 nova_compute[259850]: 2025-10-11 04:09:24.125 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:09:24 compute-0 nova_compute[259850]: 2025-10-11 04:09:24.125 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:09:24 compute-0 nova_compute[259850]: 2025-10-11 04:09:24.126 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:09:24 compute-0 nova_compute[259850]: 2025-10-11 04:09:24.126 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:09:24 compute-0 ceph-mon[74273]: pgmap v1223: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 31 KiB/s wr, 168 op/s
Oct 11 04:09:24 compute-0 nova_compute[259850]: 2025-10-11 04:09:24.260 2 DEBUG nova.virt.libvirt.driver [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:09:24 compute-0 nova_compute[259850]: 2025-10-11 04:09:24.261 2 DEBUG nova.virt.libvirt.driver [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:09:24 compute-0 nova_compute[259850]: 2025-10-11 04:09:24.262 2 DEBUG nova.virt.libvirt.driver [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:09:24 compute-0 nova_compute[259850]: 2025-10-11 04:09:24.262 2 DEBUG nova.virt.libvirt.driver [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] No VIF found with MAC fa:16:3e:a8:a4:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:09:24 compute-0 nova_compute[259850]: 2025-10-11 04:09:24.372 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:09:24 compute-0 nova_compute[259850]: 2025-10-11 04:09:24.644 2 DEBUG oslo_concurrency.lockutils [None req-6aa29d55-1fce-4384-b520-78a23e5b0a51 f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Lock "e879a322-2581-43da-916b-423a94821ed0" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:09:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:09:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Oct 11 04:09:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Oct 11 04:09:24 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Oct 11 04:09:25 compute-0 nova_compute[259850]: 2025-10-11 04:09:25.058 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:09:25 compute-0 nova_compute[259850]: 2025-10-11 04:09:25.549 2 DEBUG oslo_concurrency.lockutils [None req-cf59774f-dea1-4aa6-bb05-cc923955f9de f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Acquiring lock "e879a322-2581-43da-916b-423a94821ed0" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:09:25 compute-0 nova_compute[259850]: 2025-10-11 04:09:25.550 2 DEBUG oslo_concurrency.lockutils [None req-cf59774f-dea1-4aa6-bb05-cc923955f9de f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Lock "e879a322-2581-43da-916b-423a94821ed0" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:09:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 3.5 KiB/s wr, 114 op/s
Oct 11 04:09:25 compute-0 nova_compute[259850]: 2025-10-11 04:09:25.571 2 INFO nova.compute.manager [None req-cf59774f-dea1-4aa6-bb05-cc923955f9de f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Detaching volume dd3540f3-c3a1-4406-8e2f-e8af43421d08
Oct 11 04:09:25 compute-0 nova_compute[259850]: 2025-10-11 04:09:25.715 2 INFO nova.virt.block_device [None req-cf59774f-dea1-4aa6-bb05-cc923955f9de f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Attempting to driver detach volume dd3540f3-c3a1-4406-8e2f-e8af43421d08 from mountpoint /dev/vdb
Oct 11 04:09:25 compute-0 ceph-mon[74273]: osdmap e242: 3 total, 3 up, 3 in
Oct 11 04:09:25 compute-0 nova_compute[259850]: 2025-10-11 04:09:25.854 2 DEBUG os_brick.encryptors [None req-cf59774f-dea1-4aa6-bb05-cc923955f9de f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Using volume encryption metadata '{'encryption_key_id': 'd4dc2c1a-0733-43d9-a5d3-cfbb11069bed', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-dd3540f3-c3a1-4406-8e2f-e8af43421d08', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'dd3540f3-c3a1-4406-8e2f-e8af43421d08', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'e879a322-2581-43da-916b-423a94821ed0', 'attached_at': '', 'detached_at': '', 'volume_id': 'dd3540f3-c3a1-4406-8e2f-e8af43421d08', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Oct 11 04:09:25 compute-0 nova_compute[259850]: 2025-10-11 04:09:25.863 2 DEBUG nova.virt.libvirt.driver [None req-cf59774f-dea1-4aa6-bb05-cc923955f9de f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Attempting to detach device vdb from instance e879a322-2581-43da-916b-423a94821ed0 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 11 04:09:25 compute-0 nova_compute[259850]: 2025-10-11 04:09:25.864 2 DEBUG nova.virt.libvirt.guest [None req-cf59774f-dea1-4aa6-bb05-cc923955f9de f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 04:09:25 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:09:25 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-dd3540f3-c3a1-4406-8e2f-e8af43421d08">
Oct 11 04:09:25 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:09:25 compute-0 nova_compute[259850]:   </source>
Oct 11 04:09:25 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:09:25 compute-0 nova_compute[259850]:   <serial>dd3540f3-c3a1-4406-8e2f-e8af43421d08</serial>
Oct 11 04:09:25 compute-0 nova_compute[259850]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 04:09:25 compute-0 nova_compute[259850]:   <encryption format="luks">
Oct 11 04:09:25 compute-0 nova_compute[259850]:     <secret type="passphrase" uuid="a9e56145-4ab9-48b4-98c0-5100232c570c"/>
Oct 11 04:09:25 compute-0 nova_compute[259850]:   </encryption>
Oct 11 04:09:25 compute-0 nova_compute[259850]: </disk>
Oct 11 04:09:25 compute-0 nova_compute[259850]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:09:25 compute-0 nova_compute[259850]: 2025-10-11 04:09:25.874 2 INFO nova.virt.libvirt.driver [None req-cf59774f-dea1-4aa6-bb05-cc923955f9de f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Successfully detached device vdb from instance e879a322-2581-43da-916b-423a94821ed0 from the persistent domain config.
Oct 11 04:09:25 compute-0 nova_compute[259850]: 2025-10-11 04:09:25.875 2 DEBUG nova.virt.libvirt.driver [None req-cf59774f-dea1-4aa6-bb05-cc923955f9de f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance e879a322-2581-43da-916b-423a94821ed0 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 11 04:09:25 compute-0 nova_compute[259850]: 2025-10-11 04:09:25.876 2 DEBUG nova.virt.libvirt.guest [None req-cf59774f-dea1-4aa6-bb05-cc923955f9de f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 04:09:25 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:09:25 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-dd3540f3-c3a1-4406-8e2f-e8af43421d08">
Oct 11 04:09:25 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:09:25 compute-0 nova_compute[259850]:   </source>
Oct 11 04:09:25 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:09:25 compute-0 nova_compute[259850]:   <serial>dd3540f3-c3a1-4406-8e2f-e8af43421d08</serial>
Oct 11 04:09:25 compute-0 nova_compute[259850]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 04:09:25 compute-0 nova_compute[259850]:   <encryption format="luks">
Oct 11 04:09:25 compute-0 nova_compute[259850]:     <secret type="passphrase" uuid="a9e56145-4ab9-48b4-98c0-5100232c570c"/>
Oct 11 04:09:25 compute-0 nova_compute[259850]:   </encryption>
Oct 11 04:09:25 compute-0 nova_compute[259850]: </disk>
Oct 11 04:09:25 compute-0 nova_compute[259850]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:09:26 compute-0 nova_compute[259850]: 2025-10-11 04:09:26.001 2 DEBUG nova.virt.libvirt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Received event <DeviceRemovedEvent: 1760155766.001048, e879a322-2581-43da-916b-423a94821ed0 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 11 04:09:26 compute-0 nova_compute[259850]: 2025-10-11 04:09:26.005 2 DEBUG nova.virt.libvirt.driver [None req-cf59774f-dea1-4aa6-bb05-cc923955f9de f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance e879a322-2581-43da-916b-423a94821ed0 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 11 04:09:26 compute-0 nova_compute[259850]: 2025-10-11 04:09:26.008 2 INFO nova.virt.libvirt.driver [None req-cf59774f-dea1-4aa6-bb05-cc923955f9de f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Successfully detached device vdb from instance e879a322-2581-43da-916b-423a94821ed0 from the live domain config.
Oct 11 04:09:26 compute-0 nova_compute[259850]: 2025-10-11 04:09:26.220 2 DEBUG nova.objects.instance [None req-cf59774f-dea1-4aa6-bb05-cc923955f9de f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Lazy-loading 'flavor' on Instance uuid e879a322-2581-43da-916b-423a94821ed0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:09:26 compute-0 nova_compute[259850]: 2025-10-11 04:09:26.273 2 DEBUG oslo_concurrency.lockutils [None req-cf59774f-dea1-4aa6-bb05-cc923955f9de f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Lock "e879a322-2581-43da-916b-423a94821ed0" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:09:26 compute-0 ceph-mon[74273]: pgmap v1225: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 3.5 KiB/s wr, 114 op/s
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.315 2 DEBUG oslo_concurrency.lockutils [None req-bc039bda-c4b6-4076-af6c-6d487c63f58e f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Acquiring lock "e879a322-2581-43da-916b-423a94821ed0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.316 2 DEBUG oslo_concurrency.lockutils [None req-bc039bda-c4b6-4076-af6c-6d487c63f58e f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Lock "e879a322-2581-43da-916b-423a94821ed0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.317 2 DEBUG oslo_concurrency.lockutils [None req-bc039bda-c4b6-4076-af6c-6d487c63f58e f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Acquiring lock "e879a322-2581-43da-916b-423a94821ed0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.317 2 DEBUG oslo_concurrency.lockutils [None req-bc039bda-c4b6-4076-af6c-6d487c63f58e f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Lock "e879a322-2581-43da-916b-423a94821ed0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.318 2 DEBUG oslo_concurrency.lockutils [None req-bc039bda-c4b6-4076-af6c-6d487c63f58e f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Lock "e879a322-2581-43da-916b-423a94821ed0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.319 2 INFO nova.compute.manager [None req-bc039bda-c4b6-4076-af6c-6d487c63f58e f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Terminating instance
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.321 2 DEBUG nova.compute.manager [None req-bc039bda-c4b6-4076-af6c-6d487c63f58e f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:09:27 compute-0 kernel: tapcc7a934c-f2 (unregistering): left promiscuous mode
Oct 11 04:09:27 compute-0 NetworkManager[44920]: <info>  [1760155767.4004] device (tapcc7a934c-f2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:27 compute-0 ovn_controller[152025]: 2025-10-11T04:09:27Z|00120|binding|INFO|Releasing lport cc7a934c-f273-4dde-b492-d37feef39f58 from this chassis (sb_readonly=0)
Oct 11 04:09:27 compute-0 ovn_controller[152025]: 2025-10-11T04:09:27Z|00121|binding|INFO|Setting lport cc7a934c-f273-4dde-b492-d37feef39f58 down in Southbound
Oct 11 04:09:27 compute-0 ovn_controller[152025]: 2025-10-11T04:09:27Z|00122|binding|INFO|Removing iface tapcc7a934c-f2 ovn-installed in OVS
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:27.425 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:a4:b5 10.100.0.9'], port_security=['fa:16:3e:a8:a4:b5 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e879a322-2581-43da-916b-423a94821ed0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bc525eaa-e13d-45ff-a473-c699abd60e90', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c9fe3215f964559830df6c94dd6a581', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4053e409-faa5-44f7-9062-ac885993198c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b776411c-ef6a-4c8c-89aa-a5baa905f9ce, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=cc7a934c-f273-4dde-b492-d37feef39f58) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:09:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:27.427 161902 INFO neutron.agent.ovn.metadata.agent [-] Port cc7a934c-f273-4dde-b492-d37feef39f58 in datapath bc525eaa-e13d-45ff-a473-c699abd60e90 unbound from our chassis
Oct 11 04:09:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:27.430 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bc525eaa-e13d-45ff-a473-c699abd60e90, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:09:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:27.433 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[2d57e38e-d3e3-4a46-90ae-93a084d787ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:27.433 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bc525eaa-e13d-45ff-a473-c699abd60e90 namespace which is not needed anymore
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:27 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Oct 11 04:09:27 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 15.503s CPU time.
Oct 11 04:09:27 compute-0 systemd-machined[214869]: Machine qemu-11-instance-0000000b terminated.
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 3.3 KiB/s wr, 108 op/s
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.567 2 INFO nova.virt.libvirt.driver [-] [instance: e879a322-2581-43da-916b-423a94821ed0] Instance destroyed successfully.
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.568 2 DEBUG nova.objects.instance [None req-bc039bda-c4b6-4076-af6c-6d487c63f58e f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Lazy-loading 'resources' on Instance uuid e879a322-2581-43da-916b-423a94821ed0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.591 2 DEBUG nova.virt.libvirt.vif [None req-bc039bda-c4b6-4076-af6c-6d487c63f58e f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:08:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1632845143',display_name='tempest-TestEncryptedCinderVolumes-server-1632845143',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1632845143',id=11,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBaVbxNnaD5q0XmYwovrzExmbbVMAXd2YcdP8HyN3xmFYmLrUV4WQODRkW4d2lIUauD7nrrJ4pDUAC7Sn3o+1xphApTEfJBl9skNeWXh4VYjPGwBwFYiqPpdaiLhMeR/Rw==',key_name='tempest-keypair-733620179',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:08:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3c9fe3215f964559830df6c94dd6a581',ramdisk_id='',reservation_id='r-2bscv5uk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-1907206765',owner_user_name='tempest-TestEncryptedCinderVolumes-1907206765-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:08:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f301bb3cf7f94411bff904828db8c555',uuid=e879a322-2581-43da-916b-423a94821ed0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cc7a934c-f273-4dde-b492-d37feef39f58", "address": "fa:16:3e:a8:a4:b5", "network": {"id": "bc525eaa-e13d-45ff-a473-c699abd60e90", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-452300963-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c9fe3215f964559830df6c94dd6a581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc7a934c-f2", "ovs_interfaceid": "cc7a934c-f273-4dde-b492-d37feef39f58", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.592 2 DEBUG nova.network.os_vif_util [None req-bc039bda-c4b6-4076-af6c-6d487c63f58e f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Converting VIF {"id": "cc7a934c-f273-4dde-b492-d37feef39f58", "address": "fa:16:3e:a8:a4:b5", "network": {"id": "bc525eaa-e13d-45ff-a473-c699abd60e90", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-452300963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c9fe3215f964559830df6c94dd6a581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc7a934c-f2", "ovs_interfaceid": "cc7a934c-f273-4dde-b492-d37feef39f58", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.593 2 DEBUG nova.network.os_vif_util [None req-bc039bda-c4b6-4076-af6c-6d487c63f58e f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a8:a4:b5,bridge_name='br-int',has_traffic_filtering=True,id=cc7a934c-f273-4dde-b492-d37feef39f58,network=Network(bc525eaa-e13d-45ff-a473-c699abd60e90),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc7a934c-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.594 2 DEBUG os_vif [None req-bc039bda-c4b6-4076-af6c-6d487c63f58e f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:a4:b5,bridge_name='br-int',has_traffic_filtering=True,id=cc7a934c-f273-4dde-b492-d37feef39f58,network=Network(bc525eaa-e13d-45ff-a473-c699abd60e90),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc7a934c-f2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.598 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc7a934c-f2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.606 2 INFO os_vif [None req-bc039bda-c4b6-4076-af6c-6d487c63f58e f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:a4:b5,bridge_name='br-int',has_traffic_filtering=True,id=cc7a934c-f273-4dde-b492-d37feef39f58,network=Network(bc525eaa-e13d-45ff-a473-c699abd60e90),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc7a934c-f2')
Oct 11 04:09:27 compute-0 neutron-haproxy-ovnmeta-bc525eaa-e13d-45ff-a473-c699abd60e90[281885]: [NOTICE]   (281889) : haproxy version is 2.8.14-c23fe91
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.640 2 DEBUG nova.compute.manager [req-8c5a7448-b45f-4877-8090-3b76355a80e6 req-39a84ca5-a173-450b-bf1b-9ade1b399c7e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Received event network-vif-unplugged-cc7a934c-f273-4dde-b492-d37feef39f58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.641 2 DEBUG oslo_concurrency.lockutils [req-8c5a7448-b45f-4877-8090-3b76355a80e6 req-39a84ca5-a173-450b-bf1b-9ade1b399c7e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "e879a322-2581-43da-916b-423a94821ed0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.641 2 DEBUG oslo_concurrency.lockutils [req-8c5a7448-b45f-4877-8090-3b76355a80e6 req-39a84ca5-a173-450b-bf1b-9ade1b399c7e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e879a322-2581-43da-916b-423a94821ed0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:09:27 compute-0 neutron-haproxy-ovnmeta-bc525eaa-e13d-45ff-a473-c699abd60e90[281885]: [NOTICE]   (281889) : path to executable is /usr/sbin/haproxy
Oct 11 04:09:27 compute-0 neutron-haproxy-ovnmeta-bc525eaa-e13d-45ff-a473-c699abd60e90[281885]: [WARNING]  (281889) : Exiting Master process...
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.642 2 DEBUG oslo_concurrency.lockutils [req-8c5a7448-b45f-4877-8090-3b76355a80e6 req-39a84ca5-a173-450b-bf1b-9ade1b399c7e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e879a322-2581-43da-916b-423a94821ed0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.642 2 DEBUG nova.compute.manager [req-8c5a7448-b45f-4877-8090-3b76355a80e6 req-39a84ca5-a173-450b-bf1b-9ade1b399c7e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] No waiting events found dispatching network-vif-unplugged-cc7a934c-f273-4dde-b492-d37feef39f58 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.642 2 DEBUG nova.compute.manager [req-8c5a7448-b45f-4877-8090-3b76355a80e6 req-39a84ca5-a173-450b-bf1b-9ade1b399c7e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Received event network-vif-unplugged-cc7a934c-f273-4dde-b492-d37feef39f58 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 04:09:27 compute-0 neutron-haproxy-ovnmeta-bc525eaa-e13d-45ff-a473-c699abd60e90[281885]: [ALERT]    (281889) : Current worker (281891) exited with code 143 (Terminated)
Oct 11 04:09:27 compute-0 neutron-haproxy-ovnmeta-bc525eaa-e13d-45ff-a473-c699abd60e90[281885]: [WARNING]  (281889) : All workers exited. Exiting... (0)
Oct 11 04:09:27 compute-0 systemd[1]: libpod-a463e9809ccedb1a9105dd493a0c99d9bbf1e234d96d7b81e791e62714312ac0.scope: Deactivated successfully.
Oct 11 04:09:27 compute-0 podman[282206]: 2025-10-11 04:09:27.655034936 +0000 UTC m=+0.077094490 container died a463e9809ccedb1a9105dd493a0c99d9bbf1e234d96d7b81e791e62714312ac0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc525eaa-e13d-45ff-a473-c699abd60e90, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:09:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a463e9809ccedb1a9105dd493a0c99d9bbf1e234d96d7b81e791e62714312ac0-userdata-shm.mount: Deactivated successfully.
Oct 11 04:09:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-a114031a3c6f7de5c95f5b215cc97e178b7a0bbd850dd345fcf4bddbb022bc60-merged.mount: Deactivated successfully.
Oct 11 04:09:27 compute-0 podman[282206]: 2025-10-11 04:09:27.701534085 +0000 UTC m=+0.123593599 container cleanup a463e9809ccedb1a9105dd493a0c99d9bbf1e234d96d7b81e791e62714312ac0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc525eaa-e13d-45ff-a473-c699abd60e90, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 11 04:09:27 compute-0 systemd[1]: libpod-conmon-a463e9809ccedb1a9105dd493a0c99d9bbf1e234d96d7b81e791e62714312ac0.scope: Deactivated successfully.
Oct 11 04:09:27 compute-0 podman[282261]: 2025-10-11 04:09:27.778997214 +0000 UTC m=+0.049341549 container remove a463e9809ccedb1a9105dd493a0c99d9bbf1e234d96d7b81e791e62714312ac0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc525eaa-e13d-45ff-a473-c699abd60e90, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:09:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:27.786 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[9d5e3480-bac4-49dc-8644-006cb8de7e53]: (4, ('Sat Oct 11 04:09:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bc525eaa-e13d-45ff-a473-c699abd60e90 (a463e9809ccedb1a9105dd493a0c99d9bbf1e234d96d7b81e791e62714312ac0)\na463e9809ccedb1a9105dd493a0c99d9bbf1e234d96d7b81e791e62714312ac0\nSat Oct 11 04:09:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bc525eaa-e13d-45ff-a473-c699abd60e90 (a463e9809ccedb1a9105dd493a0c99d9bbf1e234d96d7b81e791e62714312ac0)\na463e9809ccedb1a9105dd493a0c99d9bbf1e234d96d7b81e791e62714312ac0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:27.788 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[276e8f13-a92f-42ef-84f8-7410dd365fcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:27.789 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc525eaa-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:27 compute-0 kernel: tapbc525eaa-e0: left promiscuous mode
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:27.815 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[d963cc78-5aea-446b-ab53-b22e43d403b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:27.838 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[b48dea7b-e6a0-44ef-9d45-3a51a490237a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:27.842 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[711fde2b-183f-460f-835f-262b96c25502]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:27.856 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[fda7cd02-529a-4407-b3b2-b5217d304cdf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 412327, 'reachable_time': 43217, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282276, 'error': None, 'target': 'ovnmeta-bc525eaa-e13d-45ff-a473-c699abd60e90', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:27.858 162015 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bc525eaa-e13d-45ff-a473-c699abd60e90 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 04:09:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:27.858 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[40014a15-0ded-4997-81f7-bc842b053931]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:27 compute-0 systemd[1]: run-netns-ovnmeta\x2dbc525eaa\x2de13d\x2d45ff\x2da473\x2dc699abd60e90.mount: Deactivated successfully.
Oct 11 04:09:27 compute-0 nova_compute[259850]: 2025-10-11 04:09:27.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:28 compute-0 nova_compute[259850]: 2025-10-11 04:09:28.056 2 INFO nova.virt.libvirt.driver [None req-bc039bda-c4b6-4076-af6c-6d487c63f58e f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Deleting instance files /var/lib/nova/instances/e879a322-2581-43da-916b-423a94821ed0_del
Oct 11 04:09:28 compute-0 nova_compute[259850]: 2025-10-11 04:09:28.057 2 INFO nova.virt.libvirt.driver [None req-bc039bda-c4b6-4076-af6c-6d487c63f58e f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Deletion of /var/lib/nova/instances/e879a322-2581-43da-916b-423a94821ed0_del complete
Oct 11 04:09:28 compute-0 nova_compute[259850]: 2025-10-11 04:09:28.121 2 INFO nova.compute.manager [None req-bc039bda-c4b6-4076-af6c-6d487c63f58e f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Took 0.80 seconds to destroy the instance on the hypervisor.
Oct 11 04:09:28 compute-0 nova_compute[259850]: 2025-10-11 04:09:28.122 2 DEBUG oslo.service.loopingcall [None req-bc039bda-c4b6-4076-af6c-6d487c63f58e f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:09:28 compute-0 nova_compute[259850]: 2025-10-11 04:09:28.122 2 DEBUG nova.compute.manager [-] [instance: e879a322-2581-43da-916b-423a94821ed0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:09:28 compute-0 nova_compute[259850]: 2025-10-11 04:09:28.123 2 DEBUG nova.network.neutron [-] [instance: e879a322-2581-43da-916b-423a94821ed0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:09:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:28.288 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:61:6f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '92:f1:b6:e4:f1:16'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:09:28 compute-0 nova_compute[259850]: 2025-10-11 04:09:28.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:28.289 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:09:28 compute-0 ceph-mon[74273]: pgmap v1226: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 3.3 KiB/s wr, 108 op/s
Oct 11 04:09:29 compute-0 nova_compute[259850]: 2025-10-11 04:09:29.116 2 DEBUG nova.network.neutron [-] [instance: e879a322-2581-43da-916b-423a94821ed0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:09:29 compute-0 nova_compute[259850]: 2025-10-11 04:09:29.140 2 INFO nova.compute.manager [-] [instance: e879a322-2581-43da-916b-423a94821ed0] Took 1.02 seconds to deallocate network for instance.
Oct 11 04:09:29 compute-0 nova_compute[259850]: 2025-10-11 04:09:29.209 2 DEBUG oslo_concurrency.lockutils [None req-bc039bda-c4b6-4076-af6c-6d487c63f58e f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:09:29 compute-0 nova_compute[259850]: 2025-10-11 04:09:29.210 2 DEBUG oslo_concurrency.lockutils [None req-bc039bda-c4b6-4076-af6c-6d487c63f58e f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:09:29 compute-0 nova_compute[259850]: 2025-10-11 04:09:29.235 2 DEBUG nova.compute.manager [req-8f492580-a6b9-4edd-90ca-5dace155e0cf req-6ae7e5cf-6482-492f-8494-03fa557630ae f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Received event network-vif-deleted-cc7a934c-f273-4dde-b492-d37feef39f58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:09:29 compute-0 nova_compute[259850]: 2025-10-11 04:09:29.266 2 DEBUG oslo_concurrency.processutils [None req-bc039bda-c4b6-4076-af6c-6d487c63f58e f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:09:29 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:29.292 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8a473e03-2208-47ae-afcd-05ad744a5969, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:09:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 147 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 5.9 KiB/s wr, 105 op/s
Oct 11 04:09:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:09:29 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2453446375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:09:29 compute-0 nova_compute[259850]: 2025-10-11 04:09:29.721 2 DEBUG nova.compute.manager [req-a59c9cad-afd6-488d-b507-7d8e0e36e7aa req-5f56d815-139c-49c5-8e8b-be797629d630 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Received event network-vif-plugged-cc7a934c-f273-4dde-b492-d37feef39f58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:09:29 compute-0 nova_compute[259850]: 2025-10-11 04:09:29.722 2 DEBUG oslo_concurrency.lockutils [req-a59c9cad-afd6-488d-b507-7d8e0e36e7aa req-5f56d815-139c-49c5-8e8b-be797629d630 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "e879a322-2581-43da-916b-423a94821ed0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:09:29 compute-0 nova_compute[259850]: 2025-10-11 04:09:29.723 2 DEBUG oslo_concurrency.lockutils [req-a59c9cad-afd6-488d-b507-7d8e0e36e7aa req-5f56d815-139c-49c5-8e8b-be797629d630 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e879a322-2581-43da-916b-423a94821ed0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:09:29 compute-0 nova_compute[259850]: 2025-10-11 04:09:29.723 2 DEBUG oslo_concurrency.lockutils [req-a59c9cad-afd6-488d-b507-7d8e0e36e7aa req-5f56d815-139c-49c5-8e8b-be797629d630 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e879a322-2581-43da-916b-423a94821ed0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:09:29 compute-0 nova_compute[259850]: 2025-10-11 04:09:29.724 2 DEBUG nova.compute.manager [req-a59c9cad-afd6-488d-b507-7d8e0e36e7aa req-5f56d815-139c-49c5-8e8b-be797629d630 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] No waiting events found dispatching network-vif-plugged-cc7a934c-f273-4dde-b492-d37feef39f58 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:09:29 compute-0 nova_compute[259850]: 2025-10-11 04:09:29.725 2 WARNING nova.compute.manager [req-a59c9cad-afd6-488d-b507-7d8e0e36e7aa req-5f56d815-139c-49c5-8e8b-be797629d630 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e879a322-2581-43da-916b-423a94821ed0] Received unexpected event network-vif-plugged-cc7a934c-f273-4dde-b492-d37feef39f58 for instance with vm_state deleted and task_state None.
Oct 11 04:09:29 compute-0 nova_compute[259850]: 2025-10-11 04:09:29.726 2 DEBUG oslo_concurrency.processutils [None req-bc039bda-c4b6-4076-af6c-6d487c63f58e f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:09:29 compute-0 nova_compute[259850]: 2025-10-11 04:09:29.735 2 DEBUG nova.compute.provider_tree [None req-bc039bda-c4b6-4076-af6c-6d487c63f58e f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:09:29 compute-0 nova_compute[259850]: 2025-10-11 04:09:29.754 2 DEBUG nova.scheduler.client.report [None req-bc039bda-c4b6-4076-af6c-6d487c63f58e f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:09:29 compute-0 nova_compute[259850]: 2025-10-11 04:09:29.770 2 DEBUG oslo_concurrency.lockutils [None req-bc039bda-c4b6-4076-af6c-6d487c63f58e f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:09:29 compute-0 nova_compute[259850]: 2025-10-11 04:09:29.792 2 INFO nova.scheduler.client.report [None req-bc039bda-c4b6-4076-af6c-6d487c63f58e f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Deleted allocations for instance e879a322-2581-43da-916b-423a94821ed0
Oct 11 04:09:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:09:29 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2453446375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:09:29 compute-0 nova_compute[259850]: 2025-10-11 04:09:29.868 2 DEBUG oslo_concurrency.lockutils [None req-bc039bda-c4b6-4076-af6c-6d487c63f58e f301bb3cf7f94411bff904828db8c555 3c9fe3215f964559830df6c94dd6a581 - - default default] Lock "e879a322-2581-43da-916b-423a94821ed0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:09:30 compute-0 ceph-mon[74273]: pgmap v1227: 305 pgs: 305 active+clean; 147 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 5.9 KiB/s wr, 105 op/s
Oct 11 04:09:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:09:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:09:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:09:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:09:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0004616685387771049 of space, bias 1.0, pg target 0.13850056163313146 quantized to 32 (current 32)
Oct 11 04:09:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:09:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00035095711646154846 of space, bias 1.0, pg target 0.10528713493846453 quantized to 32 (current 32)
Oct 11 04:09:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:09:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:09:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:09:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 11 04:09:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:09:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:09:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:09:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:09:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:09:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:09:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:09:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:09:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:09:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:09:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:09:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:09:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 147 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 5.0 KiB/s wr, 90 op/s
Oct 11 04:09:31 compute-0 nova_compute[259850]: 2025-10-11 04:09:31.661 2 DEBUG oslo_concurrency.lockutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Acquiring lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:09:31 compute-0 nova_compute[259850]: 2025-10-11 04:09:31.662 2 DEBUG oslo_concurrency.lockutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:09:31 compute-0 nova_compute[259850]: 2025-10-11 04:09:31.682 2 DEBUG nova.compute.manager [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:09:31 compute-0 nova_compute[259850]: 2025-10-11 04:09:31.743 2 DEBUG oslo_concurrency.lockutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:09:31 compute-0 nova_compute[259850]: 2025-10-11 04:09:31.743 2 DEBUG oslo_concurrency.lockutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:09:31 compute-0 nova_compute[259850]: 2025-10-11 04:09:31.749 2 DEBUG nova.virt.hardware [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:09:31 compute-0 nova_compute[259850]: 2025-10-11 04:09:31.749 2 INFO nova.compute.claims [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:09:31 compute-0 nova_compute[259850]: 2025-10-11 04:09:31.869 2 DEBUG oslo_concurrency.processutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:09:32 compute-0 nova_compute[259850]: 2025-10-11 04:09:32.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:09:32 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/746545235' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:09:32 compute-0 nova_compute[259850]: 2025-10-11 04:09:32.310 2 DEBUG oslo_concurrency.processutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:09:32 compute-0 nova_compute[259850]: 2025-10-11 04:09:32.319 2 DEBUG nova.compute.provider_tree [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:09:32 compute-0 nova_compute[259850]: 2025-10-11 04:09:32.344 2 DEBUG nova.scheduler.client.report [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:09:32 compute-0 nova_compute[259850]: 2025-10-11 04:09:32.369 2 DEBUG oslo_concurrency.lockutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:09:32 compute-0 nova_compute[259850]: 2025-10-11 04:09:32.371 2 DEBUG nova.compute.manager [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:09:32 compute-0 nova_compute[259850]: 2025-10-11 04:09:32.439 2 DEBUG nova.compute.manager [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:09:32 compute-0 nova_compute[259850]: 2025-10-11 04:09:32.440 2 DEBUG nova.network.neutron [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:09:32 compute-0 nova_compute[259850]: 2025-10-11 04:09:32.462 2 INFO nova.virt.libvirt.driver [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:09:32 compute-0 nova_compute[259850]: 2025-10-11 04:09:32.485 2 DEBUG nova.compute.manager [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:09:32 compute-0 nova_compute[259850]: 2025-10-11 04:09:32.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:32 compute-0 nova_compute[259850]: 2025-10-11 04:09:32.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:32 compute-0 nova_compute[259850]: 2025-10-11 04:09:32.623 2 DEBUG nova.compute.manager [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:09:32 compute-0 nova_compute[259850]: 2025-10-11 04:09:32.624 2 DEBUG nova.virt.libvirt.driver [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:09:32 compute-0 nova_compute[259850]: 2025-10-11 04:09:32.625 2 INFO nova.virt.libvirt.driver [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Creating image(s)
Oct 11 04:09:32 compute-0 nova_compute[259850]: 2025-10-11 04:09:32.653 2 DEBUG nova.storage.rbd_utils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] rbd image 3d2a66c2-9869-4f0a-a27f-db3a14d43466_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:09:32 compute-0 nova_compute[259850]: 2025-10-11 04:09:32.681 2 DEBUG nova.storage.rbd_utils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] rbd image 3d2a66c2-9869-4f0a-a27f-db3a14d43466_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:09:32 compute-0 nova_compute[259850]: 2025-10-11 04:09:32.705 2 DEBUG nova.storage.rbd_utils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] rbd image 3d2a66c2-9869-4f0a-a27f-db3a14d43466_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:09:32 compute-0 nova_compute[259850]: 2025-10-11 04:09:32.709 2 DEBUG oslo_concurrency.processutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:09:32 compute-0 nova_compute[259850]: 2025-10-11 04:09:32.732 2 DEBUG nova.policy [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fc44058c9b8d47d1907c195c404898c8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c04e56df694d49fdbb22c39773dfc036', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:09:32 compute-0 nova_compute[259850]: 2025-10-11 04:09:32.773 2 DEBUG oslo_concurrency.processutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:09:32 compute-0 nova_compute[259850]: 2025-10-11 04:09:32.774 2 DEBUG oslo_concurrency.lockutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Acquiring lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:09:32 compute-0 nova_compute[259850]: 2025-10-11 04:09:32.775 2 DEBUG oslo_concurrency.lockutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:09:32 compute-0 nova_compute[259850]: 2025-10-11 04:09:32.775 2 DEBUG oslo_concurrency.lockutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:09:32 compute-0 nova_compute[259850]: 2025-10-11 04:09:32.800 2 DEBUG nova.storage.rbd_utils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] rbd image 3d2a66c2-9869-4f0a-a27f-db3a14d43466_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:09:32 compute-0 nova_compute[259850]: 2025-10-11 04:09:32.805 2 DEBUG oslo_concurrency.processutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac 3d2a66c2-9869-4f0a-a27f-db3a14d43466_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:09:32 compute-0 ceph-mon[74273]: pgmap v1228: 305 pgs: 305 active+clean; 147 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 5.0 KiB/s wr, 90 op/s
Oct 11 04:09:32 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/746545235' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:09:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:09:32 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3157938768' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:09:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:09:32 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3157938768' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:09:33 compute-0 nova_compute[259850]: 2025-10-11 04:09:33.075 2 DEBUG oslo_concurrency.processutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac 3d2a66c2-9869-4f0a-a27f-db3a14d43466_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:09:33 compute-0 nova_compute[259850]: 2025-10-11 04:09:33.129 2 DEBUG nova.storage.rbd_utils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] resizing rbd image 3d2a66c2-9869-4f0a-a27f-db3a14d43466_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:09:33 compute-0 nova_compute[259850]: 2025-10-11 04:09:33.233 2 DEBUG nova.objects.instance [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lazy-loading 'migration_context' on Instance uuid 3d2a66c2-9869-4f0a-a27f-db3a14d43466 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:09:33 compute-0 nova_compute[259850]: 2025-10-11 04:09:33.250 2 DEBUG nova.virt.libvirt.driver [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:09:33 compute-0 nova_compute[259850]: 2025-10-11 04:09:33.251 2 DEBUG nova.virt.libvirt.driver [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Ensure instance console log exists: /var/lib/nova/instances/3d2a66c2-9869-4f0a-a27f-db3a14d43466/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:09:33 compute-0 nova_compute[259850]: 2025-10-11 04:09:33.252 2 DEBUG oslo_concurrency.lockutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:09:33 compute-0 nova_compute[259850]: 2025-10-11 04:09:33.252 2 DEBUG oslo_concurrency.lockutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:09:33 compute-0 nova_compute[259850]: 2025-10-11 04:09:33.253 2 DEBUG oslo_concurrency.lockutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:09:33 compute-0 podman[282489]: 2025-10-11 04:09:33.398729956 +0000 UTC m=+0.091937938 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:09:33 compute-0 podman[282488]: 2025-10-11 04:09:33.405424234 +0000 UTC m=+0.101749323 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 04:09:33 compute-0 sudo[282524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:09:33 compute-0 sudo[282524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:09:33 compute-0 sudo[282524]: pam_unix(sudo:session): session closed for user root
Oct 11 04:09:33 compute-0 nova_compute[259850]: 2025-10-11 04:09:33.562 2 DEBUG nova.network.neutron [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Successfully created port: 8701ce4d-adc7-4369-9f76-cf6dea290bff _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:09:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 88 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 4.5 KiB/s wr, 52 op/s
Oct 11 04:09:33 compute-0 sudo[282549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:09:33 compute-0 sudo[282549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:09:33 compute-0 sudo[282549]: pam_unix(sudo:session): session closed for user root
Oct 11 04:09:33 compute-0 sudo[282574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:09:33 compute-0 sudo[282574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:09:33 compute-0 sudo[282574]: pam_unix(sudo:session): session closed for user root
Oct 11 04:09:33 compute-0 sudo[282599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:09:33 compute-0 sudo[282599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:09:33 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3157938768' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:09:33 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3157938768' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:09:34 compute-0 sudo[282599]: pam_unix(sudo:session): session closed for user root
Oct 11 04:09:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:09:34 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:09:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:09:34 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:09:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:09:34 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:09:34 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev d13424d5-c3d5-4daf-af1c-c67959a1a6b1 does not exist
Oct 11 04:09:34 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 64d6a4db-0121-4ac3-a01c-e8802c8c3fca does not exist
Oct 11 04:09:34 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 673bf9f6-3ce9-4cfb-9c56-ce7077f638a4 does not exist
Oct 11 04:09:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:09:34 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:09:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:09:34 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:09:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:09:34 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:09:34 compute-0 sudo[282655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:09:34 compute-0 sudo[282655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:09:34 compute-0 sudo[282655]: pam_unix(sudo:session): session closed for user root
Oct 11 04:09:34 compute-0 sudo[282680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:09:34 compute-0 sudo[282680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:09:34 compute-0 sudo[282680]: pam_unix(sudo:session): session closed for user root
Oct 11 04:09:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:09:34 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2978714293' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:09:34 compute-0 sudo[282705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:09:34 compute-0 sudo[282705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:09:34 compute-0 sudo[282705]: pam_unix(sudo:session): session closed for user root
Oct 11 04:09:34 compute-0 sudo[282730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:09:34 compute-0 sudo[282730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:09:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:09:34 compute-0 ceph-mon[74273]: pgmap v1229: 305 pgs: 305 active+clean; 88 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 4.5 KiB/s wr, 52 op/s
Oct 11 04:09:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:09:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:09:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:09:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:09:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:09:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:09:34 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2978714293' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:09:35 compute-0 podman[282796]: 2025-10-11 04:09:35.289307498 +0000 UTC m=+0.067448229 container create 240235be3945af26b5a2da00b8150b57d42bb9730333bc054df5252c5e8373d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_swirles, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:09:35 compute-0 systemd[1]: Started libpod-conmon-240235be3945af26b5a2da00b8150b57d42bb9730333bc054df5252c5e8373d0.scope.
Oct 11 04:09:35 compute-0 podman[282796]: 2025-10-11 04:09:35.26060811 +0000 UTC m=+0.038748921 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:09:35 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:09:35 compute-0 nova_compute[259850]: 2025-10-11 04:09:35.398 2 DEBUG nova.network.neutron [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Successfully updated port: 8701ce4d-adc7-4369-9f76-cf6dea290bff _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:09:35 compute-0 podman[282796]: 2025-10-11 04:09:35.406607998 +0000 UTC m=+0.184748789 container init 240235be3945af26b5a2da00b8150b57d42bb9730333bc054df5252c5e8373d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_swirles, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:09:35 compute-0 nova_compute[259850]: 2025-10-11 04:09:35.418 2 DEBUG oslo_concurrency.lockutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Acquiring lock "refresh_cache-3d2a66c2-9869-4f0a-a27f-db3a14d43466" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:09:35 compute-0 nova_compute[259850]: 2025-10-11 04:09:35.418 2 DEBUG oslo_concurrency.lockutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Acquired lock "refresh_cache-3d2a66c2-9869-4f0a-a27f-db3a14d43466" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:09:35 compute-0 nova_compute[259850]: 2025-10-11 04:09:35.419 2 DEBUG nova.network.neutron [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:09:35 compute-0 podman[282796]: 2025-10-11 04:09:35.419414498 +0000 UTC m=+0.197555229 container start 240235be3945af26b5a2da00b8150b57d42bb9730333bc054df5252c5e8373d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_swirles, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 04:09:35 compute-0 podman[282796]: 2025-10-11 04:09:35.423574595 +0000 UTC m=+0.201715406 container attach 240235be3945af26b5a2da00b8150b57d42bb9730333bc054df5252c5e8373d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_swirles, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:09:35 compute-0 trusting_swirles[282812]: 167 167
Oct 11 04:09:35 compute-0 systemd[1]: libpod-240235be3945af26b5a2da00b8150b57d42bb9730333bc054df5252c5e8373d0.scope: Deactivated successfully.
Oct 11 04:09:35 compute-0 podman[282796]: 2025-10-11 04:09:35.428685979 +0000 UTC m=+0.206826740 container died 240235be3945af26b5a2da00b8150b57d42bb9730333bc054df5252c5e8373d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 04:09:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8712ed073ccc782e98a1154502d1dea5726bf55b5d6abe66a83ff602116faf5-merged.mount: Deactivated successfully.
Oct 11 04:09:35 compute-0 podman[282796]: 2025-10-11 04:09:35.48062782 +0000 UTC m=+0.258768561 container remove 240235be3945af26b5a2da00b8150b57d42bb9730333bc054df5252c5e8373d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_swirles, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:09:35 compute-0 systemd[1]: libpod-conmon-240235be3945af26b5a2da00b8150b57d42bb9730333bc054df5252c5e8373d0.scope: Deactivated successfully.
Oct 11 04:09:35 compute-0 nova_compute[259850]: 2025-10-11 04:09:35.516 2 DEBUG nova.compute.manager [req-64ac0f40-b05b-4848-9cc6-f2ed7f3cd9e2 req-a3e38afc-eacc-447c-a2e8-f9b83dc3becc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Received event network-changed-8701ce4d-adc7-4369-9f76-cf6dea290bff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:09:35 compute-0 nova_compute[259850]: 2025-10-11 04:09:35.516 2 DEBUG nova.compute.manager [req-64ac0f40-b05b-4848-9cc6-f2ed7f3cd9e2 req-a3e38afc-eacc-447c-a2e8-f9b83dc3becc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Refreshing instance network info cache due to event network-changed-8701ce4d-adc7-4369-9f76-cf6dea290bff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:09:35 compute-0 nova_compute[259850]: 2025-10-11 04:09:35.517 2 DEBUG oslo_concurrency.lockutils [req-64ac0f40-b05b-4848-9cc6-f2ed7f3cd9e2 req-a3e38afc-eacc-447c-a2e8-f9b83dc3becc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-3d2a66c2-9869-4f0a-a27f-db3a14d43466" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:09:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 88 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 4.2 KiB/s wr, 49 op/s
Oct 11 04:09:35 compute-0 nova_compute[259850]: 2025-10-11 04:09:35.580 2 DEBUG nova.network.neutron [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:09:35 compute-0 podman[282837]: 2025-10-11 04:09:35.709636394 +0000 UTC m=+0.058021724 container create 8155b7dfc198f780ecdb02d63aeb672ff6fa31824e2354bd64653b4b74311a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:09:35 compute-0 systemd[1]: Started libpod-conmon-8155b7dfc198f780ecdb02d63aeb672ff6fa31824e2354bd64653b4b74311a85.scope.
Oct 11 04:09:35 compute-0 podman[282837]: 2025-10-11 04:09:35.68178118 +0000 UTC m=+0.030166590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:09:35 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:09:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41145d96595e5833d46332e1368c99f7c4627e630427c5e14b3b2f49617fb3a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:09:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41145d96595e5833d46332e1368c99f7c4627e630427c5e14b3b2f49617fb3a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:09:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41145d96595e5833d46332e1368c99f7c4627e630427c5e14b3b2f49617fb3a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:09:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41145d96595e5833d46332e1368c99f7c4627e630427c5e14b3b2f49617fb3a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:09:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41145d96595e5833d46332e1368c99f7c4627e630427c5e14b3b2f49617fb3a8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:09:35 compute-0 podman[282837]: 2025-10-11 04:09:35.819377461 +0000 UTC m=+0.167762871 container init 8155b7dfc198f780ecdb02d63aeb672ff6fa31824e2354bd64653b4b74311a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ride, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:09:35 compute-0 podman[282837]: 2025-10-11 04:09:35.831326428 +0000 UTC m=+0.179711788 container start 8155b7dfc198f780ecdb02d63aeb672ff6fa31824e2354bd64653b4b74311a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ride, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 04:09:35 compute-0 podman[282837]: 2025-10-11 04:09:35.836131853 +0000 UTC m=+0.184517273 container attach 8155b7dfc198f780ecdb02d63aeb672ff6fa31824e2354bd64653b4b74311a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:09:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Oct 11 04:09:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Oct 11 04:09:35 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Oct 11 04:09:36 compute-0 nova_compute[259850]: 2025-10-11 04:09:36.797 2 DEBUG nova.network.neutron [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Updating instance_info_cache with network_info: [{"id": "8701ce4d-adc7-4369-9f76-cf6dea290bff", "address": "fa:16:3e:5a:0f:e2", "network": {"id": "8cb72c94-41d7-40be-8ef7-9351e1b06d48", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1596968619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c04e56df694d49fdbb22c39773dfc036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8701ce4d-ad", "ovs_interfaceid": "8701ce4d-adc7-4369-9f76-cf6dea290bff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:09:36 compute-0 nova_compute[259850]: 2025-10-11 04:09:36.826 2 DEBUG oslo_concurrency.lockutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Releasing lock "refresh_cache-3d2a66c2-9869-4f0a-a27f-db3a14d43466" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:09:36 compute-0 nova_compute[259850]: 2025-10-11 04:09:36.827 2 DEBUG nova.compute.manager [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Instance network_info: |[{"id": "8701ce4d-adc7-4369-9f76-cf6dea290bff", "address": "fa:16:3e:5a:0f:e2", "network": {"id": "8cb72c94-41d7-40be-8ef7-9351e1b06d48", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1596968619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c04e56df694d49fdbb22c39773dfc036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8701ce4d-ad", "ovs_interfaceid": "8701ce4d-adc7-4369-9f76-cf6dea290bff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:09:36 compute-0 nova_compute[259850]: 2025-10-11 04:09:36.828 2 DEBUG oslo_concurrency.lockutils [req-64ac0f40-b05b-4848-9cc6-f2ed7f3cd9e2 req-a3e38afc-eacc-447c-a2e8-f9b83dc3becc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-3d2a66c2-9869-4f0a-a27f-db3a14d43466" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:09:36 compute-0 nova_compute[259850]: 2025-10-11 04:09:36.828 2 DEBUG nova.network.neutron [req-64ac0f40-b05b-4848-9cc6-f2ed7f3cd9e2 req-a3e38afc-eacc-447c-a2e8-f9b83dc3becc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Refreshing network info cache for port 8701ce4d-adc7-4369-9f76-cf6dea290bff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:09:36 compute-0 nova_compute[259850]: 2025-10-11 04:09:36.833 2 DEBUG nova.virt.libvirt.driver [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Start _get_guest_xml network_info=[{"id": "8701ce4d-adc7-4369-9f76-cf6dea290bff", "address": "fa:16:3e:5a:0f:e2", "network": {"id": "8cb72c94-41d7-40be-8ef7-9351e1b06d48", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1596968619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c04e56df694d49fdbb22c39773dfc036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8701ce4d-ad", "ovs_interfaceid": "8701ce4d-adc7-4369-9f76-cf6dea290bff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'image_id': '1a107e2f-1a9d-4b6f-861d-e64bee7d56be'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:09:36 compute-0 nova_compute[259850]: 2025-10-11 04:09:36.842 2 WARNING nova.virt.libvirt.driver [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:09:36 compute-0 nova_compute[259850]: 2025-10-11 04:09:36.854 2 DEBUG nova.virt.libvirt.host [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:09:36 compute-0 nova_compute[259850]: 2025-10-11 04:09:36.855 2 DEBUG nova.virt.libvirt.host [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:09:36 compute-0 nova_compute[259850]: 2025-10-11 04:09:36.859 2 DEBUG nova.virt.libvirt.host [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:09:36 compute-0 nova_compute[259850]: 2025-10-11 04:09:36.861 2 DEBUG nova.virt.libvirt.host [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:09:36 compute-0 nova_compute[259850]: 2025-10-11 04:09:36.862 2 DEBUG nova.virt.libvirt.driver [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:09:36 compute-0 nova_compute[259850]: 2025-10-11 04:09:36.862 2 DEBUG nova.virt.hardware [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:09:36 compute-0 nova_compute[259850]: 2025-10-11 04:09:36.863 2 DEBUG nova.virt.hardware [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:09:36 compute-0 nova_compute[259850]: 2025-10-11 04:09:36.863 2 DEBUG nova.virt.hardware [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:09:36 compute-0 nova_compute[259850]: 2025-10-11 04:09:36.864 2 DEBUG nova.virt.hardware [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:09:36 compute-0 nova_compute[259850]: 2025-10-11 04:09:36.864 2 DEBUG nova.virt.hardware [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:09:36 compute-0 nova_compute[259850]: 2025-10-11 04:09:36.864 2 DEBUG nova.virt.hardware [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:09:36 compute-0 nova_compute[259850]: 2025-10-11 04:09:36.865 2 DEBUG nova.virt.hardware [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:09:36 compute-0 nova_compute[259850]: 2025-10-11 04:09:36.865 2 DEBUG nova.virt.hardware [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:09:36 compute-0 nova_compute[259850]: 2025-10-11 04:09:36.865 2 DEBUG nova.virt.hardware [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:09:36 compute-0 nova_compute[259850]: 2025-10-11 04:09:36.866 2 DEBUG nova.virt.hardware [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:09:36 compute-0 nova_compute[259850]: 2025-10-11 04:09:36.866 2 DEBUG nova.virt.hardware [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:09:36 compute-0 nova_compute[259850]: 2025-10-11 04:09:36.871 2 DEBUG oslo_concurrency.processutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:09:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Oct 11 04:09:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Oct 11 04:09:36 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Oct 11 04:09:36 compute-0 ceph-mon[74273]: pgmap v1230: 305 pgs: 305 active+clean; 88 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 4.2 KiB/s wr, 49 op/s
Oct 11 04:09:36 compute-0 ceph-mon[74273]: osdmap e243: 3 total, 3 up, 3 in
Oct 11 04:09:37 compute-0 eloquent_ride[282853]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:09:37 compute-0 eloquent_ride[282853]: --> relative data size: 1.0
Oct 11 04:09:37 compute-0 eloquent_ride[282853]: --> All data devices are unavailable
Oct 11 04:09:37 compute-0 systemd[1]: libpod-8155b7dfc198f780ecdb02d63aeb672ff6fa31824e2354bd64653b4b74311a85.scope: Deactivated successfully.
Oct 11 04:09:37 compute-0 podman[282837]: 2025-10-11 04:09:37.056371725 +0000 UTC m=+1.404757075 container died 8155b7dfc198f780ecdb02d63aeb672ff6fa31824e2354bd64653b4b74311a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:09:37 compute-0 systemd[1]: libpod-8155b7dfc198f780ecdb02d63aeb672ff6fa31824e2354bd64653b4b74311a85.scope: Consumed 1.143s CPU time.
Oct 11 04:09:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-41145d96595e5833d46332e1368c99f7c4627e630427c5e14b3b2f49617fb3a8-merged.mount: Deactivated successfully.
Oct 11 04:09:37 compute-0 podman[282837]: 2025-10-11 04:09:37.127400223 +0000 UTC m=+1.475785553 container remove 8155b7dfc198f780ecdb02d63aeb672ff6fa31824e2354bd64653b4b74311a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:09:37 compute-0 systemd[1]: libpod-conmon-8155b7dfc198f780ecdb02d63aeb672ff6fa31824e2354bd64653b4b74311a85.scope: Deactivated successfully.
Oct 11 04:09:37 compute-0 sudo[282730]: pam_unix(sudo:session): session closed for user root
Oct 11 04:09:37 compute-0 sudo[282916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:09:37 compute-0 sudo[282916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:09:37 compute-0 sudo[282916]: pam_unix(sudo:session): session closed for user root
Oct 11 04:09:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:09:37 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/231201400' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:09:37 compute-0 sudo[282941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:09:37 compute-0 sudo[282941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:09:37 compute-0 sudo[282941]: pam_unix(sudo:session): session closed for user root
Oct 11 04:09:37 compute-0 nova_compute[259850]: 2025-10-11 04:09:37.373 2 DEBUG oslo_concurrency.processutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:09:37 compute-0 nova_compute[259850]: 2025-10-11 04:09:37.412 2 DEBUG nova.storage.rbd_utils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] rbd image 3d2a66c2-9869-4f0a-a27f-db3a14d43466_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:09:37 compute-0 nova_compute[259850]: 2025-10-11 04:09:37.421 2 DEBUG oslo_concurrency.processutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:09:37 compute-0 sudo[282968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:09:37 compute-0 sudo[282968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:09:37 compute-0 sudo[282968]: pam_unix(sudo:session): session closed for user root
Oct 11 04:09:37 compute-0 sudo[283012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 04:09:37 compute-0 sudo[283012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:09:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 88 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.4 KiB/s wr, 46 op/s
Oct 11 04:09:37 compute-0 nova_compute[259850]: 2025-10-11 04:09:37.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:09:37 compute-0 nova_compute[259850]: 2025-10-11 04:09:37.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:09:37 compute-0 nova_compute[259850]: 2025-10-11 04:09:37.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 04:09:37 compute-0 nova_compute[259850]: 2025-10-11 04:09:37.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 04:09:37 compute-0 nova_compute[259850]: 2025-10-11 04:09:37.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:37 compute-0 nova_compute[259850]: 2025-10-11 04:09:37.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 04:09:37 compute-0 nova_compute[259850]: 2025-10-11 04:09:37.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:37 compute-0 ceph-mon[74273]: osdmap e244: 3 total, 3 up, 3 in
Oct 11 04:09:37 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/231201400' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:09:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:09:37 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3225885807' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:09:37 compute-0 nova_compute[259850]: 2025-10-11 04:09:37.975 2 DEBUG oslo_concurrency.processutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:09:37 compute-0 nova_compute[259850]: 2025-10-11 04:09:37.979 2 DEBUG nova.virt.libvirt.vif [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:09:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-410999225',display_name='tempest-VolumesBackupsTest-instance-410999225',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-410999225',id=12,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ1KmH+kJIZMj9qOyrlWxwz+pGXMpc0KLGkIVUjjdWibG6RiDcTS46lNKLmnSn97+2MdyOF62BS3v/NOEEFhaG5BZiPMST03NMPah7Zm6F4yzBBh5fuEr3GtdkCvCwfzbQ==',key_name='tempest-keypair-1804503314',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c04e56df694d49fdbb22c39773dfc036',ramdisk_id='',reservation_id='r-z41xhvc5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-722883341',owner_user_name='tempest-VolumesBackupsTest-722883341-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:09:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fc44058c9b8d47d1907c195c404898c8',uuid=3d2a66c2-9869-4f0a-a27f-db3a14d43466,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8701ce4d-adc7-4369-9f76-cf6dea290bff", "address": "fa:16:3e:5a:0f:e2", "network": {"id": "8cb72c94-41d7-40be-8ef7-9351e1b06d48", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1596968619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c04e56df694d49fdbb22c39773dfc036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8701ce4d-ad", "ovs_interfaceid": "8701ce4d-adc7-4369-9f76-cf6dea290bff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:09:37 compute-0 nova_compute[259850]: 2025-10-11 04:09:37.979 2 DEBUG nova.network.os_vif_util [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Converting VIF {"id": "8701ce4d-adc7-4369-9f76-cf6dea290bff", "address": "fa:16:3e:5a:0f:e2", "network": {"id": "8cb72c94-41d7-40be-8ef7-9351e1b06d48", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1596968619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c04e56df694d49fdbb22c39773dfc036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8701ce4d-ad", "ovs_interfaceid": "8701ce4d-adc7-4369-9f76-cf6dea290bff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:09:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:09:37 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1197463477' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:09:37 compute-0 nova_compute[259850]: 2025-10-11 04:09:37.981 2 DEBUG nova.network.os_vif_util [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:0f:e2,bridge_name='br-int',has_traffic_filtering=True,id=8701ce4d-adc7-4369-9f76-cf6dea290bff,network=Network(8cb72c94-41d7-40be-8ef7-9351e1b06d48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8701ce4d-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:09:37 compute-0 nova_compute[259850]: 2025-10-11 04:09:37.984 2 DEBUG nova.objects.instance [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3d2a66c2-9869-4f0a-a27f-db3a14d43466 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.009 2 DEBUG nova.virt.libvirt.driver [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:09:38 compute-0 nova_compute[259850]:   <uuid>3d2a66c2-9869-4f0a-a27f-db3a14d43466</uuid>
Oct 11 04:09:38 compute-0 nova_compute[259850]:   <name>instance-0000000c</name>
Oct 11 04:09:38 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:09:38 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:09:38 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <nova:name>tempest-VolumesBackupsTest-instance-410999225</nova:name>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:09:36</nova:creationTime>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:09:38 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:09:38 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:09:38 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:09:38 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:09:38 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:09:38 compute-0 nova_compute[259850]:         <nova:user uuid="fc44058c9b8d47d1907c195c404898c8">tempest-VolumesBackupsTest-722883341-project-member</nova:user>
Oct 11 04:09:38 compute-0 nova_compute[259850]:         <nova:project uuid="c04e56df694d49fdbb22c39773dfc036">tempest-VolumesBackupsTest-722883341</nova:project>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <nova:root type="image" uuid="1a107e2f-1a9d-4b6f-861d-e64bee7d56be"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <nova:ports>
Oct 11 04:09:38 compute-0 nova_compute[259850]:         <nova:port uuid="8701ce4d-adc7-4369-9f76-cf6dea290bff">
Oct 11 04:09:38 compute-0 nova_compute[259850]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:         </nova:port>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       </nova:ports>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:09:38 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:09:38 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <system>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <entry name="serial">3d2a66c2-9869-4f0a-a27f-db3a14d43466</entry>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <entry name="uuid">3d2a66c2-9869-4f0a-a27f-db3a14d43466</entry>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     </system>
Oct 11 04:09:38 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:09:38 compute-0 nova_compute[259850]:   <os>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:   </os>
Oct 11 04:09:38 compute-0 nova_compute[259850]:   <features>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:   </features>
Oct 11 04:09:38 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:09:38 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:09:38 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/3d2a66c2-9869-4f0a-a27f-db3a14d43466_disk">
Oct 11 04:09:38 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       </source>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:09:38 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/3d2a66c2-9869-4f0a-a27f-db3a14d43466_disk.config">
Oct 11 04:09:38 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       </source>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:09:38 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <interface type="ethernet">
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <mac address="fa:16:3e:5a:0f:e2"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <mtu size="1442"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <target dev="tap8701ce4d-ad"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     </interface>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/3d2a66c2-9869-4f0a-a27f-db3a14d43466/console.log" append="off"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <video>
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     </video>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:09:38 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:09:38 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:09:38 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:09:38 compute-0 nova_compute[259850]: </domain>
Oct 11 04:09:38 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.011 2 DEBUG nova.compute.manager [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Preparing to wait for external event network-vif-plugged-8701ce4d-adc7-4369-9f76-cf6dea290bff prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.012 2 DEBUG oslo_concurrency.lockutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Acquiring lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.012 2 DEBUG oslo_concurrency.lockutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.013 2 DEBUG oslo_concurrency.lockutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.014 2 DEBUG nova.virt.libvirt.vif [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:09:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-410999225',display_name='tempest-VolumesBackupsTest-instance-410999225',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-410999225',id=12,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ1KmH+kJIZMj9qOyrlWxwz+pGXMpc0KLGkIVUjjdWibG6RiDcTS46lNKLmnSn97+2MdyOF62BS3v/NOEEFhaG5BZiPMST03NMPah7Zm6F4yzBBh5fuEr3GtdkCvCwfzbQ==',key_name='tempest-keypair-1804503314',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c04e56df694d49fdbb22c39773dfc036',ramdisk_id='',reservation_id='r-z41xhvc5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-722883341',owner_user_name='tempest-VolumesBackupsTest-722883341-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:09:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fc44058c9b8d47d1907c195c404898c8',uuid=3d2a66c2-9869-4f0a-a27f-db3a14d43466,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8701ce4d-adc7-4369-9f76-cf6dea290bff", "address": "fa:16:3e:5a:0f:e2", "network": {"id": "8cb72c94-41d7-40be-8ef7-9351e1b06d48", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1596968619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c04e56df694d49fdbb22c39773dfc036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8701ce4d-ad", "ovs_interfaceid": "8701ce4d-adc7-4369-9f76-cf6dea290bff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.015 2 DEBUG nova.network.os_vif_util [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Converting VIF {"id": "8701ce4d-adc7-4369-9f76-cf6dea290bff", "address": "fa:16:3e:5a:0f:e2", "network": {"id": "8cb72c94-41d7-40be-8ef7-9351e1b06d48", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1596968619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c04e56df694d49fdbb22c39773dfc036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8701ce4d-ad", "ovs_interfaceid": "8701ce4d-adc7-4369-9f76-cf6dea290bff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.016 2 DEBUG nova.network.os_vif_util [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:0f:e2,bridge_name='br-int',has_traffic_filtering=True,id=8701ce4d-adc7-4369-9f76-cf6dea290bff,network=Network(8cb72c94-41d7-40be-8ef7-9351e1b06d48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8701ce4d-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.017 2 DEBUG os_vif [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:0f:e2,bridge_name='br-int',has_traffic_filtering=True,id=8701ce4d-adc7-4369-9f76-cf6dea290bff,network=Network(8cb72c94-41d7-40be-8ef7-9351e1b06d48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8701ce4d-ad') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.018 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.019 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.025 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8701ce4d-ad, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.026 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8701ce4d-ad, col_values=(('external_ids', {'iface-id': '8701ce4d-adc7-4369-9f76-cf6dea290bff', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5a:0f:e2', 'vm-uuid': '3d2a66c2-9869-4f0a-a27f-db3a14d43466'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:38 compute-0 NetworkManager[44920]: <info>  [1760155778.0297] manager: (tap8701ce4d-ad): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:09:38 compute-0 podman[283097]: 2025-10-11 04:09:38.03611049 +0000 UTC m=+0.070904436 container create 2fa5f84f2db32b960795558ff0a48f8bdea25561f430fee24b84339a16ec2ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.069 2 INFO os_vif [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:0f:e2,bridge_name='br-int',has_traffic_filtering=True,id=8701ce4d-adc7-4369-9f76-cf6dea290bff,network=Network(8cb72c94-41d7-40be-8ef7-9351e1b06d48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8701ce4d-ad')
Oct 11 04:09:38 compute-0 systemd[1]: Started libpod-conmon-2fa5f84f2db32b960795558ff0a48f8bdea25561f430fee24b84339a16ec2ef6.scope.
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:38 compute-0 podman[283097]: 2025-10-11 04:09:38.01370753 +0000 UTC m=+0.048501506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:09:38 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:09:38 compute-0 podman[283097]: 2025-10-11 04:09:38.142472273 +0000 UTC m=+0.177266249 container init 2fa5f84f2db32b960795558ff0a48f8bdea25561f430fee24b84339a16ec2ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_montalcini, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.145 2 DEBUG nova.virt.libvirt.driver [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.145 2 DEBUG nova.virt.libvirt.driver [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.146 2 DEBUG nova.virt.libvirt.driver [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] No VIF found with MAC fa:16:3e:5a:0f:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.147 2 INFO nova.virt.libvirt.driver [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Using config drive
Oct 11 04:09:38 compute-0 podman[283097]: 2025-10-11 04:09:38.150804787 +0000 UTC m=+0.185598723 container start 2fa5f84f2db32b960795558ff0a48f8bdea25561f430fee24b84339a16ec2ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 04:09:38 compute-0 podman[283097]: 2025-10-11 04:09:38.154190182 +0000 UTC m=+0.188984158 container attach 2fa5f84f2db32b960795558ff0a48f8bdea25561f430fee24b84339a16ec2ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_montalcini, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:09:38 compute-0 interesting_montalcini[283118]: 167 167
Oct 11 04:09:38 compute-0 systemd[1]: libpod-2fa5f84f2db32b960795558ff0a48f8bdea25561f430fee24b84339a16ec2ef6.scope: Deactivated successfully.
Oct 11 04:09:38 compute-0 conmon[283118]: conmon 2fa5f84f2db32b960795 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2fa5f84f2db32b960795558ff0a48f8bdea25561f430fee24b84339a16ec2ef6.scope/container/memory.events
Oct 11 04:09:38 compute-0 podman[283097]: 2025-10-11 04:09:38.161358424 +0000 UTC m=+0.196152420 container died 2fa5f84f2db32b960795558ff0a48f8bdea25561f430fee24b84339a16ec2ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_montalcini, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.192 2 DEBUG nova.storage.rbd_utils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] rbd image 3d2a66c2-9869-4f0a-a27f-db3a14d43466_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:09:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-f159de9d9ff72e121ea2f519f4023dac6a3bb7d7174466d645ccf504f7262352-merged.mount: Deactivated successfully.
Oct 11 04:09:38 compute-0 podman[283097]: 2025-10-11 04:09:38.208055718 +0000 UTC m=+0.242849654 container remove 2fa5f84f2db32b960795558ff0a48f8bdea25561f430fee24b84339a16ec2ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_montalcini, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:09:38 compute-0 systemd[1]: libpod-conmon-2fa5f84f2db32b960795558ff0a48f8bdea25561f430fee24b84339a16ec2ef6.scope: Deactivated successfully.
Oct 11 04:09:38 compute-0 podman[283160]: 2025-10-11 04:09:38.452745342 +0000 UTC m=+0.094080428 container create 555e8a7186d3ad36526f772b947d24dff5e6af5e45bb9bb7faf0acad48b9c7f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_antonelli, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 04:09:38 compute-0 podman[283160]: 2025-10-11 04:09:38.394413611 +0000 UTC m=+0.035748767 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:09:38 compute-0 systemd[1]: Started libpod-conmon-555e8a7186d3ad36526f772b947d24dff5e6af5e45bb9bb7faf0acad48b9c7f7.scope.
Oct 11 04:09:38 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:09:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bccec28d790e028bc03c8eaeb6836d511b3bb7f19ce384fa068bb175411f9b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:09:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bccec28d790e028bc03c8eaeb6836d511b3bb7f19ce384fa068bb175411f9b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:09:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bccec28d790e028bc03c8eaeb6836d511b3bb7f19ce384fa068bb175411f9b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:09:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bccec28d790e028bc03c8eaeb6836d511b3bb7f19ce384fa068bb175411f9b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:09:38 compute-0 podman[283160]: 2025-10-11 04:09:38.549113933 +0000 UTC m=+0.190449029 container init 555e8a7186d3ad36526f772b947d24dff5e6af5e45bb9bb7faf0acad48b9c7f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_antonelli, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:09:38 compute-0 podman[283160]: 2025-10-11 04:09:38.559584727 +0000 UTC m=+0.200919783 container start 555e8a7186d3ad36526f772b947d24dff5e6af5e45bb9bb7faf0acad48b9c7f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct 11 04:09:38 compute-0 podman[283160]: 2025-10-11 04:09:38.607640099 +0000 UTC m=+0.248975195 container attach 555e8a7186d3ad36526f772b947d24dff5e6af5e45bb9bb7faf0acad48b9c7f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.625 2 INFO nova.virt.libvirt.driver [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Creating config drive at /var/lib/nova/instances/3d2a66c2-9869-4f0a-a27f-db3a14d43466/disk.config
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.631 2 DEBUG oslo_concurrency.processutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3d2a66c2-9869-4f0a-a27f-db3a14d43466/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7rj9piv2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.666 2 DEBUG nova.network.neutron [req-64ac0f40-b05b-4848-9cc6-f2ed7f3cd9e2 req-a3e38afc-eacc-447c-a2e8-f9b83dc3becc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Updated VIF entry in instance network info cache for port 8701ce4d-adc7-4369-9f76-cf6dea290bff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.667 2 DEBUG nova.network.neutron [req-64ac0f40-b05b-4848-9cc6-f2ed7f3cd9e2 req-a3e38afc-eacc-447c-a2e8-f9b83dc3becc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Updating instance_info_cache with network_info: [{"id": "8701ce4d-adc7-4369-9f76-cf6dea290bff", "address": "fa:16:3e:5a:0f:e2", "network": {"id": "8cb72c94-41d7-40be-8ef7-9351e1b06d48", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1596968619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c04e56df694d49fdbb22c39773dfc036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8701ce4d-ad", "ovs_interfaceid": "8701ce4d-adc7-4369-9f76-cf6dea290bff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.695 2 DEBUG oslo_concurrency.lockutils [req-64ac0f40-b05b-4848-9cc6-f2ed7f3cd9e2 req-a3e38afc-eacc-447c-a2e8-f9b83dc3becc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-3d2a66c2-9869-4f0a-a27f-db3a14d43466" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.783 2 DEBUG oslo_concurrency.processutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3d2a66c2-9869-4f0a-a27f-db3a14d43466/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7rj9piv2" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.828 2 DEBUG nova.storage.rbd_utils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] rbd image 3d2a66c2-9869-4f0a-a27f-db3a14d43466_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:09:38 compute-0 nova_compute[259850]: 2025-10-11 04:09:38.836 2 DEBUG oslo_concurrency.processutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3d2a66c2-9869-4f0a-a27f-db3a14d43466/disk.config 3d2a66c2-9869-4f0a-a27f-db3a14d43466_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:09:38 compute-0 ceph-mon[74273]: pgmap v1233: 305 pgs: 305 active+clean; 88 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.4 KiB/s wr, 46 op/s
Oct 11 04:09:38 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3225885807' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:09:38 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1197463477' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:09:39 compute-0 nova_compute[259850]: 2025-10-11 04:09:39.099 2 DEBUG oslo_concurrency.processutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3d2a66c2-9869-4f0a-a27f-db3a14d43466/disk.config 3d2a66c2-9869-4f0a-a27f-db3a14d43466_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:09:39 compute-0 nova_compute[259850]: 2025-10-11 04:09:39.100 2 INFO nova.virt.libvirt.driver [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Deleting local config drive /var/lib/nova/instances/3d2a66c2-9869-4f0a-a27f-db3a14d43466/disk.config because it was imported into RBD.
Oct 11 04:09:39 compute-0 NetworkManager[44920]: <info>  [1760155779.1829] manager: (tap8701ce4d-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/71)
Oct 11 04:09:39 compute-0 kernel: tap8701ce4d-ad: entered promiscuous mode
Oct 11 04:09:39 compute-0 nova_compute[259850]: 2025-10-11 04:09:39.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:39 compute-0 ovn_controller[152025]: 2025-10-11T04:09:39Z|00123|binding|INFO|Claiming lport 8701ce4d-adc7-4369-9f76-cf6dea290bff for this chassis.
Oct 11 04:09:39 compute-0 ovn_controller[152025]: 2025-10-11T04:09:39Z|00124|binding|INFO|8701ce4d-adc7-4369-9f76-cf6dea290bff: Claiming fa:16:3e:5a:0f:e2 10.100.0.9
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:39.201 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:0f:e2 10.100.0.9'], port_security=['fa:16:3e:5a:0f:e2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '3d2a66c2-9869-4f0a-a27f-db3a14d43466', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8cb72c94-41d7-40be-8ef7-9351e1b06d48', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c04e56df694d49fdbb22c39773dfc036', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2736498f-8594-48ae-b459-bb8ac5ce5d5a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3458ebb-1a6a-4cc8-a158-43868faee92e, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=8701ce4d-adc7-4369-9f76-cf6dea290bff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:39.203 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 8701ce4d-adc7-4369-9f76-cf6dea290bff in datapath 8cb72c94-41d7-40be-8ef7-9351e1b06d48 bound to our chassis
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:39.204 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8cb72c94-41d7-40be-8ef7-9351e1b06d48
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:39.229 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[6c06d1ab-3b2e-45cc-b344-b36eabb3e941]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:39.231 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8cb72c94-41 in ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 04:09:39 compute-0 systemd-machined[214869]: New machine qemu-12-instance-0000000c.
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:39.235 267637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8cb72c94-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:39.235 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[e2b8fd4c-9469-4f42-8498-dad52c705b2e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:39.236 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[52474cae-1e0b-4cb8-b3db-fa3127ecb4e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:39.246 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[0667bf38-7a46-4703-8ccd-21e54e703536]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:39 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Oct 11 04:09:39 compute-0 systemd-udevd[283239]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:39.276 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[655604eb-d107-472b-a3d5-997eeb79d604]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:39 compute-0 NetworkManager[44920]: <info>  [1760155779.2788] device (tap8701ce4d-ad): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:09:39 compute-0 NetworkManager[44920]: <info>  [1760155779.2795] device (tap8701ce4d-ad): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:09:39 compute-0 nova_compute[259850]: 2025-10-11 04:09:39.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:39 compute-0 ovn_controller[152025]: 2025-10-11T04:09:39Z|00125|binding|INFO|Setting lport 8701ce4d-adc7-4369-9f76-cf6dea290bff ovn-installed in OVS
Oct 11 04:09:39 compute-0 ovn_controller[152025]: 2025-10-11T04:09:39Z|00126|binding|INFO|Setting lport 8701ce4d-adc7-4369-9f76-cf6dea290bff up in Southbound
Oct 11 04:09:39 compute-0 nova_compute[259850]: 2025-10-11 04:09:39.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:39.313 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[f426f3a9-0eb6-41fc-b878-b39823b9ed0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:39 compute-0 systemd-udevd[283246]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:09:39 compute-0 NetworkManager[44920]: <info>  [1760155779.3197] manager: (tap8cb72c94-40): new Veth device (/org/freedesktop/NetworkManager/Devices/72)
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:39.322 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[3a999b8f-55cb-44d5-8f2e-e464f998f6b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:39.360 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[4422019b-4173-4c43-a49e-607791d96166]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:39.363 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[25745f43-d05e-4c90-9934-135f553b3868]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:39 compute-0 NetworkManager[44920]: <info>  [1760155779.3873] device (tap8cb72c94-40): carrier: link connected
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:39.392 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[8d9a1964-8f85-4f60-9f23-61a912fe85ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:39.409 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[6d10fcf0-6003-45ce-a88e-108fdddc0b95]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8cb72c94-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:21:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 416442, 'reachable_time': 17039, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283271, 'error': None, 'target': 'ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:39.432 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[7bcf8025-477a-47ae-a5ad-4b88b63d24dc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe36:210e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 416442, 'tstamp': 416442}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283272, 'error': None, 'target': 'ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:39.456 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[99d6a5fd-4438-45ef-b313-6ccfbc9b11b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8cb72c94-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:21:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 416442, 'reachable_time': 17039, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283275, 'error': None, 'target': 'ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]: {
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:     "0": [
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:         {
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "devices": [
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "/dev/loop3"
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             ],
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "lv_name": "ceph_lv0",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "lv_size": "21470642176",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "name": "ceph_lv0",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "tags": {
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.cluster_name": "ceph",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.crush_device_class": "",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.encrypted": "0",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.osd_id": "0",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.type": "block",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.vdo": "0"
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             },
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "type": "block",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "vg_name": "ceph_vg0"
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:         }
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:     ],
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:     "1": [
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:         {
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "devices": [
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "/dev/loop4"
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             ],
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "lv_name": "ceph_lv1",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "lv_size": "21470642176",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "name": "ceph_lv1",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "tags": {
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.cluster_name": "ceph",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.crush_device_class": "",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.encrypted": "0",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.osd_id": "1",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.type": "block",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.vdo": "0"
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             },
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "type": "block",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "vg_name": "ceph_vg1"
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:         }
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:     ],
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:     "2": [
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:         {
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "devices": [
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "/dev/loop5"
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             ],
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "lv_name": "ceph_lv2",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "lv_size": "21470642176",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "name": "ceph_lv2",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "tags": {
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.cluster_name": "ceph",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.crush_device_class": "",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.encrypted": "0",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.osd_id": "2",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.type": "block",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:                 "ceph.vdo": "0"
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             },
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "type": "block",
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:             "vg_name": "ceph_vg2"
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:         }
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]:     ]
Oct 11 04:09:39 compute-0 dazzling_antonelli[283177]: }
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:39.509 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[aa4beebd-87b3-416a-9f3a-180ea41439f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:39 compute-0 systemd[1]: libpod-555e8a7186d3ad36526f772b947d24dff5e6af5e45bb9bb7faf0acad48b9c7f7.scope: Deactivated successfully.
Oct 11 04:09:39 compute-0 podman[283160]: 2025-10-11 04:09:39.513713242 +0000 UTC m=+1.155048308 container died 555e8a7186d3ad36526f772b947d24dff5e6af5e45bb9bb7faf0acad48b9c7f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:09:39 compute-0 nova_compute[259850]: 2025-10-11 04:09:39.535 2 DEBUG nova.compute.manager [req-9dc1f874-d2f6-4093-969a-820675633ec5 req-6f97e291-31d9-4311-8445-a3e8190ae714 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Received event network-vif-plugged-8701ce4d-adc7-4369-9f76-cf6dea290bff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:09:39 compute-0 nova_compute[259850]: 2025-10-11 04:09:39.535 2 DEBUG oslo_concurrency.lockutils [req-9dc1f874-d2f6-4093-969a-820675633ec5 req-6f97e291-31d9-4311-8445-a3e8190ae714 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:09:39 compute-0 nova_compute[259850]: 2025-10-11 04:09:39.536 2 DEBUG oslo_concurrency.lockutils [req-9dc1f874-d2f6-4093-969a-820675633ec5 req-6f97e291-31d9-4311-8445-a3e8190ae714 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:09:39 compute-0 nova_compute[259850]: 2025-10-11 04:09:39.536 2 DEBUG oslo_concurrency.lockutils [req-9dc1f874-d2f6-4093-969a-820675633ec5 req-6f97e291-31d9-4311-8445-a3e8190ae714 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:09:39 compute-0 nova_compute[259850]: 2025-10-11 04:09:39.536 2 DEBUG nova.compute.manager [req-9dc1f874-d2f6-4093-969a-820675633ec5 req-6f97e291-31d9-4311-8445-a3e8190ae714 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Processing event network-vif-plugged-8701ce4d-adc7-4369-9f76-cf6dea290bff _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:09:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bccec28d790e028bc03c8eaeb6836d511b3bb7f19ce384fa068bb175411f9b2-merged.mount: Deactivated successfully.
Oct 11 04:09:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 134 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 2.7 MiB/s wr, 110 op/s
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:39.572 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[172134c9-5f67-47bf-9348-15ea92eaad3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:39.575 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8cb72c94-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:39.575 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:39.575 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8cb72c94-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:09:39 compute-0 kernel: tap8cb72c94-40: entered promiscuous mode
Oct 11 04:09:39 compute-0 NetworkManager[44920]: <info>  [1760155779.5791] manager: (tap8cb72c94-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Oct 11 04:09:39 compute-0 nova_compute[259850]: 2025-10-11 04:09:39.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:39 compute-0 nova_compute[259850]: 2025-10-11 04:09:39.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:39.584 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8cb72c94-40, col_values=(('external_ids', {'iface-id': '34d69504-322d-456b-93e7-c4c1d52774df'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:09:39 compute-0 ovn_controller[152025]: 2025-10-11T04:09:39Z|00127|binding|INFO|Releasing lport 34d69504-322d-456b-93e7-c4c1d52774df from this chassis (sb_readonly=0)
Oct 11 04:09:39 compute-0 nova_compute[259850]: 2025-10-11 04:09:39.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:39 compute-0 podman[283160]: 2025-10-11 04:09:39.602066608 +0000 UTC m=+1.243401674 container remove 555e8a7186d3ad36526f772b947d24dff5e6af5e45bb9bb7faf0acad48b9c7f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_antonelli, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:09:39 compute-0 systemd[1]: libpod-conmon-555e8a7186d3ad36526f772b947d24dff5e6af5e45bb9bb7faf0acad48b9c7f7.scope: Deactivated successfully.
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:39.613 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8cb72c94-41d7-40be-8ef7-9351e1b06d48.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8cb72c94-41d7-40be-8ef7-9351e1b06d48.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 04:09:39 compute-0 nova_compute[259850]: 2025-10-11 04:09:39.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:39.624 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[6eae5f53-da1a-4205-ac2e-252b91b59af1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:39.625 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: global
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]:     log         /dev/log local0 debug
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]:     log-tag     haproxy-metadata-proxy-8cb72c94-41d7-40be-8ef7-9351e1b06d48
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]:     user        root
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]:     group       root
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]:     maxconn     1024
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]:     pidfile     /var/lib/neutron/external/pids/8cb72c94-41d7-40be-8ef7-9351e1b06d48.pid.haproxy
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]:     daemon
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: defaults
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]:     log global
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]:     mode http
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]:     option httplog
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]:     option dontlognull
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]:     option http-server-close
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]:     option forwardfor
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]:     retries                 3
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]:     timeout http-request    30s
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]:     timeout connect         30s
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]:     timeout client          32s
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]:     timeout server          32s
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]:     timeout http-keep-alive 30s
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: listen listener
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]:     bind 169.254.169.254:80
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]:     http-request add-header X-OVN-Network-ID 8cb72c94-41d7-40be-8ef7-9351e1b06d48
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 04:09:39 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:09:39.631 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48', 'env', 'PROCESS_TAG=haproxy-8cb72c94-41d7-40be-8ef7-9351e1b06d48', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8cb72c94-41d7-40be-8ef7-9351e1b06d48.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 04:09:39 compute-0 sudo[283012]: pam_unix(sudo:session): session closed for user root
Oct 11 04:09:39 compute-0 sudo[283296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:09:39 compute-0 sudo[283296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:09:39 compute-0 sudo[283296]: pam_unix(sudo:session): session closed for user root
Oct 11 04:09:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:09:39 compute-0 sudo[283322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:09:39 compute-0 sudo[283322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:09:39 compute-0 sudo[283322]: pam_unix(sudo:session): session closed for user root
Oct 11 04:09:39 compute-0 sudo[283354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:09:39 compute-0 sudo[283354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:09:39 compute-0 sudo[283354]: pam_unix(sudo:session): session closed for user root
Oct 11 04:09:40 compute-0 sudo[283402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 04:09:40 compute-0 sudo[283402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:09:40 compute-0 podman[283450]: 2025-10-11 04:09:40.049735174 +0000 UTC m=+0.038038142 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 04:09:40 compute-0 podman[283450]: 2025-10-11 04:09:40.147478474 +0000 UTC m=+0.135781432 container create e0c60e46670e894b12b03d45b22a2e2d0c93661734ef5e673d0a779afd6d2e5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 11 04:09:40 compute-0 systemd[1]: Started libpod-conmon-e0c60e46670e894b12b03d45b22a2e2d0c93661734ef5e673d0a779afd6d2e5c.scope.
Oct 11 04:09:40 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:09:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b6dc132fc2e99bdab9f1bce2eb437599b7505894b0369b7b8333c054fdca669/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:09:40 compute-0 podman[283450]: 2025-10-11 04:09:40.319537735 +0000 UTC m=+0.307840683 container init e0c60e46670e894b12b03d45b22a2e2d0c93661734ef5e673d0a779afd6d2e5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:09:40 compute-0 podman[283450]: 2025-10-11 04:09:40.338276842 +0000 UTC m=+0.326579790 container start e0c60e46670e894b12b03d45b22a2e2d0c93661734ef5e673d0a779afd6d2e5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009)
Oct 11 04:09:40 compute-0 neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48[283484]: [NOTICE]   (283502) : New worker (283504) forked
Oct 11 04:09:40 compute-0 neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48[283484]: [NOTICE]   (283502) : Loading success.
Oct 11 04:09:40 compute-0 podman[283527]: 2025-10-11 04:09:40.525836639 +0000 UTC m=+0.063538869 container create 1e794cbf7dc72d09868f5ab77b747a0009801295cff67c7dfee29036d79fc4e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goldstine, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:09:40 compute-0 podman[283527]: 2025-10-11 04:09:40.493729716 +0000 UTC m=+0.031432026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:09:40 compute-0 systemd[1]: Started libpod-conmon-1e794cbf7dc72d09868f5ab77b747a0009801295cff67c7dfee29036d79fc4e2.scope.
Oct 11 04:09:40 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:09:40 compute-0 podman[283527]: 2025-10-11 04:09:40.683883226 +0000 UTC m=+0.221585476 container init 1e794cbf7dc72d09868f5ab77b747a0009801295cff67c7dfee29036d79fc4e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goldstine, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 04:09:40 compute-0 podman[283527]: 2025-10-11 04:09:40.700074151 +0000 UTC m=+0.237776381 container start 1e794cbf7dc72d09868f5ab77b747a0009801295cff67c7dfee29036d79fc4e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goldstine, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:09:40 compute-0 systemd[1]: libpod-1e794cbf7dc72d09868f5ab77b747a0009801295cff67c7dfee29036d79fc4e2.scope: Deactivated successfully.
Oct 11 04:09:40 compute-0 silly_goldstine[283544]: 167 167
Oct 11 04:09:40 compute-0 conmon[283544]: conmon 1e794cbf7dc72d09868f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1e794cbf7dc72d09868f5ab77b747a0009801295cff67c7dfee29036d79fc4e2.scope/container/memory.events
Oct 11 04:09:40 compute-0 podman[283527]: 2025-10-11 04:09:40.713724155 +0000 UTC m=+0.251426395 container attach 1e794cbf7dc72d09868f5ab77b747a0009801295cff67c7dfee29036d79fc4e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goldstine, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 04:09:40 compute-0 podman[283527]: 2025-10-11 04:09:40.714419285 +0000 UTC m=+0.252121505 container died 1e794cbf7dc72d09868f5ab77b747a0009801295cff67c7dfee29036d79fc4e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goldstine, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:09:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-9aa5f40d77bf980f27d7d03aedada3bc9469ef00554345b92e63c039792d0786-merged.mount: Deactivated successfully.
Oct 11 04:09:40 compute-0 podman[283527]: 2025-10-11 04:09:40.773054885 +0000 UTC m=+0.310757125 container remove 1e794cbf7dc72d09868f5ab77b747a0009801295cff67c7dfee29036d79fc4e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goldstine, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Oct 11 04:09:40 compute-0 systemd[1]: libpod-conmon-1e794cbf7dc72d09868f5ab77b747a0009801295cff67c7dfee29036d79fc4e2.scope: Deactivated successfully.
Oct 11 04:09:40 compute-0 nova_compute[259850]: 2025-10-11 04:09:40.872 2 DEBUG nova.compute.manager [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:09:40 compute-0 nova_compute[259850]: 2025-10-11 04:09:40.873 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155780.872035, 3d2a66c2-9869-4f0a-a27f-db3a14d43466 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:09:40 compute-0 nova_compute[259850]: 2025-10-11 04:09:40.873 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] VM Started (Lifecycle Event)
Oct 11 04:09:40 compute-0 nova_compute[259850]: 2025-10-11 04:09:40.881 2 DEBUG nova.virt.libvirt.driver [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:09:40 compute-0 nova_compute[259850]: 2025-10-11 04:09:40.886 2 INFO nova.virt.libvirt.driver [-] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Instance spawned successfully.
Oct 11 04:09:40 compute-0 nova_compute[259850]: 2025-10-11 04:09:40.886 2 DEBUG nova.virt.libvirt.driver [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:09:40 compute-0 nova_compute[259850]: 2025-10-11 04:09:40.894 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:09:40 compute-0 nova_compute[259850]: 2025-10-11 04:09:40.900 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:09:40 compute-0 nova_compute[259850]: 2025-10-11 04:09:40.915 2 DEBUG nova.virt.libvirt.driver [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:09:40 compute-0 nova_compute[259850]: 2025-10-11 04:09:40.916 2 DEBUG nova.virt.libvirt.driver [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:09:40 compute-0 nova_compute[259850]: 2025-10-11 04:09:40.917 2 DEBUG nova.virt.libvirt.driver [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:09:40 compute-0 nova_compute[259850]: 2025-10-11 04:09:40.918 2 DEBUG nova.virt.libvirt.driver [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:09:40 compute-0 nova_compute[259850]: 2025-10-11 04:09:40.919 2 DEBUG nova.virt.libvirt.driver [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:09:40 compute-0 nova_compute[259850]: 2025-10-11 04:09:40.920 2 DEBUG nova.virt.libvirt.driver [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:09:40 compute-0 nova_compute[259850]: 2025-10-11 04:09:40.927 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:09:40 compute-0 nova_compute[259850]: 2025-10-11 04:09:40.927 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155780.873026, 3d2a66c2-9869-4f0a-a27f-db3a14d43466 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:09:40 compute-0 nova_compute[259850]: 2025-10-11 04:09:40.928 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] VM Paused (Lifecycle Event)
Oct 11 04:09:40 compute-0 ceph-mon[74273]: pgmap v1234: 305 pgs: 305 active+clean; 134 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 2.7 MiB/s wr, 110 op/s
Oct 11 04:09:40 compute-0 nova_compute[259850]: 2025-10-11 04:09:40.977 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:09:40 compute-0 nova_compute[259850]: 2025-10-11 04:09:40.981 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155780.879582, 3d2a66c2-9869-4f0a-a27f-db3a14d43466 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:09:40 compute-0 nova_compute[259850]: 2025-10-11 04:09:40.981 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] VM Resumed (Lifecycle Event)
Oct 11 04:09:40 compute-0 podman[283568]: 2025-10-11 04:09:40.998629691 +0000 UTC m=+0.045883262 container create 0963e4835cf374ee0f020e363791c0bc09bc8237824a3884bfff8e7f61c61321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_spence, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 04:09:41 compute-0 nova_compute[259850]: 2025-10-11 04:09:41.009 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:09:41 compute-0 nova_compute[259850]: 2025-10-11 04:09:41.016 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:09:41 compute-0 nova_compute[259850]: 2025-10-11 04:09:41.020 2 INFO nova.compute.manager [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Took 8.40 seconds to spawn the instance on the hypervisor.
Oct 11 04:09:41 compute-0 nova_compute[259850]: 2025-10-11 04:09:41.021 2 DEBUG nova.compute.manager [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:09:41 compute-0 nova_compute[259850]: 2025-10-11 04:09:41.055 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:09:41 compute-0 podman[283568]: 2025-10-11 04:09:40.977357763 +0000 UTC m=+0.024611344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:09:41 compute-0 systemd[1]: Started libpod-conmon-0963e4835cf374ee0f020e363791c0bc09bc8237824a3884bfff8e7f61c61321.scope.
Oct 11 04:09:41 compute-0 nova_compute[259850]: 2025-10-11 04:09:41.095 2 INFO nova.compute.manager [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Took 9.37 seconds to build instance.
Oct 11 04:09:41 compute-0 nova_compute[259850]: 2025-10-11 04:09:41.114 2 DEBUG oslo_concurrency.lockutils [None req-da913559-35de-488f-b8c7-13ddac2bc74c fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.453s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:09:41 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321ab7e5a2a0d2aa7a5f2a17df6c213f539ff97996c1f36eb80edffcb554ace5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321ab7e5a2a0d2aa7a5f2a17df6c213f539ff97996c1f36eb80edffcb554ace5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321ab7e5a2a0d2aa7a5f2a17df6c213f539ff97996c1f36eb80edffcb554ace5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321ab7e5a2a0d2aa7a5f2a17df6c213f539ff97996c1f36eb80edffcb554ace5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:09:41 compute-0 podman[283568]: 2025-10-11 04:09:41.151606825 +0000 UTC m=+0.198860406 container init 0963e4835cf374ee0f020e363791c0bc09bc8237824a3884bfff8e7f61c61321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_spence, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:09:41 compute-0 podman[283568]: 2025-10-11 04:09:41.163317835 +0000 UTC m=+0.210571396 container start 0963e4835cf374ee0f020e363791c0bc09bc8237824a3884bfff8e7f61c61321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 04:09:41 compute-0 podman[283568]: 2025-10-11 04:09:41.187065673 +0000 UTC m=+0.234319284 container attach 0963e4835cf374ee0f020e363791c0bc09bc8237824a3884bfff8e7f61c61321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_spence, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:09:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 134 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 2.7 MiB/s wr, 64 op/s
Oct 11 04:09:41 compute-0 nova_compute[259850]: 2025-10-11 04:09:41.658 2 DEBUG nova.compute.manager [req-8814a070-47a9-4b06-a596-2e67219d43ba req-56b55df1-7396-4f13-8815-38ac943b71a7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Received event network-vif-plugged-8701ce4d-adc7-4369-9f76-cf6dea290bff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:09:41 compute-0 nova_compute[259850]: 2025-10-11 04:09:41.659 2 DEBUG oslo_concurrency.lockutils [req-8814a070-47a9-4b06-a596-2e67219d43ba req-56b55df1-7396-4f13-8815-38ac943b71a7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:09:41 compute-0 nova_compute[259850]: 2025-10-11 04:09:41.659 2 DEBUG oslo_concurrency.lockutils [req-8814a070-47a9-4b06-a596-2e67219d43ba req-56b55df1-7396-4f13-8815-38ac943b71a7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:09:41 compute-0 nova_compute[259850]: 2025-10-11 04:09:41.659 2 DEBUG oslo_concurrency.lockutils [req-8814a070-47a9-4b06-a596-2e67219d43ba req-56b55df1-7396-4f13-8815-38ac943b71a7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:09:41 compute-0 nova_compute[259850]: 2025-10-11 04:09:41.659 2 DEBUG nova.compute.manager [req-8814a070-47a9-4b06-a596-2e67219d43ba req-56b55df1-7396-4f13-8815-38ac943b71a7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] No waiting events found dispatching network-vif-plugged-8701ce4d-adc7-4369-9f76-cf6dea290bff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:09:41 compute-0 nova_compute[259850]: 2025-10-11 04:09:41.659 2 WARNING nova.compute.manager [req-8814a070-47a9-4b06-a596-2e67219d43ba req-56b55df1-7396-4f13-8815-38ac943b71a7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Received unexpected event network-vif-plugged-8701ce4d-adc7-4369-9f76-cf6dea290bff for instance with vm_state active and task_state None.
Oct 11 04:09:42 compute-0 hardcore_spence[283584]: {
Oct 11 04:09:42 compute-0 hardcore_spence[283584]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 04:09:42 compute-0 hardcore_spence[283584]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:09:42 compute-0 hardcore_spence[283584]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:09:42 compute-0 hardcore_spence[283584]:         "osd_id": 1,
Oct 11 04:09:42 compute-0 hardcore_spence[283584]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:09:42 compute-0 hardcore_spence[283584]:         "type": "bluestore"
Oct 11 04:09:42 compute-0 hardcore_spence[283584]:     },
Oct 11 04:09:42 compute-0 hardcore_spence[283584]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 04:09:42 compute-0 hardcore_spence[283584]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:09:42 compute-0 hardcore_spence[283584]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:09:42 compute-0 hardcore_spence[283584]:         "osd_id": 2,
Oct 11 04:09:42 compute-0 hardcore_spence[283584]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:09:42 compute-0 hardcore_spence[283584]:         "type": "bluestore"
Oct 11 04:09:42 compute-0 hardcore_spence[283584]:     },
Oct 11 04:09:42 compute-0 hardcore_spence[283584]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 04:09:42 compute-0 hardcore_spence[283584]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:09:42 compute-0 hardcore_spence[283584]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:09:42 compute-0 hardcore_spence[283584]:         "osd_id": 0,
Oct 11 04:09:42 compute-0 hardcore_spence[283584]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:09:42 compute-0 hardcore_spence[283584]:         "type": "bluestore"
Oct 11 04:09:42 compute-0 hardcore_spence[283584]:     }
Oct 11 04:09:42 compute-0 hardcore_spence[283584]: }
Oct 11 04:09:42 compute-0 systemd[1]: libpod-0963e4835cf374ee0f020e363791c0bc09bc8237824a3884bfff8e7f61c61321.scope: Deactivated successfully.
Oct 11 04:09:42 compute-0 podman[283568]: 2025-10-11 04:09:42.291080274 +0000 UTC m=+1.338333845 container died 0963e4835cf374ee0f020e363791c0bc09bc8237824a3884bfff8e7f61c61321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:09:42 compute-0 systemd[1]: libpod-0963e4835cf374ee0f020e363791c0bc09bc8237824a3884bfff8e7f61c61321.scope: Consumed 1.087s CPU time.
Oct 11 04:09:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-321ab7e5a2a0d2aa7a5f2a17df6c213f539ff97996c1f36eb80edffcb554ace5-merged.mount: Deactivated successfully.
Oct 11 04:09:42 compute-0 podman[283568]: 2025-10-11 04:09:42.398745303 +0000 UTC m=+1.445998874 container remove 0963e4835cf374ee0f020e363791c0bc09bc8237824a3884bfff8e7f61c61321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:09:42 compute-0 systemd[1]: libpod-conmon-0963e4835cf374ee0f020e363791c0bc09bc8237824a3884bfff8e7f61c61321.scope: Deactivated successfully.
Oct 11 04:09:42 compute-0 sudo[283402]: pam_unix(sudo:session): session closed for user root
Oct 11 04:09:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:09:42 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:09:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:09:42 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:09:42 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 7bf53f1c-68a0-4cd6-9ccd-d3e671a35b1a does not exist
Oct 11 04:09:42 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev ac7810a3-867b-4fe5-b0a6-531167ac275d does not exist
Oct 11 04:09:42 compute-0 sudo[283629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:09:42 compute-0 sudo[283629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:09:42 compute-0 sudo[283629]: pam_unix(sudo:session): session closed for user root
Oct 11 04:09:42 compute-0 nova_compute[259850]: 2025-10-11 04:09:42.565 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760155767.564347, e879a322-2581-43da-916b-423a94821ed0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:09:42 compute-0 nova_compute[259850]: 2025-10-11 04:09:42.565 2 INFO nova.compute.manager [-] [instance: e879a322-2581-43da-916b-423a94821ed0] VM Stopped (Lifecycle Event)
Oct 11 04:09:42 compute-0 nova_compute[259850]: 2025-10-11 04:09:42.592 2 DEBUG nova.compute.manager [None req-7a850389-61be-4f43-80fb-376eda54a16b - - - - - -] [instance: e879a322-2581-43da-916b-423a94821ed0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:09:42 compute-0 sudo[283654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:09:42 compute-0 sudo[283654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:09:42 compute-0 sudo[283654]: pam_unix(sudo:session): session closed for user root
Oct 11 04:09:42 compute-0 nova_compute[259850]: 2025-10-11 04:09:42.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:42 compute-0 ceph-mon[74273]: pgmap v1235: 305 pgs: 305 active+clean; 134 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 2.7 MiB/s wr, 64 op/s
Oct 11 04:09:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:09:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:09:43 compute-0 nova_compute[259850]: 2025-10-11 04:09:43.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:43 compute-0 nova_compute[259850]: 2025-10-11 04:09:43.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:43 compute-0 NetworkManager[44920]: <info>  [1760155783.4586] manager: (patch-provnet-86cd831a-6a58-4ba8-a51c-57fa1a3acacc-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Oct 11 04:09:43 compute-0 NetworkManager[44920]: <info>  [1760155783.4600] manager: (patch-br-int-to-provnet-86cd831a-6a58-4ba8-a51c-57fa1a3acacc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Oct 11 04:09:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 1.0 GiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 120 MiB/s wr, 462 op/s
Oct 11 04:09:43 compute-0 nova_compute[259850]: 2025-10-11 04:09:43.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:43 compute-0 ovn_controller[152025]: 2025-10-11T04:09:43Z|00128|binding|INFO|Releasing lport 34d69504-322d-456b-93e7-c4c1d52774df from this chassis (sb_readonly=0)
Oct 11 04:09:43 compute-0 nova_compute[259850]: 2025-10-11 04:09:43.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:43 compute-0 nova_compute[259850]: 2025-10-11 04:09:43.797 2 DEBUG nova.compute.manager [req-b28f1830-ea91-4ea0-a561-9dcfaa86732e req-67fda669-1c12-47e9-b342-b1ea8d388147 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Received event network-changed-8701ce4d-adc7-4369-9f76-cf6dea290bff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:09:43 compute-0 nova_compute[259850]: 2025-10-11 04:09:43.798 2 DEBUG nova.compute.manager [req-b28f1830-ea91-4ea0-a561-9dcfaa86732e req-67fda669-1c12-47e9-b342-b1ea8d388147 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Refreshing instance network info cache due to event network-changed-8701ce4d-adc7-4369-9f76-cf6dea290bff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:09:43 compute-0 nova_compute[259850]: 2025-10-11 04:09:43.799 2 DEBUG oslo_concurrency.lockutils [req-b28f1830-ea91-4ea0-a561-9dcfaa86732e req-67fda669-1c12-47e9-b342-b1ea8d388147 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-3d2a66c2-9869-4f0a-a27f-db3a14d43466" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:09:43 compute-0 nova_compute[259850]: 2025-10-11 04:09:43.799 2 DEBUG oslo_concurrency.lockutils [req-b28f1830-ea91-4ea0-a561-9dcfaa86732e req-67fda669-1c12-47e9-b342-b1ea8d388147 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-3d2a66c2-9869-4f0a-a27f-db3a14d43466" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:09:43 compute-0 nova_compute[259850]: 2025-10-11 04:09:43.800 2 DEBUG nova.network.neutron [req-b28f1830-ea91-4ea0-a561-9dcfaa86732e req-67fda669-1c12-47e9-b342-b1ea8d388147 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Refreshing network info cache for port 8701ce4d-adc7-4369-9f76-cf6dea290bff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:09:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:09:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Oct 11 04:09:44 compute-0 ceph-mon[74273]: pgmap v1236: 305 pgs: 305 active+clean; 1.0 GiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 120 MiB/s wr, 462 op/s
Oct 11 04:09:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Oct 11 04:09:44 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Oct 11 04:09:45 compute-0 unix_chkpwd[283683]: password check failed for user (root)
Oct 11 04:09:45 compute-0 sshd-session[283681]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119  user=root
Oct 11 04:09:45 compute-0 nova_compute[259850]: 2025-10-11 04:09:45.216 2 DEBUG nova.network.neutron [req-b28f1830-ea91-4ea0-a561-9dcfaa86732e req-67fda669-1c12-47e9-b342-b1ea8d388147 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Updated VIF entry in instance network info cache for port 8701ce4d-adc7-4369-9f76-cf6dea290bff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:09:45 compute-0 nova_compute[259850]: 2025-10-11 04:09:45.216 2 DEBUG nova.network.neutron [req-b28f1830-ea91-4ea0-a561-9dcfaa86732e req-67fda669-1c12-47e9-b342-b1ea8d388147 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Updating instance_info_cache with network_info: [{"id": "8701ce4d-adc7-4369-9f76-cf6dea290bff", "address": "fa:16:3e:5a:0f:e2", "network": {"id": "8cb72c94-41d7-40be-8ef7-9351e1b06d48", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1596968619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c04e56df694d49fdbb22c39773dfc036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8701ce4d-ad", "ovs_interfaceid": "8701ce4d-adc7-4369-9f76-cf6dea290bff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:09:45 compute-0 nova_compute[259850]: 2025-10-11 04:09:45.236 2 DEBUG oslo_concurrency.lockutils [req-b28f1830-ea91-4ea0-a561-9dcfaa86732e req-67fda669-1c12-47e9-b342-b1ea8d388147 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-3d2a66c2-9869-4f0a-a27f-db3a14d43466" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:09:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 1.0 GiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 110 MiB/s wr, 427 op/s
Oct 11 04:09:45 compute-0 ceph-mon[74273]: osdmap e245: 3 total, 3 up, 3 in
Oct 11 04:09:46 compute-0 podman[283684]: 2025-10-11 04:09:46.43591762 +0000 UTC m=+0.132127178 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:09:46 compute-0 sshd-session[283681]: Failed password for root from 80.94.93.119 port 60440 ssh2
Oct 11 04:09:46 compute-0 ceph-mon[74273]: pgmap v1238: 305 pgs: 305 active+clean; 1.0 GiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 110 MiB/s wr, 427 op/s
Oct 11 04:09:47 compute-0 unix_chkpwd[283711]: password check failed for user (root)
Oct 11 04:09:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 1.0 GiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 96 MiB/s wr, 370 op/s
Oct 11 04:09:47 compute-0 nova_compute[259850]: 2025-10-11 04:09:47.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:48 compute-0 nova_compute[259850]: 2025-10-11 04:09:48.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:09:48 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/726395510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:09:48 compute-0 sshd-session[283681]: Failed password for root from 80.94.93.119 port 60440 ssh2
Oct 11 04:09:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Oct 11 04:09:48 compute-0 ceph-mon[74273]: pgmap v1239: 305 pgs: 305 active+clean; 1.0 GiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 96 MiB/s wr, 370 op/s
Oct 11 04:09:48 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/726395510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:09:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Oct 11 04:09:49 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Oct 11 04:09:49 compute-0 unix_chkpwd[283712]: password check failed for user (root)
Oct 11 04:09:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 128 MiB/s wr, 443 op/s
Oct 11 04:09:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:09:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:09:49 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2950869480' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:09:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Oct 11 04:09:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Oct 11 04:09:50 compute-0 ceph-mon[74273]: osdmap e246: 3 total, 3 up, 3 in
Oct 11 04:09:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2950869480' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:09:50 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Oct 11 04:09:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:09:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1183703503' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:09:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:09:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1183703503' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:09:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:09:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:09:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:09:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:09:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:09:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:09:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Oct 11 04:09:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Oct 11 04:09:51 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Oct 11 04:09:51 compute-0 ceph-mon[74273]: pgmap v1241: 305 pgs: 305 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 128 MiB/s wr, 443 op/s
Oct 11 04:09:51 compute-0 ceph-mon[74273]: osdmap e247: 3 total, 3 up, 3 in
Oct 11 04:09:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1183703503' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:09:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1183703503' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:09:51 compute-0 podman[283713]: 2025-10-11 04:09:51.404255516 +0000 UTC m=+0.096256139 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 04:09:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 15 MiB/s wr, 59 op/s
Oct 11 04:09:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:09:51 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1651750993' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:09:51 compute-0 sshd-session[283681]: Failed password for root from 80.94.93.119 port 60440 ssh2
Oct 11 04:09:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Oct 11 04:09:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Oct 11 04:09:52 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Oct 11 04:09:52 compute-0 ceph-mon[74273]: osdmap e248: 3 total, 3 up, 3 in
Oct 11 04:09:52 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1651750993' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:09:52 compute-0 nova_compute[259850]: 2025-10-11 04:09:52.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:53 compute-0 nova_compute[259850]: 2025-10-11 04:09:53.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Oct 11 04:09:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Oct 11 04:09:53 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Oct 11 04:09:53 compute-0 ceph-mon[74273]: pgmap v1244: 305 pgs: 305 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 15 MiB/s wr, 59 op/s
Oct 11 04:09:53 compute-0 ceph-mon[74273]: osdmap e249: 3 total, 3 up, 3 in
Oct 11 04:09:53 compute-0 sshd-session[283681]: Received disconnect from 80.94.93.119 port 60440:11:  [preauth]
Oct 11 04:09:53 compute-0 sshd-session[283681]: Disconnected from authenticating user root 80.94.93.119 port 60440 [preauth]
Oct 11 04:09:53 compute-0 sshd-session[283681]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119  user=root
Oct 11 04:09:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 1006 KiB/s rd, 6.4 MiB/s wr, 316 op/s
Oct 11 04:09:53 compute-0 unix_chkpwd[283733]: password check failed for user (root)
Oct 11 04:09:53 compute-0 sshd-session[283731]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119  user=root
Oct 11 04:09:54 compute-0 ovn_controller[152025]: 2025-10-11T04:09:54Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5a:0f:e2 10.100.0.9
Oct 11 04:09:54 compute-0 ovn_controller[152025]: 2025-10-11T04:09:54Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5a:0f:e2 10.100.0.9
Oct 11 04:09:54 compute-0 ceph-mon[74273]: osdmap e250: 3 total, 3 up, 3 in
Oct 11 04:09:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:09:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Oct 11 04:09:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Oct 11 04:09:54 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Oct 11 04:09:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:09:54 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1824223426' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:09:55 compute-0 ceph-mon[74273]: pgmap v1247: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 1006 KiB/s rd, 6.4 MiB/s wr, 316 op/s
Oct 11 04:09:55 compute-0 ceph-mon[74273]: osdmap e251: 3 total, 3 up, 3 in
Oct 11 04:09:55 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1824223426' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:09:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 884 KiB/s rd, 5.6 MiB/s wr, 278 op/s
Oct 11 04:09:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:09:56 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2845216911' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:09:56 compute-0 sshd-session[283731]: Failed password for root from 80.94.93.119 port 57928 ssh2
Oct 11 04:09:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Oct 11 04:09:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Oct 11 04:09:57 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:09:57.105875) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155797105915, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1585, "num_deletes": 263, "total_data_size": 2160285, "memory_usage": 2207280, "flush_reason": "Manual Compaction"}
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Oct 11 04:09:57 compute-0 ceph-mon[74273]: pgmap v1249: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 884 KiB/s rd, 5.6 MiB/s wr, 278 op/s
Oct 11 04:09:57 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2845216911' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155797123430, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 2099143, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24565, "largest_seqno": 26149, "table_properties": {"data_size": 2091631, "index_size": 4457, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16597, "raw_average_key_size": 21, "raw_value_size": 2076256, "raw_average_value_size": 2628, "num_data_blocks": 198, "num_entries": 790, "num_filter_entries": 790, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760155699, "oldest_key_time": 1760155699, "file_creation_time": 1760155797, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 17597 microseconds, and 6755 cpu microseconds.
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:09:57.123472) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 2099143 bytes OK
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:09:57.123491) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:09:57.124702) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:09:57.124718) EVENT_LOG_v1 {"time_micros": 1760155797124713, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:09:57.124730) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 2153079, prev total WAL file size 2153079, number of live WAL files 2.
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:09:57.125828) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(2049KB)], [56(9847KB)]
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155797125875, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 12182592, "oldest_snapshot_seqno": -1}
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5306 keys, 10464761 bytes, temperature: kUnknown
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155797215602, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 10464761, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10423490, "index_size": 26889, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 131909, "raw_average_key_size": 24, "raw_value_size": 10322149, "raw_average_value_size": 1945, "num_data_blocks": 1111, "num_entries": 5306, "num_filter_entries": 5306, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153731, "oldest_key_time": 0, "file_creation_time": 1760155797, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:09:57.215797) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 10464761 bytes
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:09:57.226774) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.7 rd, 116.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.6 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(10.8) write-amplify(5.0) OK, records in: 5838, records dropped: 532 output_compression: NoCompression
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:09:57.226794) EVENT_LOG_v1 {"time_micros": 1760155797226784, "job": 30, "event": "compaction_finished", "compaction_time_micros": 89773, "compaction_time_cpu_micros": 54511, "output_level": 6, "num_output_files": 1, "total_output_size": 10464761, "num_input_records": 5838, "num_output_records": 5306, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155797227329, "job": 30, "event": "table_file_deletion", "file_number": 58}
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155797229125, "job": 30, "event": "table_file_deletion", "file_number": 56}
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:09:57.125752) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:09:57.229202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:09:57.229209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:09:57.229211) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:09:57.229214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:09:57 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:09:57.229217) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:09:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 727 KiB/s rd, 4.6 MiB/s wr, 228 op/s
Oct 11 04:09:57 compute-0 nova_compute[259850]: 2025-10-11 04:09:57.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:57 compute-0 unix_chkpwd[283734]: password check failed for user (root)
Oct 11 04:09:58 compute-0 nova_compute[259850]: 2025-10-11 04:09:58.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:09:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Oct 11 04:09:58 compute-0 ceph-mon[74273]: osdmap e252: 3 total, 3 up, 3 in
Oct 11 04:09:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Oct 11 04:09:58 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Oct 11 04:09:59 compute-0 ceph-mon[74273]: pgmap v1251: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 727 KiB/s rd, 4.6 MiB/s wr, 228 op/s
Oct 11 04:09:59 compute-0 ceph-mon[74273]: osdmap e253: 3 total, 3 up, 3 in
Oct 11 04:09:59 compute-0 sshd-session[283731]: Failed password for root from 80.94.93.119 port 57928 ssh2
Oct 11 04:09:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 1.7 GiB data, 1.8 GiB used, 58 GiB / 60 GiB avail; 229 KiB/s rd, 87 MiB/s wr, 301 op/s
Oct 11 04:09:59 compute-0 unix_chkpwd[283736]: password check failed for user (root)
Oct 11 04:09:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:10:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Oct 11 04:10:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Oct 11 04:10:00 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Oct 11 04:10:00 compute-0 ceph-mon[74273]: pgmap v1253: 305 pgs: 305 active+clean; 1.7 GiB data, 1.8 GiB used, 58 GiB / 60 GiB avail; 229 KiB/s rd, 87 MiB/s wr, 301 op/s
Oct 11 04:10:00 compute-0 ceph-mon[74273]: osdmap e254: 3 total, 3 up, 3 in
Oct 11 04:10:01 compute-0 anacron[1070]: Job `cron.monthly' started
Oct 11 04:10:01 compute-0 anacron[1070]: Job `cron.monthly' terminated
Oct 11 04:10:01 compute-0 anacron[1070]: Normal exit (3 jobs run)
Oct 11 04:10:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Oct 11 04:10:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Oct 11 04:10:01 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Oct 11 04:10:01 compute-0 sshd-session[283731]: Failed password for root from 80.94.93.119 port 57928 ssh2
Oct 11 04:10:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 1.7 GiB data, 1.8 GiB used, 58 GiB / 60 GiB avail; 307 KiB/s rd, 117 MiB/s wr, 403 op/s
Oct 11 04:10:01 compute-0 sshd-session[283731]: Received disconnect from 80.94.93.119 port 57928:11:  [preauth]
Oct 11 04:10:01 compute-0 sshd-session[283731]: Disconnected from authenticating user root 80.94.93.119 port 57928 [preauth]
Oct 11 04:10:01 compute-0 sshd-session[283731]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119  user=root
Oct 11 04:10:02 compute-0 ceph-mon[74273]: osdmap e255: 3 total, 3 up, 3 in
Oct 11 04:10:02 compute-0 ceph-mon[74273]: pgmap v1256: 305 pgs: 305 active+clean; 1.7 GiB data, 1.8 GiB used, 58 GiB / 60 GiB avail; 307 KiB/s rd, 117 MiB/s wr, 403 op/s
Oct 11 04:10:02 compute-0 unix_chkpwd[283741]: password check failed for user (root)
Oct 11 04:10:02 compute-0 sshd-session[283739]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119  user=root
Oct 11 04:10:02 compute-0 nova_compute[259850]: 2025-10-11 04:10:02.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:03 compute-0 nova_compute[259850]: 2025-10-11 04:10:03.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Oct 11 04:10:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Oct 11 04:10:03 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Oct 11 04:10:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:10:03 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3524779304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:10:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 172 KiB/s rd, 92 MiB/s wr, 285 op/s
Oct 11 04:10:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Oct 11 04:10:04 compute-0 ceph-mon[74273]: osdmap e256: 3 total, 3 up, 3 in
Oct 11 04:10:04 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3524779304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:10:04 compute-0 ceph-mon[74273]: pgmap v1258: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 172 KiB/s rd, 92 MiB/s wr, 285 op/s
Oct 11 04:10:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Oct 11 04:10:04 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Oct 11 04:10:04 compute-0 podman[283742]: 2025-10-11 04:10:04.516663277 +0000 UTC m=+0.107696491 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 04:10:04 compute-0 podman[283743]: 2025-10-11 04:10:04.517551362 +0000 UTC m=+0.108786292 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 11 04:10:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:10:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Oct 11 04:10:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Oct 11 04:10:04 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Oct 11 04:10:05 compute-0 ceph-mon[74273]: osdmap e257: 3 total, 3 up, 3 in
Oct 11 04:10:05 compute-0 ceph-mon[74273]: osdmap e258: 3 total, 3 up, 3 in
Oct 11 04:10:05 compute-0 sshd-session[283739]: Failed password for root from 80.94.93.119 port 36014 ssh2
Oct 11 04:10:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 222 KiB/s rd, 119 MiB/s wr, 369 op/s
Oct 11 04:10:06 compute-0 unix_chkpwd[283782]: password check failed for user (root)
Oct 11 04:10:06 compute-0 ceph-mon[74273]: pgmap v1261: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 222 KiB/s rd, 119 MiB/s wr, 369 op/s
Oct 11 04:10:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:10:06 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4200033927' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:10:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Oct 11 04:10:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Oct 11 04:10:07 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Oct 11 04:10:07 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4200033927' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:10:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail
Oct 11 04:10:07 compute-0 nova_compute[259850]: 2025-10-11 04:10:07.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:08 compute-0 nova_compute[259850]: 2025-10-11 04:10:08.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:08 compute-0 sshd-session[283739]: Failed password for root from 80.94.93.119 port 36014 ssh2
Oct 11 04:10:08 compute-0 unix_chkpwd[283783]: password check failed for user (root)
Oct 11 04:10:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Oct 11 04:10:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Oct 11 04:10:08 compute-0 ceph-mon[74273]: osdmap e259: 3 total, 3 up, 3 in
Oct 11 04:10:08 compute-0 ceph-mon[74273]: pgmap v1263: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail
Oct 11 04:10:08 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Oct 11 04:10:09 compute-0 ceph-mon[74273]: osdmap e260: 3 total, 3 up, 3 in
Oct 11 04:10:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 4.0 MiB/s rd, 13 KiB/s wr, 111 op/s
Oct 11 04:10:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:10:10 compute-0 sshd-session[283739]: Failed password for root from 80.94.93.119 port 36014 ssh2
Oct 11 04:10:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:10:10 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1173512752' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:10:10 compute-0 ceph-mon[74273]: pgmap v1265: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 4.0 MiB/s rd, 13 KiB/s wr, 111 op/s
Oct 11 04:10:10 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1173512752' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:10:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Oct 11 04:10:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Oct 11 04:10:11 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Oct 11 04:10:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.5 MiB/s rd, 11 KiB/s wr, 95 op/s
Oct 11 04:10:12 compute-0 sshd-session[283739]: Received disconnect from 80.94.93.119 port 36014:11:  [preauth]
Oct 11 04:10:12 compute-0 sshd-session[283739]: Disconnected from authenticating user root 80.94.93.119 port 36014 [preauth]
Oct 11 04:10:12 compute-0 sshd-session[283739]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119  user=root
Oct 11 04:10:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Oct 11 04:10:12 compute-0 ceph-mon[74273]: osdmap e261: 3 total, 3 up, 3 in
Oct 11 04:10:12 compute-0 ceph-mon[74273]: pgmap v1267: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.5 MiB/s rd, 11 KiB/s wr, 95 op/s
Oct 11 04:10:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Oct 11 04:10:12 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Oct 11 04:10:12 compute-0 nova_compute[259850]: 2025-10-11 04:10:12.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:13 compute-0 nova_compute[259850]: 2025-10-11 04:10:13.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:10:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1165603039' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:10:13 compute-0 ovn_controller[152025]: 2025-10-11T04:10:13Z|00129|memory_trim|INFO|Detected inactivity (last active 30019 ms ago): trimming memory
Oct 11 04:10:13 compute-0 ceph-mon[74273]: osdmap e262: 3 total, 3 up, 3 in
Oct 11 04:10:13 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1165603039' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:10:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 215 op/s
Oct 11 04:10:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:10:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2317616354' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:10:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Oct 11 04:10:14 compute-0 ceph-mon[74273]: pgmap v1269: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 215 op/s
Oct 11 04:10:14 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2317616354' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:10:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Oct 11 04:10:14 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Oct 11 04:10:14 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 11 04:10:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:10:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Oct 11 04:10:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Oct 11 04:10:14 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Oct 11 04:10:15 compute-0 ceph-mon[74273]: osdmap e263: 3 total, 3 up, 3 in
Oct 11 04:10:15 compute-0 ceph-mon[74273]: osdmap e264: 3 total, 3 up, 3 in
Oct 11 04:10:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 119 KiB/s rd, 5.2 MiB/s wr, 174 op/s
Oct 11 04:10:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Oct 11 04:10:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Oct 11 04:10:15 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Oct 11 04:10:15 compute-0 nova_compute[259850]: 2025-10-11 04:10:15.987 2 DEBUG oslo_concurrency.lockutils [None req-8b9e801b-780e-4060-a8b6-1ce08ef349b6 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Acquiring lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:10:15 compute-0 nova_compute[259850]: 2025-10-11 04:10:15.988 2 DEBUG oslo_concurrency.lockutils [None req-8b9e801b-780e-4060-a8b6-1ce08ef349b6 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:10:16 compute-0 nova_compute[259850]: 2025-10-11 04:10:16.007 2 DEBUG nova.objects.instance [None req-8b9e801b-780e-4060-a8b6-1ce08ef349b6 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lazy-loading 'flavor' on Instance uuid 3d2a66c2-9869-4f0a-a27f-db3a14d43466 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:10:16 compute-0 nova_compute[259850]: 2025-10-11 04:10:16.025 2 INFO nova.virt.libvirt.driver [None req-8b9e801b-780e-4060-a8b6-1ce08ef349b6 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Ignoring supplied device name: /dev/vdb
Oct 11 04:10:16 compute-0 nova_compute[259850]: 2025-10-11 04:10:16.154 2 DEBUG oslo_concurrency.lockutils [None req-8b9e801b-780e-4060-a8b6-1ce08ef349b6 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:10:16 compute-0 nova_compute[259850]: 2025-10-11 04:10:16.445 2 DEBUG oslo_concurrency.lockutils [None req-8b9e801b-780e-4060-a8b6-1ce08ef349b6 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Acquiring lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:10:16 compute-0 nova_compute[259850]: 2025-10-11 04:10:16.446 2 DEBUG oslo_concurrency.lockutils [None req-8b9e801b-780e-4060-a8b6-1ce08ef349b6 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:10:16 compute-0 nova_compute[259850]: 2025-10-11 04:10:16.447 2 INFO nova.compute.manager [None req-8b9e801b-780e-4060-a8b6-1ce08ef349b6 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Attaching volume a9d7e36f-ac88-4a05-ba10-f3c19cb5ac93 to /dev/vdb
Oct 11 04:10:16 compute-0 nova_compute[259850]: 2025-10-11 04:10:16.612 2 DEBUG os_brick.utils [None req-8b9e801b-780e-4060-a8b6-1ce08ef349b6 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:10:16 compute-0 nova_compute[259850]: 2025-10-11 04:10:16.613 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:10:16 compute-0 nova_compute[259850]: 2025-10-11 04:10:16.631 675 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:10:16 compute-0 nova_compute[259850]: 2025-10-11 04:10:16.632 675 DEBUG oslo.privsep.daemon [-] privsep: reply[6cae58ac-1168-4cde-af8f-2c6084018665]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:16 compute-0 nova_compute[259850]: 2025-10-11 04:10:16.634 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:10:16 compute-0 nova_compute[259850]: 2025-10-11 04:10:16.645 675 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:10:16 compute-0 nova_compute[259850]: 2025-10-11 04:10:16.646 675 DEBUG oslo.privsep.daemon [-] privsep: reply[3c64c721-8260-48dc-aaaa-8f4d97063523]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e727c2bd432c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:16 compute-0 nova_compute[259850]: 2025-10-11 04:10:16.648 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:10:16 compute-0 nova_compute[259850]: 2025-10-11 04:10:16.662 675 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:10:16 compute-0 nova_compute[259850]: 2025-10-11 04:10:16.663 675 DEBUG oslo.privsep.daemon [-] privsep: reply[d36a34ae-ee55-468f-af49-b1e27f36bc5c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:16 compute-0 nova_compute[259850]: 2025-10-11 04:10:16.664 675 DEBUG oslo.privsep.daemon [-] privsep: reply[ed95ba6c-ecc7-4c5a-8000-cb45853b1b28]: (4, 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:16 compute-0 nova_compute[259850]: 2025-10-11 04:10:16.665 2 DEBUG oslo_concurrency.processutils [None req-8b9e801b-780e-4060-a8b6-1ce08ef349b6 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:10:16 compute-0 nova_compute[259850]: 2025-10-11 04:10:16.694 2 DEBUG oslo_concurrency.processutils [None req-8b9e801b-780e-4060-a8b6-1ce08ef349b6 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:10:16 compute-0 nova_compute[259850]: 2025-10-11 04:10:16.698 2 DEBUG os_brick.initiator.connectors.lightos [None req-8b9e801b-780e-4060-a8b6-1ce08ef349b6 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:10:16 compute-0 nova_compute[259850]: 2025-10-11 04:10:16.699 2 DEBUG os_brick.initiator.connectors.lightos [None req-8b9e801b-780e-4060-a8b6-1ce08ef349b6 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:10:16 compute-0 nova_compute[259850]: 2025-10-11 04:10:16.699 2 DEBUG os_brick.initiator.connectors.lightos [None req-8b9e801b-780e-4060-a8b6-1ce08ef349b6 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:10:16 compute-0 nova_compute[259850]: 2025-10-11 04:10:16.700 2 DEBUG os_brick.utils [None req-8b9e801b-780e-4060-a8b6-1ce08ef349b6 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] <== get_connector_properties: return (87ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e727c2bd432c', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:10:16 compute-0 nova_compute[259850]: 2025-10-11 04:10:16.700 2 DEBUG nova.virt.block_device [None req-8b9e801b-780e-4060-a8b6-1ce08ef349b6 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Updating existing volume attachment record: 36ee621f-3c2c-4a1e-8a0a-ad6367917fe6 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:10:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Oct 11 04:10:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Oct 11 04:10:16 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Oct 11 04:10:16 compute-0 ceph-mon[74273]: pgmap v1272: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 119 KiB/s rd, 5.2 MiB/s wr, 174 op/s
Oct 11 04:10:16 compute-0 ceph-mon[74273]: osdmap e265: 3 total, 3 up, 3 in
Oct 11 04:10:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:10:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/38160022' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:10:17 compute-0 podman[283793]: 2025-10-11 04:10:17.396000971 +0000 UTC m=+0.099530401 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:10:17 compute-0 nova_compute[259850]: 2025-10-11 04:10:17.482 2 DEBUG nova.objects.instance [None req-8b9e801b-780e-4060-a8b6-1ce08ef349b6 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lazy-loading 'flavor' on Instance uuid 3d2a66c2-9869-4f0a-a27f-db3a14d43466 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:10:17 compute-0 nova_compute[259850]: 2025-10-11 04:10:17.506 2 DEBUG nova.virt.libvirt.driver [None req-8b9e801b-780e-4060-a8b6-1ce08ef349b6 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Attempting to attach volume a9d7e36f-ac88-4a05-ba10-f3c19cb5ac93 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 11 04:10:17 compute-0 nova_compute[259850]: 2025-10-11 04:10:17.510 2 DEBUG nova.virt.libvirt.guest [None req-8b9e801b-780e-4060-a8b6-1ce08ef349b6 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] attach device xml: <disk type="network" device="disk">
Oct 11 04:10:17 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:10:17 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-a9d7e36f-ac88-4a05-ba10-f3c19cb5ac93">
Oct 11 04:10:17 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:10:17 compute-0 nova_compute[259850]:   </source>
Oct 11 04:10:17 compute-0 nova_compute[259850]:   <auth username="openstack">
Oct 11 04:10:17 compute-0 nova_compute[259850]:     <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:10:17 compute-0 nova_compute[259850]:   </auth>
Oct 11 04:10:17 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:10:17 compute-0 nova_compute[259850]:   <serial>a9d7e36f-ac88-4a05-ba10-f3c19cb5ac93</serial>
Oct 11 04:10:17 compute-0 nova_compute[259850]: </disk>
Oct 11 04:10:17 compute-0 nova_compute[259850]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 11 04:10:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail
Oct 11 04:10:17 compute-0 nova_compute[259850]: 2025-10-11 04:10:17.643 2 DEBUG nova.virt.libvirt.driver [None req-8b9e801b-780e-4060-a8b6-1ce08ef349b6 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:10:17 compute-0 nova_compute[259850]: 2025-10-11 04:10:17.644 2 DEBUG nova.virt.libvirt.driver [None req-8b9e801b-780e-4060-a8b6-1ce08ef349b6 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:10:17 compute-0 nova_compute[259850]: 2025-10-11 04:10:17.645 2 DEBUG nova.virt.libvirt.driver [None req-8b9e801b-780e-4060-a8b6-1ce08ef349b6 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:10:17 compute-0 nova_compute[259850]: 2025-10-11 04:10:17.645 2 DEBUG nova.virt.libvirt.driver [None req-8b9e801b-780e-4060-a8b6-1ce08ef349b6 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] No VIF found with MAC fa:16:3e:5a:0f:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:10:17 compute-0 nova_compute[259850]: 2025-10-11 04:10:17.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:17 compute-0 nova_compute[259850]: 2025-10-11 04:10:17.821 2 DEBUG oslo_concurrency.lockutils [None req-8b9e801b-780e-4060-a8b6-1ce08ef349b6 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.375s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:10:17 compute-0 ceph-mon[74273]: osdmap e266: 3 total, 3 up, 3 in
Oct 11 04:10:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/38160022' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:10:18 compute-0 nova_compute[259850]: 2025-10-11 04:10:18.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:18 compute-0 nova_compute[259850]: 2025-10-11 04:10:18.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:10:18 compute-0 nova_compute[259850]: 2025-10-11 04:10:18.059 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:10:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Oct 11 04:10:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Oct 11 04:10:18 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Oct 11 04:10:18 compute-0 ceph-mon[74273]: pgmap v1275: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail
Oct 11 04:10:19 compute-0 nova_compute[259850]: 2025-10-11 04:10:19.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:10:19 compute-0 nova_compute[259850]: 2025-10-11 04:10:19.091 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:10:19 compute-0 nova_compute[259850]: 2025-10-11 04:10:19.091 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:10:19 compute-0 nova_compute[259850]: 2025-10-11 04:10:19.092 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:10:19 compute-0 nova_compute[259850]: 2025-10-11 04:10:19.092 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:10:19 compute-0 nova_compute[259850]: 2025-10-11 04:10:19.093 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:10:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:10:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1032176299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:10:19 compute-0 nova_compute[259850]: 2025-10-11 04:10:19.559 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:10:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:10:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3812192151' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:10:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 4.6 MiB/s rd, 4.5 MiB/s wr, 159 op/s
Oct 11 04:10:19 compute-0 nova_compute[259850]: 2025-10-11 04:10:19.637 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:10:19 compute-0 nova_compute[259850]: 2025-10-11 04:10:19.638 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:10:19 compute-0 nova_compute[259850]: 2025-10-11 04:10:19.638 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:10:19 compute-0 nova_compute[259850]: 2025-10-11 04:10:19.823 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:10:19 compute-0 nova_compute[259850]: 2025-10-11 04:10:19.824 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4396MB free_disk=59.94267654418945GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:10:19 compute-0 nova_compute[259850]: 2025-10-11 04:10:19.824 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:10:19 compute-0 nova_compute[259850]: 2025-10-11 04:10:19.824 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:10:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:10:19 compute-0 nova_compute[259850]: 2025-10-11 04:10:19.901 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Instance 3d2a66c2-9869-4f0a-a27f-db3a14d43466 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 04:10:19 compute-0 nova_compute[259850]: 2025-10-11 04:10:19.901 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:10:19 compute-0 nova_compute[259850]: 2025-10-11 04:10:19.901 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:10:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Oct 11 04:10:19 compute-0 ceph-mon[74273]: osdmap e267: 3 total, 3 up, 3 in
Oct 11 04:10:19 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1032176299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:10:19 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3812192151' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:10:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Oct 11 04:10:19 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Oct 11 04:10:19 compute-0 nova_compute[259850]: 2025-10-11 04:10:19.948 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:10:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:10:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3309665668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:10:20 compute-0 nova_compute[259850]: 2025-10-11 04:10:20.447 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:10:20 compute-0 nova_compute[259850]: 2025-10-11 04:10:20.454 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:10:20 compute-0 nova_compute[259850]: 2025-10-11 04:10:20.472 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:10:20 compute-0 nova_compute[259850]: 2025-10-11 04:10:20.494 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:10:20 compute-0 nova_compute[259850]: 2025-10-11 04:10:20.495 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:10:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:10:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:10:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:10:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:10:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:10:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:10:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:10:20
Oct 11 04:10:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:10:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:10:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['volumes', '.mgr', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.control']
Oct 11 04:10:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:10:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Oct 11 04:10:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Oct 11 04:10:20 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Oct 11 04:10:20 compute-0 ceph-mon[74273]: pgmap v1277: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 4.6 MiB/s rd, 4.5 MiB/s wr, 159 op/s
Oct 11 04:10:20 compute-0 ceph-mon[74273]: osdmap e268: 3 total, 3 up, 3 in
Oct 11 04:10:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3309665668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:10:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:10:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:10:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:10:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:10:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:10:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:10:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:10:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:10:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:10:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:10:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 4.6 MiB/s rd, 4.5 MiB/s wr, 160 op/s
Oct 11 04:10:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Oct 11 04:10:21 compute-0 ceph-mon[74273]: osdmap e269: 3 total, 3 up, 3 in
Oct 11 04:10:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Oct 11 04:10:21 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Oct 11 04:10:22 compute-0 nova_compute[259850]: 2025-10-11 04:10:22.149 2 DEBUG oslo_concurrency.lockutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Acquiring lock "b41a3cc1-8f24-43ac-981f-ecd099bcc7ce" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:10:22 compute-0 nova_compute[259850]: 2025-10-11 04:10:22.149 2 DEBUG oslo_concurrency.lockutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Lock "b41a3cc1-8f24-43ac-981f-ecd099bcc7ce" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:10:22 compute-0 nova_compute[259850]: 2025-10-11 04:10:22.173 2 DEBUG nova.compute.manager [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:10:22 compute-0 nova_compute[259850]: 2025-10-11 04:10:22.243 2 DEBUG oslo_concurrency.lockutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:10:22 compute-0 nova_compute[259850]: 2025-10-11 04:10:22.244 2 DEBUG oslo_concurrency.lockutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:10:22 compute-0 nova_compute[259850]: 2025-10-11 04:10:22.253 2 DEBUG nova.virt.hardware [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:10:22 compute-0 nova_compute[259850]: 2025-10-11 04:10:22.253 2 INFO nova.compute.claims [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:10:22 compute-0 podman[283884]: 2025-10-11 04:10:22.38937071 +0000 UTC m=+0.087682058 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct 11 04:10:22 compute-0 nova_compute[259850]: 2025-10-11 04:10:22.415 2 DEBUG oslo_concurrency.processutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:10:22 compute-0 nova_compute[259850]: 2025-10-11 04:10:22.495 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:10:22 compute-0 nova_compute[259850]: 2025-10-11 04:10:22.496 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:10:22 compute-0 nova_compute[259850]: 2025-10-11 04:10:22.497 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:10:22 compute-0 nova_compute[259850]: 2025-10-11 04:10:22.497 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:10:22 compute-0 nova_compute[259850]: 2025-10-11 04:10:22.521 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 11 04:10:22 compute-0 nova_compute[259850]: 2025-10-11 04:10:22.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:10:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/193863068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:10:22 compute-0 nova_compute[259850]: 2025-10-11 04:10:22.833 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "refresh_cache-3d2a66c2-9869-4f0a-a27f-db3a14d43466" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:10:22 compute-0 nova_compute[259850]: 2025-10-11 04:10:22.833 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquired lock "refresh_cache-3d2a66c2-9869-4f0a-a27f-db3a14d43466" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:10:22 compute-0 nova_compute[259850]: 2025-10-11 04:10:22.833 2 DEBUG nova.network.neutron [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 04:10:22 compute-0 nova_compute[259850]: 2025-10-11 04:10:22.833 2 DEBUG nova.objects.instance [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3d2a66c2-9869-4f0a-a27f-db3a14d43466 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:10:22 compute-0 nova_compute[259850]: 2025-10-11 04:10:22.842 2 DEBUG oslo_concurrency.processutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:10:22 compute-0 nova_compute[259850]: 2025-10-11 04:10:22.848 2 DEBUG nova.compute.provider_tree [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:10:22 compute-0 nova_compute[259850]: 2025-10-11 04:10:22.877 2 DEBUG nova.scheduler.client.report [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:10:22 compute-0 nova_compute[259850]: 2025-10-11 04:10:22.908 2 DEBUG oslo_concurrency.lockutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:10:22 compute-0 nova_compute[259850]: 2025-10-11 04:10:22.908 2 DEBUG nova.compute.manager [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:10:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:22.959 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:10:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:22.960 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:10:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:22.960 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:10:22 compute-0 nova_compute[259850]: 2025-10-11 04:10:22.965 2 DEBUG nova.compute.manager [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:10:22 compute-0 nova_compute[259850]: 2025-10-11 04:10:22.965 2 DEBUG nova.network.neutron [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:10:22 compute-0 ceph-mon[74273]: pgmap v1280: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 4.6 MiB/s rd, 4.5 MiB/s wr, 160 op/s
Oct 11 04:10:22 compute-0 ceph-mon[74273]: osdmap e270: 3 total, 3 up, 3 in
Oct 11 04:10:22 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/193863068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:10:22 compute-0 nova_compute[259850]: 2025-10-11 04:10:22.989 2 INFO nova.virt.libvirt.driver [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:10:23 compute-0 nova_compute[259850]: 2025-10-11 04:10:23.038 2 DEBUG nova.compute.manager [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:10:23 compute-0 nova_compute[259850]: 2025-10-11 04:10:23.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:23 compute-0 nova_compute[259850]: 2025-10-11 04:10:23.099 2 INFO nova.virt.block_device [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Booting with volume 0c52691d-f590-4dbc-8ec1-127daac8e8d9 at /dev/vdb
Oct 11 04:10:23 compute-0 nova_compute[259850]: 2025-10-11 04:10:23.148 2 DEBUG nova.policy [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9d2ae7a5228f4cb98ea73ec06ee2dc1e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '090ce8762cd840ba8eedda774a81c19f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:10:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:10:23 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/661451748' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:10:23 compute-0 nova_compute[259850]: 2025-10-11 04:10:23.295 2 DEBUG os_brick.utils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:10:23 compute-0 nova_compute[259850]: 2025-10-11 04:10:23.296 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:10:23 compute-0 nova_compute[259850]: 2025-10-11 04:10:23.312 675 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:10:23 compute-0 nova_compute[259850]: 2025-10-11 04:10:23.312 675 DEBUG oslo.privsep.daemon [-] privsep: reply[0e18e3dd-c14c-448c-832e-aa7f684a5ac3]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:23 compute-0 nova_compute[259850]: 2025-10-11 04:10:23.314 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:10:23 compute-0 nova_compute[259850]: 2025-10-11 04:10:23.338 675 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:10:23 compute-0 nova_compute[259850]: 2025-10-11 04:10:23.338 675 DEBUG oslo.privsep.daemon [-] privsep: reply[3234dd24-555d-4190-a643-0f6ed1c79c80]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e727c2bd432c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:23 compute-0 nova_compute[259850]: 2025-10-11 04:10:23.341 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:10:23 compute-0 nova_compute[259850]: 2025-10-11 04:10:23.353 675 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:10:23 compute-0 nova_compute[259850]: 2025-10-11 04:10:23.353 675 DEBUG oslo.privsep.daemon [-] privsep: reply[7d2f7739-244a-48aa-a754-e6ccff5932f3]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:23 compute-0 nova_compute[259850]: 2025-10-11 04:10:23.356 675 DEBUG oslo.privsep.daemon [-] privsep: reply[a41c31e6-f159-4393-bd5e-c60ec6e61cf5]: (4, 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:23 compute-0 nova_compute[259850]: 2025-10-11 04:10:23.357 2 DEBUG oslo_concurrency.processutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:10:23 compute-0 nova_compute[259850]: 2025-10-11 04:10:23.381 2 DEBUG oslo_concurrency.processutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:10:23 compute-0 nova_compute[259850]: 2025-10-11 04:10:23.385 2 DEBUG os_brick.initiator.connectors.lightos [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:10:23 compute-0 nova_compute[259850]: 2025-10-11 04:10:23.385 2 DEBUG os_brick.initiator.connectors.lightos [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:10:23 compute-0 nova_compute[259850]: 2025-10-11 04:10:23.386 2 DEBUG os_brick.initiator.connectors.lightos [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:10:23 compute-0 nova_compute[259850]: 2025-10-11 04:10:23.386 2 DEBUG os_brick.utils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] <== get_connector_properties: return (90ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e727c2bd432c', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:10:23 compute-0 nova_compute[259850]: 2025-10-11 04:10:23.387 2 DEBUG nova.virt.block_device [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Updating existing volume attachment record: f0059ae3-f69a-4c20-824b-8cd2291bdbe4 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:10:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 114 KiB/s rd, 11 KiB/s wr, 156 op/s
Oct 11 04:10:23 compute-0 nova_compute[259850]: 2025-10-11 04:10:23.922 2 DEBUG nova.network.neutron [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Successfully created port: f962d69d-912c-4bbe-8b62-be8b3ee5a694 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:10:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Oct 11 04:10:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Oct 11 04:10:23 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Oct 11 04:10:23 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/661451748' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:10:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:10:24 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1867558874' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.400 2 DEBUG nova.compute.manager [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.403 2 DEBUG nova.virt.libvirt.driver [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.403 2 INFO nova.virt.libvirt.driver [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Creating image(s)
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.439 2 DEBUG nova.storage.rbd_utils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] rbd image b41a3cc1-8f24-43ac-981f-ecd099bcc7ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.468 2 DEBUG nova.storage.rbd_utils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] rbd image b41a3cc1-8f24-43ac-981f-ecd099bcc7ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.498 2 DEBUG nova.storage.rbd_utils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] rbd image b41a3cc1-8f24-43ac-981f-ecd099bcc7ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.502 2 DEBUG oslo_concurrency.processutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.521 2 DEBUG nova.network.neutron [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Updating instance_info_cache with network_info: [{"id": "8701ce4d-adc7-4369-9f76-cf6dea290bff", "address": "fa:16:3e:5a:0f:e2", "network": {"id": "8cb72c94-41d7-40be-8ef7-9351e1b06d48", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1596968619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c04e56df694d49fdbb22c39773dfc036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8701ce4d-ad", "ovs_interfaceid": "8701ce4d-adc7-4369-9f76-cf6dea290bff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.538 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Releasing lock "refresh_cache-3d2a66c2-9869-4f0a-a27f-db3a14d43466" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.539 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.540 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.540 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.541 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.541 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.558 2 DEBUG oslo_concurrency.processutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.559 2 DEBUG oslo_concurrency.lockutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Acquiring lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.560 2 DEBUG oslo_concurrency.lockutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.560 2 DEBUG oslo_concurrency.lockutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.584 2 DEBUG nova.storage.rbd_utils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] rbd image b41a3cc1-8f24-43ac-981f-ecd099bcc7ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.587 2 DEBUG oslo_concurrency.processutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac b41a3cc1-8f24-43ac-981f-ecd099bcc7ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.633 2 DEBUG nova.network.neutron [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Successfully updated port: f962d69d-912c-4bbe-8b62-be8b3ee5a694 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.649 2 DEBUG oslo_concurrency.lockutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Acquiring lock "refresh_cache-b41a3cc1-8f24-43ac-981f-ecd099bcc7ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.650 2 DEBUG oslo_concurrency.lockutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Acquired lock "refresh_cache-b41a3cc1-8f24-43ac-981f-ecd099bcc7ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.650 2 DEBUG nova.network.neutron [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.728 2 DEBUG nova.compute.manager [req-71d325b5-a74b-4ec3-954c-759904e9297f req-892407f9-30f6-421a-a628-5c34c4b30b56 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Received event network-changed-f962d69d-912c-4bbe-8b62-be8b3ee5a694 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.729 2 DEBUG nova.compute.manager [req-71d325b5-a74b-4ec3-954c-759904e9297f req-892407f9-30f6-421a-a628-5c34c4b30b56 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Refreshing instance network info cache due to event network-changed-f962d69d-912c-4bbe-8b62-be8b3ee5a694. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.729 2 DEBUG oslo_concurrency.lockutils [req-71d325b5-a74b-4ec3-954c-759904e9297f req-892407f9-30f6-421a-a628-5c34c4b30b56 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-b41a3cc1-8f24-43ac-981f-ecd099bcc7ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.810 2 DEBUG nova.network.neutron [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:10:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:10:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Oct 11 04:10:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Oct 11 04:10:24 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.892 2 DEBUG oslo_concurrency.processutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac b41a3cc1-8f24-43ac-981f-ecd099bcc7ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:10:24 compute-0 nova_compute[259850]: 2025-10-11 04:10:24.982 2 DEBUG nova.storage.rbd_utils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] resizing rbd image b41a3cc1-8f24-43ac-981f-ecd099bcc7ce_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:10:24 compute-0 ceph-mon[74273]: pgmap v1282: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 114 KiB/s rd, 11 KiB/s wr, 156 op/s
Oct 11 04:10:24 compute-0 ceph-mon[74273]: osdmap e271: 3 total, 3 up, 3 in
Oct 11 04:10:24 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1867558874' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:10:24 compute-0 ceph-mon[74273]: osdmap e272: 3 total, 3 up, 3 in
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.106 2 DEBUG nova.objects.instance [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Lazy-loading 'migration_context' on Instance uuid b41a3cc1-8f24-43ac-981f-ecd099bcc7ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.122 2 DEBUG nova.virt.libvirt.driver [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.123 2 DEBUG nova.virt.libvirt.driver [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Ensure instance console log exists: /var/lib/nova/instances/b41a3cc1-8f24-43ac-981f-ecd099bcc7ce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.123 2 DEBUG oslo_concurrency.lockutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.124 2 DEBUG oslo_concurrency.lockutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.124 2 DEBUG oslo_concurrency.lockutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.479 2 DEBUG nova.network.neutron [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Updating instance_info_cache with network_info: [{"id": "f962d69d-912c-4bbe-8b62-be8b3ee5a694", "address": "fa:16:3e:60:b7:e2", "network": {"id": "ff7747e6-8cbd-486d-acdd-c112ee8b4480", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1545977489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "090ce8762cd840ba8eedda774a81c19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf962d69d-91", "ovs_interfaceid": "f962d69d-912c-4bbe-8b62-be8b3ee5a694", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.503 2 DEBUG oslo_concurrency.lockutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Releasing lock "refresh_cache-b41a3cc1-8f24-43ac-981f-ecd099bcc7ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.504 2 DEBUG nova.compute.manager [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Instance network_info: |[{"id": "f962d69d-912c-4bbe-8b62-be8b3ee5a694", "address": "fa:16:3e:60:b7:e2", "network": {"id": "ff7747e6-8cbd-486d-acdd-c112ee8b4480", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1545977489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "090ce8762cd840ba8eedda774a81c19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf962d69d-91", "ovs_interfaceid": "f962d69d-912c-4bbe-8b62-be8b3ee5a694", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.505 2 DEBUG oslo_concurrency.lockutils [req-71d325b5-a74b-4ec3-954c-759904e9297f req-892407f9-30f6-421a-a628-5c34c4b30b56 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-b41a3cc1-8f24-43ac-981f-ecd099bcc7ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.505 2 DEBUG nova.network.neutron [req-71d325b5-a74b-4ec3-954c-759904e9297f req-892407f9-30f6-421a-a628-5c34c4b30b56 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Refreshing network info cache for port f962d69d-912c-4bbe-8b62-be8b3ee5a694 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.511 2 DEBUG nova.virt.libvirt.driver [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Start _get_guest_xml network_info=[{"id": "f962d69d-912c-4bbe-8b62-be8b3ee5a694", "address": "fa:16:3e:60:b7:e2", "network": {"id": "ff7747e6-8cbd-486d-acdd-c112ee8b4480", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1545977489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "090ce8762cd840ba8eedda774a81c19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf962d69d-91", "ovs_interfaceid": "f962d69d-912c-4bbe-8b62-be8b3ee5a694", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'image_id': '1a107e2f-1a9d-4b6f-861d-e64bee7d56be'}], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-0c52691d-f590-4dbc-8ec1-127daac8e8d9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '0c52691d-f590-4dbc-8ec1-127daac8e8d9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'b41a3cc1-8f24-43ac-981f-ecd099bcc7ce', 'attached_at': '', 'detached_at': '', 'volume_id': '0c52691d-f590-4dbc-8ec1-127daac8e8d9', 'serial': '0c52691d-f590-4dbc-8ec1-127daac8e8d9'}, 'boot_index': -1, 'guest_format': None, 'attachment_id': 'f0059ae3-f69a-4c20-824b-8cd2291bdbe4', 'mount_device': '/dev/vdb', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.519 2 WARNING nova.virt.libvirt.driver [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.530 2 DEBUG nova.virt.libvirt.host [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.531 2 DEBUG nova.virt.libvirt.host [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.536 2 DEBUG nova.virt.libvirt.host [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.537 2 DEBUG nova.virt.libvirt.host [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.538 2 DEBUG nova.virt.libvirt.driver [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.539 2 DEBUG nova.virt.hardware [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.540 2 DEBUG nova.virt.hardware [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.541 2 DEBUG nova.virt.hardware [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.541 2 DEBUG nova.virt.hardware [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.542 2 DEBUG nova.virt.hardware [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.542 2 DEBUG nova.virt.hardware [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.543 2 DEBUG nova.virt.hardware [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.544 2 DEBUG nova.virt.hardware [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.544 2 DEBUG nova.virt.hardware [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.545 2 DEBUG nova.virt.hardware [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.545 2 DEBUG nova.virt.hardware [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:10:25 compute-0 nova_compute[259850]: 2025-10-11 04:10:25.552 2 DEBUG oslo_concurrency.processutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:10:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 115 KiB/s rd, 11 KiB/s wr, 157 op/s
Oct 11 04:10:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Oct 11 04:10:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Oct 11 04:10:25 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Oct 11 04:10:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:10:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3863295591' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.045 2 DEBUG oslo_concurrency.processutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.079 2 DEBUG nova.storage.rbd_utils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] rbd image b41a3cc1-8f24-43ac-981f-ecd099bcc7ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.083 2 DEBUG oslo_concurrency.processutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:10:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:10:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2335152213' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.544 2 DEBUG oslo_concurrency.processutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.583 2 DEBUG nova.virt.libvirt.vif [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:10:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-instance-548339657',display_name='tempest-instance-548339657',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-548339657',id=13,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMQXzvv8j9ChCWwSUhzBAsRlUZyXI+lqj/OoMozxbSHUqJ4/5/GYsaO3pkjm/0yqZjzkCxDD3VnBlE3FJF2ZhD+5SCtyHYSJTgB0HjjoegMlaWoCUK/fwQjWxtBa7z+DkQ==',key_name='tempest-keypair-585022884',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='090ce8762cd840ba8eedda774a81c19f',ramdisk_id='',reservation_id='r-hng0b1qe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk
='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-466402879',owner_user_name='tempest-VolumesBackupsTest-466402879-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:10:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9d2ae7a5228f4cb98ea73ec06ee2dc1e',uuid=b41a3cc1-8f24-43ac-981f-ecd099bcc7ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f962d69d-912c-4bbe-8b62-be8b3ee5a694", "address": "fa:16:3e:60:b7:e2", "network": {"id": "ff7747e6-8cbd-486d-acdd-c112ee8b4480", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1545977489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "090ce8762cd840ba8eedda774a81c19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf962d69d-91", "ovs_interfaceid": "f962d69d-912c-4bbe-8b62-be8b3ee5a694", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.584 2 DEBUG nova.network.os_vif_util [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Converting VIF {"id": "f962d69d-912c-4bbe-8b62-be8b3ee5a694", "address": "fa:16:3e:60:b7:e2", "network": {"id": "ff7747e6-8cbd-486d-acdd-c112ee8b4480", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1545977489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "090ce8762cd840ba8eedda774a81c19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf962d69d-91", "ovs_interfaceid": "f962d69d-912c-4bbe-8b62-be8b3ee5a694", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.586 2 DEBUG nova.network.os_vif_util [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:b7:e2,bridge_name='br-int',has_traffic_filtering=True,id=f962d69d-912c-4bbe-8b62-be8b3ee5a694,network=Network(ff7747e6-8cbd-486d-acdd-c112ee8b4480),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf962d69d-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.588 2 DEBUG nova.objects.instance [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Lazy-loading 'pci_devices' on Instance uuid b41a3cc1-8f24-43ac-981f-ecd099bcc7ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.604 2 DEBUG nova.virt.libvirt.driver [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:10:26 compute-0 nova_compute[259850]:   <uuid>b41a3cc1-8f24-43ac-981f-ecd099bcc7ce</uuid>
Oct 11 04:10:26 compute-0 nova_compute[259850]:   <name>instance-0000000d</name>
Oct 11 04:10:26 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:10:26 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:10:26 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <nova:name>tempest-instance-548339657</nova:name>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:10:25</nova:creationTime>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:10:26 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:10:26 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:10:26 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:10:26 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:10:26 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:10:26 compute-0 nova_compute[259850]:         <nova:user uuid="9d2ae7a5228f4cb98ea73ec06ee2dc1e">tempest-VolumesBackupsTest-466402879-project-member</nova:user>
Oct 11 04:10:26 compute-0 nova_compute[259850]:         <nova:project uuid="090ce8762cd840ba8eedda774a81c19f">tempest-VolumesBackupsTest-466402879</nova:project>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <nova:root type="image" uuid="1a107e2f-1a9d-4b6f-861d-e64bee7d56be"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <nova:ports>
Oct 11 04:10:26 compute-0 nova_compute[259850]:         <nova:port uuid="f962d69d-912c-4bbe-8b62-be8b3ee5a694">
Oct 11 04:10:26 compute-0 nova_compute[259850]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:         </nova:port>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       </nova:ports>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:10:26 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:10:26 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <system>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <entry name="serial">b41a3cc1-8f24-43ac-981f-ecd099bcc7ce</entry>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <entry name="uuid">b41a3cc1-8f24-43ac-981f-ecd099bcc7ce</entry>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     </system>
Oct 11 04:10:26 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:10:26 compute-0 nova_compute[259850]:   <os>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:   </os>
Oct 11 04:10:26 compute-0 nova_compute[259850]:   <features>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:   </features>
Oct 11 04:10:26 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:10:26 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:10:26 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/b41a3cc1-8f24-43ac-981f-ecd099bcc7ce_disk">
Oct 11 04:10:26 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       </source>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:10:26 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/b41a3cc1-8f24-43ac-981f-ecd099bcc7ce_disk.config">
Oct 11 04:10:26 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       </source>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:10:26 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <source protocol="rbd" name="volumes/volume-0c52691d-f590-4dbc-8ec1-127daac8e8d9">
Oct 11 04:10:26 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       </source>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:10:26 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <target dev="vdb" bus="virtio"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <serial>0c52691d-f590-4dbc-8ec1-127daac8e8d9</serial>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <interface type="ethernet">
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <mac address="fa:16:3e:60:b7:e2"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <mtu size="1442"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <target dev="tapf962d69d-91"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     </interface>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/b41a3cc1-8f24-43ac-981f-ecd099bcc7ce/console.log" append="off"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <video>
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     </video>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:10:26 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:10:26 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:10:26 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:10:26 compute-0 nova_compute[259850]: </domain>
Oct 11 04:10:26 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.605 2 DEBUG nova.compute.manager [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Preparing to wait for external event network-vif-plugged-f962d69d-912c-4bbe-8b62-be8b3ee5a694 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.606 2 DEBUG oslo_concurrency.lockutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Acquiring lock "b41a3cc1-8f24-43ac-981f-ecd099bcc7ce-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.607 2 DEBUG oslo_concurrency.lockutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Lock "b41a3cc1-8f24-43ac-981f-ecd099bcc7ce-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.607 2 DEBUG oslo_concurrency.lockutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Lock "b41a3cc1-8f24-43ac-981f-ecd099bcc7ce-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.608 2 DEBUG nova.virt.libvirt.vif [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:10:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-instance-548339657',display_name='tempest-instance-548339657',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-548339657',id=13,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMQXzvv8j9ChCWwSUhzBAsRlUZyXI+lqj/OoMozxbSHUqJ4/5/GYsaO3pkjm/0yqZjzkCxDD3VnBlE3FJF2ZhD+5SCtyHYSJTgB0HjjoegMlaWoCUK/fwQjWxtBa7z+DkQ==',key_name='tempest-keypair-585022884',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='090ce8762cd840ba8eedda774a81c19f',ramdisk_id='',reservation_id='r-hng0b1qe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',imag
e_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-466402879',owner_user_name='tempest-VolumesBackupsTest-466402879-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:10:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9d2ae7a5228f4cb98ea73ec06ee2dc1e',uuid=b41a3cc1-8f24-43ac-981f-ecd099bcc7ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f962d69d-912c-4bbe-8b62-be8b3ee5a694", "address": "fa:16:3e:60:b7:e2", "network": {"id": "ff7747e6-8cbd-486d-acdd-c112ee8b4480", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1545977489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "090ce8762cd840ba8eedda774a81c19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf962d69d-91", "ovs_interfaceid": "f962d69d-912c-4bbe-8b62-be8b3ee5a694", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.608 2 DEBUG nova.network.os_vif_util [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Converting VIF {"id": "f962d69d-912c-4bbe-8b62-be8b3ee5a694", "address": "fa:16:3e:60:b7:e2", "network": {"id": "ff7747e6-8cbd-486d-acdd-c112ee8b4480", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1545977489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "090ce8762cd840ba8eedda774a81c19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf962d69d-91", "ovs_interfaceid": "f962d69d-912c-4bbe-8b62-be8b3ee5a694", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.609 2 DEBUG nova.network.os_vif_util [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:b7:e2,bridge_name='br-int',has_traffic_filtering=True,id=f962d69d-912c-4bbe-8b62-be8b3ee5a694,network=Network(ff7747e6-8cbd-486d-acdd-c112ee8b4480),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf962d69d-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.610 2 DEBUG os_vif [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:b7:e2,bridge_name='br-int',has_traffic_filtering=True,id=f962d69d-912c-4bbe-8b62-be8b3ee5a694,network=Network(ff7747e6-8cbd-486d-acdd-c112ee8b4480),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf962d69d-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.612 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.612 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.616 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf962d69d-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.617 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf962d69d-91, col_values=(('external_ids', {'iface-id': 'f962d69d-912c-4bbe-8b62-be8b3ee5a694', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:60:b7:e2', 'vm-uuid': 'b41a3cc1-8f24-43ac-981f-ecd099bcc7ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:26 compute-0 NetworkManager[44920]: <info>  [1760155826.6207] manager: (tapf962d69d-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.630 2 INFO os_vif [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:b7:e2,bridge_name='br-int',has_traffic_filtering=True,id=f962d69d-912c-4bbe-8b62-be8b3ee5a694,network=Network(ff7747e6-8cbd-486d-acdd-c112ee8b4480),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf962d69d-91')
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.689 2 DEBUG nova.virt.libvirt.driver [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.689 2 DEBUG nova.virt.libvirt.driver [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.689 2 DEBUG nova.virt.libvirt.driver [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.689 2 DEBUG nova.virt.libvirt.driver [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] No VIF found with MAC fa:16:3e:60:b7:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.690 2 INFO nova.virt.libvirt.driver [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Using config drive
Oct 11 04:10:26 compute-0 nova_compute[259850]: 2025-10-11 04:10:26.712 2 DEBUG nova.storage.rbd_utils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] rbd image b41a3cc1-8f24-43ac-981f-ecd099bcc7ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:10:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Oct 11 04:10:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Oct 11 04:10:26 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Oct 11 04:10:26 compute-0 ceph-mon[74273]: pgmap v1285: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 115 KiB/s rd, 11 KiB/s wr, 157 op/s
Oct 11 04:10:26 compute-0 ceph-mon[74273]: osdmap e273: 3 total, 3 up, 3 in
Oct 11 04:10:26 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3863295591' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:10:26 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2335152213' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:10:27 compute-0 nova_compute[259850]: 2025-10-11 04:10:27.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:10:27 compute-0 nova_compute[259850]: 2025-10-11 04:10:27.085 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:10:27 compute-0 nova_compute[259850]: 2025-10-11 04:10:27.087 2 DEBUG nova.network.neutron [req-71d325b5-a74b-4ec3-954c-759904e9297f req-892407f9-30f6-421a-a628-5c34c4b30b56 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Updated VIF entry in instance network info cache for port f962d69d-912c-4bbe-8b62-be8b3ee5a694. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:10:27 compute-0 nova_compute[259850]: 2025-10-11 04:10:27.087 2 DEBUG nova.network.neutron [req-71d325b5-a74b-4ec3-954c-759904e9297f req-892407f9-30f6-421a-a628-5c34c4b30b56 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Updating instance_info_cache with network_info: [{"id": "f962d69d-912c-4bbe-8b62-be8b3ee5a694", "address": "fa:16:3e:60:b7:e2", "network": {"id": "ff7747e6-8cbd-486d-acdd-c112ee8b4480", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1545977489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "090ce8762cd840ba8eedda774a81c19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf962d69d-91", "ovs_interfaceid": "f962d69d-912c-4bbe-8b62-be8b3ee5a694", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:10:27 compute-0 nova_compute[259850]: 2025-10-11 04:10:27.103 2 DEBUG oslo_concurrency.lockutils [req-71d325b5-a74b-4ec3-954c-759904e9297f req-892407f9-30f6-421a-a628-5c34c4b30b56 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-b41a3cc1-8f24-43ac-981f-ecd099bcc7ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:10:27 compute-0 nova_compute[259850]: 2025-10-11 04:10:27.161 2 INFO nova.virt.libvirt.driver [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Creating config drive at /var/lib/nova/instances/b41a3cc1-8f24-43ac-981f-ecd099bcc7ce/disk.config
Oct 11 04:10:27 compute-0 nova_compute[259850]: 2025-10-11 04:10:27.172 2 DEBUG oslo_concurrency.processutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b41a3cc1-8f24-43ac-981f-ecd099bcc7ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp04h51y6d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:10:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:10:27 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4102970993' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:10:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:10:27 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4102970993' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:10:27 compute-0 nova_compute[259850]: 2025-10-11 04:10:27.317 2 DEBUG oslo_concurrency.processutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b41a3cc1-8f24-43ac-981f-ecd099bcc7ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp04h51y6d" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:10:27 compute-0 nova_compute[259850]: 2025-10-11 04:10:27.347 2 DEBUG nova.storage.rbd_utils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] rbd image b41a3cc1-8f24-43ac-981f-ecd099bcc7ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:10:27 compute-0 nova_compute[259850]: 2025-10-11 04:10:27.351 2 DEBUG oslo_concurrency.processutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b41a3cc1-8f24-43ac-981f-ecd099bcc7ce/disk.config b41a3cc1-8f24-43ac-981f-ecd099bcc7ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:10:27 compute-0 nova_compute[259850]: 2025-10-11 04:10:27.526 2 DEBUG oslo_concurrency.processutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b41a3cc1-8f24-43ac-981f-ecd099bcc7ce/disk.config b41a3cc1-8f24-43ac-981f-ecd099bcc7ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:10:27 compute-0 nova_compute[259850]: 2025-10-11 04:10:27.527 2 INFO nova.virt.libvirt.driver [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Deleting local config drive /var/lib/nova/instances/b41a3cc1-8f24-43ac-981f-ecd099bcc7ce/disk.config because it was imported into RBD.
Oct 11 04:10:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail
Oct 11 04:10:27 compute-0 kernel: tapf962d69d-91: entered promiscuous mode
Oct 11 04:10:27 compute-0 NetworkManager[44920]: <info>  [1760155827.5994] manager: (tapf962d69d-91): new Tun device (/org/freedesktop/NetworkManager/Devices/77)
Oct 11 04:10:27 compute-0 ovn_controller[152025]: 2025-10-11T04:10:27Z|00130|binding|INFO|Claiming lport f962d69d-912c-4bbe-8b62-be8b3ee5a694 for this chassis.
Oct 11 04:10:27 compute-0 nova_compute[259850]: 2025-10-11 04:10:27.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:27 compute-0 ovn_controller[152025]: 2025-10-11T04:10:27Z|00131|binding|INFO|f962d69d-912c-4bbe-8b62-be8b3ee5a694: Claiming fa:16:3e:60:b7:e2 10.100.0.8
Oct 11 04:10:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:27.650 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:b7:e2 10.100.0.8'], port_security=['fa:16:3e:60:b7:e2 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b41a3cc1-8f24-43ac-981f-ecd099bcc7ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff7747e6-8cbd-486d-acdd-c112ee8b4480', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '090ce8762cd840ba8eedda774a81c19f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1b928e00-d49e-4a6f-8844-4fae3440d01c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c8a58742-49e2-4099-8758-6944a17d14d0, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=f962d69d-912c-4bbe-8b62-be8b3ee5a694) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:10:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:27.653 161902 INFO neutron.agent.ovn.metadata.agent [-] Port f962d69d-912c-4bbe-8b62-be8b3ee5a694 in datapath ff7747e6-8cbd-486d-acdd-c112ee8b4480 bound to our chassis
Oct 11 04:10:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:27.656 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ff7747e6-8cbd-486d-acdd-c112ee8b4480
Oct 11 04:10:27 compute-0 nova_compute[259850]: 2025-10-11 04:10:27.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:27.672 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[75eae4d8-0271-45a5-88fc-0717d4d7b251]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:27.673 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapff7747e6-81 in ovnmeta-ff7747e6-8cbd-486d-acdd-c112ee8b4480 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 04:10:27 compute-0 ovn_controller[152025]: 2025-10-11T04:10:27Z|00132|binding|INFO|Setting lport f962d69d-912c-4bbe-8b62-be8b3ee5a694 ovn-installed in OVS
Oct 11 04:10:27 compute-0 ovn_controller[152025]: 2025-10-11T04:10:27Z|00133|binding|INFO|Setting lport f962d69d-912c-4bbe-8b62-be8b3ee5a694 up in Southbound
Oct 11 04:10:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:27.675 267637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapff7747e6-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 04:10:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:27.675 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[3fa45f10-2c49-4ee6-98c4-9e32b93643e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:27.677 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[cbc5796c-7be9-46dd-9fcd-f5ead7ddb7dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:27 compute-0 systemd-udevd[284233]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:10:27 compute-0 systemd-machined[214869]: New machine qemu-13-instance-0000000d.
Oct 11 04:10:27 compute-0 nova_compute[259850]: 2025-10-11 04:10:27.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:27 compute-0 nova_compute[259850]: 2025-10-11 04:10:27.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:27 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Oct 11 04:10:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:27.692 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[d1be6303-c92f-4451-bee8-ac614e691934]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:27 compute-0 NetworkManager[44920]: <info>  [1760155827.6987] device (tapf962d69d-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:10:27 compute-0 NetworkManager[44920]: <info>  [1760155827.7015] device (tapf962d69d-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:10:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:27.711 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[a7ac73c4-731d-44b6-bfee-74c9f08c138d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:27.757 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[3cafebca-5f26-4d7e-8c50-cfa1fe4e6925]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:27.765 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[2b0f238f-7ff0-4873-8463-659832b29bdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:27 compute-0 systemd-udevd[284237]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:10:27 compute-0 NetworkManager[44920]: <info>  [1760155827.7679] manager: (tapff7747e6-80): new Veth device (/org/freedesktop/NetworkManager/Devices/78)
Oct 11 04:10:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:27.816 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[e2fe438d-8cef-477f-bd35-b1be630457c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:27.820 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[d457a100-1f06-4c74-970a-8b2de11cadc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:27 compute-0 NetworkManager[44920]: <info>  [1760155827.8585] device (tapff7747e6-80): carrier: link connected
Oct 11 04:10:27 compute-0 ceph-mon[74273]: osdmap e274: 3 total, 3 up, 3 in
Oct 11 04:10:27 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4102970993' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:10:27 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4102970993' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:10:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:27.870 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[8c748474-2af1-4276-ad0b-1902fbc79eb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:27.896 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[4b68505b-8dc0-4f5d-ac01-9330ecaa0ab8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapff7747e6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:a8:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 421289, 'reachable_time': 40869, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284265, 'error': None, 'target': 'ovnmeta-ff7747e6-8cbd-486d-acdd-c112ee8b4480', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:27.917 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[8e334b66-4f5c-43c1-96ea-c52a244a5221]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea9:a825'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 421289, 'tstamp': 421289}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284266, 'error': None, 'target': 'ovnmeta-ff7747e6-8cbd-486d-acdd-c112ee8b4480', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:27.942 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[4020bca3-a59c-4c1d-b845-67a80e665dfe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapff7747e6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:a8:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 421289, 'reachable_time': 40869, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284267, 'error': None, 'target': 'ovnmeta-ff7747e6-8cbd-486d-acdd-c112ee8b4480', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:27.990 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[d19cb501-dda8-45c1-9c7d-1bb9eec96522]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.037 2 DEBUG nova.compute.manager [req-d2da4724-5757-492d-bfbe-f26964ebc7b3 req-00fc452b-1e5f-430e-abee-bde796b0588a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Received event network-vif-plugged-f962d69d-912c-4bbe-8b62-be8b3ee5a694 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.038 2 DEBUG oslo_concurrency.lockutils [req-d2da4724-5757-492d-bfbe-f26964ebc7b3 req-00fc452b-1e5f-430e-abee-bde796b0588a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "b41a3cc1-8f24-43ac-981f-ecd099bcc7ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.039 2 DEBUG oslo_concurrency.lockutils [req-d2da4724-5757-492d-bfbe-f26964ebc7b3 req-00fc452b-1e5f-430e-abee-bde796b0588a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "b41a3cc1-8f24-43ac-981f-ecd099bcc7ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.040 2 DEBUG oslo_concurrency.lockutils [req-d2da4724-5757-492d-bfbe-f26964ebc7b3 req-00fc452b-1e5f-430e-abee-bde796b0588a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "b41a3cc1-8f24-43ac-981f-ecd099bcc7ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.040 2 DEBUG nova.compute.manager [req-d2da4724-5757-492d-bfbe-f26964ebc7b3 req-00fc452b-1e5f-430e-abee-bde796b0588a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Processing event network-vif-plugged-f962d69d-912c-4bbe-8b62-be8b3ee5a694 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:28.072 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[2eda55fb-ec00-4304-a5f3-84cc2299c753]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:28.075 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff7747e6-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:28.075 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:28.076 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff7747e6-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:10:28 compute-0 kernel: tapff7747e6-80: entered promiscuous mode
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:28 compute-0 NetworkManager[44920]: <info>  [1760155828.0808] manager: (tapff7747e6-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:28.083 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapff7747e6-80, col_values=(('external_ids', {'iface-id': '223c96f4-862f-4536-9399-a30ef7ed1a99'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:10:28 compute-0 ovn_controller[152025]: 2025-10-11T04:10:28Z|00134|binding|INFO|Releasing lport 223c96f4-862f-4536-9399-a30ef7ed1a99 from this chassis (sb_readonly=0)
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:28.088 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ff7747e6-8cbd-486d-acdd-c112ee8b4480.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ff7747e6-8cbd-486d-acdd-c112ee8b4480.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:28.089 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[ce9aa3ea-78f0-4295-a9ed-e99feec731a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:28.089 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]: global
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]:     log         /dev/log local0 debug
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]:     log-tag     haproxy-metadata-proxy-ff7747e6-8cbd-486d-acdd-c112ee8b4480
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]:     user        root
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]:     group       root
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]:     maxconn     1024
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]:     pidfile     /var/lib/neutron/external/pids/ff7747e6-8cbd-486d-acdd-c112ee8b4480.pid.haproxy
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]:     daemon
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]: defaults
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]:     log global
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]:     mode http
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]:     option httplog
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]:     option dontlognull
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]:     option http-server-close
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]:     option forwardfor
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]:     retries                 3
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]:     timeout http-request    30s
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]:     timeout connect         30s
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]:     timeout client          32s
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]:     timeout server          32s
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]:     timeout http-keep-alive 30s
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]: listen listener
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]:     bind 169.254.169.254:80
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]:     http-request add-header X-OVN-Network-ID ff7747e6-8cbd-486d-acdd-c112ee8b4480
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 04:10:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:28.090 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ff7747e6-8cbd-486d-acdd-c112ee8b4480', 'env', 'PROCESS_TAG=haproxy-ff7747e6-8cbd-486d-acdd-c112ee8b4480', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ff7747e6-8cbd-486d-acdd-c112ee8b4480.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:10:28 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/453889509' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:10:28 compute-0 podman[284359]: 2025-10-11 04:10:28.610233776 +0000 UTC m=+0.083137750 container create d9ee66ff1f6c800ff78ae0ff5eca5634f80a1570825bb4afb2eea3e28549382d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff7747e6-8cbd-486d-acdd-c112ee8b4480, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Oct 11 04:10:28 compute-0 systemd[1]: Started libpod-conmon-d9ee66ff1f6c800ff78ae0ff5eca5634f80a1570825bb4afb2eea3e28549382d.scope.
Oct 11 04:10:28 compute-0 podman[284359]: 2025-10-11 04:10:28.569113949 +0000 UTC m=+0.042017963 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 04:10:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:10:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3fe4b88c554b5a0d61ce5c1e505c16db930210c8d78ae8d445e7cb6c79f3abd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:28 compute-0 podman[284359]: 2025-10-11 04:10:28.724455659 +0000 UTC m=+0.197359643 container init d9ee66ff1f6c800ff78ae0ff5eca5634f80a1570825bb4afb2eea3e28549382d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff7747e6-8cbd-486d-acdd-c112ee8b4480, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.732 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155828.7315257, b41a3cc1-8f24-43ac-981f-ecd099bcc7ce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.732 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] VM Started (Lifecycle Event)
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.734 2 DEBUG nova.compute.manager [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:10:28 compute-0 podman[284359]: 2025-10-11 04:10:28.736271811 +0000 UTC m=+0.209175775 container start d9ee66ff1f6c800ff78ae0ff5eca5634f80a1570825bb4afb2eea3e28549382d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff7747e6-8cbd-486d-acdd-c112ee8b4480, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.740 2 DEBUG nova.virt.libvirt.driver [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.744 2 INFO nova.virt.libvirt.driver [-] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Instance spawned successfully.
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.745 2 DEBUG nova.virt.libvirt.driver [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.758 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.770 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.779 2 DEBUG nova.virt.libvirt.driver [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.780 2 DEBUG nova.virt.libvirt.driver [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.781 2 DEBUG nova.virt.libvirt.driver [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.782 2 DEBUG nova.virt.libvirt.driver [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.783 2 DEBUG nova.virt.libvirt.driver [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:10:28 compute-0 neutron-haproxy-ovnmeta-ff7747e6-8cbd-486d-acdd-c112ee8b4480[284374]: [NOTICE]   (284378) : New worker (284380) forked
Oct 11 04:10:28 compute-0 neutron-haproxy-ovnmeta-ff7747e6-8cbd-486d-acdd-c112ee8b4480[284374]: [NOTICE]   (284378) : Loading success.
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.784 2 DEBUG nova.virt.libvirt.driver [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.815 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.815 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155828.7317064, b41a3cc1-8f24-43ac-981f-ecd099bcc7ce => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.817 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] VM Paused (Lifecycle Event)
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.862 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.865 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155828.736805, b41a3cc1-8f24-43ac-981f-ecd099bcc7ce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.866 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] VM Resumed (Lifecycle Event)
Oct 11 04:10:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Oct 11 04:10:28 compute-0 ceph-mon[74273]: pgmap v1288: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail
Oct 11 04:10:28 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/453889509' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:10:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Oct 11 04:10:28 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.889 2 INFO nova.compute.manager [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Took 4.49 seconds to spawn the instance on the hypervisor.
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.889 2 DEBUG nova.compute.manager [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.890 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.899 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.930 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.966 2 INFO nova.compute.manager [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Took 6.74 seconds to build instance.
Oct 11 04:10:28 compute-0 nova_compute[259850]: 2025-10-11 04:10:28.990 2 DEBUG oslo_concurrency.lockutils [None req-71040294-cc4b-4737-ac55-772d3478d218 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Lock "b41a3cc1-8f24-43ac-981f-ecd099bcc7ce" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:10:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:10:29 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2542183989' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:10:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:10:29 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2542183989' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:10:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 164 KiB/s rd, 4.5 MiB/s wr, 234 op/s
Oct 11 04:10:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:10:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Oct 11 04:10:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Oct 11 04:10:29 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Oct 11 04:10:29 compute-0 ceph-mon[74273]: osdmap e275: 3 total, 3 up, 3 in
Oct 11 04:10:29 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2542183989' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:10:29 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2542183989' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:10:30 compute-0 nova_compute[259850]: 2025-10-11 04:10:30.139 2 DEBUG nova.compute.manager [req-21797d51-4d7a-40a4-a5dc-0a2bdc4bda4f req-259242cd-e67a-4402-96a6-dd383b432adc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Received event network-vif-plugged-f962d69d-912c-4bbe-8b62-be8b3ee5a694 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:10:30 compute-0 nova_compute[259850]: 2025-10-11 04:10:30.139 2 DEBUG oslo_concurrency.lockutils [req-21797d51-4d7a-40a4-a5dc-0a2bdc4bda4f req-259242cd-e67a-4402-96a6-dd383b432adc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "b41a3cc1-8f24-43ac-981f-ecd099bcc7ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:10:30 compute-0 nova_compute[259850]: 2025-10-11 04:10:30.140 2 DEBUG oslo_concurrency.lockutils [req-21797d51-4d7a-40a4-a5dc-0a2bdc4bda4f req-259242cd-e67a-4402-96a6-dd383b432adc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "b41a3cc1-8f24-43ac-981f-ecd099bcc7ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:10:30 compute-0 nova_compute[259850]: 2025-10-11 04:10:30.141 2 DEBUG oslo_concurrency.lockutils [req-21797d51-4d7a-40a4-a5dc-0a2bdc4bda4f req-259242cd-e67a-4402-96a6-dd383b432adc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "b41a3cc1-8f24-43ac-981f-ecd099bcc7ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:10:30 compute-0 nova_compute[259850]: 2025-10-11 04:10:30.141 2 DEBUG nova.compute.manager [req-21797d51-4d7a-40a4-a5dc-0a2bdc4bda4f req-259242cd-e67a-4402-96a6-dd383b432adc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] No waiting events found dispatching network-vif-plugged-f962d69d-912c-4bbe-8b62-be8b3ee5a694 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:10:30 compute-0 nova_compute[259850]: 2025-10-11 04:10:30.141 2 WARNING nova.compute.manager [req-21797d51-4d7a-40a4-a5dc-0a2bdc4bda4f req-259242cd-e67a-4402-96a6-dd383b432adc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Received unexpected event network-vif-plugged-f962d69d-912c-4bbe-8b62-be8b3ee5a694 for instance with vm_state active and task_state None.
Oct 11 04:10:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Oct 11 04:10:30 compute-0 ceph-mon[74273]: pgmap v1290: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 164 KiB/s rd, 4.5 MiB/s wr, 234 op/s
Oct 11 04:10:30 compute-0 ceph-mon[74273]: osdmap e276: 3 total, 3 up, 3 in
Oct 11 04:10:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Oct 11 04:10:30 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Oct 11 04:10:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:10:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:10:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:10:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:10:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001109339897929283 of space, bias 1.0, pg target 0.3328019693787849 quantized to 32 (current 32)
Oct 11 04:10:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:10:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.03403355605342841 of space, bias 1.0, pg target 10.210066816028522 quantized to 32 (current 32)
Oct 11 04:10:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:10:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.00034637858549846974 of space, bias 1.0, pg target 0.10044978979455622 quantized to 32 (current 32)
Oct 11 04:10:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:10:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19309890746076708 quantized to 32 (current 32)
Oct 11 04:10:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:10:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0005901217685745913 quantized to 16 (current 16)
Oct 11 04:10:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:10:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:10:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:10:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.376522107182392e-05 quantized to 32 (current 32)
Oct 11 04:10:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:10:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006270043791105033 quantized to 32 (current 32)
Oct 11 04:10:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:10:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:10:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:10:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00014753044214364783 quantized to 32 (current 32)
Oct 11 04:10:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:10:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/276537004' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:10:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:10:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/276537004' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:10:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 164 KiB/s rd, 4.5 MiB/s wr, 234 op/s
Oct 11 04:10:31 compute-0 nova_compute[259850]: 2025-10-11 04:10:31.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Oct 11 04:10:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Oct 11 04:10:31 compute-0 ceph-mon[74273]: osdmap e277: 3 total, 3 up, 3 in
Oct 11 04:10:31 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/276537004' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:10:31 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/276537004' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:10:31 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Oct 11 04:10:32 compute-0 nova_compute[259850]: 2025-10-11 04:10:32.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:32 compute-0 nova_compute[259850]: 2025-10-11 04:10:32.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:32 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:32.784 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:61:6f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '92:f1:b6:e4:f1:16'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:10:32 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:32.786 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:10:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Oct 11 04:10:32 compute-0 ceph-mon[74273]: pgmap v1293: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 164 KiB/s rd, 4.5 MiB/s wr, 234 op/s
Oct 11 04:10:32 compute-0 ceph-mon[74273]: osdmap e278: 3 total, 3 up, 3 in
Oct 11 04:10:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Oct 11 04:10:32 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Oct 11 04:10:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 5.9 MiB/s rd, 7.5 KiB/s wr, 433 op/s
Oct 11 04:10:33 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:33.788 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8a473e03-2208-47ae-afcd-05ad744a5969, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:10:33 compute-0 nova_compute[259850]: 2025-10-11 04:10:33.915 2 DEBUG nova.compute.manager [req-e9860c41-2514-4b0e-95ef-00f979a2ba62 req-2b782ba0-7b54-4627-855d-98ac96aa5b69 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Received event network-changed-f962d69d-912c-4bbe-8b62-be8b3ee5a694 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:10:33 compute-0 nova_compute[259850]: 2025-10-11 04:10:33.915 2 DEBUG nova.compute.manager [req-e9860c41-2514-4b0e-95ef-00f979a2ba62 req-2b782ba0-7b54-4627-855d-98ac96aa5b69 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Refreshing instance network info cache due to event network-changed-f962d69d-912c-4bbe-8b62-be8b3ee5a694. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:10:33 compute-0 nova_compute[259850]: 2025-10-11 04:10:33.916 2 DEBUG oslo_concurrency.lockutils [req-e9860c41-2514-4b0e-95ef-00f979a2ba62 req-2b782ba0-7b54-4627-855d-98ac96aa5b69 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-b41a3cc1-8f24-43ac-981f-ecd099bcc7ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:10:33 compute-0 nova_compute[259850]: 2025-10-11 04:10:33.916 2 DEBUG oslo_concurrency.lockutils [req-e9860c41-2514-4b0e-95ef-00f979a2ba62 req-2b782ba0-7b54-4627-855d-98ac96aa5b69 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-b41a3cc1-8f24-43ac-981f-ecd099bcc7ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:10:33 compute-0 nova_compute[259850]: 2025-10-11 04:10:33.917 2 DEBUG nova.network.neutron [req-e9860c41-2514-4b0e-95ef-00f979a2ba62 req-2b782ba0-7b54-4627-855d-98ac96aa5b69 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Refreshing network info cache for port f962d69d-912c-4bbe-8b62-be8b3ee5a694 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:10:33 compute-0 ceph-mon[74273]: osdmap e279: 3 total, 3 up, 3 in
Oct 11 04:10:34 compute-0 nova_compute[259850]: 2025-10-11 04:10:34.080 2 DEBUG oslo_concurrency.lockutils [None req-98568207-9a95-4b02-8dbf-683ab8a8d591 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Acquiring lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:10:34 compute-0 nova_compute[259850]: 2025-10-11 04:10:34.080 2 DEBUG oslo_concurrency.lockutils [None req-98568207-9a95-4b02-8dbf-683ab8a8d591 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:10:34 compute-0 nova_compute[259850]: 2025-10-11 04:10:34.095 2 INFO nova.compute.manager [None req-98568207-9a95-4b02-8dbf-683ab8a8d591 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Detaching volume a9d7e36f-ac88-4a05-ba10-f3c19cb5ac93
Oct 11 04:10:34 compute-0 nova_compute[259850]: 2025-10-11 04:10:34.311 2 INFO nova.virt.block_device [None req-98568207-9a95-4b02-8dbf-683ab8a8d591 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Attempting to driver detach volume a9d7e36f-ac88-4a05-ba10-f3c19cb5ac93 from mountpoint /dev/vdb
Oct 11 04:10:34 compute-0 nova_compute[259850]: 2025-10-11 04:10:34.323 2 DEBUG nova.virt.libvirt.driver [None req-98568207-9a95-4b02-8dbf-683ab8a8d591 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Attempting to detach device vdb from instance 3d2a66c2-9869-4f0a-a27f-db3a14d43466 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 11 04:10:34 compute-0 nova_compute[259850]: 2025-10-11 04:10:34.325 2 DEBUG nova.virt.libvirt.guest [None req-98568207-9a95-4b02-8dbf-683ab8a8d591 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 04:10:34 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:10:34 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-a9d7e36f-ac88-4a05-ba10-f3c19cb5ac93">
Oct 11 04:10:34 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:10:34 compute-0 nova_compute[259850]:   </source>
Oct 11 04:10:34 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:10:34 compute-0 nova_compute[259850]:   <serial>a9d7e36f-ac88-4a05-ba10-f3c19cb5ac93</serial>
Oct 11 04:10:34 compute-0 nova_compute[259850]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 04:10:34 compute-0 nova_compute[259850]: </disk>
Oct 11 04:10:34 compute-0 nova_compute[259850]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:10:34 compute-0 nova_compute[259850]: 2025-10-11 04:10:34.336 2 INFO nova.virt.libvirt.driver [None req-98568207-9a95-4b02-8dbf-683ab8a8d591 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Successfully detached device vdb from instance 3d2a66c2-9869-4f0a-a27f-db3a14d43466 from the persistent domain config.
Oct 11 04:10:34 compute-0 nova_compute[259850]: 2025-10-11 04:10:34.337 2 DEBUG nova.virt.libvirt.driver [None req-98568207-9a95-4b02-8dbf-683ab8a8d591 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 3d2a66c2-9869-4f0a-a27f-db3a14d43466 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 11 04:10:34 compute-0 nova_compute[259850]: 2025-10-11 04:10:34.337 2 DEBUG nova.virt.libvirt.guest [None req-98568207-9a95-4b02-8dbf-683ab8a8d591 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 04:10:34 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:10:34 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-a9d7e36f-ac88-4a05-ba10-f3c19cb5ac93">
Oct 11 04:10:34 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:10:34 compute-0 nova_compute[259850]:   </source>
Oct 11 04:10:34 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:10:34 compute-0 nova_compute[259850]:   <serial>a9d7e36f-ac88-4a05-ba10-f3c19cb5ac93</serial>
Oct 11 04:10:34 compute-0 nova_compute[259850]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 04:10:34 compute-0 nova_compute[259850]: </disk>
Oct 11 04:10:34 compute-0 nova_compute[259850]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:10:34 compute-0 nova_compute[259850]: 2025-10-11 04:10:34.463 2 DEBUG nova.virt.libvirt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Received event <DeviceRemovedEvent: 1760155834.4619942, 3d2a66c2-9869-4f0a-a27f-db3a14d43466 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 11 04:10:34 compute-0 nova_compute[259850]: 2025-10-11 04:10:34.465 2 DEBUG nova.virt.libvirt.driver [None req-98568207-9a95-4b02-8dbf-683ab8a8d591 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 3d2a66c2-9869-4f0a-a27f-db3a14d43466 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 11 04:10:34 compute-0 nova_compute[259850]: 2025-10-11 04:10:34.468 2 INFO nova.virt.libvirt.driver [None req-98568207-9a95-4b02-8dbf-683ab8a8d591 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Successfully detached device vdb from instance 3d2a66c2-9869-4f0a-a27f-db3a14d43466 from the live domain config.
Oct 11 04:10:34 compute-0 nova_compute[259850]: 2025-10-11 04:10:34.640 2 DEBUG nova.objects.instance [None req-98568207-9a95-4b02-8dbf-683ab8a8d591 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lazy-loading 'flavor' on Instance uuid 3d2a66c2-9869-4f0a-a27f-db3a14d43466 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:10:34 compute-0 nova_compute[259850]: 2025-10-11 04:10:34.696 2 DEBUG oslo_concurrency.lockutils [None req-98568207-9a95-4b02-8dbf-683ab8a8d591 fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:10:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:10:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Oct 11 04:10:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Oct 11 04:10:34 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Oct 11 04:10:34 compute-0 ceph-mon[74273]: pgmap v1296: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 5.9 MiB/s rd, 7.5 KiB/s wr, 433 op/s
Oct 11 04:10:34 compute-0 ceph-mon[74273]: osdmap e280: 3 total, 3 up, 3 in
Oct 11 04:10:35 compute-0 podman[284391]: 2025-10-11 04:10:35.387973809 +0000 UTC m=+0.090387724 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 11 04:10:35 compute-0 podman[284392]: 2025-10-11 04:10:35.388198786 +0000 UTC m=+0.086731972 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3)
Oct 11 04:10:35 compute-0 nova_compute[259850]: 2025-10-11 04:10:35.443 2 DEBUG nova.network.neutron [req-e9860c41-2514-4b0e-95ef-00f979a2ba62 req-2b782ba0-7b54-4627-855d-98ac96aa5b69 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Updated VIF entry in instance network info cache for port f962d69d-912c-4bbe-8b62-be8b3ee5a694. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:10:35 compute-0 nova_compute[259850]: 2025-10-11 04:10:35.443 2 DEBUG nova.network.neutron [req-e9860c41-2514-4b0e-95ef-00f979a2ba62 req-2b782ba0-7b54-4627-855d-98ac96aa5b69 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Updating instance_info_cache with network_info: [{"id": "f962d69d-912c-4bbe-8b62-be8b3ee5a694", "address": "fa:16:3e:60:b7:e2", "network": {"id": "ff7747e6-8cbd-486d-acdd-c112ee8b4480", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1545977489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "090ce8762cd840ba8eedda774a81c19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf962d69d-91", "ovs_interfaceid": "f962d69d-912c-4bbe-8b62-be8b3ee5a694", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:10:35 compute-0 nova_compute[259850]: 2025-10-11 04:10:35.468 2 DEBUG oslo_concurrency.lockutils [req-e9860c41-2514-4b0e-95ef-00f979a2ba62 req-2b782ba0-7b54-4627-855d-98ac96aa5b69 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-b41a3cc1-8f24-43ac-981f-ecd099bcc7ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:10:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 5.0 MiB/s rd, 6.4 KiB/s wr, 370 op/s
Oct 11 04:10:35 compute-0 nova_compute[259850]: 2025-10-11 04:10:35.691 2 DEBUG oslo_concurrency.lockutils [None req-4278a209-9e99-405a-8ef7-2732f6bb6dcf fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Acquiring lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:10:35 compute-0 nova_compute[259850]: 2025-10-11 04:10:35.691 2 DEBUG oslo_concurrency.lockutils [None req-4278a209-9e99-405a-8ef7-2732f6bb6dcf fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:10:35 compute-0 nova_compute[259850]: 2025-10-11 04:10:35.691 2 DEBUG oslo_concurrency.lockutils [None req-4278a209-9e99-405a-8ef7-2732f6bb6dcf fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Acquiring lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:10:35 compute-0 nova_compute[259850]: 2025-10-11 04:10:35.692 2 DEBUG oslo_concurrency.lockutils [None req-4278a209-9e99-405a-8ef7-2732f6bb6dcf fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:10:35 compute-0 nova_compute[259850]: 2025-10-11 04:10:35.692 2 DEBUG oslo_concurrency.lockutils [None req-4278a209-9e99-405a-8ef7-2732f6bb6dcf fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:10:35 compute-0 nova_compute[259850]: 2025-10-11 04:10:35.693 2 INFO nova.compute.manager [None req-4278a209-9e99-405a-8ef7-2732f6bb6dcf fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Terminating instance
Oct 11 04:10:35 compute-0 nova_compute[259850]: 2025-10-11 04:10:35.694 2 DEBUG nova.compute.manager [None req-4278a209-9e99-405a-8ef7-2732f6bb6dcf fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:10:35 compute-0 kernel: tap8701ce4d-ad (unregistering): left promiscuous mode
Oct 11 04:10:35 compute-0 NetworkManager[44920]: <info>  [1760155835.7630] device (tap8701ce4d-ad): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:10:35 compute-0 nova_compute[259850]: 2025-10-11 04:10:35.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:35 compute-0 ovn_controller[152025]: 2025-10-11T04:10:35Z|00135|binding|INFO|Releasing lport 8701ce4d-adc7-4369-9f76-cf6dea290bff from this chassis (sb_readonly=0)
Oct 11 04:10:35 compute-0 ovn_controller[152025]: 2025-10-11T04:10:35Z|00136|binding|INFO|Setting lport 8701ce4d-adc7-4369-9f76-cf6dea290bff down in Southbound
Oct 11 04:10:35 compute-0 ovn_controller[152025]: 2025-10-11T04:10:35Z|00137|binding|INFO|Removing iface tap8701ce4d-ad ovn-installed in OVS
Oct 11 04:10:35 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:35.788 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:0f:e2 10.100.0.9'], port_security=['fa:16:3e:5a:0f:e2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '3d2a66c2-9869-4f0a-a27f-db3a14d43466', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8cb72c94-41d7-40be-8ef7-9351e1b06d48', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c04e56df694d49fdbb22c39773dfc036', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2736498f-8594-48ae-b459-bb8ac5ce5d5a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.219'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3458ebb-1a6a-4cc8-a158-43868faee92e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=8701ce4d-adc7-4369-9f76-cf6dea290bff) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:10:35 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:35.790 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 8701ce4d-adc7-4369-9f76-cf6dea290bff in datapath 8cb72c94-41d7-40be-8ef7-9351e1b06d48 unbound from our chassis
Oct 11 04:10:35 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:35.793 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8cb72c94-41d7-40be-8ef7-9351e1b06d48, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:10:35 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:35.795 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[72a56d88-892c-4f9c-ae06-c042a3d642f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:35 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:35.796 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48 namespace which is not needed anymore
Oct 11 04:10:35 compute-0 nova_compute[259850]: 2025-10-11 04:10:35.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:35 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Oct 11 04:10:35 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 15.217s CPU time.
Oct 11 04:10:35 compute-0 systemd-machined[214869]: Machine qemu-12-instance-0000000c terminated.
Oct 11 04:10:35 compute-0 neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48[283484]: [NOTICE]   (283502) : haproxy version is 2.8.14-c23fe91
Oct 11 04:10:35 compute-0 neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48[283484]: [NOTICE]   (283502) : path to executable is /usr/sbin/haproxy
Oct 11 04:10:35 compute-0 neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48[283484]: [WARNING]  (283502) : Exiting Master process...
Oct 11 04:10:35 compute-0 neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48[283484]: [WARNING]  (283502) : Exiting Master process...
Oct 11 04:10:35 compute-0 neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48[283484]: [ALERT]    (283502) : Current worker (283504) exited with code 143 (Terminated)
Oct 11 04:10:35 compute-0 neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48[283484]: [WARNING]  (283502) : All workers exited. Exiting... (0)
Oct 11 04:10:35 compute-0 systemd[1]: libpod-e0c60e46670e894b12b03d45b22a2e2d0c93661734ef5e673d0a779afd6d2e5c.scope: Deactivated successfully.
Oct 11 04:10:35 compute-0 nova_compute[259850]: 2025-10-11 04:10:35.934 2 INFO nova.virt.libvirt.driver [-] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Instance destroyed successfully.
Oct 11 04:10:35 compute-0 nova_compute[259850]: 2025-10-11 04:10:35.936 2 DEBUG nova.objects.instance [None req-4278a209-9e99-405a-8ef7-2732f6bb6dcf fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lazy-loading 'resources' on Instance uuid 3d2a66c2-9869-4f0a-a27f-db3a14d43466 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:10:35 compute-0 podman[284453]: 2025-10-11 04:10:35.937772517 +0000 UTC m=+0.046801948 container died e0c60e46670e894b12b03d45b22a2e2d0c93661734ef5e673d0a779afd6d2e5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 11 04:10:35 compute-0 nova_compute[259850]: 2025-10-11 04:10:35.957 2 DEBUG nova.virt.libvirt.vif [None req-4278a209-9e99-405a-8ef7-2732f6bb6dcf fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:09:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-410999225',display_name='tempest-VolumesBackupsTest-instance-410999225',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-410999225',id=12,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ1KmH+kJIZMj9qOyrlWxwz+pGXMpc0KLGkIVUjjdWibG6RiDcTS46lNKLmnSn97+2MdyOF62BS3v/NOEEFhaG5BZiPMST03NMPah7Zm6F4yzBBh5fuEr3GtdkCvCwfzbQ==',key_name='tempest-keypair-1804503314',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:09:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c04e56df694d49fdbb22c39773dfc036',ramdisk_id='',reservation_id='r-z41xhvc5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-722883341',owner_user_name='tempest-VolumesBackupsTest-722883341-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:09:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fc44058c9b8d47d1907c195c404898c8',uuid=3d2a66c2-9869-4f0a-a27f-db3a14d43466,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8701ce4d-adc7-4369-9f76-cf6dea290bff", "address": "fa:16:3e:5a:0f:e2", "network": {"id": "8cb72c94-41d7-40be-8ef7-9351e1b06d48", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1596968619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c04e56df694d49fdbb22c39773dfc036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8701ce4d-ad", "ovs_interfaceid": "8701ce4d-adc7-4369-9f76-cf6dea290bff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:10:35 compute-0 nova_compute[259850]: 2025-10-11 04:10:35.959 2 DEBUG nova.network.os_vif_util [None req-4278a209-9e99-405a-8ef7-2732f6bb6dcf fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Converting VIF {"id": "8701ce4d-adc7-4369-9f76-cf6dea290bff", "address": "fa:16:3e:5a:0f:e2", "network": {"id": "8cb72c94-41d7-40be-8ef7-9351e1b06d48", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1596968619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c04e56df694d49fdbb22c39773dfc036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8701ce4d-ad", "ovs_interfaceid": "8701ce4d-adc7-4369-9f76-cf6dea290bff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:10:35 compute-0 nova_compute[259850]: 2025-10-11 04:10:35.960 2 DEBUG nova.network.os_vif_util [None req-4278a209-9e99-405a-8ef7-2732f6bb6dcf fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5a:0f:e2,bridge_name='br-int',has_traffic_filtering=True,id=8701ce4d-adc7-4369-9f76-cf6dea290bff,network=Network(8cb72c94-41d7-40be-8ef7-9351e1b06d48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8701ce4d-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:10:35 compute-0 nova_compute[259850]: 2025-10-11 04:10:35.961 2 DEBUG os_vif [None req-4278a209-9e99-405a-8ef7-2732f6bb6dcf fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5a:0f:e2,bridge_name='br-int',has_traffic_filtering=True,id=8701ce4d-adc7-4369-9f76-cf6dea290bff,network=Network(8cb72c94-41d7-40be-8ef7-9351e1b06d48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8701ce4d-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:10:35 compute-0 nova_compute[259850]: 2025-10-11 04:10:35.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:35 compute-0 nova_compute[259850]: 2025-10-11 04:10:35.964 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8701ce4d-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:10:35 compute-0 nova_compute[259850]: 2025-10-11 04:10:35.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:35 compute-0 nova_compute[259850]: 2025-10-11 04:10:35.970 2 INFO os_vif [None req-4278a209-9e99-405a-8ef7-2732f6bb6dcf fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5a:0f:e2,bridge_name='br-int',has_traffic_filtering=True,id=8701ce4d-adc7-4369-9f76-cf6dea290bff,network=Network(8cb72c94-41d7-40be-8ef7-9351e1b06d48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8701ce4d-ad')
Oct 11 04:10:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b6dc132fc2e99bdab9f1bce2eb437599b7505894b0369b7b8333c054fdca669-merged.mount: Deactivated successfully.
Oct 11 04:10:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e0c60e46670e894b12b03d45b22a2e2d0c93661734ef5e673d0a779afd6d2e5c-userdata-shm.mount: Deactivated successfully.
Oct 11 04:10:35 compute-0 podman[284453]: 2025-10-11 04:10:35.993461404 +0000 UTC m=+0.102490875 container cleanup e0c60e46670e894b12b03d45b22a2e2d0c93661734ef5e673d0a779afd6d2e5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 11 04:10:36 compute-0 systemd[1]: libpod-conmon-e0c60e46670e894b12b03d45b22a2e2d0c93661734ef5e673d0a779afd6d2e5c.scope: Deactivated successfully.
Oct 11 04:10:36 compute-0 podman[284510]: 2025-10-11 04:10:36.070668956 +0000 UTC m=+0.047779485 container remove e0c60e46670e894b12b03d45b22a2e2d0c93661734ef5e673d0a779afd6d2e5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 11 04:10:36 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:36.077 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[81916025-b43e-40c0-a695-cb8336cccd0f]: (4, ('Sat Oct 11 04:10:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48 (e0c60e46670e894b12b03d45b22a2e2d0c93661734ef5e673d0a779afd6d2e5c)\ne0c60e46670e894b12b03d45b22a2e2d0c93661734ef5e673d0a779afd6d2e5c\nSat Oct 11 04:10:36 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48 (e0c60e46670e894b12b03d45b22a2e2d0c93661734ef5e673d0a779afd6d2e5c)\ne0c60e46670e894b12b03d45b22a2e2d0c93661734ef5e673d0a779afd6d2e5c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:36 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:36.080 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[4d4018a7-a77f-4abc-89fa-f9dd0873248e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:36 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:36.081 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8cb72c94-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:10:36 compute-0 nova_compute[259850]: 2025-10-11 04:10:36.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:36 compute-0 kernel: tap8cb72c94-40: left promiscuous mode
Oct 11 04:10:36 compute-0 nova_compute[259850]: 2025-10-11 04:10:36.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:36 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:36.089 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[93084462-2f39-481e-b6ef-90cfcc7551a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:36 compute-0 nova_compute[259850]: 2025-10-11 04:10:36.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:36 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:36.129 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[18b80af7-335e-48d2-9d0d-eff76cef91f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:36 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:36.131 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[6fe986ec-dfcc-4467-8038-511aa63b8b54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:36 compute-0 nova_compute[259850]: 2025-10-11 04:10:36.137 2 DEBUG nova.compute.manager [req-56eb67d0-af02-41dc-9c29-9d7b5ec36e1a req-39f3fc72-0a5e-4188-8311-ce3ba829c9d0 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Received event network-vif-unplugged-8701ce4d-adc7-4369-9f76-cf6dea290bff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:10:36 compute-0 nova_compute[259850]: 2025-10-11 04:10:36.138 2 DEBUG oslo_concurrency.lockutils [req-56eb67d0-af02-41dc-9c29-9d7b5ec36e1a req-39f3fc72-0a5e-4188-8311-ce3ba829c9d0 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:10:36 compute-0 nova_compute[259850]: 2025-10-11 04:10:36.139 2 DEBUG oslo_concurrency.lockutils [req-56eb67d0-af02-41dc-9c29-9d7b5ec36e1a req-39f3fc72-0a5e-4188-8311-ce3ba829c9d0 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:10:36 compute-0 nova_compute[259850]: 2025-10-11 04:10:36.140 2 DEBUG oslo_concurrency.lockutils [req-56eb67d0-af02-41dc-9c29-9d7b5ec36e1a req-39f3fc72-0a5e-4188-8311-ce3ba829c9d0 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:10:36 compute-0 nova_compute[259850]: 2025-10-11 04:10:36.140 2 DEBUG nova.compute.manager [req-56eb67d0-af02-41dc-9c29-9d7b5ec36e1a req-39f3fc72-0a5e-4188-8311-ce3ba829c9d0 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] No waiting events found dispatching network-vif-unplugged-8701ce4d-adc7-4369-9f76-cf6dea290bff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:10:36 compute-0 nova_compute[259850]: 2025-10-11 04:10:36.141 2 DEBUG nova.compute.manager [req-56eb67d0-af02-41dc-9c29-9d7b5ec36e1a req-39f3fc72-0a5e-4188-8311-ce3ba829c9d0 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Received event network-vif-unplugged-8701ce4d-adc7-4369-9f76-cf6dea290bff for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 04:10:36 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:36.154 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[68f778e4-ffd3-4984-af4e-25f4752b6232]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 416434, 'reachable_time': 23350, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284528, 'error': None, 'target': 'ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:36 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:36.157 162015 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8cb72c94-41d7-40be-8ef7-9351e1b06d48 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 04:10:36 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:36.158 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[291a3d62-5cd3-4fa3-99ba-1f84ba473f17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:36 compute-0 systemd[1]: run-netns-ovnmeta\x2d8cb72c94\x2d41d7\x2d40be\x2d8ef7\x2d9351e1b06d48.mount: Deactivated successfully.
Oct 11 04:10:36 compute-0 nova_compute[259850]: 2025-10-11 04:10:36.415 2 INFO nova.virt.libvirt.driver [None req-4278a209-9e99-405a-8ef7-2732f6bb6dcf fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Deleting instance files /var/lib/nova/instances/3d2a66c2-9869-4f0a-a27f-db3a14d43466_del
Oct 11 04:10:36 compute-0 nova_compute[259850]: 2025-10-11 04:10:36.416 2 INFO nova.virt.libvirt.driver [None req-4278a209-9e99-405a-8ef7-2732f6bb6dcf fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Deletion of /var/lib/nova/instances/3d2a66c2-9869-4f0a-a27f-db3a14d43466_del complete
Oct 11 04:10:36 compute-0 nova_compute[259850]: 2025-10-11 04:10:36.471 2 INFO nova.compute.manager [None req-4278a209-9e99-405a-8ef7-2732f6bb6dcf fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 11 04:10:36 compute-0 nova_compute[259850]: 2025-10-11 04:10:36.473 2 DEBUG oslo.service.loopingcall [None req-4278a209-9e99-405a-8ef7-2732f6bb6dcf fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:10:36 compute-0 nova_compute[259850]: 2025-10-11 04:10:36.474 2 DEBUG nova.compute.manager [-] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:10:36 compute-0 nova_compute[259850]: 2025-10-11 04:10:36.474 2 DEBUG nova.network.neutron [-] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:10:36 compute-0 ceph-mon[74273]: pgmap v1298: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 5.0 MiB/s rd, 6.4 KiB/s wr, 370 op/s
Oct 11 04:10:37 compute-0 nova_compute[259850]: 2025-10-11 04:10:37.513 2 DEBUG nova.network.neutron [-] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:10:37 compute-0 nova_compute[259850]: 2025-10-11 04:10:37.529 2 INFO nova.compute.manager [-] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Took 1.06 seconds to deallocate network for instance.
Oct 11 04:10:37 compute-0 nova_compute[259850]: 2025-10-11 04:10:37.580 2 DEBUG oslo_concurrency.lockutils [None req-4278a209-9e99-405a-8ef7-2732f6bb6dcf fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:10:37 compute-0 nova_compute[259850]: 2025-10-11 04:10:37.581 2 DEBUG oslo_concurrency.lockutils [None req-4278a209-9e99-405a-8ef7-2732f6bb6dcf fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:10:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.0 KiB/s wr, 289 op/s
Oct 11 04:10:37 compute-0 nova_compute[259850]: 2025-10-11 04:10:37.655 2 DEBUG oslo_concurrency.processutils [None req-4278a209-9e99-405a-8ef7-2732f6bb6dcf fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:10:37 compute-0 nova_compute[259850]: 2025-10-11 04:10:37.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Oct 11 04:10:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Oct 11 04:10:37 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Oct 11 04:10:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:10:38 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2885424011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:10:38 compute-0 nova_compute[259850]: 2025-10-11 04:10:38.156 2 DEBUG oslo_concurrency.processutils [None req-4278a209-9e99-405a-8ef7-2732f6bb6dcf fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:10:38 compute-0 nova_compute[259850]: 2025-10-11 04:10:38.164 2 DEBUG nova.compute.provider_tree [None req-4278a209-9e99-405a-8ef7-2732f6bb6dcf fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:10:38 compute-0 nova_compute[259850]: 2025-10-11 04:10:38.183 2 DEBUG nova.scheduler.client.report [None req-4278a209-9e99-405a-8ef7-2732f6bb6dcf fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:10:38 compute-0 nova_compute[259850]: 2025-10-11 04:10:38.211 2 DEBUG oslo_concurrency.lockutils [None req-4278a209-9e99-405a-8ef7-2732f6bb6dcf fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:10:38 compute-0 nova_compute[259850]: 2025-10-11 04:10:38.229 2 DEBUG nova.compute.manager [req-4efe1d5c-467c-4482-9350-b3e98c67b05b req-67cd34fd-2edf-483f-afe8-b6a9b676a177 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Received event network-vif-plugged-8701ce4d-adc7-4369-9f76-cf6dea290bff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:10:38 compute-0 nova_compute[259850]: 2025-10-11 04:10:38.230 2 DEBUG oslo_concurrency.lockutils [req-4efe1d5c-467c-4482-9350-b3e98c67b05b req-67cd34fd-2edf-483f-afe8-b6a9b676a177 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:10:38 compute-0 nova_compute[259850]: 2025-10-11 04:10:38.230 2 DEBUG oslo_concurrency.lockutils [req-4efe1d5c-467c-4482-9350-b3e98c67b05b req-67cd34fd-2edf-483f-afe8-b6a9b676a177 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:10:38 compute-0 nova_compute[259850]: 2025-10-11 04:10:38.231 2 DEBUG oslo_concurrency.lockutils [req-4efe1d5c-467c-4482-9350-b3e98c67b05b req-67cd34fd-2edf-483f-afe8-b6a9b676a177 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:10:38 compute-0 nova_compute[259850]: 2025-10-11 04:10:38.231 2 DEBUG nova.compute.manager [req-4efe1d5c-467c-4482-9350-b3e98c67b05b req-67cd34fd-2edf-483f-afe8-b6a9b676a177 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] No waiting events found dispatching network-vif-plugged-8701ce4d-adc7-4369-9f76-cf6dea290bff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:10:38 compute-0 nova_compute[259850]: 2025-10-11 04:10:38.232 2 WARNING nova.compute.manager [req-4efe1d5c-467c-4482-9350-b3e98c67b05b req-67cd34fd-2edf-483f-afe8-b6a9b676a177 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Received unexpected event network-vif-plugged-8701ce4d-adc7-4369-9f76-cf6dea290bff for instance with vm_state deleted and task_state None.
Oct 11 04:10:38 compute-0 nova_compute[259850]: 2025-10-11 04:10:38.232 2 DEBUG nova.compute.manager [req-4efe1d5c-467c-4482-9350-b3e98c67b05b req-67cd34fd-2edf-483f-afe8-b6a9b676a177 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Received event network-vif-deleted-8701ce4d-adc7-4369-9f76-cf6dea290bff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:10:38 compute-0 nova_compute[259850]: 2025-10-11 04:10:38.237 2 INFO nova.scheduler.client.report [None req-4278a209-9e99-405a-8ef7-2732f6bb6dcf fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Deleted allocations for instance 3d2a66c2-9869-4f0a-a27f-db3a14d43466
Oct 11 04:10:38 compute-0 nova_compute[259850]: 2025-10-11 04:10:38.319 2 DEBUG oslo_concurrency.lockutils [None req-4278a209-9e99-405a-8ef7-2732f6bb6dcf fc44058c9b8d47d1907c195c404898c8 c04e56df694d49fdbb22c39773dfc036 - - default default] Lock "3d2a66c2-9869-4f0a-a27f-db3a14d43466" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:10:38 compute-0 ceph-mon[74273]: pgmap v1299: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.0 KiB/s wr, 289 op/s
Oct 11 04:10:38 compute-0 ceph-mon[74273]: osdmap e281: 3 total, 3 up, 3 in
Oct 11 04:10:38 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2885424011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:10:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 1.4 MiB/s rd, 6.6 KiB/s wr, 195 op/s
Oct 11 04:10:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:10:40 compute-0 ceph-osd[89722]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 11 04:10:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Oct 11 04:10:40 compute-0 nova_compute[259850]: 2025-10-11 04:10:40.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Oct 11 04:10:40 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Oct 11 04:10:40 compute-0 ceph-mon[74273]: pgmap v1301: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 1.4 MiB/s rd, 6.6 KiB/s wr, 195 op/s
Oct 11 04:10:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:10:41 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2877405450' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:10:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:10:41 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2877405450' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:10:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 59 KiB/s rd, 4.3 KiB/s wr, 85 op/s
Oct 11 04:10:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Oct 11 04:10:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Oct 11 04:10:41 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Oct 11 04:10:42 compute-0 ceph-mon[74273]: osdmap e282: 3 total, 3 up, 3 in
Oct 11 04:10:42 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2877405450' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:10:42 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2877405450' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:10:42 compute-0 ovn_controller[152025]: 2025-10-11T04:10:42Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:60:b7:e2 10.100.0.8
Oct 11 04:10:42 compute-0 ovn_controller[152025]: 2025-10-11T04:10:42Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:60:b7:e2 10.100.0.8
Oct 11 04:10:42 compute-0 sudo[284552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:10:42 compute-0 sudo[284552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:10:42 compute-0 sudo[284552]: pam_unix(sudo:session): session closed for user root
Oct 11 04:10:42 compute-0 nova_compute[259850]: 2025-10-11 04:10:42.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:42 compute-0 sudo[284577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:10:42 compute-0 sudo[284577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:10:42 compute-0 sudo[284577]: pam_unix(sudo:session): session closed for user root
Oct 11 04:10:42 compute-0 sudo[284602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:10:42 compute-0 sudo[284602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:10:42 compute-0 sudo[284602]: pam_unix(sudo:session): session closed for user root
Oct 11 04:10:43 compute-0 sudo[284627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 04:10:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Oct 11 04:10:43 compute-0 sudo[284627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:10:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Oct 11 04:10:43 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Oct 11 04:10:43 compute-0 ceph-mon[74273]: pgmap v1303: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 59 KiB/s rd, 4.3 KiB/s wr, 85 op/s
Oct 11 04:10:43 compute-0 ceph-mon[74273]: osdmap e283: 3 total, 3 up, 3 in
Oct 11 04:10:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:10:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4107881444' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:10:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:10:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4107881444' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:10:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 967 KiB/s rd, 4.5 MiB/s wr, 356 op/s
Oct 11 04:10:43 compute-0 podman[284728]: 2025-10-11 04:10:43.605513171 +0000 UTC m=+0.076271097 container exec 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Oct 11 04:10:43 compute-0 podman[284728]: 2025-10-11 04:10:43.726632159 +0000 UTC m=+0.197390015 container exec_died 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 04:10:44 compute-0 ceph-mon[74273]: osdmap e284: 3 total, 3 up, 3 in
Oct 11 04:10:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4107881444' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:10:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4107881444' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:10:44 compute-0 sudo[284627]: pam_unix(sudo:session): session closed for user root
Oct 11 04:10:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:10:44 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:10:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:10:44 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:10:44 compute-0 sudo[284887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:10:44 compute-0 sudo[284887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:10:44 compute-0 sudo[284887]: pam_unix(sudo:session): session closed for user root
Oct 11 04:10:44 compute-0 sudo[284912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:10:44 compute-0 sudo[284912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:10:44 compute-0 sudo[284912]: pam_unix(sudo:session): session closed for user root
Oct 11 04:10:44 compute-0 sudo[284937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:10:44 compute-0 sudo[284937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:10:44 compute-0 sudo[284937]: pam_unix(sudo:session): session closed for user root
Oct 11 04:10:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:10:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Oct 11 04:10:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Oct 11 04:10:44 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Oct 11 04:10:44 compute-0 sudo[284962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:10:44 compute-0 sudo[284962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:10:45 compute-0 ceph-mon[74273]: pgmap v1306: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 967 KiB/s rd, 4.5 MiB/s wr, 356 op/s
Oct 11 04:10:45 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:10:45 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:10:45 compute-0 ceph-mon[74273]: osdmap e285: 3 total, 3 up, 3 in
Oct 11 04:10:45 compute-0 sudo[284962]: pam_unix(sudo:session): session closed for user root
Oct 11 04:10:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:10:45 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:10:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:10:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:10:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:10:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:10:45 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 3d659c9b-90b5-4d54-8ebd-9e1c10440015 does not exist
Oct 11 04:10:45 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev b7880254-8a72-4b32-b92f-1e32cabd6a5a does not exist
Oct 11 04:10:45 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 7d032ca5-467a-49fb-bcdb-74fac7b6d2b5 does not exist
Oct 11 04:10:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:10:45 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:10:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:10:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:10:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:10:45 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:10:45 compute-0 sudo[285018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:10:45 compute-0 sudo[285018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:10:45 compute-0 sudo[285018]: pam_unix(sudo:session): session closed for user root
Oct 11 04:10:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 1.1 MiB/s rd, 5.5 MiB/s wr, 390 op/s
Oct 11 04:10:45 compute-0 sudo[285043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:10:45 compute-0 sudo[285043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:10:45 compute-0 sudo[285043]: pam_unix(sudo:session): session closed for user root
Oct 11 04:10:45 compute-0 sudo[285068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:10:45 compute-0 sudo[285068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:10:45 compute-0 sudo[285068]: pam_unix(sudo:session): session closed for user root
Oct 11 04:10:45 compute-0 sudo[285093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:10:45 compute-0 sudo[285093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:10:45 compute-0 nova_compute[259850]: 2025-10-11 04:10:45.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:10:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:10:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:10:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:10:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:10:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:10:46 compute-0 podman[285160]: 2025-10-11 04:10:46.223243532 +0000 UTC m=+0.042046154 container create 3cfb2122481f4ee289ab0d4618d30009efc9693a259bdd45a49147d6d3ee0e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_murdock, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 04:10:46 compute-0 systemd[1]: Started libpod-conmon-3cfb2122481f4ee289ab0d4618d30009efc9693a259bdd45a49147d6d3ee0e30.scope.
Oct 11 04:10:46 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:10:46 compute-0 podman[285160]: 2025-10-11 04:10:46.298933502 +0000 UTC m=+0.117736104 container init 3cfb2122481f4ee289ab0d4618d30009efc9693a259bdd45a49147d6d3ee0e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_murdock, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:10:46 compute-0 podman[285160]: 2025-10-11 04:10:46.204355001 +0000 UTC m=+0.023157613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:10:46 compute-0 podman[285160]: 2025-10-11 04:10:46.305065964 +0000 UTC m=+0.123868546 container start 3cfb2122481f4ee289ab0d4618d30009efc9693a259bdd45a49147d6d3ee0e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 04:10:46 compute-0 podman[285160]: 2025-10-11 04:10:46.308487661 +0000 UTC m=+0.127290273 container attach 3cfb2122481f4ee289ab0d4618d30009efc9693a259bdd45a49147d6d3ee0e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:10:46 compute-0 fervent_murdock[285176]: 167 167
Oct 11 04:10:46 compute-0 systemd[1]: libpod-3cfb2122481f4ee289ab0d4618d30009efc9693a259bdd45a49147d6d3ee0e30.scope: Deactivated successfully.
Oct 11 04:10:46 compute-0 podman[285160]: 2025-10-11 04:10:46.311750372 +0000 UTC m=+0.130552964 container died 3cfb2122481f4ee289ab0d4618d30009efc9693a259bdd45a49147d6d3ee0e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 04:10:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e80213f03a2352ac4eff0eca1b92e92d40ab54de6f903f5ee155464c4f5d2dc-merged.mount: Deactivated successfully.
Oct 11 04:10:46 compute-0 podman[285160]: 2025-10-11 04:10:46.350909384 +0000 UTC m=+0.169711986 container remove 3cfb2122481f4ee289ab0d4618d30009efc9693a259bdd45a49147d6d3ee0e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:10:46 compute-0 systemd[1]: libpod-conmon-3cfb2122481f4ee289ab0d4618d30009efc9693a259bdd45a49147d6d3ee0e30.scope: Deactivated successfully.
Oct 11 04:10:46 compute-0 podman[285199]: 2025-10-11 04:10:46.57611807 +0000 UTC m=+0.077669816 container create ad8038dccdc4583e1f4439fafb8e447030961566b9cc67e6fa12922f262afd14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_faraday, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 04:10:46 compute-0 systemd[1]: Started libpod-conmon-ad8038dccdc4583e1f4439fafb8e447030961566b9cc67e6fa12922f262afd14.scope.
Oct 11 04:10:46 compute-0 podman[285199]: 2025-10-11 04:10:46.547378321 +0000 UTC m=+0.048930117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:10:46 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ffaeacb1018e5c3f2d541fce72f6b44254e05acaa011f3eb02e91a55369f7b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ffaeacb1018e5c3f2d541fce72f6b44254e05acaa011f3eb02e91a55369f7b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ffaeacb1018e5c3f2d541fce72f6b44254e05acaa011f3eb02e91a55369f7b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ffaeacb1018e5c3f2d541fce72f6b44254e05acaa011f3eb02e91a55369f7b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ffaeacb1018e5c3f2d541fce72f6b44254e05acaa011f3eb02e91a55369f7b9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:46 compute-0 podman[285199]: 2025-10-11 04:10:46.669390174 +0000 UTC m=+0.170941880 container init ad8038dccdc4583e1f4439fafb8e447030961566b9cc67e6fa12922f262afd14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_faraday, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:10:46 compute-0 podman[285199]: 2025-10-11 04:10:46.682216805 +0000 UTC m=+0.183768551 container start ad8038dccdc4583e1f4439fafb8e447030961566b9cc67e6fa12922f262afd14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_faraday, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 04:10:46 compute-0 podman[285199]: 2025-10-11 04:10:46.686494715 +0000 UTC m=+0.188046421 container attach ad8038dccdc4583e1f4439fafb8e447030961566b9cc67e6fa12922f262afd14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 04:10:47 compute-0 ceph-mon[74273]: pgmap v1308: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 1.1 MiB/s rd, 5.5 MiB/s wr, 390 op/s
Oct 11 04:10:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 886 KiB/s rd, 4.3 MiB/s wr, 300 op/s
Oct 11 04:10:47 compute-0 nova_compute[259850]: 2025-10-11 04:10:47.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:47 compute-0 tender_faraday[285216]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:10:47 compute-0 tender_faraday[285216]: --> relative data size: 1.0
Oct 11 04:10:47 compute-0 tender_faraday[285216]: --> All data devices are unavailable
Oct 11 04:10:47 compute-0 systemd[1]: libpod-ad8038dccdc4583e1f4439fafb8e447030961566b9cc67e6fa12922f262afd14.scope: Deactivated successfully.
Oct 11 04:10:47 compute-0 systemd[1]: libpod-ad8038dccdc4583e1f4439fafb8e447030961566b9cc67e6fa12922f262afd14.scope: Consumed 1.103s CPU time.
Oct 11 04:10:47 compute-0 podman[285199]: 2025-10-11 04:10:47.83822628 +0000 UTC m=+1.339778076 container died ad8038dccdc4583e1f4439fafb8e447030961566b9cc67e6fa12922f262afd14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_faraday, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:10:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ffaeacb1018e5c3f2d541fce72f6b44254e05acaa011f3eb02e91a55369f7b9-merged.mount: Deactivated successfully.
Oct 11 04:10:47 compute-0 podman[285199]: 2025-10-11 04:10:47.919232639 +0000 UTC m=+1.420784375 container remove ad8038dccdc4583e1f4439fafb8e447030961566b9cc67e6fa12922f262afd14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:10:47 compute-0 systemd[1]: libpod-conmon-ad8038dccdc4583e1f4439fafb8e447030961566b9cc67e6fa12922f262afd14.scope: Deactivated successfully.
Oct 11 04:10:47 compute-0 sudo[285093]: pam_unix(sudo:session): session closed for user root
Oct 11 04:10:48 compute-0 podman[285246]: 2025-10-11 04:10:48.02732882 +0000 UTC m=+0.137731676 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 04:10:48 compute-0 sudo[285278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:10:48 compute-0 sudo[285278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:10:48 compute-0 sudo[285278]: pam_unix(sudo:session): session closed for user root
Oct 11 04:10:48 compute-0 sudo[285309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:10:48 compute-0 sudo[285309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:10:48 compute-0 sudo[285309]: pam_unix(sudo:session): session closed for user root
Oct 11 04:10:48 compute-0 sudo[285334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:10:48 compute-0 sudo[285334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:10:48 compute-0 sudo[285334]: pam_unix(sudo:session): session closed for user root
Oct 11 04:10:48 compute-0 sudo[285359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 04:10:48 compute-0 sudo[285359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:10:48 compute-0 ovn_controller[152025]: 2025-10-11T04:10:48Z|00138|binding|INFO|Releasing lport 223c96f4-862f-4536-9399-a30ef7ed1a99 from this chassis (sb_readonly=0)
Oct 11 04:10:48 compute-0 nova_compute[259850]: 2025-10-11 04:10:48.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:48 compute-0 podman[285425]: 2025-10-11 04:10:48.812599924 +0000 UTC m=+0.052730234 container create 8faadf2adff715ea3d270dd9972dbf6090c49c29364727d2ede198826b1a08b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chebyshev, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 04:10:48 compute-0 systemd[1]: Started libpod-conmon-8faadf2adff715ea3d270dd9972dbf6090c49c29364727d2ede198826b1a08b4.scope.
Oct 11 04:10:48 compute-0 podman[285425]: 2025-10-11 04:10:48.786097128 +0000 UTC m=+0.026227478 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:10:48 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:10:48 compute-0 podman[285425]: 2025-10-11 04:10:48.92796654 +0000 UTC m=+0.168096890 container init 8faadf2adff715ea3d270dd9972dbf6090c49c29364727d2ede198826b1a08b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chebyshev, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:10:48 compute-0 podman[285425]: 2025-10-11 04:10:48.937015455 +0000 UTC m=+0.177145735 container start 8faadf2adff715ea3d270dd9972dbf6090c49c29364727d2ede198826b1a08b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chebyshev, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 04:10:48 compute-0 podman[285425]: 2025-10-11 04:10:48.941072429 +0000 UTC m=+0.181202739 container attach 8faadf2adff715ea3d270dd9972dbf6090c49c29364727d2ede198826b1a08b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:10:48 compute-0 nervous_chebyshev[285443]: 167 167
Oct 11 04:10:48 compute-0 systemd[1]: libpod-8faadf2adff715ea3d270dd9972dbf6090c49c29364727d2ede198826b1a08b4.scope: Deactivated successfully.
Oct 11 04:10:48 compute-0 conmon[285443]: conmon 8faadf2adff715ea3d27 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8faadf2adff715ea3d270dd9972dbf6090c49c29364727d2ede198826b1a08b4.scope/container/memory.events
Oct 11 04:10:48 compute-0 podman[285425]: 2025-10-11 04:10:48.947931232 +0000 UTC m=+0.188061532 container died 8faadf2adff715ea3d270dd9972dbf6090c49c29364727d2ede198826b1a08b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chebyshev, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:10:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-b07f188479872085f0f9a6905c217c196e5fee9d94d20c729945ca443098f1bb-merged.mount: Deactivated successfully.
Oct 11 04:10:49 compute-0 podman[285425]: 2025-10-11 04:10:49.008418124 +0000 UTC m=+0.248548434 container remove 8faadf2adff715ea3d270dd9972dbf6090c49c29364727d2ede198826b1a08b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:10:49 compute-0 systemd[1]: libpod-conmon-8faadf2adff715ea3d270dd9972dbf6090c49c29364727d2ede198826b1a08b4.scope: Deactivated successfully.
Oct 11 04:10:49 compute-0 ceph-mon[74273]: pgmap v1309: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 886 KiB/s rd, 4.3 MiB/s wr, 300 op/s
Oct 11 04:10:49 compute-0 podman[285470]: 2025-10-11 04:10:49.246801831 +0000 UTC m=+0.071775951 container create 667f978e3e50d581692431b0baea7222c86ff4c3ecfadbdb17fcc28047301e7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 04:10:49 compute-0 systemd[1]: Started libpod-conmon-667f978e3e50d581692431b0baea7222c86ff4c3ecfadbdb17fcc28047301e7b.scope.
Oct 11 04:10:49 compute-0 podman[285470]: 2025-10-11 04:10:49.208560905 +0000 UTC m=+0.033535095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:10:49 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:10:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd5d8faf312eafd31e2dcdfaef3e059e2ddb4870a6c70354f54ac1a30a2ede79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd5d8faf312eafd31e2dcdfaef3e059e2ddb4870a6c70354f54ac1a30a2ede79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd5d8faf312eafd31e2dcdfaef3e059e2ddb4870a6c70354f54ac1a30a2ede79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd5d8faf312eafd31e2dcdfaef3e059e2ddb4870a6c70354f54ac1a30a2ede79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:49 compute-0 podman[285470]: 2025-10-11 04:10:49.348463301 +0000 UTC m=+0.173437401 container init 667f978e3e50d581692431b0baea7222c86ff4c3ecfadbdb17fcc28047301e7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 04:10:49 compute-0 podman[285470]: 2025-10-11 04:10:49.366316103 +0000 UTC m=+0.191290233 container start 667f978e3e50d581692431b0baea7222c86ff4c3ecfadbdb17fcc28047301e7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:10:49 compute-0 podman[285470]: 2025-10-11 04:10:49.370306885 +0000 UTC m=+0.195280975 container attach 667f978e3e50d581692431b0baea7222c86ff4c3ecfadbdb17fcc28047301e7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_swirles, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 04:10:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 731 KiB/s rd, 3.4 MiB/s wr, 280 op/s
Oct 11 04:10:49 compute-0 nova_compute[259850]: 2025-10-11 04:10:49.630 2 DEBUG oslo_concurrency.lockutils [None req-4de836f0-db2d-4d3e-8eff-154ff12e41ba 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Acquiring lock "b41a3cc1-8f24-43ac-981f-ecd099bcc7ce" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:10:49 compute-0 nova_compute[259850]: 2025-10-11 04:10:49.632 2 DEBUG oslo_concurrency.lockutils [None req-4de836f0-db2d-4d3e-8eff-154ff12e41ba 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Lock "b41a3cc1-8f24-43ac-981f-ecd099bcc7ce" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:10:49 compute-0 nova_compute[259850]: 2025-10-11 04:10:49.632 2 DEBUG oslo_concurrency.lockutils [None req-4de836f0-db2d-4d3e-8eff-154ff12e41ba 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Acquiring lock "b41a3cc1-8f24-43ac-981f-ecd099bcc7ce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:10:49 compute-0 nova_compute[259850]: 2025-10-11 04:10:49.633 2 DEBUG oslo_concurrency.lockutils [None req-4de836f0-db2d-4d3e-8eff-154ff12e41ba 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Lock "b41a3cc1-8f24-43ac-981f-ecd099bcc7ce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:10:49 compute-0 nova_compute[259850]: 2025-10-11 04:10:49.633 2 DEBUG oslo_concurrency.lockutils [None req-4de836f0-db2d-4d3e-8eff-154ff12e41ba 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Lock "b41a3cc1-8f24-43ac-981f-ecd099bcc7ce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:10:49 compute-0 nova_compute[259850]: 2025-10-11 04:10:49.634 2 INFO nova.compute.manager [None req-4de836f0-db2d-4d3e-8eff-154ff12e41ba 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Terminating instance
Oct 11 04:10:49 compute-0 nova_compute[259850]: 2025-10-11 04:10:49.636 2 DEBUG nova.compute.manager [None req-4de836f0-db2d-4d3e-8eff-154ff12e41ba 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:10:49 compute-0 kernel: tapf962d69d-91 (unregistering): left promiscuous mode
Oct 11 04:10:49 compute-0 NetworkManager[44920]: <info>  [1760155849.6978] device (tapf962d69d-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:10:49 compute-0 nova_compute[259850]: 2025-10-11 04:10:49.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:49 compute-0 ovn_controller[152025]: 2025-10-11T04:10:49Z|00139|binding|INFO|Releasing lport f962d69d-912c-4bbe-8b62-be8b3ee5a694 from this chassis (sb_readonly=0)
Oct 11 04:10:49 compute-0 ovn_controller[152025]: 2025-10-11T04:10:49Z|00140|binding|INFO|Setting lport f962d69d-912c-4bbe-8b62-be8b3ee5a694 down in Southbound
Oct 11 04:10:49 compute-0 ovn_controller[152025]: 2025-10-11T04:10:49Z|00141|binding|INFO|Removing iface tapf962d69d-91 ovn-installed in OVS
Oct 11 04:10:49 compute-0 nova_compute[259850]: 2025-10-11 04:10:49.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:49.718 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:b7:e2 10.100.0.8'], port_security=['fa:16:3e:60:b7:e2 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b41a3cc1-8f24-43ac-981f-ecd099bcc7ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff7747e6-8cbd-486d-acdd-c112ee8b4480', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '090ce8762cd840ba8eedda774a81c19f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1b928e00-d49e-4a6f-8844-4fae3440d01c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.174'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c8a58742-49e2-4099-8758-6944a17d14d0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=f962d69d-912c-4bbe-8b62-be8b3ee5a694) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:10:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:49.721 161902 INFO neutron.agent.ovn.metadata.agent [-] Port f962d69d-912c-4bbe-8b62-be8b3ee5a694 in datapath ff7747e6-8cbd-486d-acdd-c112ee8b4480 unbound from our chassis
Oct 11 04:10:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:49.722 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ff7747e6-8cbd-486d-acdd-c112ee8b4480, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:10:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:49.724 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[31e258f4-3c48-408d-a87c-f7d1ac8e8286]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:49 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:49.726 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ff7747e6-8cbd-486d-acdd-c112ee8b4480 namespace which is not needed anymore
Oct 11 04:10:49 compute-0 nova_compute[259850]: 2025-10-11 04:10:49.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:49 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Oct 11 04:10:49 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 13.773s CPU time.
Oct 11 04:10:49 compute-0 systemd-machined[214869]: Machine qemu-13-instance-0000000d terminated.
Oct 11 04:10:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:10:49 compute-0 nova_compute[259850]: 2025-10-11 04:10:49.865 2 INFO nova.virt.libvirt.driver [-] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Instance destroyed successfully.
Oct 11 04:10:49 compute-0 nova_compute[259850]: 2025-10-11 04:10:49.866 2 DEBUG nova.objects.instance [None req-4de836f0-db2d-4d3e-8eff-154ff12e41ba 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Lazy-loading 'resources' on Instance uuid b41a3cc1-8f24-43ac-981f-ecd099bcc7ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:10:49 compute-0 nova_compute[259850]: 2025-10-11 04:10:49.878 2 DEBUG nova.virt.libvirt.vif [None req-4de836f0-db2d-4d3e-8eff-154ff12e41ba 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:10:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-instance-548339657',display_name='tempest-instance-548339657',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-548339657',id=13,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMQXzvv8j9ChCWwSUhzBAsRlUZyXI+lqj/OoMozxbSHUqJ4/5/GYsaO3pkjm/0yqZjzkCxDD3VnBlE3FJF2ZhD+5SCtyHYSJTgB0HjjoegMlaWoCUK/fwQjWxtBa7z+DkQ==',key_name='tempest-keypair-585022884',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:10:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='090ce8762cd840ba8eedda774a81c19f',ramdisk_id='',reservation_id='r-hng0b1qe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-466402879',owner_user_name='tempest-VolumesBackupsTest-466402879-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:10:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9d2ae7a5228f4cb98ea73ec06ee2dc1e',uuid=b41a3cc1-8f24-43ac-981f-ecd099bcc7ce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f962d69d-912c-4bbe-8b62-be8b3ee5a694", "address": "fa:16:3e:60:b7:e2", "network": {"id": "ff7747e6-8cbd-486d-acdd-c112ee8b4480", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1545977489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "090ce8762cd840ba8eedda774a81c19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf962d69d-91", "ovs_interfaceid": "f962d69d-912c-4bbe-8b62-be8b3ee5a694", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:10:49 compute-0 nova_compute[259850]: 2025-10-11 04:10:49.879 2 DEBUG nova.network.os_vif_util [None req-4de836f0-db2d-4d3e-8eff-154ff12e41ba 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Converting VIF {"id": "f962d69d-912c-4bbe-8b62-be8b3ee5a694", "address": "fa:16:3e:60:b7:e2", "network": {"id": "ff7747e6-8cbd-486d-acdd-c112ee8b4480", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1545977489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "090ce8762cd840ba8eedda774a81c19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf962d69d-91", "ovs_interfaceid": "f962d69d-912c-4bbe-8b62-be8b3ee5a694", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:10:49 compute-0 nova_compute[259850]: 2025-10-11 04:10:49.879 2 DEBUG nova.network.os_vif_util [None req-4de836f0-db2d-4d3e-8eff-154ff12e41ba 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:60:b7:e2,bridge_name='br-int',has_traffic_filtering=True,id=f962d69d-912c-4bbe-8b62-be8b3ee5a694,network=Network(ff7747e6-8cbd-486d-acdd-c112ee8b4480),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf962d69d-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:10:49 compute-0 nova_compute[259850]: 2025-10-11 04:10:49.880 2 DEBUG os_vif [None req-4de836f0-db2d-4d3e-8eff-154ff12e41ba 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:b7:e2,bridge_name='br-int',has_traffic_filtering=True,id=f962d69d-912c-4bbe-8b62-be8b3ee5a694,network=Network(ff7747e6-8cbd-486d-acdd-c112ee8b4480),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf962d69d-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:10:49 compute-0 nova_compute[259850]: 2025-10-11 04:10:49.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:49 compute-0 nova_compute[259850]: 2025-10-11 04:10:49.881 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf962d69d-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:10:49 compute-0 nova_compute[259850]: 2025-10-11 04:10:49.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:49 compute-0 nova_compute[259850]: 2025-10-11 04:10:49.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:10:49 compute-0 nova_compute[259850]: 2025-10-11 04:10:49.888 2 INFO os_vif [None req-4de836f0-db2d-4d3e-8eff-154ff12e41ba 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:b7:e2,bridge_name='br-int',has_traffic_filtering=True,id=f962d69d-912c-4bbe-8b62-be8b3ee5a694,network=Network(ff7747e6-8cbd-486d-acdd-c112ee8b4480),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf962d69d-91')
Oct 11 04:10:49 compute-0 neutron-haproxy-ovnmeta-ff7747e6-8cbd-486d-acdd-c112ee8b4480[284374]: [NOTICE]   (284378) : haproxy version is 2.8.14-c23fe91
Oct 11 04:10:49 compute-0 neutron-haproxy-ovnmeta-ff7747e6-8cbd-486d-acdd-c112ee8b4480[284374]: [NOTICE]   (284378) : path to executable is /usr/sbin/haproxy
Oct 11 04:10:49 compute-0 neutron-haproxy-ovnmeta-ff7747e6-8cbd-486d-acdd-c112ee8b4480[284374]: [WARNING]  (284378) : Exiting Master process...
Oct 11 04:10:49 compute-0 neutron-haproxy-ovnmeta-ff7747e6-8cbd-486d-acdd-c112ee8b4480[284374]: [WARNING]  (284378) : Exiting Master process...
Oct 11 04:10:49 compute-0 neutron-haproxy-ovnmeta-ff7747e6-8cbd-486d-acdd-c112ee8b4480[284374]: [ALERT]    (284378) : Current worker (284380) exited with code 143 (Terminated)
Oct 11 04:10:49 compute-0 neutron-haproxy-ovnmeta-ff7747e6-8cbd-486d-acdd-c112ee8b4480[284374]: [WARNING]  (284378) : All workers exited. Exiting... (0)
Oct 11 04:10:49 compute-0 systemd[1]: libpod-d9ee66ff1f6c800ff78ae0ff5eca5634f80a1570825bb4afb2eea3e28549382d.scope: Deactivated successfully.
Oct 11 04:10:49 compute-0 podman[285525]: 2025-10-11 04:10:49.958046052 +0000 UTC m=+0.057957252 container died d9ee66ff1f6c800ff78ae0ff5eca5634f80a1570825bb4afb2eea3e28549382d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff7747e6-8cbd-486d-acdd-c112ee8b4480, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:10:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d9ee66ff1f6c800ff78ae0ff5eca5634f80a1570825bb4afb2eea3e28549382d-userdata-shm.mount: Deactivated successfully.
Oct 11 04:10:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3fe4b88c554b5a0d61ce5c1e505c16db930210c8d78ae8d445e7cb6c79f3abd-merged.mount: Deactivated successfully.
Oct 11 04:10:50 compute-0 podman[285525]: 2025-10-11 04:10:50.04149776 +0000 UTC m=+0.141408970 container cleanup d9ee66ff1f6c800ff78ae0ff5eca5634f80a1570825bb4afb2eea3e28549382d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff7747e6-8cbd-486d-acdd-c112ee8b4480, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:10:50 compute-0 systemd[1]: libpod-conmon-d9ee66ff1f6c800ff78ae0ff5eca5634f80a1570825bb4afb2eea3e28549382d.scope: Deactivated successfully.
Oct 11 04:10:50 compute-0 podman[285573]: 2025-10-11 04:10:50.119893495 +0000 UTC m=+0.051625813 container remove d9ee66ff1f6c800ff78ae0ff5eca5634f80a1570825bb4afb2eea3e28549382d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff7747e6-8cbd-486d-acdd-c112ee8b4480, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:10:50 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:50.128 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[0d18396d-a444-4e8c-b030-03537fd5b68a]: (4, ('Sat Oct 11 04:10:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ff7747e6-8cbd-486d-acdd-c112ee8b4480 (d9ee66ff1f6c800ff78ae0ff5eca5634f80a1570825bb4afb2eea3e28549382d)\nd9ee66ff1f6c800ff78ae0ff5eca5634f80a1570825bb4afb2eea3e28549382d\nSat Oct 11 04:10:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ff7747e6-8cbd-486d-acdd-c112ee8b4480 (d9ee66ff1f6c800ff78ae0ff5eca5634f80a1570825bb4afb2eea3e28549382d)\nd9ee66ff1f6c800ff78ae0ff5eca5634f80a1570825bb4afb2eea3e28549382d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:50 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:50.130 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[31fd534a-0030-490a-bbda-b59e9e7b1fe5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:50 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:50.135 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff7747e6-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:10:50 compute-0 nova_compute[259850]: 2025-10-11 04:10:50.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:50 compute-0 kernel: tapff7747e6-80: left promiscuous mode
Oct 11 04:10:50 compute-0 nova_compute[259850]: 2025-10-11 04:10:50.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:50 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:50.146 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[6acf45dc-bae4-47d1-83b2-3776ad50a437]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:50 compute-0 nova_compute[259850]: 2025-10-11 04:10:50.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:50 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:50.174 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[551de3c4-ad25-4c46-9353-6d6e85805880]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:50 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:50.175 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[a2bcc1a3-cbd8-4f11-b4ac-505a24833ac4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]: {
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:     "0": [
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:         {
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "devices": [
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "/dev/loop3"
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             ],
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "lv_name": "ceph_lv0",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "lv_size": "21470642176",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "name": "ceph_lv0",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "tags": {
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.cluster_name": "ceph",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.crush_device_class": "",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.encrypted": "0",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.osd_id": "0",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.type": "block",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.vdo": "0"
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             },
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "type": "block",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "vg_name": "ceph_vg0"
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:         }
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:     ],
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:     "1": [
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:         {
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "devices": [
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "/dev/loop4"
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             ],
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "lv_name": "ceph_lv1",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "lv_size": "21470642176",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "name": "ceph_lv1",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "tags": {
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.cluster_name": "ceph",
Oct 11 04:10:50 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:50.194 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f0404a-8f43-41eb-994c-216434918679]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 421278, 'reachable_time': 40340, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285593, 'error': None, 'target': 'ovnmeta-ff7747e6-8cbd-486d-acdd-c112ee8b4480', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.crush_device_class": "",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.encrypted": "0",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.osd_id": "1",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.type": "block",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.vdo": "0"
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             },
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "type": "block",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "vg_name": "ceph_vg1"
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:         }
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:     ],
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:     "2": [
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:         {
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "devices": [
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "/dev/loop5"
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             ],
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "lv_name": "ceph_lv2",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "lv_size": "21470642176",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "name": "ceph_lv2",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "tags": {
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.cluster_name": "ceph",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.crush_device_class": "",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.encrypted": "0",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.osd_id": "2",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.type": "block",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:                 "ceph.vdo": "0"
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             },
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "type": "block",
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:             "vg_name": "ceph_vg2"
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:         }
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]:     ]
Oct 11 04:10:50 compute-0 relaxed_swirles[285487]: }
Oct 11 04:10:50 compute-0 systemd[1]: run-netns-ovnmeta\x2dff7747e6\x2d8cbd\x2d486d\x2dacdd\x2dc112ee8b4480.mount: Deactivated successfully.
Oct 11 04:10:50 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:50.200 162015 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ff7747e6-8cbd-486d-acdd-c112ee8b4480 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 04:10:50 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:10:50.200 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[297b043d-5bdd-4537-973b-942bb36e08c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:10:50 compute-0 systemd[1]: libpod-667f978e3e50d581692431b0baea7222c86ff4c3ecfadbdb17fcc28047301e7b.scope: Deactivated successfully.
Oct 11 04:10:50 compute-0 podman[285470]: 2025-10-11 04:10:50.221324628 +0000 UTC m=+1.046298738 container died 667f978e3e50d581692431b0baea7222c86ff4c3ecfadbdb17fcc28047301e7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_swirles, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:10:50 compute-0 nova_compute[259850]: 2025-10-11 04:10:50.246 2 DEBUG nova.compute.manager [req-5a197f50-c1d5-430d-819f-d0b0f6398f55 req-af5f7ead-a09b-43b9-8143-12fc95817a59 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Received event network-vif-unplugged-f962d69d-912c-4bbe-8b62-be8b3ee5a694 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:10:50 compute-0 nova_compute[259850]: 2025-10-11 04:10:50.247 2 DEBUG oslo_concurrency.lockutils [req-5a197f50-c1d5-430d-819f-d0b0f6398f55 req-af5f7ead-a09b-43b9-8143-12fc95817a59 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "b41a3cc1-8f24-43ac-981f-ecd099bcc7ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:10:50 compute-0 nova_compute[259850]: 2025-10-11 04:10:50.247 2 DEBUG oslo_concurrency.lockutils [req-5a197f50-c1d5-430d-819f-d0b0f6398f55 req-af5f7ead-a09b-43b9-8143-12fc95817a59 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "b41a3cc1-8f24-43ac-981f-ecd099bcc7ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:10:50 compute-0 nova_compute[259850]: 2025-10-11 04:10:50.248 2 DEBUG oslo_concurrency.lockutils [req-5a197f50-c1d5-430d-819f-d0b0f6398f55 req-af5f7ead-a09b-43b9-8143-12fc95817a59 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "b41a3cc1-8f24-43ac-981f-ecd099bcc7ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:10:50 compute-0 nova_compute[259850]: 2025-10-11 04:10:50.248 2 DEBUG nova.compute.manager [req-5a197f50-c1d5-430d-819f-d0b0f6398f55 req-af5f7ead-a09b-43b9-8143-12fc95817a59 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] No waiting events found dispatching network-vif-unplugged-f962d69d-912c-4bbe-8b62-be8b3ee5a694 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:10:50 compute-0 nova_compute[259850]: 2025-10-11 04:10:50.248 2 DEBUG nova.compute.manager [req-5a197f50-c1d5-430d-819f-d0b0f6398f55 req-af5f7ead-a09b-43b9-8143-12fc95817a59 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Received event network-vif-unplugged-f962d69d-912c-4bbe-8b62-be8b3ee5a694 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 04:10:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd5d8faf312eafd31e2dcdfaef3e059e2ddb4870a6c70354f54ac1a30a2ede79-merged.mount: Deactivated successfully.
Oct 11 04:10:50 compute-0 podman[285470]: 2025-10-11 04:10:50.287287014 +0000 UTC m=+1.112261104 container remove 667f978e3e50d581692431b0baea7222c86ff4c3ecfadbdb17fcc28047301e7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_swirles, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 04:10:50 compute-0 systemd[1]: libpod-conmon-667f978e3e50d581692431b0baea7222c86ff4c3ecfadbdb17fcc28047301e7b.scope: Deactivated successfully.
Oct 11 04:10:50 compute-0 sudo[285359]: pam_unix(sudo:session): session closed for user root
Oct 11 04:10:50 compute-0 nova_compute[259850]: 2025-10-11 04:10:50.354 2 INFO nova.virt.libvirt.driver [None req-4de836f0-db2d-4d3e-8eff-154ff12e41ba 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Deleting instance files /var/lib/nova/instances/b41a3cc1-8f24-43ac-981f-ecd099bcc7ce_del
Oct 11 04:10:50 compute-0 nova_compute[259850]: 2025-10-11 04:10:50.356 2 INFO nova.virt.libvirt.driver [None req-4de836f0-db2d-4d3e-8eff-154ff12e41ba 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Deletion of /var/lib/nova/instances/b41a3cc1-8f24-43ac-981f-ecd099bcc7ce_del complete
Oct 11 04:10:50 compute-0 sudo[285605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:10:50 compute-0 sudo[285605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:10:50 compute-0 sudo[285605]: pam_unix(sudo:session): session closed for user root
Oct 11 04:10:50 compute-0 nova_compute[259850]: 2025-10-11 04:10:50.415 2 INFO nova.compute.manager [None req-4de836f0-db2d-4d3e-8eff-154ff12e41ba 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 11 04:10:50 compute-0 nova_compute[259850]: 2025-10-11 04:10:50.416 2 DEBUG oslo.service.loopingcall [None req-4de836f0-db2d-4d3e-8eff-154ff12e41ba 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:10:50 compute-0 nova_compute[259850]: 2025-10-11 04:10:50.416 2 DEBUG nova.compute.manager [-] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:10:50 compute-0 nova_compute[259850]: 2025-10-11 04:10:50.417 2 DEBUG nova.network.neutron [-] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:10:50 compute-0 sudo[285630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:10:50 compute-0 sudo[285630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:10:50 compute-0 sudo[285630]: pam_unix(sudo:session): session closed for user root
Oct 11 04:10:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:10:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1711746918' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:10:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:10:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1711746918' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:10:50 compute-0 sudo[285655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:10:50 compute-0 sudo[285655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:10:50 compute-0 sudo[285655]: pam_unix(sudo:session): session closed for user root
Oct 11 04:10:50 compute-0 sudo[285680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 04:10:50 compute-0 sudo[285680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:10:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:10:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:10:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:10:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:10:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:10:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:10:50 compute-0 nova_compute[259850]: 2025-10-11 04:10:50.932 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760155835.931259, 3d2a66c2-9869-4f0a-a27f-db3a14d43466 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:10:50 compute-0 nova_compute[259850]: 2025-10-11 04:10:50.933 2 INFO nova.compute.manager [-] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] VM Stopped (Lifecycle Event)
Oct 11 04:10:50 compute-0 nova_compute[259850]: 2025-10-11 04:10:50.959 2 DEBUG nova.compute.manager [None req-cbb1a5dc-f207-42e1-a625-47c82f4a909b - - - - - -] [instance: 3d2a66c2-9869-4f0a-a27f-db3a14d43466] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:10:50 compute-0 podman[285745]: 2025-10-11 04:10:50.995587573 +0000 UTC m=+0.061904183 container create 6bf0b2f0cf5f49bc34f76efb7827357289b6cb0011afc99aa34e089123a47667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rubin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 04:10:51 compute-0 systemd[1]: Started libpod-conmon-6bf0b2f0cf5f49bc34f76efb7827357289b6cb0011afc99aa34e089123a47667.scope.
Oct 11 04:10:51 compute-0 podman[285745]: 2025-10-11 04:10:50.965896367 +0000 UTC m=+0.032212987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:10:51 compute-0 ceph-mon[74273]: pgmap v1310: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 731 KiB/s rd, 3.4 MiB/s wr, 280 op/s
Oct 11 04:10:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1711746918' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:10:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1711746918' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:10:51 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:10:51 compute-0 podman[285745]: 2025-10-11 04:10:51.088726423 +0000 UTC m=+0.155043023 container init 6bf0b2f0cf5f49bc34f76efb7827357289b6cb0011afc99aa34e089123a47667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rubin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Oct 11 04:10:51 compute-0 podman[285745]: 2025-10-11 04:10:51.100516175 +0000 UTC m=+0.166832775 container start 6bf0b2f0cf5f49bc34f76efb7827357289b6cb0011afc99aa34e089123a47667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rubin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:10:51 compute-0 podman[285745]: 2025-10-11 04:10:51.104362043 +0000 UTC m=+0.170678703 container attach 6bf0b2f0cf5f49bc34f76efb7827357289b6cb0011afc99aa34e089123a47667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:10:51 compute-0 romantic_rubin[285762]: 167 167
Oct 11 04:10:51 compute-0 systemd[1]: libpod-6bf0b2f0cf5f49bc34f76efb7827357289b6cb0011afc99aa34e089123a47667.scope: Deactivated successfully.
Oct 11 04:10:51 compute-0 podman[285745]: 2025-10-11 04:10:51.107667336 +0000 UTC m=+0.173983936 container died 6bf0b2f0cf5f49bc34f76efb7827357289b6cb0011afc99aa34e089123a47667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rubin, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 04:10:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-23d93d45547e0221372eaae181f06ccc10ac1dda13c7019d541db44412a4810f-merged.mount: Deactivated successfully.
Oct 11 04:10:51 compute-0 podman[285745]: 2025-10-11 04:10:51.147685022 +0000 UTC m=+0.214001632 container remove 6bf0b2f0cf5f49bc34f76efb7827357289b6cb0011afc99aa34e089123a47667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rubin, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:10:51 compute-0 systemd[1]: libpod-conmon-6bf0b2f0cf5f49bc34f76efb7827357289b6cb0011afc99aa34e089123a47667.scope: Deactivated successfully.
Oct 11 04:10:51 compute-0 podman[285787]: 2025-10-11 04:10:51.371321184 +0000 UTC m=+0.073919901 container create 429b4420fdb30fc5ce66ec5ed4f3a1fa7ea7133a63a8ea7dda81e72f5f5d23c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:10:51 compute-0 systemd[1]: Started libpod-conmon-429b4420fdb30fc5ce66ec5ed4f3a1fa7ea7133a63a8ea7dda81e72f5f5d23c0.scope.
Oct 11 04:10:51 compute-0 podman[285787]: 2025-10-11 04:10:51.343350527 +0000 UTC m=+0.045949314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:10:51 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6cbc085d67416450a20a728849ce6e9ec24a569016bb36122b5ba585987eac4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6cbc085d67416450a20a728849ce6e9ec24a569016bb36122b5ba585987eac4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6cbc085d67416450a20a728849ce6e9ec24a569016bb36122b5ba585987eac4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6cbc085d67416450a20a728849ce6e9ec24a569016bb36122b5ba585987eac4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:51 compute-0 podman[285787]: 2025-10-11 04:10:51.476456682 +0000 UTC m=+0.179055409 container init 429b4420fdb30fc5ce66ec5ed4f3a1fa7ea7133a63a8ea7dda81e72f5f5d23c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wiles, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:10:51 compute-0 podman[285787]: 2025-10-11 04:10:51.493283216 +0000 UTC m=+0.195881963 container start 429b4420fdb30fc5ce66ec5ed4f3a1fa7ea7133a63a8ea7dda81e72f5f5d23c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wiles, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:10:51 compute-0 podman[285787]: 2025-10-11 04:10:51.497483394 +0000 UTC m=+0.200082101 container attach 429b4420fdb30fc5ce66ec5ed4f3a1fa7ea7133a63a8ea7dda81e72f5f5d23c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 04:10:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 28 KiB/s rd, 18 KiB/s wr, 38 op/s
Oct 11 04:10:51 compute-0 nova_compute[259850]: 2025-10-11 04:10:51.729 2 DEBUG nova.network.neutron [-] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:10:51 compute-0 nova_compute[259850]: 2025-10-11 04:10:51.893 2 INFO nova.compute.manager [-] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Took 1.48 seconds to deallocate network for instance.
Oct 11 04:10:52 compute-0 nova_compute[259850]: 2025-10-11 04:10:52.202 2 DEBUG nova.compute.manager [req-f3e740a5-b2c0-41d4-b864-3dadbe44fa0c req-77b4f343-79c6-4f4b-aa4c-6e8e23c8d11c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Received event network-vif-deleted-f962d69d-912c-4bbe-8b62-be8b3ee5a694 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:10:52 compute-0 nova_compute[259850]: 2025-10-11 04:10:52.404 2 DEBUG nova.compute.manager [req-9a98a2ca-7b62-472b-8f7c-463a27e280b2 req-244efe94-d16e-437f-9a83-bb99219e4c8d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Received event network-vif-plugged-f962d69d-912c-4bbe-8b62-be8b3ee5a694 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:10:52 compute-0 nova_compute[259850]: 2025-10-11 04:10:52.405 2 DEBUG oslo_concurrency.lockutils [req-9a98a2ca-7b62-472b-8f7c-463a27e280b2 req-244efe94-d16e-437f-9a83-bb99219e4c8d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "b41a3cc1-8f24-43ac-981f-ecd099bcc7ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:10:52 compute-0 nova_compute[259850]: 2025-10-11 04:10:52.405 2 DEBUG oslo_concurrency.lockutils [req-9a98a2ca-7b62-472b-8f7c-463a27e280b2 req-244efe94-d16e-437f-9a83-bb99219e4c8d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "b41a3cc1-8f24-43ac-981f-ecd099bcc7ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:10:52 compute-0 nova_compute[259850]: 2025-10-11 04:10:52.405 2 DEBUG oslo_concurrency.lockutils [req-9a98a2ca-7b62-472b-8f7c-463a27e280b2 req-244efe94-d16e-437f-9a83-bb99219e4c8d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "b41a3cc1-8f24-43ac-981f-ecd099bcc7ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:10:52 compute-0 nova_compute[259850]: 2025-10-11 04:10:52.406 2 DEBUG nova.compute.manager [req-9a98a2ca-7b62-472b-8f7c-463a27e280b2 req-244efe94-d16e-437f-9a83-bb99219e4c8d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] No waiting events found dispatching network-vif-plugged-f962d69d-912c-4bbe-8b62-be8b3ee5a694 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:10:52 compute-0 nova_compute[259850]: 2025-10-11 04:10:52.406 2 WARNING nova.compute.manager [req-9a98a2ca-7b62-472b-8f7c-463a27e280b2 req-244efe94-d16e-437f-9a83-bb99219e4c8d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Received unexpected event network-vif-plugged-f962d69d-912c-4bbe-8b62-be8b3ee5a694 for instance with vm_state active and task_state deleting.
Oct 11 04:10:52 compute-0 nova_compute[259850]: 2025-10-11 04:10:52.455 2 INFO nova.compute.manager [None req-4de836f0-db2d-4d3e-8eff-154ff12e41ba 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Took 0.56 seconds to detach 1 volumes for instance.
Oct 11 04:10:52 compute-0 great_wiles[285803]: {
Oct 11 04:10:52 compute-0 great_wiles[285803]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 04:10:52 compute-0 great_wiles[285803]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:10:52 compute-0 great_wiles[285803]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:10:52 compute-0 great_wiles[285803]:         "osd_id": 1,
Oct 11 04:10:52 compute-0 great_wiles[285803]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:10:52 compute-0 great_wiles[285803]:         "type": "bluestore"
Oct 11 04:10:52 compute-0 great_wiles[285803]:     },
Oct 11 04:10:52 compute-0 great_wiles[285803]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 04:10:52 compute-0 great_wiles[285803]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:10:52 compute-0 great_wiles[285803]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:10:52 compute-0 great_wiles[285803]:         "osd_id": 2,
Oct 11 04:10:52 compute-0 great_wiles[285803]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:10:52 compute-0 great_wiles[285803]:         "type": "bluestore"
Oct 11 04:10:52 compute-0 great_wiles[285803]:     },
Oct 11 04:10:52 compute-0 great_wiles[285803]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 04:10:52 compute-0 great_wiles[285803]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:10:52 compute-0 great_wiles[285803]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:10:52 compute-0 great_wiles[285803]:         "osd_id": 0,
Oct 11 04:10:52 compute-0 great_wiles[285803]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:10:52 compute-0 great_wiles[285803]:         "type": "bluestore"
Oct 11 04:10:52 compute-0 great_wiles[285803]:     }
Oct 11 04:10:52 compute-0 great_wiles[285803]: }
Oct 11 04:10:52 compute-0 systemd[1]: libpod-429b4420fdb30fc5ce66ec5ed4f3a1fa7ea7133a63a8ea7dda81e72f5f5d23c0.scope: Deactivated successfully.
Oct 11 04:10:52 compute-0 systemd[1]: libpod-429b4420fdb30fc5ce66ec5ed4f3a1fa7ea7133a63a8ea7dda81e72f5f5d23c0.scope: Consumed 1.085s CPU time.
Oct 11 04:10:52 compute-0 podman[285787]: 2025-10-11 04:10:52.564596237 +0000 UTC m=+1.267194934 container died 429b4420fdb30fc5ce66ec5ed4f3a1fa7ea7133a63a8ea7dda81e72f5f5d23c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:10:52 compute-0 nova_compute[259850]: 2025-10-11 04:10:52.575 2 DEBUG oslo_concurrency.lockutils [None req-4de836f0-db2d-4d3e-8eff-154ff12e41ba 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:10:52 compute-0 nova_compute[259850]: 2025-10-11 04:10:52.577 2 DEBUG oslo_concurrency.lockutils [None req-4de836f0-db2d-4d3e-8eff-154ff12e41ba 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:10:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6cbc085d67416450a20a728849ce6e9ec24a569016bb36122b5ba585987eac4-merged.mount: Deactivated successfully.
Oct 11 04:10:52 compute-0 podman[285787]: 2025-10-11 04:10:52.626397256 +0000 UTC m=+1.328995963 container remove 429b4420fdb30fc5ce66ec5ed4f3a1fa7ea7133a63a8ea7dda81e72f5f5d23c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wiles, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 04:10:52 compute-0 systemd[1]: libpod-conmon-429b4420fdb30fc5ce66ec5ed4f3a1fa7ea7133a63a8ea7dda81e72f5f5d23c0.scope: Deactivated successfully.
Oct 11 04:10:52 compute-0 nova_compute[259850]: 2025-10-11 04:10:52.644 2 DEBUG oslo_concurrency.processutils [None req-4de836f0-db2d-4d3e-8eff-154ff12e41ba 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:10:52 compute-0 sudo[285680]: pam_unix(sudo:session): session closed for user root
Oct 11 04:10:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:10:52 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:10:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:10:52 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:10:52 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev cae3dd31-885a-4a1d-a737-a07b5cf21b13 does not exist
Oct 11 04:10:52 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 983fdad0-3c00-4696-ae5e-702506720b54 does not exist
Oct 11 04:10:52 compute-0 podman[285836]: 2025-10-11 04:10:52.698182686 +0000 UTC m=+0.087602016 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct 11 04:10:52 compute-0 nova_compute[259850]: 2025-10-11 04:10:52.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:52 compute-0 sudo[285865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:10:52 compute-0 sudo[285865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:10:52 compute-0 sudo[285865]: pam_unix(sudo:session): session closed for user root
Oct 11 04:10:52 compute-0 sudo[285898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:10:52 compute-0 sudo[285898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:10:52 compute-0 sudo[285898]: pam_unix(sudo:session): session closed for user root
Oct 11 04:10:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:10:53 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1624624378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:10:53 compute-0 ceph-mon[74273]: pgmap v1311: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 28 KiB/s rd, 18 KiB/s wr, 38 op/s
Oct 11 04:10:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:10:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:10:53 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1624624378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:10:53 compute-0 nova_compute[259850]: 2025-10-11 04:10:53.090 2 DEBUG oslo_concurrency.processutils [None req-4de836f0-db2d-4d3e-8eff-154ff12e41ba 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:10:53 compute-0 nova_compute[259850]: 2025-10-11 04:10:53.099 2 DEBUG nova.compute.provider_tree [None req-4de836f0-db2d-4d3e-8eff-154ff12e41ba 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:10:53 compute-0 nova_compute[259850]: 2025-10-11 04:10:53.120 2 DEBUG nova.scheduler.client.report [None req-4de836f0-db2d-4d3e-8eff-154ff12e41ba 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:10:53 compute-0 nova_compute[259850]: 2025-10-11 04:10:53.168 2 DEBUG oslo_concurrency.lockutils [None req-4de836f0-db2d-4d3e-8eff-154ff12e41ba 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:10:53 compute-0 nova_compute[259850]: 2025-10-11 04:10:53.201 2 INFO nova.scheduler.client.report [None req-4de836f0-db2d-4d3e-8eff-154ff12e41ba 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Deleted allocations for instance b41a3cc1-8f24-43ac-981f-ecd099bcc7ce
Oct 11 04:10:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:10:53 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3633554428' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:10:53 compute-0 nova_compute[259850]: 2025-10-11 04:10:53.281 2 DEBUG oslo_concurrency.lockutils [None req-4de836f0-db2d-4d3e-8eff-154ff12e41ba 9d2ae7a5228f4cb98ea73ec06ee2dc1e 090ce8762cd840ba8eedda774a81c19f - - default default] Lock "b41a3cc1-8f24-43ac-981f-ecd099bcc7ce" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:10:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 48 KiB/s rd, 17 KiB/s wr, 66 op/s
Oct 11 04:10:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Oct 11 04:10:54 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3633554428' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:10:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Oct 11 04:10:54 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Oct 11 04:10:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:10:54 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1220088325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:10:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:10:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Oct 11 04:10:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Oct 11 04:10:54 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Oct 11 04:10:54 compute-0 nova_compute[259850]: 2025-10-11 04:10:54.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:55 compute-0 ceph-mon[74273]: pgmap v1312: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 48 KiB/s rd, 17 KiB/s wr, 66 op/s
Oct 11 04:10:55 compute-0 ceph-mon[74273]: osdmap e286: 3 total, 3 up, 3 in
Oct 11 04:10:55 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1220088325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:10:55 compute-0 ceph-mon[74273]: osdmap e287: 3 total, 3 up, 3 in
Oct 11 04:10:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 60 KiB/s rd, 21 KiB/s wr, 82 op/s
Oct 11 04:10:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Oct 11 04:10:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Oct 11 04:10:55 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Oct 11 04:10:56 compute-0 ceph-mon[74273]: pgmap v1315: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 60 KiB/s rd, 21 KiB/s wr, 82 op/s
Oct 11 04:10:56 compute-0 ceph-mon[74273]: osdmap e288: 3 total, 3 up, 3 in
Oct 11 04:10:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Oct 11 04:10:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Oct 11 04:10:57 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Oct 11 04:10:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail
Oct 11 04:10:57 compute-0 nova_compute[259850]: 2025-10-11 04:10:57.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:10:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Oct 11 04:10:58 compute-0 ceph-mon[74273]: osdmap e289: 3 total, 3 up, 3 in
Oct 11 04:10:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Oct 11 04:10:58 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Oct 11 04:10:59 compute-0 ceph-mon[74273]: pgmap v1318: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail
Oct 11 04:10:59 compute-0 ceph-mon[74273]: osdmap e290: 3 total, 3 up, 3 in
Oct 11 04:10:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Oct 11 04:10:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Oct 11 04:10:59 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Oct 11 04:10:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 12 MiB/s rd, 6.3 MiB/s wr, 250 op/s
Oct 11 04:10:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:10:59 compute-0 nova_compute[259850]: 2025-10-11 04:10:59.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:00 compute-0 ceph-mon[74273]: osdmap e291: 3 total, 3 up, 3 in
Oct 11 04:11:00 compute-0 ceph-mon[74273]: pgmap v1321: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 12 MiB/s rd, 6.3 MiB/s wr, 250 op/s
Oct 11 04:11:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:11:00 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/113282647' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:11:00 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/113282647' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:11:01 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3160582588' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:11:01 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3160582588' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:01 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/113282647' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:01 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/113282647' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:01 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3160582588' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:01 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3160582588' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 8.1 MiB/s rd, 4.4 MiB/s wr, 174 op/s
Oct 11 04:11:02 compute-0 ceph-mon[74273]: pgmap v1322: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 8.1 MiB/s rd, 4.4 MiB/s wr, 174 op/s
Oct 11 04:11:02 compute-0 nova_compute[259850]: 2025-10-11 04:11:02.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:11:02 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2062258405' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:11:02 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2062258405' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Oct 11 04:11:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Oct 11 04:11:03 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Oct 11 04:11:03 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2062258405' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:03 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2062258405' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 379 op/s
Oct 11 04:11:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:11:04 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1654918693' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:11:04 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1654918693' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:04 compute-0 ceph-mon[74273]: osdmap e292: 3 total, 3 up, 3 in
Oct 11 04:11:04 compute-0 ceph-mon[74273]: pgmap v1324: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 379 op/s
Oct 11 04:11:04 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1654918693' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:04 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1654918693' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:11:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Oct 11 04:11:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Oct 11 04:11:04 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Oct 11 04:11:04 compute-0 nova_compute[259850]: 2025-10-11 04:11:04.864 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760155849.8633091, b41a3cc1-8f24-43ac-981f-ecd099bcc7ce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:11:04 compute-0 nova_compute[259850]: 2025-10-11 04:11:04.865 2 INFO nova.compute.manager [-] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] VM Stopped (Lifecycle Event)
Oct 11 04:11:04 compute-0 nova_compute[259850]: 2025-10-11 04:11:04.890 2 DEBUG nova.compute.manager [None req-efd8068c-7623-4973-acdf-de3415bba81c - - - - - -] [instance: b41a3cc1-8f24-43ac-981f-ecd099bcc7ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:11:04 compute-0 nova_compute[259850]: 2025-10-11 04:11:04.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:11:05 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3439286902' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:11:05 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3439286902' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 140 KiB/s rd, 3.3 MiB/s wr, 198 op/s
Oct 11 04:11:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:11:05 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/910736435' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:11:05 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/910736435' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:05 compute-0 ceph-mon[74273]: osdmap e293: 3 total, 3 up, 3 in
Oct 11 04:11:05 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3439286902' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:05 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3439286902' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:05 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/910736435' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:05 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/910736435' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:06 compute-0 podman[285937]: 2025-10-11 04:11:06.384780542 +0000 UTC m=+0.085104565 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 11 04:11:06 compute-0 podman[285936]: 2025-10-11 04:11:06.394622039 +0000 UTC m=+0.095977591 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251009, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:11:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Oct 11 04:11:06 compute-0 ceph-mon[74273]: pgmap v1326: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 140 KiB/s rd, 3.3 MiB/s wr, 198 op/s
Oct 11 04:11:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Oct 11 04:11:06 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Oct 11 04:11:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:11:07 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1710802821' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:11:07 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1710802821' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 150 KiB/s rd, 3.5 MiB/s wr, 212 op/s
Oct 11 04:11:07 compute-0 nova_compute[259850]: 2025-10-11 04:11:07.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:07 compute-0 ceph-mon[74273]: osdmap e294: 3 total, 3 up, 3 in
Oct 11 04:11:07 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1710802821' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:07 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1710802821' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:11:08 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/568662650' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:11:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Oct 11 04:11:08 compute-0 ceph-mon[74273]: pgmap v1328: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 150 KiB/s rd, 3.5 MiB/s wr, 212 op/s
Oct 11 04:11:08 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/568662650' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:11:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Oct 11 04:11:08 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Oct 11 04:11:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 129 KiB/s rd, 7.0 KiB/s wr, 182 op/s
Oct 11 04:11:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:11:09 compute-0 ceph-mon[74273]: osdmap e295: 3 total, 3 up, 3 in
Oct 11 04:11:09 compute-0 nova_compute[259850]: 2025-10-11 04:11:09.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:11 compute-0 ceph-mon[74273]: pgmap v1330: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 129 KiB/s rd, 7.0 KiB/s wr, 182 op/s
Oct 11 04:11:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 115 KiB/s rd, 6.2 KiB/s wr, 161 op/s
Oct 11 04:11:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Oct 11 04:11:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Oct 11 04:11:12 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Oct 11 04:11:12 compute-0 nova_compute[259850]: 2025-10-11 04:11:12.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:13 compute-0 ceph-mon[74273]: pgmap v1331: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 115 KiB/s rd, 6.2 KiB/s wr, 161 op/s
Oct 11 04:11:13 compute-0 ceph-mon[74273]: osdmap e296: 3 total, 3 up, 3 in
Oct 11 04:11:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 2.7 GiB data, 2.9 GiB used, 57 GiB / 60 GiB avail; 272 KiB/s rd, 80 MiB/s wr, 422 op/s
Oct 11 04:11:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:11:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3550810825' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:11:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3550810825' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:14 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3550810825' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:14 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3550810825' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:11:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Oct 11 04:11:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Oct 11 04:11:14 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Oct 11 04:11:14 compute-0 nova_compute[259850]: 2025-10-11 04:11:14.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:15 compute-0 ceph-mon[74273]: pgmap v1333: 305 pgs: 305 active+clean; 2.7 GiB data, 2.9 GiB used, 57 GiB / 60 GiB avail; 272 KiB/s rd, 80 MiB/s wr, 422 op/s
Oct 11 04:11:15 compute-0 ceph-mon[74273]: osdmap e297: 3 total, 3 up, 3 in
Oct 11 04:11:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 2.7 GiB data, 2.9 GiB used, 57 GiB / 60 GiB avail; 158 KiB/s rd, 81 MiB/s wr, 261 op/s
Oct 11 04:11:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:11:15 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2619511810' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:11:15 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2619511810' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:16 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2619511810' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:16 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2619511810' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:11:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4082523161' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:11:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4082523161' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:17 compute-0 ceph-mon[74273]: pgmap v1335: 305 pgs: 305 active+clean; 2.7 GiB data, 2.9 GiB used, 57 GiB / 60 GiB avail; 158 KiB/s rd, 81 MiB/s wr, 261 op/s
Oct 11 04:11:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4082523161' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4082523161' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 2.7 GiB data, 2.9 GiB used, 57 GiB / 60 GiB avail; 132 KiB/s rd, 68 MiB/s wr, 218 op/s
Oct 11 04:11:17 compute-0 nova_compute[259850]: 2025-10-11 04:11:17.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:11:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1054061857' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:11:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1054061857' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Oct 11 04:11:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1054061857' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1054061857' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Oct 11 04:11:18 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Oct 11 04:11:18 compute-0 podman[285975]: 2025-10-11 04:11:18.441942594 +0000 UTC m=+0.136302776 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:11:19 compute-0 ceph-mon[74273]: pgmap v1336: 305 pgs: 305 active+clean; 2.7 GiB data, 2.9 GiB used, 57 GiB / 60 GiB avail; 132 KiB/s rd, 68 MiB/s wr, 218 op/s
Oct 11 04:11:19 compute-0 ceph-mon[74273]: osdmap e298: 3 total, 3 up, 3 in
Oct 11 04:11:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:11:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2582876361' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:11:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2582876361' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:11:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1331522714' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:11:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1331522714' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 324 KiB/s rd, 135 MiB/s wr, 546 op/s
Oct 11 04:11:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:11:19 compute-0 nova_compute[259850]: 2025-10-11 04:11:19.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:20 compute-0 nova_compute[259850]: 2025-10-11 04:11:20.058 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:11:20 compute-0 nova_compute[259850]: 2025-10-11 04:11:20.059 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:11:20 compute-0 nova_compute[259850]: 2025-10-11 04:11:20.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:11:20 compute-0 nova_compute[259850]: 2025-10-11 04:11:20.105 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:11:20 compute-0 nova_compute[259850]: 2025-10-11 04:11:20.106 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:11:20 compute-0 nova_compute[259850]: 2025-10-11 04:11:20.106 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:11:20 compute-0 nova_compute[259850]: 2025-10-11 04:11:20.106 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:11:20 compute-0 nova_compute[259850]: 2025-10-11 04:11:20.106 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:11:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Oct 11 04:11:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Oct 11 04:11:20 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Oct 11 04:11:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2582876361' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2582876361' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1331522714' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1331522714' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:11:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4207921129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:11:20 compute-0 nova_compute[259850]: 2025-10-11 04:11:20.609 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:11:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:11:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:11:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:11:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:11:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:11:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:11:20 compute-0 nova_compute[259850]: 2025-10-11 04:11:20.786 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:11:20 compute-0 nova_compute[259850]: 2025-10-11 04:11:20.787 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4535MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:11:20 compute-0 nova_compute[259850]: 2025-10-11 04:11:20.788 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:11:20 compute-0 nova_compute[259850]: 2025-10-11 04:11:20.788 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:11:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:11:20
Oct 11 04:11:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:11:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:11:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'images', 'volumes', 'vms', 'default.rgw.meta']
Oct 11 04:11:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:11:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:11:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1028882993' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:11:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1028882993' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:11:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:11:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:11:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:11:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:11:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:11:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:11:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:11:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:11:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:11:21 compute-0 nova_compute[259850]: 2025-10-11 04:11:21.066 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:11:21 compute-0 nova_compute[259850]: 2025-10-11 04:11:21.067 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:11:21 compute-0 nova_compute[259850]: 2025-10-11 04:11:21.139 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:11:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Oct 11 04:11:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Oct 11 04:11:21 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Oct 11 04:11:21 compute-0 ceph-mon[74273]: pgmap v1338: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 324 KiB/s rd, 135 MiB/s wr, 546 op/s
Oct 11 04:11:21 compute-0 ceph-mon[74273]: osdmap e299: 3 total, 3 up, 3 in
Oct 11 04:11:21 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4207921129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:11:21 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1028882993' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:21 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1028882993' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:11:21 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1691314765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:11:21 compute-0 nova_compute[259850]: 2025-10-11 04:11:21.597 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:11:21 compute-0 nova_compute[259850]: 2025-10-11 04:11:21.603 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:11:21 compute-0 ovn_controller[152025]: 2025-10-11T04:11:21Z|00142|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct 11 04:11:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 232 KiB/s rd, 81 MiB/s wr, 398 op/s
Oct 11 04:11:21 compute-0 nova_compute[259850]: 2025-10-11 04:11:21.617 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:11:21 compute-0 nova_compute[259850]: 2025-10-11 04:11:21.633 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:11:21 compute-0 nova_compute[259850]: 2025-10-11 04:11:21.633 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:11:22 compute-0 nova_compute[259850]: 2025-10-11 04:11:22.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:11:22 compute-0 nova_compute[259850]: 2025-10-11 04:11:22.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:11:22 compute-0 nova_compute[259850]: 2025-10-11 04:11:22.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:11:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Oct 11 04:11:22 compute-0 ceph-mon[74273]: osdmap e300: 3 total, 3 up, 3 in
Oct 11 04:11:22 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1691314765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:11:22 compute-0 ceph-mon[74273]: pgmap v1341: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 232 KiB/s rd, 81 MiB/s wr, 398 op/s
Oct 11 04:11:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Oct 11 04:11:22 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Oct 11 04:11:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:11:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1517789452' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:11:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1517789452' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:22 compute-0 nova_compute[259850]: 2025-10-11 04:11:22.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:11:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3502981566' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:11:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3502981566' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:22.960 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:11:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:22.961 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:11:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:22.961 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:11:23 compute-0 nova_compute[259850]: 2025-10-11 04:11:23.078 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:11:23 compute-0 nova_compute[259850]: 2025-10-11 04:11:23.079 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:11:23 compute-0 nova_compute[259850]: 2025-10-11 04:11:23.079 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:11:23 compute-0 nova_compute[259850]: 2025-10-11 04:11:23.099 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:11:23 compute-0 nova_compute[259850]: 2025-10-11 04:11:23.100 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:11:23 compute-0 nova_compute[259850]: 2025-10-11 04:11:23.100 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:11:23 compute-0 nova_compute[259850]: 2025-10-11 04:11:23.100 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 11 04:11:23 compute-0 nova_compute[259850]: 2025-10-11 04:11:23.114 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 11 04:11:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Oct 11 04:11:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Oct 11 04:11:23 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Oct 11 04:11:23 compute-0 ceph-mon[74273]: osdmap e301: 3 total, 3 up, 3 in
Oct 11 04:11:23 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1517789452' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:23 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1517789452' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:23 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3502981566' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:23 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3502981566' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:23 compute-0 podman[286047]: 2025-10-11 04:11:23.377551868 +0000 UTC m=+0.079936330 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 04:11:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 6 active+clean+snaptrim, 21 active+clean+snaptrim_wait, 278 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 231 KiB/s rd, 13 KiB/s wr, 322 op/s
Oct 11 04:11:24 compute-0 nova_compute[259850]: 2025-10-11 04:11:24.074 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:11:24 compute-0 nova_compute[259850]: 2025-10-11 04:11:24.075 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:11:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Oct 11 04:11:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Oct 11 04:11:24 compute-0 ceph-mon[74273]: osdmap e302: 3 total, 3 up, 3 in
Oct 11 04:11:24 compute-0 ceph-mon[74273]: pgmap v1344: 305 pgs: 6 active+clean+snaptrim, 21 active+clean+snaptrim_wait, 278 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 231 KiB/s rd, 13 KiB/s wr, 322 op/s
Oct 11 04:11:24 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Oct 11 04:11:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:11:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Oct 11 04:11:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Oct 11 04:11:24 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Oct 11 04:11:24 compute-0 nova_compute[259850]: 2025-10-11 04:11:24.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:11:25 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1467906236' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:11:25 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1467906236' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:25 compute-0 ceph-mon[74273]: osdmap e303: 3 total, 3 up, 3 in
Oct 11 04:11:25 compute-0 ceph-mon[74273]: osdmap e304: 3 total, 3 up, 3 in
Oct 11 04:11:25 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1467906236' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:25 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1467906236' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 6 active+clean+snaptrim, 21 active+clean+snaptrim_wait, 278 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 231 KiB/s rd, 13 KiB/s wr, 322 op/s
Oct 11 04:11:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Oct 11 04:11:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Oct 11 04:11:26 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Oct 11 04:11:26 compute-0 ceph-mon[74273]: pgmap v1347: 305 pgs: 6 active+clean+snaptrim, 21 active+clean+snaptrim_wait, 278 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 231 KiB/s rd, 13 KiB/s wr, 322 op/s
Oct 11 04:11:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:11:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4285676194' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:11:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4285676194' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:27 compute-0 ceph-mon[74273]: osdmap e305: 3 total, 3 up, 3 in
Oct 11 04:11:27 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4285676194' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:27 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4285676194' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 6 active+clean+snaptrim, 21 active+clean+snaptrim_wait, 278 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail
Oct 11 04:11:27 compute-0 nova_compute[259850]: 2025-10-11 04:11:27.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:28 compute-0 ceph-mon[74273]: pgmap v1349: 305 pgs: 6 active+clean+snaptrim, 21 active+clean+snaptrim_wait, 278 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail
Oct 11 04:11:29 compute-0 nova_compute[259850]: 2025-10-11 04:11:29.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:11:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 88 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 5.8 KiB/s wr, 193 op/s
Oct 11 04:11:29 compute-0 nova_compute[259850]: 2025-10-11 04:11:29.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:29 compute-0 nova_compute[259850]: 2025-10-11 04:11:29.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:11:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Oct 11 04:11:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Oct 11 04:11:29 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Oct 11 04:11:29 compute-0 nova_compute[259850]: 2025-10-11 04:11:29.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Oct 11 04:11:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Oct 11 04:11:30 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Oct 11 04:11:30 compute-0 ceph-mon[74273]: pgmap v1350: 305 pgs: 305 active+clean; 88 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 5.8 KiB/s wr, 193 op/s
Oct 11 04:11:30 compute-0 ceph-mon[74273]: osdmap e306: 3 total, 3 up, 3 in
Oct 11 04:11:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:11:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:11:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:11:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:11:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:11:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:11:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00034682372045321354 of space, bias 1.0, pg target 0.10404711613596405 quantized to 32 (current 32)
Oct 11 04:11:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:11:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:11:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:11:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 11 04:11:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:11:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:11:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:11:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:11:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:11:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:11:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:11:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:11:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:11:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:11:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:11:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:11:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 88 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 5.8 KiB/s wr, 193 op/s
Oct 11 04:11:31 compute-0 ceph-mon[74273]: osdmap e307: 3 total, 3 up, 3 in
Oct 11 04:11:32 compute-0 nova_compute[259850]: 2025-10-11 04:11:32.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:11:32 compute-0 nova_compute[259850]: 2025-10-11 04:11:32.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 11 04:11:32 compute-0 nova_compute[259850]: 2025-10-11 04:11:32.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:32 compute-0 ceph-mon[74273]: pgmap v1353: 305 pgs: 305 active+clean; 88 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 5.8 KiB/s wr, 193 op/s
Oct 11 04:11:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 6.1 KiB/s wr, 178 op/s
Oct 11 04:11:34 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:34.289 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:61:6f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '92:f1:b6:e4:f1:16'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:11:34 compute-0 nova_compute[259850]: 2025-10-11 04:11:34.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:34 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:34.291 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:11:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:11:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Oct 11 04:11:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Oct 11 04:11:34 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Oct 11 04:11:34 compute-0 ceph-mon[74273]: pgmap v1354: 305 pgs: 305 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 6.1 KiB/s wr, 178 op/s
Oct 11 04:11:34 compute-0 nova_compute[259850]: 2025-10-11 04:11:34.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.7 KiB/s wr, 24 op/s
Oct 11 04:11:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Oct 11 04:11:35 compute-0 ceph-mon[74273]: osdmap e308: 3 total, 3 up, 3 in
Oct 11 04:11:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Oct 11 04:11:35 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Oct 11 04:11:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:11:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/335123955' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:11:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/335123955' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:36 compute-0 ceph-mon[74273]: pgmap v1356: 305 pgs: 305 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.7 KiB/s wr, 24 op/s
Oct 11 04:11:36 compute-0 ceph-mon[74273]: osdmap e309: 3 total, 3 up, 3 in
Oct 11 04:11:36 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/335123955' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:36 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/335123955' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:37 compute-0 podman[286067]: 2025-10-11 04:11:37.388289104 +0000 UTC m=+0.089163830 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 11 04:11:37 compute-0 podman[286068]: 2025-10-11 04:11:37.393225163 +0000 UTC m=+0.087019510 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true)
Oct 11 04:11:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.5 KiB/s wr, 21 op/s
Oct 11 04:11:37 compute-0 nova_compute[259850]: 2025-10-11 04:11:37.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Oct 11 04:11:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Oct 11 04:11:37 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Oct 11 04:11:38 compute-0 ceph-mon[74273]: pgmap v1358: 305 pgs: 305 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.5 KiB/s wr, 21 op/s
Oct 11 04:11:38 compute-0 ceph-mon[74273]: osdmap e310: 3 total, 3 up, 3 in
Oct 11 04:11:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:11:39 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/785139501' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:11:39 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/785139501' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 5.2 KiB/s wr, 88 op/s
Oct 11 04:11:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:11:39 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/785139501' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:39 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/785139501' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:39 compute-0 nova_compute[259850]: 2025-10-11 04:11:39.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:40 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:40.292 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8a473e03-2208-47ae-afcd-05ad744a5969, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:11:40 compute-0 ceph-mon[74273]: pgmap v1360: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 5.2 KiB/s wr, 88 op/s
Oct 11 04:11:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:11:41 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3286957846' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:11:41 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3286957846' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 4.6 KiB/s wr, 79 op/s
Oct 11 04:11:41 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3286957846' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:41 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3286957846' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:42 compute-0 nova_compute[259850]: 2025-10-11 04:11:42.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Oct 11 04:11:43 compute-0 ceph-mon[74273]: pgmap v1361: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 4.6 KiB/s wr, 79 op/s
Oct 11 04:11:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Oct 11 04:11:43 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Oct 11 04:11:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 88 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 121 KiB/s rd, 7.6 KiB/s wr, 161 op/s
Oct 11 04:11:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Oct 11 04:11:44 compute-0 ceph-mon[74273]: osdmap e311: 3 total, 3 up, 3 in
Oct 11 04:11:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Oct 11 04:11:44 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Oct 11 04:11:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:11:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Oct 11 04:11:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Oct 11 04:11:44 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Oct 11 04:11:45 compute-0 nova_compute[259850]: 2025-10-11 04:11:45.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:45 compute-0 ceph-mon[74273]: pgmap v1363: 305 pgs: 305 active+clean; 88 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 121 KiB/s rd, 7.6 KiB/s wr, 161 op/s
Oct 11 04:11:45 compute-0 ceph-mon[74273]: osdmap e312: 3 total, 3 up, 3 in
Oct 11 04:11:45 compute-0 ceph-mon[74273]: osdmap e313: 3 total, 3 up, 3 in
Oct 11 04:11:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 88 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 4.5 KiB/s wr, 118 op/s
Oct 11 04:11:46 compute-0 nova_compute[259850]: 2025-10-11 04:11:46.644 2 DEBUG oslo_concurrency.lockutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Acquiring lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:11:46 compute-0 nova_compute[259850]: 2025-10-11 04:11:46.645 2 DEBUG oslo_concurrency.lockutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:11:46 compute-0 nova_compute[259850]: 2025-10-11 04:11:46.670 2 DEBUG nova.compute.manager [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:11:46 compute-0 nova_compute[259850]: 2025-10-11 04:11:46.767 2 DEBUG oslo_concurrency.lockutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:11:46 compute-0 nova_compute[259850]: 2025-10-11 04:11:46.768 2 DEBUG oslo_concurrency.lockutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:11:46 compute-0 nova_compute[259850]: 2025-10-11 04:11:46.778 2 DEBUG nova.virt.hardware [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:11:46 compute-0 nova_compute[259850]: 2025-10-11 04:11:46.778 2 INFO nova.compute.claims [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:11:47 compute-0 nova_compute[259850]: 2025-10-11 04:11:47.014 2 DEBUG oslo_concurrency.processutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:11:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Oct 11 04:11:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Oct 11 04:11:47 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Oct 11 04:11:47 compute-0 ceph-mon[74273]: pgmap v1366: 305 pgs: 305 active+clean; 88 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 4.5 KiB/s wr, 118 op/s
Oct 11 04:11:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:11:47 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/862655929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:11:47 compute-0 nova_compute[259850]: 2025-10-11 04:11:47.480 2 DEBUG oslo_concurrency.processutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:11:47 compute-0 nova_compute[259850]: 2025-10-11 04:11:47.490 2 DEBUG nova.compute.provider_tree [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:11:47 compute-0 nova_compute[259850]: 2025-10-11 04:11:47.511 2 DEBUG nova.scheduler.client.report [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:11:47 compute-0 nova_compute[259850]: 2025-10-11 04:11:47.532 2 DEBUG oslo_concurrency.lockutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:11:47 compute-0 nova_compute[259850]: 2025-10-11 04:11:47.532 2 DEBUG nova.compute.manager [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:11:47 compute-0 nova_compute[259850]: 2025-10-11 04:11:47.572 2 DEBUG nova.compute.manager [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:11:47 compute-0 nova_compute[259850]: 2025-10-11 04:11:47.572 2 DEBUG nova.network.neutron [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:11:47 compute-0 nova_compute[259850]: 2025-10-11 04:11:47.596 2 INFO nova.virt.libvirt.driver [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:11:47 compute-0 nova_compute[259850]: 2025-10-11 04:11:47.612 2 DEBUG nova.compute.manager [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:11:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 88 MiB data, 375 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:11:47 compute-0 nova_compute[259850]: 2025-10-11 04:11:47.728 2 DEBUG nova.compute.manager [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:11:47 compute-0 nova_compute[259850]: 2025-10-11 04:11:47.730 2 DEBUG nova.virt.libvirt.driver [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:11:47 compute-0 nova_compute[259850]: 2025-10-11 04:11:47.731 2 INFO nova.virt.libvirt.driver [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Creating image(s)
Oct 11 04:11:47 compute-0 nova_compute[259850]: 2025-10-11 04:11:47.763 2 DEBUG nova.storage.rbd_utils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] rbd image 6f74cee5-3bb9-44f0-9a21-d6e5c1475419_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:11:47 compute-0 nova_compute[259850]: 2025-10-11 04:11:47.801 2 DEBUG nova.storage.rbd_utils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] rbd image 6f74cee5-3bb9-44f0-9a21-d6e5c1475419_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:11:47 compute-0 nova_compute[259850]: 2025-10-11 04:11:47.835 2 DEBUG nova.storage.rbd_utils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] rbd image 6f74cee5-3bb9-44f0-9a21-d6e5c1475419_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:11:47 compute-0 nova_compute[259850]: 2025-10-11 04:11:47.839 2 DEBUG oslo_concurrency.processutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:11:47 compute-0 nova_compute[259850]: 2025-10-11 04:11:47.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:47 compute-0 nova_compute[259850]: 2025-10-11 04:11:47.904 2 DEBUG nova.policy [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ba6ea3b0ff9d4fee8a80f308d0493954', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7ff14cec1ef04fa2a41f6d226bc99518', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:11:47 compute-0 nova_compute[259850]: 2025-10-11 04:11:47.922 2 DEBUG oslo_concurrency.processutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:11:47 compute-0 nova_compute[259850]: 2025-10-11 04:11:47.923 2 DEBUG oslo_concurrency.lockutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Acquiring lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:11:47 compute-0 nova_compute[259850]: 2025-10-11 04:11:47.923 2 DEBUG oslo_concurrency.lockutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:11:47 compute-0 nova_compute[259850]: 2025-10-11 04:11:47.924 2 DEBUG oslo_concurrency.lockutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:11:47 compute-0 nova_compute[259850]: 2025-10-11 04:11:47.957 2 DEBUG nova.storage.rbd_utils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] rbd image 6f74cee5-3bb9-44f0-9a21-d6e5c1475419_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:11:47 compute-0 nova_compute[259850]: 2025-10-11 04:11:47.962 2 DEBUG oslo_concurrency.processutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac 6f74cee5-3bb9-44f0-9a21-d6e5c1475419_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:11:48 compute-0 ceph-mon[74273]: osdmap e314: 3 total, 3 up, 3 in
Oct 11 04:11:48 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/862655929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:11:48 compute-0 nova_compute[259850]: 2025-10-11 04:11:48.348 2 DEBUG oslo_concurrency.processutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac 6f74cee5-3bb9-44f0-9a21-d6e5c1475419_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.386s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:11:48 compute-0 nova_compute[259850]: 2025-10-11 04:11:48.444 2 DEBUG nova.storage.rbd_utils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] resizing rbd image 6f74cee5-3bb9-44f0-9a21-d6e5c1475419_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:11:48 compute-0 nova_compute[259850]: 2025-10-11 04:11:48.582 2 DEBUG nova.objects.instance [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lazy-loading 'migration_context' on Instance uuid 6f74cee5-3bb9-44f0-9a21-d6e5c1475419 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:11:48 compute-0 nova_compute[259850]: 2025-10-11 04:11:48.600 2 DEBUG nova.virt.libvirt.driver [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:11:48 compute-0 nova_compute[259850]: 2025-10-11 04:11:48.601 2 DEBUG nova.virt.libvirt.driver [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Ensure instance console log exists: /var/lib/nova/instances/6f74cee5-3bb9-44f0-9a21-d6e5c1475419/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:11:48 compute-0 nova_compute[259850]: 2025-10-11 04:11:48.602 2 DEBUG oslo_concurrency.lockutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:11:48 compute-0 nova_compute[259850]: 2025-10-11 04:11:48.602 2 DEBUG oslo_concurrency.lockutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:11:48 compute-0 nova_compute[259850]: 2025-10-11 04:11:48.603 2 DEBUG oslo_concurrency.lockutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:11:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Oct 11 04:11:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Oct 11 04:11:49 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Oct 11 04:11:49 compute-0 ceph-mon[74273]: pgmap v1368: 305 pgs: 305 active+clean; 88 MiB data, 375 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:11:49 compute-0 nova_compute[259850]: 2025-10-11 04:11:49.363 2 DEBUG nova.network.neutron [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Successfully created port: 46432b1a-fa02-4a02-9c8f-d607c2cd820c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:11:49 compute-0 podman[286294]: 2025-10-11 04:11:49.422235542 +0000 UTC m=+0.120815660 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 11 04:11:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 88 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 9.5 KiB/s wr, 121 op/s
Oct 11 04:11:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:11:50 compute-0 nova_compute[259850]: 2025-10-11 04:11:50.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:50 compute-0 ceph-mon[74273]: osdmap e315: 3 total, 3 up, 3 in
Oct 11 04:11:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:11:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2896065678' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:11:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2896065678' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:11:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:11:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:11:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:11:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:11:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:11:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:11:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3004557617' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:11:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3004557617' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:50 compute-0 nova_compute[259850]: 2025-10-11 04:11:50.996 2 DEBUG nova.network.neutron [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Successfully updated port: 46432b1a-fa02-4a02-9c8f-d607c2cd820c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:11:51 compute-0 nova_compute[259850]: 2025-10-11 04:11:51.017 2 DEBUG oslo_concurrency.lockutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Acquiring lock "refresh_cache-6f74cee5-3bb9-44f0-9a21-d6e5c1475419" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:11:51 compute-0 nova_compute[259850]: 2025-10-11 04:11:51.017 2 DEBUG oslo_concurrency.lockutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Acquired lock "refresh_cache-6f74cee5-3bb9-44f0-9a21-d6e5c1475419" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:11:51 compute-0 nova_compute[259850]: 2025-10-11 04:11:51.017 2 DEBUG nova.network.neutron [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:11:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:11:51 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1598744225' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:11:51 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1598744225' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:51 compute-0 ceph-mon[74273]: pgmap v1370: 305 pgs: 305 active+clean; 88 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 9.5 KiB/s wr, 121 op/s
Oct 11 04:11:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2896065678' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2896065678' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3004557617' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3004557617' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1598744225' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1598744225' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:51 compute-0 nova_compute[259850]: 2025-10-11 04:11:51.107 2 DEBUG nova.compute.manager [req-981faf4c-2861-4e92-9ed9-4f19042ed052 req-ecbccaf4-cc24-4757-b6c9-1ba5c20975c8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Received event network-changed-46432b1a-fa02-4a02-9c8f-d607c2cd820c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:11:51 compute-0 nova_compute[259850]: 2025-10-11 04:11:51.108 2 DEBUG nova.compute.manager [req-981faf4c-2861-4e92-9ed9-4f19042ed052 req-ecbccaf4-cc24-4757-b6c9-1ba5c20975c8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Refreshing instance network info cache due to event network-changed-46432b1a-fa02-4a02-9c8f-d607c2cd820c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:11:51 compute-0 nova_compute[259850]: 2025-10-11 04:11:51.108 2 DEBUG oslo_concurrency.lockutils [req-981faf4c-2861-4e92-9ed9-4f19042ed052 req-ecbccaf4-cc24-4757-b6c9-1ba5c20975c8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-6f74cee5-3bb9-44f0-9a21-d6e5c1475419" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:11:51 compute-0 nova_compute[259850]: 2025-10-11 04:11:51.156 2 DEBUG nova.network.neutron [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:11:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 88 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 7.9 KiB/s wr, 100 op/s
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.007 2 DEBUG nova.network.neutron [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Updating instance_info_cache with network_info: [{"id": "46432b1a-fa02-4a02-9c8f-d607c2cd820c", "address": "fa:16:3e:2e:cd:1e", "network": {"id": "69760b74-d690-4b6a-a64f-35ceb4582944", "bridge": "br-int", "label": "tempest-TestStampPattern-334026573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff14cec1ef04fa2a41f6d226bc99518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46432b1a-fa", "ovs_interfaceid": "46432b1a-fa02-4a02-9c8f-d607c2cd820c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.028 2 DEBUG oslo_concurrency.lockutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Releasing lock "refresh_cache-6f74cee5-3bb9-44f0-9a21-d6e5c1475419" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.029 2 DEBUG nova.compute.manager [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Instance network_info: |[{"id": "46432b1a-fa02-4a02-9c8f-d607c2cd820c", "address": "fa:16:3e:2e:cd:1e", "network": {"id": "69760b74-d690-4b6a-a64f-35ceb4582944", "bridge": "br-int", "label": "tempest-TestStampPattern-334026573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff14cec1ef04fa2a41f6d226bc99518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46432b1a-fa", "ovs_interfaceid": "46432b1a-fa02-4a02-9c8f-d607c2cd820c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.029 2 DEBUG oslo_concurrency.lockutils [req-981faf4c-2861-4e92-9ed9-4f19042ed052 req-ecbccaf4-cc24-4757-b6c9-1ba5c20975c8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-6f74cee5-3bb9-44f0-9a21-d6e5c1475419" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.030 2 DEBUG nova.network.neutron [req-981faf4c-2861-4e92-9ed9-4f19042ed052 req-ecbccaf4-cc24-4757-b6c9-1ba5c20975c8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Refreshing network info cache for port 46432b1a-fa02-4a02-9c8f-d607c2cd820c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.035 2 DEBUG nova.virt.libvirt.driver [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Start _get_guest_xml network_info=[{"id": "46432b1a-fa02-4a02-9c8f-d607c2cd820c", "address": "fa:16:3e:2e:cd:1e", "network": {"id": "69760b74-d690-4b6a-a64f-35ceb4582944", "bridge": "br-int", "label": "tempest-TestStampPattern-334026573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff14cec1ef04fa2a41f6d226bc99518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46432b1a-fa", "ovs_interfaceid": "46432b1a-fa02-4a02-9c8f-d607c2cd820c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'image_id': '1a107e2f-1a9d-4b6f-861d-e64bee7d56be'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.041 2 WARNING nova.virt.libvirt.driver [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.048 2 DEBUG nova.virt.libvirt.host [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.048 2 DEBUG nova.virt.libvirt.host [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.057 2 DEBUG nova.virt.libvirt.host [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.058 2 DEBUG nova.virt.libvirt.host [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.059 2 DEBUG nova.virt.libvirt.driver [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.059 2 DEBUG nova.virt.hardware [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.060 2 DEBUG nova.virt.hardware [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.061 2 DEBUG nova.virt.hardware [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:11:52 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.061 2 DEBUG nova.virt.hardware [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:11:52 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.062 2 DEBUG nova.virt.hardware [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.062 2 DEBUG nova.virt.hardware [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.063 2 DEBUG nova.virt.hardware [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.064 2 DEBUG nova.virt.hardware [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.064 2 DEBUG nova.virt.hardware [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.065 2 DEBUG nova.virt.hardware [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.065 2 DEBUG nova.virt.hardware [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.070 2 DEBUG oslo_concurrency.processutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:11:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:11:52 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3608362833' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:11:52 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3608362833' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:52 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3608362833' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:11:52 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3608362833' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:11:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:11:52 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1862507041' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.527 2 DEBUG oslo_concurrency.processutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.559 2 DEBUG nova.storage.rbd_utils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] rbd image 6f74cee5-3bb9-44f0-9a21-d6e5c1475419_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.563 2 DEBUG oslo_concurrency.processutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:52 compute-0 sudo[286381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:11:52 compute-0 sudo[286381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:11:52 compute-0 sudo[286381]: pam_unix(sudo:session): session closed for user root
Oct 11 04:11:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:11:52 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3624775785' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.996 2 DEBUG oslo_concurrency.processutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:11:52 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.998 2 DEBUG nova.virt.libvirt.vif [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:11:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1339296303',display_name='tempest-TestStampPattern-server-1339296303',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1339296303',id=14,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ/2RgkZKOpdewTMCUJ4lxqFHaHkNK2WJjvE3lEkA/Q9gA0jTZZ1SFFzP17eZUjXJUtu1TcmHAM4LPuQ7VsHIzZ1pEO3yPeDhFw+/dw5yXiw9mrTEISzDMcxVMFVOX8L1w==',key_name='tempest-TestStampPattern-1075063988',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ff14cec1ef04fa2a41f6d226bc99518',ramdisk_id='',reservation_id='r-ktl2buu1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-137571922',owner_user_name='tempest-TestStampPattern-137571922-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:11:47Z,user_data=None,user_id='ba6ea3b0ff9d4fee8a80f308d0493954',uuid=6f74cee5-3bb9-44f0-9a21-d6e5c1475419,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "46432b1a-fa02-4a02-9c8f-d607c2cd820c", "address": "fa:16:3e:2e:cd:1e", "network": {"id": "69760b74-d690-4b6a-a64f-35ceb4582944", "bridge": "br-int", "label": "tempest-TestStampPattern-334026573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "7ff14cec1ef04fa2a41f6d226bc99518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46432b1a-fa", "ovs_interfaceid": "46432b1a-fa02-4a02-9c8f-d607c2cd820c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:52.999 2 DEBUG nova.network.os_vif_util [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Converting VIF {"id": "46432b1a-fa02-4a02-9c8f-d607c2cd820c", "address": "fa:16:3e:2e:cd:1e", "network": {"id": "69760b74-d690-4b6a-a64f-35ceb4582944", "bridge": "br-int", "label": "tempest-TestStampPattern-334026573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff14cec1ef04fa2a41f6d226bc99518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46432b1a-fa", "ovs_interfaceid": "46432b1a-fa02-4a02-9c8f-d607c2cd820c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.001 2 DEBUG nova.network.os_vif_util [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:cd:1e,bridge_name='br-int',has_traffic_filtering=True,id=46432b1a-fa02-4a02-9c8f-d607c2cd820c,network=Network(69760b74-d690-4b6a-a64f-35ceb4582944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46432b1a-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.003 2 DEBUG nova.objects.instance [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6f74cee5-3bb9-44f0-9a21-d6e5c1475419 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.021 2 DEBUG nova.virt.libvirt.driver [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:11:53 compute-0 nova_compute[259850]:   <uuid>6f74cee5-3bb9-44f0-9a21-d6e5c1475419</uuid>
Oct 11 04:11:53 compute-0 nova_compute[259850]:   <name>instance-0000000e</name>
Oct 11 04:11:53 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:11:53 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:11:53 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <nova:name>tempest-TestStampPattern-server-1339296303</nova:name>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:11:52</nova:creationTime>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:11:53 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:11:53 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:11:53 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:11:53 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:11:53 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:11:53 compute-0 nova_compute[259850]:         <nova:user uuid="ba6ea3b0ff9d4fee8a80f308d0493954">tempest-TestStampPattern-137571922-project-member</nova:user>
Oct 11 04:11:53 compute-0 nova_compute[259850]:         <nova:project uuid="7ff14cec1ef04fa2a41f6d226bc99518">tempest-TestStampPattern-137571922</nova:project>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <nova:root type="image" uuid="1a107e2f-1a9d-4b6f-861d-e64bee7d56be"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <nova:ports>
Oct 11 04:11:53 compute-0 nova_compute[259850]:         <nova:port uuid="46432b1a-fa02-4a02-9c8f-d607c2cd820c">
Oct 11 04:11:53 compute-0 nova_compute[259850]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:         </nova:port>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       </nova:ports>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:11:53 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:11:53 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <system>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <entry name="serial">6f74cee5-3bb9-44f0-9a21-d6e5c1475419</entry>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <entry name="uuid">6f74cee5-3bb9-44f0-9a21-d6e5c1475419</entry>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     </system>
Oct 11 04:11:53 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:11:53 compute-0 nova_compute[259850]:   <os>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:   </os>
Oct 11 04:11:53 compute-0 nova_compute[259850]:   <features>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:   </features>
Oct 11 04:11:53 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:11:53 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:11:53 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/6f74cee5-3bb9-44f0-9a21-d6e5c1475419_disk">
Oct 11 04:11:53 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       </source>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:11:53 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/6f74cee5-3bb9-44f0-9a21-d6e5c1475419_disk.config">
Oct 11 04:11:53 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       </source>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:11:53 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <interface type="ethernet">
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <mac address="fa:16:3e:2e:cd:1e"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <mtu size="1442"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <target dev="tap46432b1a-fa"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     </interface>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/6f74cee5-3bb9-44f0-9a21-d6e5c1475419/console.log" append="off"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <video>
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     </video>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:11:53 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:11:53 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:11:53 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:11:53 compute-0 nova_compute[259850]: </domain>
Oct 11 04:11:53 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.024 2 DEBUG nova.compute.manager [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Preparing to wait for external event network-vif-plugged-46432b1a-fa02-4a02-9c8f-d607c2cd820c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.024 2 DEBUG oslo_concurrency.lockutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Acquiring lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.025 2 DEBUG oslo_concurrency.lockutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.025 2 DEBUG oslo_concurrency.lockutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.026 2 DEBUG nova.virt.libvirt.vif [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:11:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1339296303',display_name='tempest-TestStampPattern-server-1339296303',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1339296303',id=14,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ/2RgkZKOpdewTMCUJ4lxqFHaHkNK2WJjvE3lEkA/Q9gA0jTZZ1SFFzP17eZUjXJUtu1TcmHAM4LPuQ7VsHIzZ1pEO3yPeDhFw+/dw5yXiw9mrTEISzDMcxVMFVOX8L1w==',key_name='tempest-TestStampPattern-1075063988',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ff14cec1ef04fa2a41f6d226bc99518',ramdisk_id='',reservation_id='r-ktl2buu1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-137571922',owner_user_name='tempest-TestStampPattern-137571922-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:11:47Z,user_data=None,user_id='ba6ea3b0ff9d4fee8a80f308d0493954',uuid=6f74cee5-3bb9-44f0-9a21-d6e5c1475419,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "46432b1a-fa02-4a02-9c8f-d607c2cd820c", "address": "fa:16:3e:2e:cd:1e", "network": {"id": "69760b74-d690-4b6a-a64f-35ceb4582944", "bridge": "br-int", "label": "tempest-TestStampPattern-334026573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "7ff14cec1ef04fa2a41f6d226bc99518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46432b1a-fa", "ovs_interfaceid": "46432b1a-fa02-4a02-9c8f-d607c2cd820c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.027 2 DEBUG nova.network.os_vif_util [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Converting VIF {"id": "46432b1a-fa02-4a02-9c8f-d607c2cd820c", "address": "fa:16:3e:2e:cd:1e", "network": {"id": "69760b74-d690-4b6a-a64f-35ceb4582944", "bridge": "br-int", "label": "tempest-TestStampPattern-334026573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff14cec1ef04fa2a41f6d226bc99518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46432b1a-fa", "ovs_interfaceid": "46432b1a-fa02-4a02-9c8f-d607c2cd820c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.028 2 DEBUG nova.network.os_vif_util [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:cd:1e,bridge_name='br-int',has_traffic_filtering=True,id=46432b1a-fa02-4a02-9c8f-d607c2cd820c,network=Network(69760b74-d690-4b6a-a64f-35ceb4582944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46432b1a-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.029 2 DEBUG os_vif [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:cd:1e,bridge_name='br-int',has_traffic_filtering=True,id=46432b1a-fa02-4a02-9c8f-d607c2cd820c,network=Network(69760b74-d690-4b6a-a64f-35ceb4582944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46432b1a-fa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.030 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.031 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.036 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap46432b1a-fa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.037 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap46432b1a-fa, col_values=(('external_ids', {'iface-id': '46432b1a-fa02-4a02-9c8f-d607c2cd820c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:cd:1e', 'vm-uuid': '6f74cee5-3bb9-44f0-9a21-d6e5c1475419'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:53 compute-0 NetworkManager[44920]: <info>  [1760155913.0404] manager: (tap46432b1a-fa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.048 2 INFO os_vif [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:cd:1e,bridge_name='br-int',has_traffic_filtering=True,id=46432b1a-fa02-4a02-9c8f-d607c2cd820c,network=Network(69760b74-d690-4b6a-a64f-35ceb4582944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46432b1a-fa')
Oct 11 04:11:53 compute-0 sudo[286406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:11:53 compute-0 sudo[286406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:11:53 compute-0 sudo[286406]: pam_unix(sudo:session): session closed for user root
Oct 11 04:11:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e315 do_prune osdmap full prune enabled
Oct 11 04:11:53 compute-0 ceph-mon[74273]: pgmap v1371: 305 pgs: 305 active+clean; 88 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 7.9 KiB/s wr, 100 op/s
Oct 11 04:11:53 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1862507041' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:11:53 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3624775785' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.116 2 DEBUG nova.virt.libvirt.driver [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.117 2 DEBUG nova.virt.libvirt.driver [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.118 2 DEBUG nova.virt.libvirt.driver [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] No VIF found with MAC fa:16:3e:2e:cd:1e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.119 2 INFO nova.virt.libvirt.driver [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Using config drive
Oct 11 04:11:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e316 e316: 3 total, 3 up, 3 in
Oct 11 04:11:53 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e316: 3 total, 3 up, 3 in
Oct 11 04:11:53 compute-0 sudo[286435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.155 2 DEBUG nova.storage.rbd_utils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] rbd image 6f74cee5-3bb9-44f0-9a21-d6e5c1475419_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:11:53 compute-0 sudo[286435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:11:53 compute-0 sudo[286435]: pam_unix(sudo:session): session closed for user root
Oct 11 04:11:53 compute-0 sudo[286478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:11:53 compute-0 sudo[286478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.493 2 INFO nova.virt.libvirt.driver [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Creating config drive at /var/lib/nova/instances/6f74cee5-3bb9-44f0-9a21-d6e5c1475419/disk.config
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.505 2 DEBUG oslo_concurrency.processutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6f74cee5-3bb9-44f0-9a21-d6e5c1475419/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp79_oxdfk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:11:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 134 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 250 KiB/s rd, 3.2 MiB/s wr, 340 op/s
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.656 2 DEBUG oslo_concurrency.processutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6f74cee5-3bb9-44f0-9a21-d6e5c1475419/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp79_oxdfk" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.703 2 DEBUG nova.storage.rbd_utils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] rbd image 6f74cee5-3bb9-44f0-9a21-d6e5c1475419_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.709 2 DEBUG oslo_concurrency.processutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6f74cee5-3bb9-44f0-9a21-d6e5c1475419/disk.config 6f74cee5-3bb9-44f0-9a21-d6e5c1475419_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.741 2 DEBUG nova.network.neutron [req-981faf4c-2861-4e92-9ed9-4f19042ed052 req-ecbccaf4-cc24-4757-b6c9-1ba5c20975c8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Updated VIF entry in instance network info cache for port 46432b1a-fa02-4a02-9c8f-d607c2cd820c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.742 2 DEBUG nova.network.neutron [req-981faf4c-2861-4e92-9ed9-4f19042ed052 req-ecbccaf4-cc24-4757-b6c9-1ba5c20975c8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Updating instance_info_cache with network_info: [{"id": "46432b1a-fa02-4a02-9c8f-d607c2cd820c", "address": "fa:16:3e:2e:cd:1e", "network": {"id": "69760b74-d690-4b6a-a64f-35ceb4582944", "bridge": "br-int", "label": "tempest-TestStampPattern-334026573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff14cec1ef04fa2a41f6d226bc99518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46432b1a-fa", "ovs_interfaceid": "46432b1a-fa02-4a02-9c8f-d607c2cd820c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.768 2 DEBUG oslo_concurrency.lockutils [req-981faf4c-2861-4e92-9ed9-4f19042ed052 req-ecbccaf4-cc24-4757-b6c9-1ba5c20975c8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-6f74cee5-3bb9-44f0-9a21-d6e5c1475419" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:11:53 compute-0 sudo[286478]: pam_unix(sudo:session): session closed for user root
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.906 2 DEBUG oslo_concurrency.processutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6f74cee5-3bb9-44f0-9a21-d6e5c1475419/disk.config 6f74cee5-3bb9-44f0-9a21-d6e5c1475419_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:11:53 compute-0 nova_compute[259850]: 2025-10-11 04:11:53.907 2 INFO nova.virt.libvirt.driver [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Deleting local config drive /var/lib/nova/instances/6f74cee5-3bb9-44f0-9a21-d6e5c1475419/disk.config because it was imported into RBD.
Oct 11 04:11:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:11:53 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:11:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:11:53 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:11:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:11:53 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:11:53 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 6adf7356-621c-4204-b18b-1cd93f97779c does not exist
Oct 11 04:11:53 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 64831a77-b995-486c-8c28-a7924eb1ed5b does not exist
Oct 11 04:11:53 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev a5464c85-6eb7-489e-8440-4db107b34a25 does not exist
Oct 11 04:11:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:11:53 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:11:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:11:53 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:11:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:11:53 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:11:53 compute-0 kernel: tap46432b1a-fa: entered promiscuous mode
Oct 11 04:11:53 compute-0 NetworkManager[44920]: <info>  [1760155913.9730] manager: (tap46432b1a-fa): new Tun device (/org/freedesktop/NetworkManager/Devices/81)
Oct 11 04:11:54 compute-0 ovn_controller[152025]: 2025-10-11T04:11:54Z|00143|binding|INFO|Claiming lport 46432b1a-fa02-4a02-9c8f-d607c2cd820c for this chassis.
Oct 11 04:11:54 compute-0 ovn_controller[152025]: 2025-10-11T04:11:54Z|00144|binding|INFO|46432b1a-fa02-4a02-9c8f-d607c2cd820c: Claiming fa:16:3e:2e:cd:1e 10.100.0.4
Oct 11 04:11:54 compute-0 nova_compute[259850]: 2025-10-11 04:11:54.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:54 compute-0 nova_compute[259850]: 2025-10-11 04:11:54.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:54 compute-0 nova_compute[259850]: 2025-10-11 04:11:54.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:54.045 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:cd:1e 10.100.0.4'], port_security=['fa:16:3e:2e:cd:1e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '6f74cee5-3bb9-44f0-9a21-d6e5c1475419', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69760b74-d690-4b6a-a64f-35ceb4582944', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ff14cec1ef04fa2a41f6d226bc99518', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0b1fcf6f-b50b-44a2-814d-4972eb6e538b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24983540-db74-4f67-b9f8-811887ee0a83, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=46432b1a-fa02-4a02-9c8f-d607c2cd820c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:11:54 compute-0 sudo[286579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:54.047 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 46432b1a-fa02-4a02-9c8f-d607c2cd820c in datapath 69760b74-d690-4b6a-a64f-35ceb4582944 bound to our chassis
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:54.049 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 69760b74-d690-4b6a-a64f-35ceb4582944
Oct 11 04:11:54 compute-0 sudo[286579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:11:54 compute-0 systemd-machined[214869]: New machine qemu-14-instance-0000000e.
Oct 11 04:11:54 compute-0 sudo[286579]: pam_unix(sudo:session): session closed for user root
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:54.062 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[2890814c-6b7e-4a0d-8700-cb98c7d7b185]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:54.063 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap69760b74-d1 in ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 04:11:54 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000e.
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:54.065 267637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap69760b74-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:54.065 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[cb418318-3869-4fce-ab5b-36b0ba987c71]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:54.067 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[17c02273-10c2-451a-8fb7-d7679d612314]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:54.086 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[e1a1e9d7-e809-477d-914d-0d0f3fe2f795]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:11:54 compute-0 systemd-udevd[286633]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:11:54 compute-0 NetworkManager[44920]: <info>  [1760155914.1045] device (tap46432b1a-fa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:11:54 compute-0 NetworkManager[44920]: <info>  [1760155914.1057] device (tap46432b1a-fa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:54.118 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[5bd98991-c64a-407a-ad5a-aaede2bcd746]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:11:54 compute-0 sudo[286622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:11:54 compute-0 sudo[286622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:11:54 compute-0 sudo[286622]: pam_unix(sudo:session): session closed for user root
Oct 11 04:11:54 compute-0 ceph-mon[74273]: osdmap e316: 3 total, 3 up, 3 in
Oct 11 04:11:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:11:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:11:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:11:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:11:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:11:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:11:54 compute-0 ovn_controller[152025]: 2025-10-11T04:11:54Z|00145|binding|INFO|Setting lport 46432b1a-fa02-4a02-9c8f-d607c2cd820c ovn-installed in OVS
Oct 11 04:11:54 compute-0 ovn_controller[152025]: 2025-10-11T04:11:54Z|00146|binding|INFO|Setting lport 46432b1a-fa02-4a02-9c8f-d607c2cd820c up in Southbound
Oct 11 04:11:54 compute-0 nova_compute[259850]: 2025-10-11 04:11:54.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:54.152 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[62fd2617-1f18-4423-af3f-eb779bce57a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:54.160 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[fc380a08-cc4b-4c78-b9a6-b11ca2330833]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:11:54 compute-0 NetworkManager[44920]: <info>  [1760155914.1613] manager: (tap69760b74-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/82)
Oct 11 04:11:54 compute-0 podman[286607]: 2025-10-11 04:11:54.168234052 +0000 UTC m=+0.117821516 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, 
config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 11 04:11:54 compute-0 sudo[286668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:11:54 compute-0 sudo[286668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:11:54 compute-0 sudo[286668]: pam_unix(sudo:session): session closed for user root
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:54.204 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[46e842f3-bbc8-4de1-881e-8e813ad7efea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:54.207 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[a9655bf6-3830-4c41-98a9-960c24c5d7c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:11:54 compute-0 NetworkManager[44920]: <info>  [1760155914.2307] device (tap69760b74-d0): carrier: link connected
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:54.236 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[fe5f2308-6bca-4316-a08a-db6cd5dbf5bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:11:54 compute-0 sudo[286710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:11:54 compute-0 sudo[286710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:54.253 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[9971a058-f71a-49df-9580-0208128bf698]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69760b74-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:85:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429926, 'reachable_time': 21479, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286735, 'error': None, 'target': 'ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:54.270 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[4bdc780f-5128-40e9-8b41-73242f00782d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4e:85d9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 429926, 'tstamp': 429926}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286738, 'error': None, 'target': 'ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:54.289 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[e6fca5ce-1b3c-40f6-b587-68d13cd9adf4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69760b74-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:85:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429926, 'reachable_time': 21479, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286739, 'error': None, 'target': 'ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:54.323 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[675899e7-e1e4-4a89-a983-7c0037cc95ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:54.390 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[ac28e03c-75a3-4d40-a2c8-e490418f094a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:54.392 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69760b74-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:54.392 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:54.393 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69760b74-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:11:54 compute-0 nova_compute[259850]: 2025-10-11 04:11:54.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:54 compute-0 NetworkManager[44920]: <info>  [1760155914.3954] manager: (tap69760b74-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Oct 11 04:11:54 compute-0 kernel: tap69760b74-d0: entered promiscuous mode
Oct 11 04:11:54 compute-0 nova_compute[259850]: 2025-10-11 04:11:54.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:54.399 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap69760b74-d0, col_values=(('external_ids', {'iface-id': '1db9314a-9172-441f-a3d7-84ca9c891141'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:11:54 compute-0 nova_compute[259850]: 2025-10-11 04:11:54.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:54 compute-0 ovn_controller[152025]: 2025-10-11T04:11:54Z|00147|binding|INFO|Releasing lport 1db9314a-9172-441f-a3d7-84ca9c891141 from this chassis (sb_readonly=0)
Oct 11 04:11:54 compute-0 nova_compute[259850]: 2025-10-11 04:11:54.420 2 DEBUG nova.compute.manager [req-b590b307-baaf-48af-bd12-31974e3f5d03 req-b7101d57-7d0c-45a4-aa7b-8dc7d992e6cd f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Received event network-vif-plugged-46432b1a-fa02-4a02-9c8f-d607c2cd820c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:11:54 compute-0 nova_compute[259850]: 2025-10-11 04:11:54.421 2 DEBUG oslo_concurrency.lockutils [req-b590b307-baaf-48af-bd12-31974e3f5d03 req-b7101d57-7d0c-45a4-aa7b-8dc7d992e6cd f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:11:54 compute-0 nova_compute[259850]: 2025-10-11 04:11:54.421 2 DEBUG oslo_concurrency.lockutils [req-b590b307-baaf-48af-bd12-31974e3f5d03 req-b7101d57-7d0c-45a4-aa7b-8dc7d992e6cd f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:11:54 compute-0 nova_compute[259850]: 2025-10-11 04:11:54.421 2 DEBUG oslo_concurrency.lockutils [req-b590b307-baaf-48af-bd12-31974e3f5d03 req-b7101d57-7d0c-45a4-aa7b-8dc7d992e6cd f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:11:54 compute-0 nova_compute[259850]: 2025-10-11 04:11:54.422 2 DEBUG nova.compute.manager [req-b590b307-baaf-48af-bd12-31974e3f5d03 req-b7101d57-7d0c-45a4-aa7b-8dc7d992e6cd f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Processing event network-vif-plugged-46432b1a-fa02-4a02-9c8f-d607c2cd820c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:11:54 compute-0 nova_compute[259850]: 2025-10-11 04:11:54.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:54.440 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/69760b74-d690-4b6a-a64f-35ceb4582944.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/69760b74-d690-4b6a-a64f-35ceb4582944.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:54.440 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[8a6a0938-87b4-40c5-9161-fba8f8e50025]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:54.441 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: global
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]:     log         /dev/log local0 debug
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]:     log-tag     haproxy-metadata-proxy-69760b74-d690-4b6a-a64f-35ceb4582944
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]:     user        root
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]:     group       root
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]:     maxconn     1024
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]:     pidfile     /var/lib/neutron/external/pids/69760b74-d690-4b6a-a64f-35ceb4582944.pid.haproxy
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]:     daemon
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: defaults
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]:     log global
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]:     mode http
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]:     option httplog
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]:     option dontlognull
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]:     option http-server-close
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]:     option forwardfor
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]:     retries                 3
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]:     timeout http-request    30s
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]:     timeout connect         30s
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]:     timeout client          32s
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]:     timeout server          32s
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]:     timeout http-keep-alive 30s
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: listen listener
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]:     bind 169.254.169.254:80
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]:     http-request add-header X-OVN-Network-ID 69760b74-d690-4b6a-a64f-35ceb4582944
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 04:11:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:11:54.441 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944', 'env', 'PROCESS_TAG=haproxy-69760b74-d690-4b6a-a64f-35ceb4582944', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/69760b74-d690-4b6a-a64f-35ceb4582944.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 04:11:54 compute-0 podman[286828]: 2025-10-11 04:11:54.620679131 +0000 UTC m=+0.053211578 container create 53368094aea0f6a4812c345a5b687fb2099a213771cd569c15e781dd7ce9a2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ritchie, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:11:54 compute-0 systemd[1]: Started libpod-conmon-53368094aea0f6a4812c345a5b687fb2099a213771cd569c15e781dd7ce9a2be.scope.
Oct 11 04:11:54 compute-0 podman[286828]: 2025-10-11 04:11:54.59646081 +0000 UTC m=+0.028993317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:11:54 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:11:54 compute-0 podman[286828]: 2025-10-11 04:11:54.71377362 +0000 UTC m=+0.146306067 container init 53368094aea0f6a4812c345a5b687fb2099a213771cd569c15e781dd7ce9a2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ritchie, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:11:54 compute-0 podman[286828]: 2025-10-11 04:11:54.721474157 +0000 UTC m=+0.154006644 container start 53368094aea0f6a4812c345a5b687fb2099a213771cd569c15e781dd7ce9a2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ritchie, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 04:11:54 compute-0 podman[286828]: 2025-10-11 04:11:54.725000246 +0000 UTC m=+0.157532713 container attach 53368094aea0f6a4812c345a5b687fb2099a213771cd569c15e781dd7ce9a2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 04:11:54 compute-0 reverent_ritchie[286844]: 167 167
Oct 11 04:11:54 compute-0 systemd[1]: libpod-53368094aea0f6a4812c345a5b687fb2099a213771cd569c15e781dd7ce9a2be.scope: Deactivated successfully.
Oct 11 04:11:54 compute-0 conmon[286844]: conmon 53368094aea0f6a4812c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-53368094aea0f6a4812c345a5b687fb2099a213771cd569c15e781dd7ce9a2be.scope/container/memory.events
Oct 11 04:11:54 compute-0 podman[286828]: 2025-10-11 04:11:54.728007911 +0000 UTC m=+0.160540368 container died 53368094aea0f6a4812c345a5b687fb2099a213771cd569c15e781dd7ce9a2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ritchie, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 04:11:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-c48ff8388f9c9a7c6e65bd8bbca7855271dd50dde0d05e8372efe97e578d1217-merged.mount: Deactivated successfully.
Oct 11 04:11:54 compute-0 podman[286828]: 2025-10-11 04:11:54.778307626 +0000 UTC m=+0.210840073 container remove 53368094aea0f6a4812c345a5b687fb2099a213771cd569c15e781dd7ce9a2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ritchie, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 04:11:54 compute-0 systemd[1]: libpod-conmon-53368094aea0f6a4812c345a5b687fb2099a213771cd569c15e781dd7ce9a2be.scope: Deactivated successfully.
Oct 11 04:11:54 compute-0 podman[286877]: 2025-10-11 04:11:54.825937706 +0000 UTC m=+0.055946045 container create e2c1c1385ce1a242c91f8f257f9c488523f09cf17a3f4cae6cc39386ad85b87e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 04:11:54 compute-0 systemd[1]: Started libpod-conmon-e2c1c1385ce1a242c91f8f257f9c488523f09cf17a3f4cae6cc39386ad85b87e.scope.
Oct 11 04:11:54 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:11:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:11:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e316 do_prune osdmap full prune enabled
Oct 11 04:11:54 compute-0 podman[286877]: 2025-10-11 04:11:54.805886262 +0000 UTC m=+0.035894611 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 04:11:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e317 e317: 3 total, 3 up, 3 in
Oct 11 04:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d29e2e58c75e07fd5a341ec93ed2396e302c7335665708d6060cbb024b9512c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:54 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e317: 3 total, 3 up, 3 in
Oct 11 04:11:54 compute-0 podman[286877]: 2025-10-11 04:11:54.915446454 +0000 UTC m=+0.145454813 container init e2c1c1385ce1a242c91f8f257f9c488523f09cf17a3f4cae6cc39386ad85b87e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 11 04:11:54 compute-0 podman[286877]: 2025-10-11 04:11:54.920651821 +0000 UTC m=+0.150660160 container start e2c1c1385ce1a242c91f8f257f9c488523f09cf17a3f4cae6cc39386ad85b87e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 04:11:54 compute-0 neutron-haproxy-ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944[286899]: [NOTICE]   (286920) : New worker (286924) forked
Oct 11 04:11:54 compute-0 neutron-haproxy-ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944[286899]: [NOTICE]   (286920) : Loading success.
Oct 11 04:11:54 compute-0 podman[286907]: 2025-10-11 04:11:54.942689201 +0000 UTC m=+0.041703924 container create 0c50c265858a237ad8ad801b629b0a4e283f45dacf5c922409704289990eb3f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_curie, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:11:54 compute-0 systemd[1]: Started libpod-conmon-0c50c265858a237ad8ad801b629b0a4e283f45dacf5c922409704289990eb3f1.scope.
Oct 11 04:11:55 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:11:55 compute-0 podman[286907]: 2025-10-11 04:11:54.92451759 +0000 UTC m=+0.023532333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:11:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac8c376e02dadbe56c6d4ef403c2ae5ac524cdbd59c59c24547e6af781cbdcd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac8c376e02dadbe56c6d4ef403c2ae5ac524cdbd59c59c24547e6af781cbdcd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac8c376e02dadbe56c6d4ef403c2ae5ac524cdbd59c59c24547e6af781cbdcd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac8c376e02dadbe56c6d4ef403c2ae5ac524cdbd59c59c24547e6af781cbdcd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac8c376e02dadbe56c6d4ef403c2ae5ac524cdbd59c59c24547e6af781cbdcd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:55 compute-0 nova_compute[259850]: 2025-10-11 04:11:55.037 2 DEBUG nova.compute.manager [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:11:55 compute-0 nova_compute[259850]: 2025-10-11 04:11:55.038 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155915.0369587, 6f74cee5-3bb9-44f0-9a21-d6e5c1475419 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:11:55 compute-0 nova_compute[259850]: 2025-10-11 04:11:55.038 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] VM Started (Lifecycle Event)
Oct 11 04:11:55 compute-0 nova_compute[259850]: 2025-10-11 04:11:55.042 2 DEBUG nova.virt.libvirt.driver [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:11:55 compute-0 podman[286907]: 2025-10-11 04:11:55.04466455 +0000 UTC m=+0.143679323 container init 0c50c265858a237ad8ad801b629b0a4e283f45dacf5c922409704289990eb3f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_curie, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:11:55 compute-0 nova_compute[259850]: 2025-10-11 04:11:55.049 2 INFO nova.virt.libvirt.driver [-] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Instance spawned successfully.
Oct 11 04:11:55 compute-0 nova_compute[259850]: 2025-10-11 04:11:55.049 2 DEBUG nova.virt.libvirt.driver [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:11:55 compute-0 podman[286907]: 2025-10-11 04:11:55.055793373 +0000 UTC m=+0.154808106 container start 0c50c265858a237ad8ad801b629b0a4e283f45dacf5c922409704289990eb3f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:11:55 compute-0 nova_compute[259850]: 2025-10-11 04:11:55.058 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:11:55 compute-0 podman[286907]: 2025-10-11 04:11:55.06031047 +0000 UTC m=+0.159325233 container attach 0c50c265858a237ad8ad801b629b0a4e283f45dacf5c922409704289990eb3f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_curie, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:11:55 compute-0 nova_compute[259850]: 2025-10-11 04:11:55.062 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:11:55 compute-0 nova_compute[259850]: 2025-10-11 04:11:55.069 2 DEBUG nova.virt.libvirt.driver [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:11:55 compute-0 nova_compute[259850]: 2025-10-11 04:11:55.070 2 DEBUG nova.virt.libvirt.driver [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:11:55 compute-0 nova_compute[259850]: 2025-10-11 04:11:55.070 2 DEBUG nova.virt.libvirt.driver [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:11:55 compute-0 nova_compute[259850]: 2025-10-11 04:11:55.071 2 DEBUG nova.virt.libvirt.driver [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:11:55 compute-0 nova_compute[259850]: 2025-10-11 04:11:55.071 2 DEBUG nova.virt.libvirt.driver [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:11:55 compute-0 nova_compute[259850]: 2025-10-11 04:11:55.072 2 DEBUG nova.virt.libvirt.driver [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:11:55 compute-0 nova_compute[259850]: 2025-10-11 04:11:55.078 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:11:55 compute-0 nova_compute[259850]: 2025-10-11 04:11:55.079 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155915.0379646, 6f74cee5-3bb9-44f0-9a21-d6e5c1475419 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:11:55 compute-0 nova_compute[259850]: 2025-10-11 04:11:55.079 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] VM Paused (Lifecycle Event)
Oct 11 04:11:55 compute-0 nova_compute[259850]: 2025-10-11 04:11:55.102 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:11:55 compute-0 nova_compute[259850]: 2025-10-11 04:11:55.106 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155915.0415637, 6f74cee5-3bb9-44f0-9a21-d6e5c1475419 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:11:55 compute-0 nova_compute[259850]: 2025-10-11 04:11:55.106 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] VM Resumed (Lifecycle Event)
Oct 11 04:11:55 compute-0 nova_compute[259850]: 2025-10-11 04:11:55.126 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:11:55 compute-0 nova_compute[259850]: 2025-10-11 04:11:55.130 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:11:55 compute-0 nova_compute[259850]: 2025-10-11 04:11:55.138 2 INFO nova.compute.manager [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Took 7.41 seconds to spawn the instance on the hypervisor.
Oct 11 04:11:55 compute-0 nova_compute[259850]: 2025-10-11 04:11:55.138 2 DEBUG nova.compute.manager [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:11:55 compute-0 ceph-mon[74273]: pgmap v1373: 305 pgs: 305 active+clean; 134 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 250 KiB/s rd, 3.2 MiB/s wr, 340 op/s
Oct 11 04:11:55 compute-0 ceph-mon[74273]: osdmap e317: 3 total, 3 up, 3 in
Oct 11 04:11:55 compute-0 nova_compute[259850]: 2025-10-11 04:11:55.151 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:11:55 compute-0 nova_compute[259850]: 2025-10-11 04:11:55.221 2 INFO nova.compute.manager [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Took 8.49 seconds to build instance.
Oct 11 04:11:55 compute-0 nova_compute[259850]: 2025-10-11 04:11:55.241 2 DEBUG oslo_concurrency.lockutils [None req-3f127c02-8097-41f0-907e-336c878d4073 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:11:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 134 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 177 KiB/s rd, 3.3 MiB/s wr, 238 op/s
Oct 11 04:11:56 compute-0 boring_curie[286935]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:11:56 compute-0 boring_curie[286935]: --> relative data size: 1.0
Oct 11 04:11:56 compute-0 boring_curie[286935]: --> All data devices are unavailable
Oct 11 04:11:56 compute-0 systemd[1]: libpod-0c50c265858a237ad8ad801b629b0a4e283f45dacf5c922409704289990eb3f1.scope: Deactivated successfully.
Oct 11 04:11:56 compute-0 systemd[1]: libpod-0c50c265858a237ad8ad801b629b0a4e283f45dacf5c922409704289990eb3f1.scope: Consumed 1.067s CPU time.
Oct 11 04:11:56 compute-0 podman[286907]: 2025-10-11 04:11:56.227389937 +0000 UTC m=+1.326404670 container died 0c50c265858a237ad8ad801b629b0a4e283f45dacf5c922409704289990eb3f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:11:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ac8c376e02dadbe56c6d4ef403c2ae5ac524cdbd59c59c24547e6af781cbdcd-merged.mount: Deactivated successfully.
Oct 11 04:11:56 compute-0 podman[286907]: 2025-10-11 04:11:56.285980115 +0000 UTC m=+1.384994838 container remove 0c50c265858a237ad8ad801b629b0a4e283f45dacf5c922409704289990eb3f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:11:56 compute-0 systemd[1]: libpod-conmon-0c50c265858a237ad8ad801b629b0a4e283f45dacf5c922409704289990eb3f1.scope: Deactivated successfully.
Oct 11 04:11:56 compute-0 sudo[286710]: pam_unix(sudo:session): session closed for user root
Oct 11 04:11:56 compute-0 sudo[286978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:11:56 compute-0 sudo[286978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:11:56 compute-0 sudo[286978]: pam_unix(sudo:session): session closed for user root
Oct 11 04:11:56 compute-0 nova_compute[259850]: 2025-10-11 04:11:56.491 2 DEBUG nova.compute.manager [req-24ae3414-9366-4d8e-a50c-db5ee9d123a3 req-38028bd7-41d9-43b0-8e3c-2611f62378ad f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Received event network-vif-plugged-46432b1a-fa02-4a02-9c8f-d607c2cd820c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:11:56 compute-0 nova_compute[259850]: 2025-10-11 04:11:56.494 2 DEBUG oslo_concurrency.lockutils [req-24ae3414-9366-4d8e-a50c-db5ee9d123a3 req-38028bd7-41d9-43b0-8e3c-2611f62378ad f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:11:56 compute-0 nova_compute[259850]: 2025-10-11 04:11:56.494 2 DEBUG oslo_concurrency.lockutils [req-24ae3414-9366-4d8e-a50c-db5ee9d123a3 req-38028bd7-41d9-43b0-8e3c-2611f62378ad f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:11:56 compute-0 nova_compute[259850]: 2025-10-11 04:11:56.495 2 DEBUG oslo_concurrency.lockutils [req-24ae3414-9366-4d8e-a50c-db5ee9d123a3 req-38028bd7-41d9-43b0-8e3c-2611f62378ad f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:11:56 compute-0 nova_compute[259850]: 2025-10-11 04:11:56.495 2 DEBUG nova.compute.manager [req-24ae3414-9366-4d8e-a50c-db5ee9d123a3 req-38028bd7-41d9-43b0-8e3c-2611f62378ad f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] No waiting events found dispatching network-vif-plugged-46432b1a-fa02-4a02-9c8f-d607c2cd820c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:11:56 compute-0 nova_compute[259850]: 2025-10-11 04:11:56.496 2 WARNING nova.compute.manager [req-24ae3414-9366-4d8e-a50c-db5ee9d123a3 req-38028bd7-41d9-43b0-8e3c-2611f62378ad f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Received unexpected event network-vif-plugged-46432b1a-fa02-4a02-9c8f-d607c2cd820c for instance with vm_state active and task_state None.
Oct 11 04:11:56 compute-0 sudo[287003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:11:56 compute-0 sudo[287003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:11:56 compute-0 sudo[287003]: pam_unix(sudo:session): session closed for user root
Oct 11 04:11:56 compute-0 sudo[287028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:11:56 compute-0 sudo[287028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:11:56 compute-0 sudo[287028]: pam_unix(sudo:session): session closed for user root
Oct 11 04:11:56 compute-0 sudo[287053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 04:11:56 compute-0 sudo[287053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:11:57 compute-0 podman[287120]: 2025-10-11 04:11:57.102817927 +0000 UTC m=+0.065682829 container create 49a728ae32af661164a2733f13b11d3e337c34862fcaf6f536bdc7cd9342a017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_pasteur, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:11:57 compute-0 systemd[1]: Started libpod-conmon-49a728ae32af661164a2733f13b11d3e337c34862fcaf6f536bdc7cd9342a017.scope.
Oct 11 04:11:57 compute-0 ceph-mon[74273]: pgmap v1375: 305 pgs: 305 active+clean; 134 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 177 KiB/s rd, 3.3 MiB/s wr, 238 op/s
Oct 11 04:11:57 compute-0 podman[287120]: 2025-10-11 04:11:57.084944264 +0000 UTC m=+0.047809186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:11:57 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:11:57 compute-0 podman[287120]: 2025-10-11 04:11:57.204986862 +0000 UTC m=+0.167851794 container init 49a728ae32af661164a2733f13b11d3e337c34862fcaf6f536bdc7cd9342a017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:11:57 compute-0 podman[287120]: 2025-10-11 04:11:57.213861782 +0000 UTC m=+0.176726714 container start 49a728ae32af661164a2733f13b11d3e337c34862fcaf6f536bdc7cd9342a017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 04:11:57 compute-0 podman[287120]: 2025-10-11 04:11:57.217432512 +0000 UTC m=+0.180297424 container attach 49a728ae32af661164a2733f13b11d3e337c34862fcaf6f536bdc7cd9342a017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_pasteur, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 04:11:57 compute-0 admiring_pasteur[287136]: 167 167
Oct 11 04:11:57 compute-0 systemd[1]: libpod-49a728ae32af661164a2733f13b11d3e337c34862fcaf6f536bdc7cd9342a017.scope: Deactivated successfully.
Oct 11 04:11:57 compute-0 podman[287120]: 2025-10-11 04:11:57.221884847 +0000 UTC m=+0.184749749 container died 49a728ae32af661164a2733f13b11d3e337c34862fcaf6f536bdc7cd9342a017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_pasteur, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:11:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-c74d470e6225aa318a0c28700256b9fc4a07af15a25fae1a7626e607d9788baf-merged.mount: Deactivated successfully.
Oct 11 04:11:57 compute-0 podman[287120]: 2025-10-11 04:11:57.260226806 +0000 UTC m=+0.223091708 container remove 49a728ae32af661164a2733f13b11d3e337c34862fcaf6f536bdc7cd9342a017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:11:57 compute-0 systemd[1]: libpod-conmon-49a728ae32af661164a2733f13b11d3e337c34862fcaf6f536bdc7cd9342a017.scope: Deactivated successfully.
Oct 11 04:11:57 compute-0 podman[287159]: 2025-10-11 04:11:57.481230034 +0000 UTC m=+0.068753435 container create a2aee627577d7ac32fafeb3eae0406cbfa566e53f726e04736e97d6991a4d66b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feynman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 04:11:57 compute-0 podman[287159]: 2025-10-11 04:11:57.454117981 +0000 UTC m=+0.041641442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:11:57 compute-0 systemd[1]: Started libpod-conmon-a2aee627577d7ac32fafeb3eae0406cbfa566e53f726e04736e97d6991a4d66b.scope.
Oct 11 04:11:57 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:11:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eaff55519f08b345895cdae47065e6a345c362a85f41f885c587c10f42d22d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eaff55519f08b345895cdae47065e6a345c362a85f41f885c587c10f42d22d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eaff55519f08b345895cdae47065e6a345c362a85f41f885c587c10f42d22d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eaff55519f08b345895cdae47065e6a345c362a85f41f885c587c10f42d22d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:57 compute-0 podman[287159]: 2025-10-11 04:11:57.617491068 +0000 UTC m=+0.205014459 container init a2aee627577d7ac32fafeb3eae0406cbfa566e53f726e04736e97d6991a4d66b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feynman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 04:11:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 134 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 2.7 MiB/s wr, 195 op/s
Oct 11 04:11:57 compute-0 podman[287159]: 2025-10-11 04:11:57.628980751 +0000 UTC m=+0.216504122 container start a2aee627577d7ac32fafeb3eae0406cbfa566e53f726e04736e97d6991a4d66b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feynman, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:11:57 compute-0 podman[287159]: 2025-10-11 04:11:57.633295452 +0000 UTC m=+0.220818863 container attach a2aee627577d7ac32fafeb3eae0406cbfa566e53f726e04736e97d6991a4d66b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct 11 04:11:57 compute-0 nova_compute[259850]: 2025-10-11 04:11:57.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:58 compute-0 nova_compute[259850]: 2025-10-11 04:11:58.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:11:58 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1441527227' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:11:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e317 do_prune osdmap full prune enabled
Oct 11 04:11:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e318 e318: 3 total, 3 up, 3 in
Oct 11 04:11:58 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e318: 3 total, 3 up, 3 in
Oct 11 04:11:58 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1441527227' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:11:58 compute-0 nova_compute[259850]: 2025-10-11 04:11:58.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:58 compute-0 NetworkManager[44920]: <info>  [1760155918.3523] manager: (patch-br-int-to-provnet-86cd831a-6a58-4ba8-a51c-57fa1a3acacc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Oct 11 04:11:58 compute-0 NetworkManager[44920]: <info>  [1760155918.3536] manager: (patch-provnet-86cd831a-6a58-4ba8-a51c-57fa1a3acacc-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Oct 11 04:11:58 compute-0 elastic_feynman[287176]: {
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:     "0": [
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:         {
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "devices": [
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "/dev/loop3"
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             ],
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "lv_name": "ceph_lv0",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "lv_size": "21470642176",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "name": "ceph_lv0",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "tags": {
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.cluster_name": "ceph",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.crush_device_class": "",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.encrypted": "0",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.osd_id": "0",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.type": "block",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.vdo": "0"
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             },
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "type": "block",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "vg_name": "ceph_vg0"
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:         }
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:     ],
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:     "1": [
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:         {
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "devices": [
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "/dev/loop4"
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             ],
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "lv_name": "ceph_lv1",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "lv_size": "21470642176",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "name": "ceph_lv1",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "tags": {
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.cluster_name": "ceph",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.crush_device_class": "",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.encrypted": "0",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.osd_id": "1",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.type": "block",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.vdo": "0"
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             },
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "type": "block",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "vg_name": "ceph_vg1"
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:         }
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:     ],
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:     "2": [
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:         {
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "devices": [
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "/dev/loop5"
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             ],
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "lv_name": "ceph_lv2",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "lv_size": "21470642176",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "name": "ceph_lv2",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "tags": {
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.cluster_name": "ceph",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.crush_device_class": "",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.encrypted": "0",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.osd_id": "2",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.type": "block",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:                 "ceph.vdo": "0"
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             },
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "type": "block",
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:             "vg_name": "ceph_vg2"
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:         }
Oct 11 04:11:58 compute-0 elastic_feynman[287176]:     ]
Oct 11 04:11:58 compute-0 elastic_feynman[287176]: }
Oct 11 04:11:58 compute-0 ovn_controller[152025]: 2025-10-11T04:11:58Z|00148|binding|INFO|Releasing lport 1db9314a-9172-441f-a3d7-84ca9c891141 from this chassis (sb_readonly=0)
Oct 11 04:11:58 compute-0 nova_compute[259850]: 2025-10-11 04:11:58.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:58 compute-0 nova_compute[259850]: 2025-10-11 04:11:58.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:11:58 compute-0 systemd[1]: libpod-a2aee627577d7ac32fafeb3eae0406cbfa566e53f726e04736e97d6991a4d66b.scope: Deactivated successfully.
Oct 11 04:11:58 compute-0 podman[287186]: 2025-10-11 04:11:58.525037941 +0000 UTC m=+0.031108086 container died a2aee627577d7ac32fafeb3eae0406cbfa566e53f726e04736e97d6991a4d66b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feynman, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 04:11:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-3eaff55519f08b345895cdae47065e6a345c362a85f41f885c587c10f42d22d0-merged.mount: Deactivated successfully.
Oct 11 04:11:58 compute-0 podman[287186]: 2025-10-11 04:11:58.571781156 +0000 UTC m=+0.077851281 container remove a2aee627577d7ac32fafeb3eae0406cbfa566e53f726e04736e97d6991a4d66b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 04:11:58 compute-0 systemd[1]: libpod-conmon-a2aee627577d7ac32fafeb3eae0406cbfa566e53f726e04736e97d6991a4d66b.scope: Deactivated successfully.
Oct 11 04:11:58 compute-0 nova_compute[259850]: 2025-10-11 04:11:58.607 2 DEBUG nova.compute.manager [req-60c51893-9205-4efb-add7-89eb6621dc63 req-a0f54b77-a3ae-41d3-9720-695e6a12def3 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Received event network-changed-46432b1a-fa02-4a02-9c8f-d607c2cd820c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:11:58 compute-0 nova_compute[259850]: 2025-10-11 04:11:58.607 2 DEBUG nova.compute.manager [req-60c51893-9205-4efb-add7-89eb6621dc63 req-a0f54b77-a3ae-41d3-9720-695e6a12def3 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Refreshing instance network info cache due to event network-changed-46432b1a-fa02-4a02-9c8f-d607c2cd820c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:11:58 compute-0 nova_compute[259850]: 2025-10-11 04:11:58.608 2 DEBUG oslo_concurrency.lockutils [req-60c51893-9205-4efb-add7-89eb6621dc63 req-a0f54b77-a3ae-41d3-9720-695e6a12def3 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-6f74cee5-3bb9-44f0-9a21-d6e5c1475419" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:11:58 compute-0 nova_compute[259850]: 2025-10-11 04:11:58.608 2 DEBUG oslo_concurrency.lockutils [req-60c51893-9205-4efb-add7-89eb6621dc63 req-a0f54b77-a3ae-41d3-9720-695e6a12def3 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-6f74cee5-3bb9-44f0-9a21-d6e5c1475419" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:11:58 compute-0 nova_compute[259850]: 2025-10-11 04:11:58.608 2 DEBUG nova.network.neutron [req-60c51893-9205-4efb-add7-89eb6621dc63 req-a0f54b77-a3ae-41d3-9720-695e6a12def3 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Refreshing network info cache for port 46432b1a-fa02-4a02-9c8f-d607c2cd820c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:11:58 compute-0 sudo[287053]: pam_unix(sudo:session): session closed for user root
Oct 11 04:11:58 compute-0 sudo[287201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:11:58 compute-0 sudo[287201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:11:58 compute-0 sudo[287201]: pam_unix(sudo:session): session closed for user root
Oct 11 04:11:58 compute-0 sudo[287226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:11:58 compute-0 sudo[287226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:11:58 compute-0 sudo[287226]: pam_unix(sudo:session): session closed for user root
Oct 11 04:11:58 compute-0 sudo[287251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:11:58 compute-0 sudo[287251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:11:58 compute-0 sudo[287251]: pam_unix(sudo:session): session closed for user root
Oct 11 04:11:58 compute-0 sudo[287276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 04:11:58 compute-0 sudo[287276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:11:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e318 do_prune osdmap full prune enabled
Oct 11 04:11:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e319 e319: 3 total, 3 up, 3 in
Oct 11 04:11:59 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e319: 3 total, 3 up, 3 in
Oct 11 04:11:59 compute-0 ceph-mon[74273]: pgmap v1376: 305 pgs: 305 active+clean; 134 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 2.7 MiB/s wr, 195 op/s
Oct 11 04:11:59 compute-0 ceph-mon[74273]: osdmap e318: 3 total, 3 up, 3 in
Oct 11 04:11:59 compute-0 podman[287341]: 2025-10-11 04:11:59.436412263 +0000 UTC m=+0.043522345 container create 447259d383e398f7c3f87049320050686027624b8f185c24f160a4df3b1c667c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_poincare, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:11:59 compute-0 systemd[1]: Started libpod-conmon-447259d383e398f7c3f87049320050686027624b8f185c24f160a4df3b1c667c.scope.
Oct 11 04:11:59 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:11:59 compute-0 podman[287341]: 2025-10-11 04:11:59.419043514 +0000 UTC m=+0.026153616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:11:59 compute-0 podman[287341]: 2025-10-11 04:11:59.530203082 +0000 UTC m=+0.137313244 container init 447259d383e398f7c3f87049320050686027624b8f185c24f160a4df3b1c667c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_poincare, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 04:11:59 compute-0 podman[287341]: 2025-10-11 04:11:59.543360842 +0000 UTC m=+0.150470954 container start 447259d383e398f7c3f87049320050686027624b8f185c24f160a4df3b1c667c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:11:59 compute-0 crazy_poincare[287358]: 167 167
Oct 11 04:11:59 compute-0 podman[287341]: 2025-10-11 04:11:59.547520019 +0000 UTC m=+0.154630131 container attach 447259d383e398f7c3f87049320050686027624b8f185c24f160a4df3b1c667c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:11:59 compute-0 systemd[1]: libpod-447259d383e398f7c3f87049320050686027624b8f185c24f160a4df3b1c667c.scope: Deactivated successfully.
Oct 11 04:11:59 compute-0 podman[287341]: 2025-10-11 04:11:59.54897799 +0000 UTC m=+0.156088112 container died 447259d383e398f7c3f87049320050686027624b8f185c24f160a4df3b1c667c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:11:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d3a03a8bd65d75ef1ae0edc2348b6c643d7da553399c3d47955d042191749b1-merged.mount: Deactivated successfully.
Oct 11 04:11:59 compute-0 podman[287341]: 2025-10-11 04:11:59.598409811 +0000 UTC m=+0.205519903 container remove 447259d383e398f7c3f87049320050686027624b8f185c24f160a4df3b1c667c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:11:59 compute-0 systemd[1]: libpod-conmon-447259d383e398f7c3f87049320050686027624b8f185c24f160a4df3b1c667c.scope: Deactivated successfully.
Oct 11 04:11:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 29 KiB/s wr, 213 op/s
Oct 11 04:11:59 compute-0 nova_compute[259850]: 2025-10-11 04:11:59.795 2 DEBUG nova.network.neutron [req-60c51893-9205-4efb-add7-89eb6621dc63 req-a0f54b77-a3ae-41d3-9720-695e6a12def3 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Updated VIF entry in instance network info cache for port 46432b1a-fa02-4a02-9c8f-d607c2cd820c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:11:59 compute-0 nova_compute[259850]: 2025-10-11 04:11:59.798 2 DEBUG nova.network.neutron [req-60c51893-9205-4efb-add7-89eb6621dc63 req-a0f54b77-a3ae-41d3-9720-695e6a12def3 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Updating instance_info_cache with network_info: [{"id": "46432b1a-fa02-4a02-9c8f-d607c2cd820c", "address": "fa:16:3e:2e:cd:1e", "network": {"id": "69760b74-d690-4b6a-a64f-35ceb4582944", "bridge": "br-int", "label": "tempest-TestStampPattern-334026573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff14cec1ef04fa2a41f6d226bc99518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46432b1a-fa", "ovs_interfaceid": "46432b1a-fa02-4a02-9c8f-d607c2cd820c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:11:59 compute-0 podman[287381]: 2025-10-11 04:11:59.811858926 +0000 UTC m=+0.044669617 container create 13128c52a6c11bbd8b5b5f3c2c67ceecc8eddb74ccbe17163c435051ce943a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:11:59 compute-0 nova_compute[259850]: 2025-10-11 04:11:59.818 2 DEBUG oslo_concurrency.lockutils [req-60c51893-9205-4efb-add7-89eb6621dc63 req-a0f54b77-a3ae-41d3-9720-695e6a12def3 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-6f74cee5-3bb9-44f0-9a21-d6e5c1475419" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:11:59 compute-0 systemd[1]: Started libpod-conmon-13128c52a6c11bbd8b5b5f3c2c67ceecc8eddb74ccbe17163c435051ce943a71.scope.
Oct 11 04:11:59 compute-0 podman[287381]: 2025-10-11 04:11:59.792179583 +0000 UTC m=+0.024990364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:11:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:11:59 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:11:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e21e68a716dfc7ac6d5e0f40f1a7aa34e5eb838740a0454c350458891bc8d207/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e21e68a716dfc7ac6d5e0f40f1a7aa34e5eb838740a0454c350458891bc8d207/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e21e68a716dfc7ac6d5e0f40f1a7aa34e5eb838740a0454c350458891bc8d207/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e21e68a716dfc7ac6d5e0f40f1a7aa34e5eb838740a0454c350458891bc8d207/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:59 compute-0 podman[287381]: 2025-10-11 04:11:59.943239013 +0000 UTC m=+0.176049734 container init 13128c52a6c11bbd8b5b5f3c2c67ceecc8eddb74ccbe17163c435051ce943a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:11:59 compute-0 podman[287381]: 2025-10-11 04:11:59.956687051 +0000 UTC m=+0.189497762 container start 13128c52a6c11bbd8b5b5f3c2c67ceecc8eddb74ccbe17163c435051ce943a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_raman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 04:11:59 compute-0 podman[287381]: 2025-10-11 04:11:59.960508309 +0000 UTC m=+0.193319030 container attach 13128c52a6c11bbd8b5b5f3c2c67ceecc8eddb74ccbe17163c435051ce943a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_raman, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Oct 11 04:12:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e319 do_prune osdmap full prune enabled
Oct 11 04:12:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e320 e320: 3 total, 3 up, 3 in
Oct 11 04:12:00 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e320: 3 total, 3 up, 3 in
Oct 11 04:12:00 compute-0 ceph-mon[74273]: osdmap e319: 3 total, 3 up, 3 in
Oct 11 04:12:01 compute-0 vigilant_raman[287398]: {
Oct 11 04:12:01 compute-0 vigilant_raman[287398]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 04:12:01 compute-0 vigilant_raman[287398]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:12:01 compute-0 vigilant_raman[287398]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:12:01 compute-0 vigilant_raman[287398]:         "osd_id": 1,
Oct 11 04:12:01 compute-0 vigilant_raman[287398]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:12:01 compute-0 vigilant_raman[287398]:         "type": "bluestore"
Oct 11 04:12:01 compute-0 vigilant_raman[287398]:     },
Oct 11 04:12:01 compute-0 vigilant_raman[287398]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 04:12:01 compute-0 vigilant_raman[287398]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:12:01 compute-0 vigilant_raman[287398]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:12:01 compute-0 vigilant_raman[287398]:         "osd_id": 2,
Oct 11 04:12:01 compute-0 vigilant_raman[287398]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:12:01 compute-0 vigilant_raman[287398]:         "type": "bluestore"
Oct 11 04:12:01 compute-0 vigilant_raman[287398]:     },
Oct 11 04:12:01 compute-0 vigilant_raman[287398]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 04:12:01 compute-0 vigilant_raman[287398]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:12:01 compute-0 vigilant_raman[287398]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:12:01 compute-0 vigilant_raman[287398]:         "osd_id": 0,
Oct 11 04:12:01 compute-0 vigilant_raman[287398]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:12:01 compute-0 vigilant_raman[287398]:         "type": "bluestore"
Oct 11 04:12:01 compute-0 vigilant_raman[287398]:     }
Oct 11 04:12:01 compute-0 vigilant_raman[287398]: }
Oct 11 04:12:01 compute-0 systemd[1]: libpod-13128c52a6c11bbd8b5b5f3c2c67ceecc8eddb74ccbe17163c435051ce943a71.scope: Deactivated successfully.
Oct 11 04:12:01 compute-0 podman[287381]: 2025-10-11 04:12:01.065182489 +0000 UTC m=+1.297993210 container died 13128c52a6c11bbd8b5b5f3c2c67ceecc8eddb74ccbe17163c435051ce943a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_raman, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 04:12:01 compute-0 systemd[1]: libpod-13128c52a6c11bbd8b5b5f3c2c67ceecc8eddb74ccbe17163c435051ce943a71.scope: Consumed 1.105s CPU time.
Oct 11 04:12:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-e21e68a716dfc7ac6d5e0f40f1a7aa34e5eb838740a0454c350458891bc8d207-merged.mount: Deactivated successfully.
Oct 11 04:12:01 compute-0 podman[287381]: 2025-10-11 04:12:01.147900187 +0000 UTC m=+1.380710878 container remove 13128c52a6c11bbd8b5b5f3c2c67ceecc8eddb74ccbe17163c435051ce943a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:12:01 compute-0 systemd[1]: libpod-conmon-13128c52a6c11bbd8b5b5f3c2c67ceecc8eddb74ccbe17163c435051ce943a71.scope: Deactivated successfully.
Oct 11 04:12:01 compute-0 sudo[287276]: pam_unix(sudo:session): session closed for user root
Oct 11 04:12:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:12:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:12:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:12:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:12:01 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev a3a20cc4-0b83-47eb-9d0f-b3d45e6e1eb9 does not exist
Oct 11 04:12:01 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 1a66d1b3-5bb5-4a95-b97b-d2ab501904e5 does not exist
Oct 11 04:12:01 compute-0 ceph-mon[74273]: pgmap v1379: 305 pgs: 305 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 29 KiB/s wr, 213 op/s
Oct 11 04:12:01 compute-0 ceph-mon[74273]: osdmap e320: 3 total, 3 up, 3 in
Oct 11 04:12:01 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:12:01 compute-0 sudo[287444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:12:01 compute-0 sudo[287444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:12:01 compute-0 sudo[287444]: pam_unix(sudo:session): session closed for user root
Oct 11 04:12:01 compute-0 sudo[287469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:12:01 compute-0 sudo[287469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:12:01 compute-0 sudo[287469]: pam_unix(sudo:session): session closed for user root
Oct 11 04:12:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 29 KiB/s wr, 213 op/s
Oct 11 04:12:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:12:01 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/848064765' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e320 do_prune osdmap full prune enabled
Oct 11 04:12:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e321 e321: 3 total, 3 up, 3 in
Oct 11 04:12:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:12:02 compute-0 ceph-mon[74273]: pgmap v1381: 305 pgs: 305 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 29 KiB/s wr, 213 op/s
Oct 11 04:12:02 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/848064765' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:02 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e321: 3 total, 3 up, 3 in
Oct 11 04:12:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:12:02 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3521096647' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:12:02 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3521096647' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:02 compute-0 nova_compute[259850]: 2025-10-11 04:12:02.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:03 compute-0 nova_compute[259850]: 2025-10-11 04:12:03.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e321 do_prune osdmap full prune enabled
Oct 11 04:12:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e322 e322: 3 total, 3 up, 3 in
Oct 11 04:12:03 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e322: 3 total, 3 up, 3 in
Oct 11 04:12:03 compute-0 ceph-mon[74273]: osdmap e321: 3 total, 3 up, 3 in
Oct 11 04:12:03 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3521096647' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:03 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3521096647' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 217 KiB/s rd, 11 KiB/s wr, 283 op/s
Oct 11 04:12:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e322 do_prune osdmap full prune enabled
Oct 11 04:12:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e323 e323: 3 total, 3 up, 3 in
Oct 11 04:12:04 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e323: 3 total, 3 up, 3 in
Oct 11 04:12:04 compute-0 ceph-mon[74273]: osdmap e322: 3 total, 3 up, 3 in
Oct 11 04:12:04 compute-0 ceph-mon[74273]: pgmap v1384: 305 pgs: 305 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 217 KiB/s rd, 11 KiB/s wr, 283 op/s
Oct 11 04:12:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:12:04 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/88602217' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:12:04 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/88602217' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:12:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e323 do_prune osdmap full prune enabled
Oct 11 04:12:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e324 e324: 3 total, 3 up, 3 in
Oct 11 04:12:04 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e324: 3 total, 3 up, 3 in
Oct 11 04:12:05 compute-0 ceph-mon[74273]: osdmap e323: 3 total, 3 up, 3 in
Oct 11 04:12:05 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/88602217' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:05 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/88602217' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:05 compute-0 ceph-mon[74273]: osdmap e324: 3 total, 3 up, 3 in
Oct 11 04:12:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 242 KiB/s rd, 12 KiB/s wr, 315 op/s
Oct 11 04:12:06 compute-0 ceph-mon[74273]: pgmap v1387: 305 pgs: 305 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 242 KiB/s rd, 12 KiB/s wr, 315 op/s
Oct 11 04:12:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:12:06 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3403311449' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:12:06 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3403311449' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:06 compute-0 ovn_controller[152025]: 2025-10-11T04:12:06Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2e:cd:1e 10.100.0.4
Oct 11 04:12:06 compute-0 ovn_controller[152025]: 2025-10-11T04:12:06Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2e:cd:1e 10.100.0.4
Oct 11 04:12:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:12:06 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3046701213' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e324 do_prune osdmap full prune enabled
Oct 11 04:12:07 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3403311449' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:07 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3403311449' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:07 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3046701213' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e325 e325: 3 total, 3 up, 3 in
Oct 11 04:12:07 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e325: 3 total, 3 up, 3 in
Oct 11 04:12:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:12:07 compute-0 nova_compute[259850]: 2025-10-11 04:12:07.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:08 compute-0 nova_compute[259850]: 2025-10-11 04:12:08.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e325 do_prune osdmap full prune enabled
Oct 11 04:12:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e326 e326: 3 total, 3 up, 3 in
Oct 11 04:12:08 compute-0 ceph-mon[74273]: osdmap e325: 3 total, 3 up, 3 in
Oct 11 04:12:08 compute-0 ceph-mon[74273]: pgmap v1389: 305 pgs: 305 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:12:08 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e326: 3 total, 3 up, 3 in
Oct 11 04:12:08 compute-0 podman[287495]: 2025-10-11 04:12:08.415512243 +0000 UTC m=+0.108583356 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:12:08 compute-0 podman[287494]: 2025-10-11 04:12:08.415863103 +0000 UTC m=+0.115530542 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd)
Oct 11 04:12:09 compute-0 ceph-mon[74273]: osdmap e326: 3 total, 3 up, 3 in
Oct 11 04:12:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 167 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 871 KiB/s rd, 4.7 MiB/s wr, 317 op/s
Oct 11 04:12:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:12:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e326 do_prune osdmap full prune enabled
Oct 11 04:12:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e327 e327: 3 total, 3 up, 3 in
Oct 11 04:12:10 compute-0 ceph-mon[74273]: pgmap v1391: 305 pgs: 305 active+clean; 167 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 871 KiB/s rd, 4.7 MiB/s wr, 317 op/s
Oct 11 04:12:10 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e327: 3 total, 3 up, 3 in
Oct 11 04:12:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:12:10 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/525982987' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:12:10 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/525982987' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:11 compute-0 ceph-mon[74273]: osdmap e327: 3 total, 3 up, 3 in
Oct 11 04:12:11 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/525982987' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:11 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/525982987' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 167 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 782 KiB/s rd, 4.3 MiB/s wr, 284 op/s
Oct 11 04:12:12 compute-0 ceph-mon[74273]: pgmap v1393: 305 pgs: 305 active+clean; 167 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 782 KiB/s rd, 4.3 MiB/s wr, 284 op/s
Oct 11 04:12:12 compute-0 nova_compute[259850]: 2025-10-11 04:12:12.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:13 compute-0 nova_compute[259850]: 2025-10-11 04:12:13.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:12:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1551048669' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:13 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1551048669' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 167 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 804 KiB/s rd, 4.1 MiB/s wr, 353 op/s
Oct 11 04:12:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e327 do_prune osdmap full prune enabled
Oct 11 04:12:14 compute-0 ceph-mon[74273]: pgmap v1394: 305 pgs: 305 active+clean; 167 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 804 KiB/s rd, 4.1 MiB/s wr, 353 op/s
Oct 11 04:12:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e328 e328: 3 total, 3 up, 3 in
Oct 11 04:12:14 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e328: 3 total, 3 up, 3 in
Oct 11 04:12:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:12:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2057309257' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:14 compute-0 nova_compute[259850]: 2025-10-11 04:12:14.459 2 DEBUG oslo_concurrency.lockutils [None req-0a679e39-c55c-4965-a0a2-a0807c0e9612 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Acquiring lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:12:14 compute-0 nova_compute[259850]: 2025-10-11 04:12:14.460 2 DEBUG oslo_concurrency.lockutils [None req-0a679e39-c55c-4965-a0a2-a0807c0e9612 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:12:14 compute-0 nova_compute[259850]: 2025-10-11 04:12:14.485 2 DEBUG nova.objects.instance [None req-0a679e39-c55c-4965-a0a2-a0807c0e9612 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lazy-loading 'flavor' on Instance uuid 6f74cee5-3bb9-44f0-9a21-d6e5c1475419 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:12:14 compute-0 nova_compute[259850]: 2025-10-11 04:12:14.535 2 DEBUG oslo_concurrency.lockutils [None req-0a679e39-c55c-4965-a0a2-a0807c0e9612 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:12:14 compute-0 nova_compute[259850]: 2025-10-11 04:12:14.743 2 DEBUG oslo_concurrency.lockutils [None req-0a679e39-c55c-4965-a0a2-a0807c0e9612 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Acquiring lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:12:14 compute-0 nova_compute[259850]: 2025-10-11 04:12:14.744 2 DEBUG oslo_concurrency.lockutils [None req-0a679e39-c55c-4965-a0a2-a0807c0e9612 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:12:14 compute-0 nova_compute[259850]: 2025-10-11 04:12:14.744 2 INFO nova.compute.manager [None req-0a679e39-c55c-4965-a0a2-a0807c0e9612 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Attaching volume fc619257-0352-4c46-b279-77fa10c211f3 to /dev/vdb
Oct 11 04:12:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:12:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e328 do_prune osdmap full prune enabled
Oct 11 04:12:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e329 e329: 3 total, 3 up, 3 in
Oct 11 04:12:14 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e329: 3 total, 3 up, 3 in
Oct 11 04:12:14 compute-0 nova_compute[259850]: 2025-10-11 04:12:14.944 2 DEBUG os_brick.utils [None req-0a679e39-c55c-4965-a0a2-a0807c0e9612 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:12:14 compute-0 nova_compute[259850]: 2025-10-11 04:12:14.945 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:12:14 compute-0 nova_compute[259850]: 2025-10-11 04:12:14.963 675 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:12:14 compute-0 nova_compute[259850]: 2025-10-11 04:12:14.963 675 DEBUG oslo.privsep.daemon [-] privsep: reply[3ea18980-d08f-443a-be72-ef733ede9b5b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:12:14 compute-0 nova_compute[259850]: 2025-10-11 04:12:14.965 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:12:14 compute-0 nova_compute[259850]: 2025-10-11 04:12:14.978 675 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:12:14 compute-0 nova_compute[259850]: 2025-10-11 04:12:14.978 675 DEBUG oslo.privsep.daemon [-] privsep: reply[feb8857f-c25b-4998-b336-5dda12246264]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e727c2bd432c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:12:14 compute-0 nova_compute[259850]: 2025-10-11 04:12:14.980 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:12:14 compute-0 nova_compute[259850]: 2025-10-11 04:12:14.993 675 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:12:14 compute-0 nova_compute[259850]: 2025-10-11 04:12:14.994 675 DEBUG oslo.privsep.daemon [-] privsep: reply[367a8ae0-583c-47e9-8041-8793c54cfce2]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:12:14 compute-0 nova_compute[259850]: 2025-10-11 04:12:14.995 675 DEBUG oslo.privsep.daemon [-] privsep: reply[8765a63a-c304-4510-88f3-82d203e1cb4d]: (4, 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:12:14 compute-0 nova_compute[259850]: 2025-10-11 04:12:14.996 2 DEBUG oslo_concurrency.processutils [None req-0a679e39-c55c-4965-a0a2-a0807c0e9612 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:12:15 compute-0 nova_compute[259850]: 2025-10-11 04:12:15.028 2 DEBUG oslo_concurrency.processutils [None req-0a679e39-c55c-4965-a0a2-a0807c0e9612 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:12:15 compute-0 nova_compute[259850]: 2025-10-11 04:12:15.030 2 DEBUG os_brick.initiator.connectors.lightos [None req-0a679e39-c55c-4965-a0a2-a0807c0e9612 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:12:15 compute-0 nova_compute[259850]: 2025-10-11 04:12:15.031 2 DEBUG os_brick.initiator.connectors.lightos [None req-0a679e39-c55c-4965-a0a2-a0807c0e9612 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:12:15 compute-0 nova_compute[259850]: 2025-10-11 04:12:15.031 2 DEBUG os_brick.initiator.connectors.lightos [None req-0a679e39-c55c-4965-a0a2-a0807c0e9612 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:12:15 compute-0 nova_compute[259850]: 2025-10-11 04:12:15.032 2 DEBUG os_brick.utils [None req-0a679e39-c55c-4965-a0a2-a0807c0e9612 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] <== get_connector_properties: return (87ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e727c2bd432c', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:12:15 compute-0 nova_compute[259850]: 2025-10-11 04:12:15.033 2 DEBUG nova.virt.block_device [None req-0a679e39-c55c-4965-a0a2-a0807c0e9612 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Updating existing volume attachment record: b476e2ac-a658-4b63-a2f7-b7e6d4caaae3 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:12:15 compute-0 ceph-mon[74273]: osdmap e328: 3 total, 3 up, 3 in
Oct 11 04:12:15 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2057309257' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:15 compute-0 ceph-mon[74273]: osdmap e329: 3 total, 3 up, 3 in
Oct 11 04:12:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 167 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 29 KiB/s wr, 85 op/s
Oct 11 04:12:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:12:15 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2345519995' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:15 compute-0 nova_compute[259850]: 2025-10-11 04:12:15.749 2 DEBUG nova.objects.instance [None req-0a679e39-c55c-4965-a0a2-a0807c0e9612 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lazy-loading 'flavor' on Instance uuid 6f74cee5-3bb9-44f0-9a21-d6e5c1475419 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:12:15 compute-0 nova_compute[259850]: 2025-10-11 04:12:15.775 2 DEBUG nova.virt.libvirt.driver [None req-0a679e39-c55c-4965-a0a2-a0807c0e9612 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Attempting to attach volume fc619257-0352-4c46-b279-77fa10c211f3 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 11 04:12:15 compute-0 nova_compute[259850]: 2025-10-11 04:12:15.779 2 DEBUG nova.virt.libvirt.guest [None req-0a679e39-c55c-4965-a0a2-a0807c0e9612 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] attach device xml: <disk type="network" device="disk">
Oct 11 04:12:15 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:12:15 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-fc619257-0352-4c46-b279-77fa10c211f3">
Oct 11 04:12:15 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:12:15 compute-0 nova_compute[259850]:   </source>
Oct 11 04:12:15 compute-0 nova_compute[259850]:   <auth username="openstack">
Oct 11 04:12:15 compute-0 nova_compute[259850]:     <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:12:15 compute-0 nova_compute[259850]:   </auth>
Oct 11 04:12:15 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:12:15 compute-0 nova_compute[259850]:   <serial>fc619257-0352-4c46-b279-77fa10c211f3</serial>
Oct 11 04:12:15 compute-0 nova_compute[259850]: </disk>
Oct 11 04:12:15 compute-0 nova_compute[259850]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 11 04:12:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e329 do_prune osdmap full prune enabled
Oct 11 04:12:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e330 e330: 3 total, 3 up, 3 in
Oct 11 04:12:15 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e330: 3 total, 3 up, 3 in
Oct 11 04:12:15 compute-0 nova_compute[259850]: 2025-10-11 04:12:15.951 2 DEBUG nova.virt.libvirt.driver [None req-0a679e39-c55c-4965-a0a2-a0807c0e9612 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:12:15 compute-0 nova_compute[259850]: 2025-10-11 04:12:15.952 2 DEBUG nova.virt.libvirt.driver [None req-0a679e39-c55c-4965-a0a2-a0807c0e9612 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:12:15 compute-0 nova_compute[259850]: 2025-10-11 04:12:15.952 2 DEBUG nova.virt.libvirt.driver [None req-0a679e39-c55c-4965-a0a2-a0807c0e9612 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:12:15 compute-0 nova_compute[259850]: 2025-10-11 04:12:15.953 2 DEBUG nova.virt.libvirt.driver [None req-0a679e39-c55c-4965-a0a2-a0807c0e9612 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] No VIF found with MAC fa:16:3e:2e:cd:1e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:12:16 compute-0 nova_compute[259850]: 2025-10-11 04:12:16.182 2 DEBUG oslo_concurrency.lockutils [None req-0a679e39-c55c-4965-a0a2-a0807c0e9612 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.438s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:12:16 compute-0 ceph-mon[74273]: pgmap v1397: 305 pgs: 305 active+clean; 167 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 29 KiB/s wr, 85 op/s
Oct 11 04:12:16 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2345519995' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:16 compute-0 ceph-mon[74273]: osdmap e330: 3 total, 3 up, 3 in
Oct 11 04:12:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 167 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 29 KiB/s wr, 85 op/s
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:12:17.695696) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155937695744, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2853, "num_deletes": 543, "total_data_size": 3550249, "memory_usage": 3622464, "flush_reason": "Manual Compaction"}
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155937726933, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3433925, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26150, "largest_seqno": 29002, "table_properties": {"data_size": 3421196, "index_size": 8007, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3653, "raw_key_size": 31682, "raw_average_key_size": 21, "raw_value_size": 3393332, "raw_average_value_size": 2325, "num_data_blocks": 344, "num_entries": 1459, "num_filter_entries": 1459, "num_deletions": 543, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760155798, "oldest_key_time": 1760155798, "file_creation_time": 1760155937, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 31312 microseconds, and 12022 cpu microseconds.
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:12:17.727004) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3433925 bytes OK
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:12:17.727034) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:12:17.728806) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:12:17.728829) EVENT_LOG_v1 {"time_micros": 1760155937728821, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:12:17.728853) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3536695, prev total WAL file size 3536695, number of live WAL files 2.
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:12:17.730378) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3353KB)], [59(10219KB)]
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155937730448, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 13898686, "oldest_snapshot_seqno": -1}
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5703 keys, 8960064 bytes, temperature: kUnknown
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155937792629, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8960064, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8918215, "index_size": 26507, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14277, "raw_key_size": 142417, "raw_average_key_size": 24, "raw_value_size": 8811834, "raw_average_value_size": 1545, "num_data_blocks": 1075, "num_entries": 5703, "num_filter_entries": 5703, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153731, "oldest_key_time": 0, "file_creation_time": 1760155937, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:12:17.792980) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8960064 bytes
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:12:17.794348) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 223.2 rd, 143.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 10.0 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(6.7) write-amplify(2.6) OK, records in: 6765, records dropped: 1062 output_compression: NoCompression
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:12:17.794376) EVENT_LOG_v1 {"time_micros": 1760155937794362, "job": 32, "event": "compaction_finished", "compaction_time_micros": 62281, "compaction_time_cpu_micros": 30456, "output_level": 6, "num_output_files": 1, "total_output_size": 8960064, "num_input_records": 6765, "num_output_records": 5703, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155937795489, "job": 32, "event": "table_file_deletion", "file_number": 61}
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155937798732, "job": 32, "event": "table_file_deletion", "file_number": 59}
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:12:17.730252) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:12:17.798921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:12:17.798929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:12:17.798931) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:12:17.798933) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:12:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:12:17.798935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:12:17 compute-0 nova_compute[259850]: 2025-10-11 04:12:17.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:12:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1686480387' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:18 compute-0 nova_compute[259850]: 2025-10-11 04:12:18.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e330 do_prune osdmap full prune enabled
Oct 11 04:12:18 compute-0 ceph-mon[74273]: pgmap v1399: 305 pgs: 305 active+clean; 167 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 29 KiB/s wr, 85 op/s
Oct 11 04:12:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1686480387' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e331 e331: 3 total, 3 up, 3 in
Oct 11 04:12:18 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e331: 3 total, 3 up, 3 in
Oct 11 04:12:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:12:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1211603871' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:12:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1211603871' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:19 compute-0 nova_compute[259850]: 2025-10-11 04:12:19.300 2 DEBUG oslo_concurrency.lockutils [None req-5a3af704-b655-4667-85ef-ed2595194434 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Acquiring lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:12:19 compute-0 nova_compute[259850]: 2025-10-11 04:12:19.301 2 DEBUG oslo_concurrency.lockutils [None req-5a3af704-b655-4667-85ef-ed2595194434 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:12:19 compute-0 nova_compute[259850]: 2025-10-11 04:12:19.317 2 INFO nova.compute.manager [None req-5a3af704-b655-4667-85ef-ed2595194434 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Detaching volume fc619257-0352-4c46-b279-77fa10c211f3
Oct 11 04:12:19 compute-0 nova_compute[259850]: 2025-10-11 04:12:19.461 2 INFO nova.virt.block_device [None req-5a3af704-b655-4667-85ef-ed2595194434 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Attempting to driver detach volume fc619257-0352-4c46-b279-77fa10c211f3 from mountpoint /dev/vdb
Oct 11 04:12:19 compute-0 nova_compute[259850]: 2025-10-11 04:12:19.473 2 DEBUG nova.virt.libvirt.driver [None req-5a3af704-b655-4667-85ef-ed2595194434 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Attempting to detach device vdb from instance 6f74cee5-3bb9-44f0-9a21-d6e5c1475419 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 11 04:12:19 compute-0 nova_compute[259850]: 2025-10-11 04:12:19.474 2 DEBUG nova.virt.libvirt.guest [None req-5a3af704-b655-4667-85ef-ed2595194434 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 04:12:19 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:12:19 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-fc619257-0352-4c46-b279-77fa10c211f3">
Oct 11 04:12:19 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:12:19 compute-0 nova_compute[259850]:   </source>
Oct 11 04:12:19 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:12:19 compute-0 nova_compute[259850]:   <serial>fc619257-0352-4c46-b279-77fa10c211f3</serial>
Oct 11 04:12:19 compute-0 nova_compute[259850]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 04:12:19 compute-0 nova_compute[259850]: </disk>
Oct 11 04:12:19 compute-0 nova_compute[259850]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:12:19 compute-0 nova_compute[259850]: 2025-10-11 04:12:19.486 2 INFO nova.virt.libvirt.driver [None req-5a3af704-b655-4667-85ef-ed2595194434 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Successfully detached device vdb from instance 6f74cee5-3bb9-44f0-9a21-d6e5c1475419 from the persistent domain config.
Oct 11 04:12:19 compute-0 nova_compute[259850]: 2025-10-11 04:12:19.486 2 DEBUG nova.virt.libvirt.driver [None req-5a3af704-b655-4667-85ef-ed2595194434 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 6f74cee5-3bb9-44f0-9a21-d6e5c1475419 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 11 04:12:19 compute-0 nova_compute[259850]: 2025-10-11 04:12:19.487 2 DEBUG nova.virt.libvirt.guest [None req-5a3af704-b655-4667-85ef-ed2595194434 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 04:12:19 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:12:19 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-fc619257-0352-4c46-b279-77fa10c211f3">
Oct 11 04:12:19 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:12:19 compute-0 nova_compute[259850]:   </source>
Oct 11 04:12:19 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:12:19 compute-0 nova_compute[259850]:   <serial>fc619257-0352-4c46-b279-77fa10c211f3</serial>
Oct 11 04:12:19 compute-0 nova_compute[259850]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 04:12:19 compute-0 nova_compute[259850]: </disk>
Oct 11 04:12:19 compute-0 nova_compute[259850]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:12:19 compute-0 nova_compute[259850]: 2025-10-11 04:12:19.606 2 DEBUG nova.virt.libvirt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Received event <DeviceRemovedEvent: 1760155939.6053686, 6f74cee5-3bb9-44f0-9a21-d6e5c1475419 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 11 04:12:19 compute-0 nova_compute[259850]: 2025-10-11 04:12:19.608 2 DEBUG nova.virt.libvirt.driver [None req-5a3af704-b655-4667-85ef-ed2595194434 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 6f74cee5-3bb9-44f0-9a21-d6e5c1475419 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 11 04:12:19 compute-0 nova_compute[259850]: 2025-10-11 04:12:19.612 2 INFO nova.virt.libvirt.driver [None req-5a3af704-b655-4667-85ef-ed2595194434 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Successfully detached device vdb from instance 6f74cee5-3bb9-44f0-9a21-d6e5c1475419 from the live domain config.
Oct 11 04:12:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 167 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 9.9 KiB/s wr, 83 op/s
Oct 11 04:12:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e331 do_prune osdmap full prune enabled
Oct 11 04:12:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e332 e332: 3 total, 3 up, 3 in
Oct 11 04:12:19 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e332: 3 total, 3 up, 3 in
Oct 11 04:12:19 compute-0 ceph-mon[74273]: osdmap e331: 3 total, 3 up, 3 in
Oct 11 04:12:19 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1211603871' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:19 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1211603871' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:19 compute-0 nova_compute[259850]: 2025-10-11 04:12:19.810 2 DEBUG nova.objects.instance [None req-5a3af704-b655-4667-85ef-ed2595194434 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lazy-loading 'flavor' on Instance uuid 6f74cee5-3bb9-44f0-9a21-d6e5c1475419 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:12:19 compute-0 nova_compute[259850]: 2025-10-11 04:12:19.869 2 DEBUG oslo_concurrency.lockutils [None req-5a3af704-b655-4667-85ef-ed2595194434 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:12:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:12:20 compute-0 podman[287560]: 2025-10-11 04:12:20.441651391 +0000 UTC m=+0.158978984 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:12:20 compute-0 ceph-mon[74273]: pgmap v1401: 305 pgs: 305 active+clean; 167 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 9.9 KiB/s wr, 83 op/s
Oct 11 04:12:20 compute-0 ceph-mon[74273]: osdmap e332: 3 total, 3 up, 3 in
Oct 11 04:12:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:12:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:12:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:12:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:12:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:12:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:12:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:12:20
Oct 11 04:12:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:12:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:12:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['.mgr', 'vms', '.rgw.root', 'default.rgw.control', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log']
Oct 11 04:12:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:12:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:12:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:12:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:12:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:12:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:12:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:12:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:12:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:12:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:12:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:12:21 compute-0 nova_compute[259850]: 2025-10-11 04:12:21.078 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:12:21 compute-0 nova_compute[259850]: 2025-10-11 04:12:21.078 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:12:21 compute-0 nova_compute[259850]: 2025-10-11 04:12:21.078 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:12:21 compute-0 nova_compute[259850]: 2025-10-11 04:12:21.114 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:12:21 compute-0 nova_compute[259850]: 2025-10-11 04:12:21.114 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:12:21 compute-0 nova_compute[259850]: 2025-10-11 04:12:21.114 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:12:21 compute-0 nova_compute[259850]: 2025-10-11 04:12:21.115 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:12:21 compute-0 nova_compute[259850]: 2025-10-11 04:12:21.115 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:12:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:12:21 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/316896954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:12:21 compute-0 nova_compute[259850]: 2025-10-11 04:12:21.554 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:12:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 167 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 8.7 KiB/s wr, 72 op/s
Oct 11 04:12:21 compute-0 nova_compute[259850]: 2025-10-11 04:12:21.640 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:12:21 compute-0 nova_compute[259850]: 2025-10-11 04:12:21.641 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:12:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e332 do_prune osdmap full prune enabled
Oct 11 04:12:21 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/316896954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:12:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e333 e333: 3 total, 3 up, 3 in
Oct 11 04:12:21 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e333: 3 total, 3 up, 3 in
Oct 11 04:12:21 compute-0 nova_compute[259850]: 2025-10-11 04:12:21.861 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:12:21 compute-0 nova_compute[259850]: 2025-10-11 04:12:21.862 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4307MB free_disk=59.942726135253906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:12:21 compute-0 nova_compute[259850]: 2025-10-11 04:12:21.862 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:12:21 compute-0 nova_compute[259850]: 2025-10-11 04:12:21.863 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:12:21 compute-0 nova_compute[259850]: 2025-10-11 04:12:21.948 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Instance 6f74cee5-3bb9-44f0-9a21-d6e5c1475419 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 04:12:21 compute-0 nova_compute[259850]: 2025-10-11 04:12:21.949 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:12:21 compute-0 nova_compute[259850]: 2025-10-11 04:12:21.949 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:12:21 compute-0 nova_compute[259850]: 2025-10-11 04:12:21.987 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:12:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:12:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3948482987' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:12:22 compute-0 nova_compute[259850]: 2025-10-11 04:12:22.466 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:12:22 compute-0 nova_compute[259850]: 2025-10-11 04:12:22.472 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:12:22 compute-0 nova_compute[259850]: 2025-10-11 04:12:22.487 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:12:22 compute-0 nova_compute[259850]: 2025-10-11 04:12:22.508 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:12:22 compute-0 nova_compute[259850]: 2025-10-11 04:12:22.508 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:12:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:12:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1749560684' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:22 compute-0 nova_compute[259850]: 2025-10-11 04:12:22.593 2 DEBUG nova.compute.manager [None req-d6457c81-af14-4251-a3c6-2d161379cc5a ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:12:22 compute-0 nova_compute[259850]: 2025-10-11 04:12:22.635 2 INFO nova.compute.manager [None req-d6457c81-af14-4251-a3c6-2d161379cc5a ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] instance snapshotting
Oct 11 04:12:22 compute-0 ceph-mon[74273]: pgmap v1403: 305 pgs: 305 active+clean; 167 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 8.7 KiB/s wr, 72 op/s
Oct 11 04:12:22 compute-0 ceph-mon[74273]: osdmap e333: 3 total, 3 up, 3 in
Oct 11 04:12:22 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3948482987' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:12:22 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1749560684' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:22 compute-0 nova_compute[259850]: 2025-10-11 04:12:22.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:22 compute-0 nova_compute[259850]: 2025-10-11 04:12:22.911 2 INFO nova.virt.libvirt.driver [None req-d6457c81-af14-4251-a3c6-2d161379cc5a ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Beginning live snapshot process
Oct 11 04:12:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:12:22.961 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:12:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:12:22.962 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:12:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:12:22.963 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:12:23 compute-0 nova_compute[259850]: 2025-10-11 04:12:23.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:23 compute-0 nova_compute[259850]: 2025-10-11 04:12:23.076 2 DEBUG nova.virt.libvirt.imagebackend [None req-d6457c81-af14-4251-a3c6-2d161379cc5a ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] No parent info for 1a107e2f-1a9d-4b6f-861d-e64bee7d56be; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 11 04:12:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:12:23 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4078836998' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:23 compute-0 nova_compute[259850]: 2025-10-11 04:12:23.273 2 DEBUG nova.storage.rbd_utils [None req-d6457c81-af14-4251-a3c6-2d161379cc5a ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] creating snapshot(9a3b7df65bb94bec8e0996169d9b9bdc) on rbd image(6f74cee5-3bb9-44f0-9a21-d6e5c1475419_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 04:12:23 compute-0 nova_compute[259850]: 2025-10-11 04:12:23.490 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:12:23 compute-0 nova_compute[259850]: 2025-10-11 04:12:23.491 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:12:23 compute-0 nova_compute[259850]: 2025-10-11 04:12:23.491 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:12:23 compute-0 nova_compute[259850]: 2025-10-11 04:12:23.513 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "refresh_cache-6f74cee5-3bb9-44f0-9a21-d6e5c1475419" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:12:23 compute-0 nova_compute[259850]: 2025-10-11 04:12:23.513 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquired lock "refresh_cache-6f74cee5-3bb9-44f0-9a21-d6e5c1475419" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:12:23 compute-0 nova_compute[259850]: 2025-10-11 04:12:23.514 2 DEBUG nova.network.neutron [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 04:12:23 compute-0 nova_compute[259850]: 2025-10-11 04:12:23.514 2 DEBUG nova.objects.instance [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6f74cee5-3bb9-44f0-9a21-d6e5c1475419 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:12:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 169 MiB data, 429 MiB used, 60 GiB / 60 GiB avail; 156 KiB/s rd, 397 KiB/s wr, 215 op/s
Oct 11 04:12:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e333 do_prune osdmap full prune enabled
Oct 11 04:12:23 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4078836998' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e334 e334: 3 total, 3 up, 3 in
Oct 11 04:12:23 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e334: 3 total, 3 up, 3 in
Oct 11 04:12:23 compute-0 nova_compute[259850]: 2025-10-11 04:12:23.830 2 DEBUG nova.storage.rbd_utils [None req-d6457c81-af14-4251-a3c6-2d161379cc5a ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] cloning vms/6f74cee5-3bb9-44f0-9a21-d6e5c1475419_disk@9a3b7df65bb94bec8e0996169d9b9bdc to images/f94d6b77-1844-4032-bf9d-644d95696add clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 04:12:23 compute-0 nova_compute[259850]: 2025-10-11 04:12:23.988 2 DEBUG nova.storage.rbd_utils [None req-d6457c81-af14-4251-a3c6-2d161379cc5a ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] flattening images/f94d6b77-1844-4032-bf9d-644d95696add flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 04:12:24 compute-0 podman[287736]: 2025-10-11 04:12:24.354966123 +0000 UTC m=+0.064101175 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 11 04:12:24 compute-0 nova_compute[259850]: 2025-10-11 04:12:24.408 2 DEBUG nova.storage.rbd_utils [None req-d6457c81-af14-4251-a3c6-2d161379cc5a ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] removing snapshot(9a3b7df65bb94bec8e0996169d9b9bdc) on rbd image(6f74cee5-3bb9-44f0-9a21-d6e5c1475419_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 04:12:24 compute-0 nova_compute[259850]: 2025-10-11 04:12:24.695 2 DEBUG nova.network.neutron [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Updating instance_info_cache with network_info: [{"id": "46432b1a-fa02-4a02-9c8f-d607c2cd820c", "address": "fa:16:3e:2e:cd:1e", "network": {"id": "69760b74-d690-4b6a-a64f-35ceb4582944", "bridge": "br-int", "label": "tempest-TestStampPattern-334026573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff14cec1ef04fa2a41f6d226bc99518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46432b1a-fa", "ovs_interfaceid": "46432b1a-fa02-4a02-9c8f-d607c2cd820c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:12:24 compute-0 nova_compute[259850]: 2025-10-11 04:12:24.709 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Releasing lock "refresh_cache-6f74cee5-3bb9-44f0-9a21-d6e5c1475419" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:12:24 compute-0 nova_compute[259850]: 2025-10-11 04:12:24.710 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 04:12:24 compute-0 nova_compute[259850]: 2025-10-11 04:12:24.711 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:12:24 compute-0 nova_compute[259850]: 2025-10-11 04:12:24.711 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:12:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e334 do_prune osdmap full prune enabled
Oct 11 04:12:24 compute-0 ceph-mon[74273]: pgmap v1405: 305 pgs: 305 active+clean; 169 MiB data, 429 MiB used, 60 GiB / 60 GiB avail; 156 KiB/s rd, 397 KiB/s wr, 215 op/s
Oct 11 04:12:24 compute-0 ceph-mon[74273]: osdmap e334: 3 total, 3 up, 3 in
Oct 11 04:12:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e335 e335: 3 total, 3 up, 3 in
Oct 11 04:12:24 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e335: 3 total, 3 up, 3 in
Oct 11 04:12:24 compute-0 nova_compute[259850]: 2025-10-11 04:12:24.817 2 DEBUG nova.storage.rbd_utils [None req-d6457c81-af14-4251-a3c6-2d161379cc5a ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] creating snapshot(snap) on rbd image(f94d6b77-1844-4032-bf9d-644d95696add) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 04:12:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:12:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e335 do_prune osdmap full prune enabled
Oct 11 04:12:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e336 e336: 3 total, 3 up, 3 in
Oct 11 04:12:24 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e336: 3 total, 3 up, 3 in
Oct 11 04:12:25 compute-0 nova_compute[259850]: 2025-10-11 04:12:25.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:12:25 compute-0 nova_compute[259850]: 2025-10-11 04:12:25.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:12:25 compute-0 nova_compute[259850]: 2025-10-11 04:12:25.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:12:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 169 MiB data, 429 MiB used, 60 GiB / 60 GiB avail; 150 KiB/s rd, 583 KiB/s wr, 214 op/s
Oct 11 04:12:25 compute-0 ceph-mon[74273]: osdmap e335: 3 total, 3 up, 3 in
Oct 11 04:12:25 compute-0 ceph-mon[74273]: osdmap e336: 3 total, 3 up, 3 in
Oct 11 04:12:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e336 do_prune osdmap full prune enabled
Oct 11 04:12:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e337 e337: 3 total, 3 up, 3 in
Oct 11 04:12:25 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e337: 3 total, 3 up, 3 in
Oct 11 04:12:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:12:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1703659796' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:12:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1703659796' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:26 compute-0 nova_compute[259850]: 2025-10-11 04:12:26.317 2 INFO nova.virt.libvirt.driver [None req-d6457c81-af14-4251-a3c6-2d161379cc5a ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Snapshot image upload complete
Oct 11 04:12:26 compute-0 nova_compute[259850]: 2025-10-11 04:12:26.318 2 INFO nova.compute.manager [None req-d6457c81-af14-4251-a3c6-2d161379cc5a ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Took 3.68 seconds to snapshot the instance on the hypervisor.
Oct 11 04:12:26 compute-0 ceph-mon[74273]: pgmap v1409: 305 pgs: 305 active+clean; 169 MiB data, 429 MiB used, 60 GiB / 60 GiB avail; 150 KiB/s rd, 583 KiB/s wr, 214 op/s
Oct 11 04:12:26 compute-0 ceph-mon[74273]: osdmap e337: 3 total, 3 up, 3 in
Oct 11 04:12:26 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1703659796' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:26 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1703659796' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e337 do_prune osdmap full prune enabled
Oct 11 04:12:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e338 e338: 3 total, 3 up, 3 in
Oct 11 04:12:26 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e338: 3 total, 3 up, 3 in
Oct 11 04:12:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:12:27 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/636011952' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:12:27 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/636011952' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 169 MiB data, 429 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:12:27 compute-0 nova_compute[259850]: 2025-10-11 04:12:27.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:27 compute-0 ceph-mon[74273]: osdmap e338: 3 total, 3 up, 3 in
Oct 11 04:12:27 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/636011952' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:27 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/636011952' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:28 compute-0 nova_compute[259850]: 2025-10-11 04:12:28.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:28 compute-0 ovn_controller[152025]: 2025-10-11T04:12:28Z|00149|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Oct 11 04:12:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:12:28 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/250455107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e338 do_prune osdmap full prune enabled
Oct 11 04:12:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e339 e339: 3 total, 3 up, 3 in
Oct 11 04:12:28 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e339: 3 total, 3 up, 3 in
Oct 11 04:12:28 compute-0 ceph-mon[74273]: pgmap v1412: 305 pgs: 305 active+clean; 169 MiB data, 429 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:12:28 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/250455107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:29 compute-0 nova_compute[259850]: 2025-10-11 04:12:29.058 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:12:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 248 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 10 MiB/s rd, 9.9 MiB/s wr, 469 op/s
Oct 11 04:12:29 compute-0 nova_compute[259850]: 2025-10-11 04:12:29.900 2 DEBUG oslo_concurrency.lockutils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Acquiring lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:12:29 compute-0 nova_compute[259850]: 2025-10-11 04:12:29.900 2 DEBUG oslo_concurrency.lockutils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:12:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:12:29 compute-0 nova_compute[259850]: 2025-10-11 04:12:29.918 2 DEBUG nova.compute.manager [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:12:29 compute-0 ceph-mon[74273]: osdmap e339: 3 total, 3 up, 3 in
Oct 11 04:12:30 compute-0 nova_compute[259850]: 2025-10-11 04:12:30.016 2 DEBUG oslo_concurrency.lockutils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:12:30 compute-0 nova_compute[259850]: 2025-10-11 04:12:30.017 2 DEBUG oslo_concurrency.lockutils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:12:30 compute-0 nova_compute[259850]: 2025-10-11 04:12:30.028 2 DEBUG nova.virt.hardware [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:12:30 compute-0 nova_compute[259850]: 2025-10-11 04:12:30.029 2 INFO nova.compute.claims [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:12:30 compute-0 nova_compute[259850]: 2025-10-11 04:12:30.054 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:12:30 compute-0 nova_compute[259850]: 2025-10-11 04:12:30.154 2 DEBUG oslo_concurrency.processutils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:12:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:12:30 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/174292019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:12:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:12:30 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1731718296' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:30 compute-0 nova_compute[259850]: 2025-10-11 04:12:30.689 2 DEBUG oslo_concurrency.processutils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:12:30 compute-0 nova_compute[259850]: 2025-10-11 04:12:30.695 2 DEBUG nova.compute.provider_tree [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:12:30 compute-0 nova_compute[259850]: 2025-10-11 04:12:30.721 2 DEBUG nova.scheduler.client.report [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:12:30 compute-0 nova_compute[259850]: 2025-10-11 04:12:30.751 2 DEBUG oslo_concurrency.lockutils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:12:30 compute-0 nova_compute[259850]: 2025-10-11 04:12:30.753 2 DEBUG nova.compute.manager [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:12:30 compute-0 nova_compute[259850]: 2025-10-11 04:12:30.826 2 DEBUG nova.compute.manager [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:12:30 compute-0 nova_compute[259850]: 2025-10-11 04:12:30.827 2 DEBUG nova.network.neutron [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:12:30 compute-0 nova_compute[259850]: 2025-10-11 04:12:30.849 2 INFO nova.virt.libvirt.driver [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:12:30 compute-0 nova_compute[259850]: 2025-10-11 04:12:30.870 2 DEBUG nova.compute.manager [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:12:30 compute-0 nova_compute[259850]: 2025-10-11 04:12:30.965 2 DEBUG nova.compute.manager [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:12:30 compute-0 nova_compute[259850]: 2025-10-11 04:12:30.967 2 DEBUG nova.virt.libvirt.driver [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:12:30 compute-0 nova_compute[259850]: 2025-10-11 04:12:30.968 2 INFO nova.virt.libvirt.driver [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Creating image(s)
Oct 11 04:12:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e339 do_prune osdmap full prune enabled
Oct 11 04:12:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e340 e340: 3 total, 3 up, 3 in
Oct 11 04:12:30 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e340: 3 total, 3 up, 3 in
Oct 11 04:12:30 compute-0 ceph-mon[74273]: pgmap v1414: 305 pgs: 305 active+clean; 248 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 10 MiB/s rd, 9.9 MiB/s wr, 469 op/s
Oct 11 04:12:30 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/174292019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:12:30 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1731718296' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:31 compute-0 nova_compute[259850]: 2025-10-11 04:12:31.005 2 DEBUG nova.storage.rbd_utils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] rbd image 673c41a0-97c6-4a8e-8f65-919ee9c38c79_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:12:31 compute-0 nova_compute[259850]: 2025-10-11 04:12:31.030 2 DEBUG nova.storage.rbd_utils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] rbd image 673c41a0-97c6-4a8e-8f65-919ee9c38c79_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:12:31 compute-0 nova_compute[259850]: 2025-10-11 04:12:31.054 2 DEBUG nova.storage.rbd_utils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] rbd image 673c41a0-97c6-4a8e-8f65-919ee9c38c79_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:12:31 compute-0 nova_compute[259850]: 2025-10-11 04:12:31.057 2 DEBUG oslo_concurrency.lockutils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Acquiring lock "29e1d4f830027675d9402505c20824bb3d2aa7be" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:12:31 compute-0 nova_compute[259850]: 2025-10-11 04:12:31.057 2 DEBUG oslo_concurrency.lockutils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "29e1d4f830027675d9402505c20824bb3d2aa7be" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:12:31 compute-0 nova_compute[259850]: 2025-10-11 04:12:31.061 2 DEBUG nova.policy [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ba6ea3b0ff9d4fee8a80f308d0493954', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7ff14cec1ef04fa2a41f6d226bc99518', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:12:31 compute-0 nova_compute[259850]: 2025-10-11 04:12:31.287 2 DEBUG nova.virt.libvirt.imagebackend [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Image locations are: [{'url': 'rbd://23b68101-59a9-532f-ab6b-9acf78fb2162/images/f94d6b77-1844-4032-bf9d-644d95696add/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://23b68101-59a9-532f-ab6b-9acf78fb2162/images/f94d6b77-1844-4032-bf9d-644d95696add/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 11 04:12:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:12:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:12:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:12:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:12:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000760926409780556 of space, bias 1.0, pg target 0.22827792293416682 quantized to 32 (current 32)
Oct 11 04:12:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:12:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.000380463204890278 of space, bias 1.0, pg target 0.11413896146708341 quantized to 32 (current 32)
Oct 11 04:12:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:12:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.0810420329491437e-06 of space, bias 1.0, pg target 0.0003243126098847431 quantized to 32 (current 32)
Oct 11 04:12:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:12:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014244954458878687 of space, bias 1.0, pg target 0.4273486337663606 quantized to 32 (current 32)
Oct 11 04:12:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:12:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:12:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:12:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:12:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:12:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:12:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:12:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:12:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:12:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:12:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:12:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:12:31 compute-0 nova_compute[259850]: 2025-10-11 04:12:31.330 2 DEBUG nova.virt.libvirt.imagebackend [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Selected location: {'url': 'rbd://23b68101-59a9-532f-ab6b-9acf78fb2162/images/f94d6b77-1844-4032-bf9d-644d95696add/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 11 04:12:31 compute-0 nova_compute[259850]: 2025-10-11 04:12:31.331 2 DEBUG nova.storage.rbd_utils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] cloning images/f94d6b77-1844-4032-bf9d-644d95696add@snap to None/673c41a0-97c6-4a8e-8f65-919ee9c38c79_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 04:12:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:12:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1951737562' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:12:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1951737562' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:31 compute-0 nova_compute[259850]: 2025-10-11 04:12:31.434 2 DEBUG oslo_concurrency.lockutils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "29e1d4f830027675d9402505c20824bb3d2aa7be" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.376s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:12:31 compute-0 nova_compute[259850]: 2025-10-11 04:12:31.568 2 DEBUG nova.objects.instance [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lazy-loading 'migration_context' on Instance uuid 673c41a0-97c6-4a8e-8f65-919ee9c38c79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:12:31 compute-0 nova_compute[259850]: 2025-10-11 04:12:31.583 2 DEBUG nova.virt.libvirt.driver [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:12:31 compute-0 nova_compute[259850]: 2025-10-11 04:12:31.583 2 DEBUG nova.virt.libvirt.driver [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Ensure instance console log exists: /var/lib/nova/instances/673c41a0-97c6-4a8e-8f65-919ee9c38c79/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:12:31 compute-0 nova_compute[259850]: 2025-10-11 04:12:31.584 2 DEBUG oslo_concurrency.lockutils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:12:31 compute-0 nova_compute[259850]: 2025-10-11 04:12:31.584 2 DEBUG oslo_concurrency.lockutils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:12:31 compute-0 nova_compute[259850]: 2025-10-11 04:12:31.585 2 DEBUG oslo_concurrency.lockutils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:12:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 248 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 8.4 MiB/s rd, 8.2 MiB/s wr, 387 op/s
Oct 11 04:12:31 compute-0 nova_compute[259850]: 2025-10-11 04:12:31.696 2 DEBUG nova.network.neutron [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Successfully created port: f42f98ab-28b0-4e9f-897a-e8be0d64dc33 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:12:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e340 do_prune osdmap full prune enabled
Oct 11 04:12:31 compute-0 ceph-mon[74273]: osdmap e340: 3 total, 3 up, 3 in
Oct 11 04:12:31 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1951737562' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:31 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1951737562' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e341 e341: 3 total, 3 up, 3 in
Oct 11 04:12:31 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e341: 3 total, 3 up, 3 in
Oct 11 04:12:32 compute-0 nova_compute[259850]: 2025-10-11 04:12:32.868 2 DEBUG nova.network.neutron [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Successfully updated port: f42f98ab-28b0-4e9f-897a-e8be0d64dc33 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:12:32 compute-0 nova_compute[259850]: 2025-10-11 04:12:32.888 2 DEBUG oslo_concurrency.lockutils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Acquiring lock "refresh_cache-673c41a0-97c6-4a8e-8f65-919ee9c38c79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:12:32 compute-0 nova_compute[259850]: 2025-10-11 04:12:32.888 2 DEBUG oslo_concurrency.lockutils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Acquired lock "refresh_cache-673c41a0-97c6-4a8e-8f65-919ee9c38c79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:12:32 compute-0 nova_compute[259850]: 2025-10-11 04:12:32.889 2 DEBUG nova.network.neutron [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:12:32 compute-0 nova_compute[259850]: 2025-10-11 04:12:32.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:32 compute-0 nova_compute[259850]: 2025-10-11 04:12:32.993 2 DEBUG nova.compute.manager [req-7286355b-f6fc-4a91-8b74-cdb8aff5055d req-32726bd3-50f9-43ec-aa36-f9ffd8693387 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Received event network-changed-f42f98ab-28b0-4e9f-897a-e8be0d64dc33 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:12:32 compute-0 nova_compute[259850]: 2025-10-11 04:12:32.994 2 DEBUG nova.compute.manager [req-7286355b-f6fc-4a91-8b74-cdb8aff5055d req-32726bd3-50f9-43ec-aa36-f9ffd8693387 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Refreshing instance network info cache due to event network-changed-f42f98ab-28b0-4e9f-897a-e8be0d64dc33. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:12:32 compute-0 nova_compute[259850]: 2025-10-11 04:12:32.994 2 DEBUG oslo_concurrency.lockutils [req-7286355b-f6fc-4a91-8b74-cdb8aff5055d req-32726bd3-50f9-43ec-aa36-f9ffd8693387 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-673c41a0-97c6-4a8e-8f65-919ee9c38c79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:12:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e341 do_prune osdmap full prune enabled
Oct 11 04:12:32 compute-0 ceph-mon[74273]: pgmap v1416: 305 pgs: 305 active+clean; 248 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 8.4 MiB/s rd, 8.2 MiB/s wr, 387 op/s
Oct 11 04:12:32 compute-0 ceph-mon[74273]: osdmap e341: 3 total, 3 up, 3 in
Oct 11 04:12:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e342 e342: 3 total, 3 up, 3 in
Oct 11 04:12:33 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e342: 3 total, 3 up, 3 in
Oct 11 04:12:33 compute-0 nova_compute[259850]: 2025-10-11 04:12:33.055 2 DEBUG nova.network.neutron [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:12:33 compute-0 nova_compute[259850]: 2025-10-11 04:12:33.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 248 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 197 KiB/s rd, 9.4 KiB/s wr, 259 op/s
Oct 11 04:12:33 compute-0 nova_compute[259850]: 2025-10-11 04:12:33.919 2 DEBUG nova.network.neutron [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Updating instance_info_cache with network_info: [{"id": "f42f98ab-28b0-4e9f-897a-e8be0d64dc33", "address": "fa:16:3e:07:8a:69", "network": {"id": "69760b74-d690-4b6a-a64f-35ceb4582944", "bridge": "br-int", "label": "tempest-TestStampPattern-334026573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff14cec1ef04fa2a41f6d226bc99518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf42f98ab-28", "ovs_interfaceid": "f42f98ab-28b0-4e9f-897a-e8be0d64dc33", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:12:33 compute-0 nova_compute[259850]: 2025-10-11 04:12:33.950 2 DEBUG oslo_concurrency.lockutils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Releasing lock "refresh_cache-673c41a0-97c6-4a8e-8f65-919ee9c38c79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:12:33 compute-0 nova_compute[259850]: 2025-10-11 04:12:33.951 2 DEBUG nova.compute.manager [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Instance network_info: |[{"id": "f42f98ab-28b0-4e9f-897a-e8be0d64dc33", "address": "fa:16:3e:07:8a:69", "network": {"id": "69760b74-d690-4b6a-a64f-35ceb4582944", "bridge": "br-int", "label": "tempest-TestStampPattern-334026573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff14cec1ef04fa2a41f6d226bc99518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf42f98ab-28", "ovs_interfaceid": "f42f98ab-28b0-4e9f-897a-e8be0d64dc33", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:12:33 compute-0 nova_compute[259850]: 2025-10-11 04:12:33.952 2 DEBUG oslo_concurrency.lockutils [req-7286355b-f6fc-4a91-8b74-cdb8aff5055d req-32726bd3-50f9-43ec-aa36-f9ffd8693387 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-673c41a0-97c6-4a8e-8f65-919ee9c38c79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:12:33 compute-0 nova_compute[259850]: 2025-10-11 04:12:33.952 2 DEBUG nova.network.neutron [req-7286355b-f6fc-4a91-8b74-cdb8aff5055d req-32726bd3-50f9-43ec-aa36-f9ffd8693387 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Refreshing network info cache for port f42f98ab-28b0-4e9f-897a-e8be0d64dc33 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:12:33 compute-0 nova_compute[259850]: 2025-10-11 04:12:33.957 2 DEBUG nova.virt.libvirt.driver [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Start _get_guest_xml network_info=[{"id": "f42f98ab-28b0-4e9f-897a-e8be0d64dc33", "address": "fa:16:3e:07:8a:69", "network": {"id": "69760b74-d690-4b6a-a64f-35ceb4582944", "bridge": "br-int", "label": "tempest-TestStampPattern-334026573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff14cec1ef04fa2a41f6d226bc99518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf42f98ab-28", "ovs_interfaceid": "f42f98ab-28b0-4e9f-897a-e8be0d64dc33", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-11T04:12:22Z,direct_url=<?>,disk_format='raw',id=f94d6b77-1844-4032-bf9d-644d95696add,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-160268462',owner='7ff14cec1ef04fa2a41f6d226bc99518',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-11T04:12:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'image_id': 'f94d6b77-1844-4032-bf9d-644d95696add'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:12:33 compute-0 nova_compute[259850]: 2025-10-11 04:12:33.963 2 WARNING nova.virt.libvirt.driver [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:12:33 compute-0 nova_compute[259850]: 2025-10-11 04:12:33.969 2 DEBUG nova.virt.libvirt.host [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:12:33 compute-0 nova_compute[259850]: 2025-10-11 04:12:33.970 2 DEBUG nova.virt.libvirt.host [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:12:33 compute-0 nova_compute[259850]: 2025-10-11 04:12:33.974 2 DEBUG nova.virt.libvirt.host [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:12:33 compute-0 nova_compute[259850]: 2025-10-11 04:12:33.975 2 DEBUG nova.virt.libvirt.host [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:12:33 compute-0 nova_compute[259850]: 2025-10-11 04:12:33.975 2 DEBUG nova.virt.libvirt.driver [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:12:33 compute-0 nova_compute[259850]: 2025-10-11 04:12:33.976 2 DEBUG nova.virt.hardware [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-11T04:12:22Z,direct_url=<?>,disk_format='raw',id=f94d6b77-1844-4032-bf9d-644d95696add,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-160268462',owner='7ff14cec1ef04fa2a41f6d226bc99518',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-11T04:12:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:12:33 compute-0 nova_compute[259850]: 2025-10-11 04:12:33.977 2 DEBUG nova.virt.hardware [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:12:33 compute-0 nova_compute[259850]: 2025-10-11 04:12:33.977 2 DEBUG nova.virt.hardware [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:12:33 compute-0 nova_compute[259850]: 2025-10-11 04:12:33.978 2 DEBUG nova.virt.hardware [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:12:33 compute-0 nova_compute[259850]: 2025-10-11 04:12:33.978 2 DEBUG nova.virt.hardware [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:12:33 compute-0 nova_compute[259850]: 2025-10-11 04:12:33.978 2 DEBUG nova.virt.hardware [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:12:33 compute-0 nova_compute[259850]: 2025-10-11 04:12:33.979 2 DEBUG nova.virt.hardware [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:12:33 compute-0 nova_compute[259850]: 2025-10-11 04:12:33.979 2 DEBUG nova.virt.hardware [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:12:33 compute-0 nova_compute[259850]: 2025-10-11 04:12:33.980 2 DEBUG nova.virt.hardware [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:12:33 compute-0 nova_compute[259850]: 2025-10-11 04:12:33.980 2 DEBUG nova.virt.hardware [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:12:33 compute-0 nova_compute[259850]: 2025-10-11 04:12:33.981 2 DEBUG nova.virt.hardware [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:12:33 compute-0 nova_compute[259850]: 2025-10-11 04:12:33.985 2 DEBUG oslo_concurrency.processutils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:12:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e342 do_prune osdmap full prune enabled
Oct 11 04:12:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e343 e343: 3 total, 3 up, 3 in
Oct 11 04:12:34 compute-0 ceph-mon[74273]: osdmap e342: 3 total, 3 up, 3 in
Oct 11 04:12:34 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e343: 3 total, 3 up, 3 in
Oct 11 04:12:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:12:34 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3695716118' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:12:34 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/418917986' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:12:34 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/418917986' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:34 compute-0 nova_compute[259850]: 2025-10-11 04:12:34.466 2 DEBUG oslo_concurrency.processutils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:12:34 compute-0 nova_compute[259850]: 2025-10-11 04:12:34.502 2 DEBUG nova.storage.rbd_utils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] rbd image 673c41a0-97c6-4a8e-8f65-919ee9c38c79_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:12:34 compute-0 nova_compute[259850]: 2025-10-11 04:12:34.507 2 DEBUG oslo_concurrency.processutils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:12:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:12:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e343 do_prune osdmap full prune enabled
Oct 11 04:12:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e344 e344: 3 total, 3 up, 3 in
Oct 11 04:12:34 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e344: 3 total, 3 up, 3 in
Oct 11 04:12:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:12:34 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3427554313' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:34 compute-0 nova_compute[259850]: 2025-10-11 04:12:34.952 2 DEBUG oslo_concurrency.processutils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:12:34 compute-0 nova_compute[259850]: 2025-10-11 04:12:34.954 2 DEBUG nova.virt.libvirt.vif [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:12:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1779017139',display_name='tempest-TestStampPattern-server-1779017139',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1779017139',id=15,image_ref='f94d6b77-1844-4032-bf9d-644d95696add',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ/2RgkZKOpdewTMCUJ4lxqFHaHkNK2WJjvE3lEkA/Q9gA0jTZZ1SFFzP17eZUjXJUtu1TcmHAM4LPuQ7VsHIzZ1pEO3yPeDhFw+/dw5yXiw9mrTEISzDMcxVMFVOX8L1w==',key_name='tempest-TestStampPattern-1075063988',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ff14cec1ef04fa2a41f6d226bc99518',ramdisk_id='',reservation_id='r-mwbhdwjj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='6f74cee5-3bb9-44f0-9a21-d6e5c1475419',image_min_disk='1',image_min_ram='0',image_owner_id='7ff14cec1ef04fa2a41f6d226bc99518',image_owner_project_name='tempest-TestStampPattern-137571922',image_owner_user_name='tempest-TestStampPattern-137571922-project-member',image_user_id='ba6ea3b0ff9d4fee8a80f308d0493954',network_allocated='True',owner_project_name='tempest-TestStampPattern-137571922',owner_user_name='tempest-TestStampPattern-137571922-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:12:30Z,user_data=None,user_id='ba6ea3b0ff9d4fee8a80f308d0493954',uuid=673c41a0-97c6-4a8e-8f65-919ee9c38c79,vcpu_mode
l=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f42f98ab-28b0-4e9f-897a-e8be0d64dc33", "address": "fa:16:3e:07:8a:69", "network": {"id": "69760b74-d690-4b6a-a64f-35ceb4582944", "bridge": "br-int", "label": "tempest-TestStampPattern-334026573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff14cec1ef04fa2a41f6d226bc99518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf42f98ab-28", "ovs_interfaceid": "f42f98ab-28b0-4e9f-897a-e8be0d64dc33", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:12:34 compute-0 nova_compute[259850]: 2025-10-11 04:12:34.954 2 DEBUG nova.network.os_vif_util [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Converting VIF {"id": "f42f98ab-28b0-4e9f-897a-e8be0d64dc33", "address": "fa:16:3e:07:8a:69", "network": {"id": "69760b74-d690-4b6a-a64f-35ceb4582944", "bridge": "br-int", "label": "tempest-TestStampPattern-334026573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff14cec1ef04fa2a41f6d226bc99518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf42f98ab-28", "ovs_interfaceid": "f42f98ab-28b0-4e9f-897a-e8be0d64dc33", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:12:34 compute-0 nova_compute[259850]: 2025-10-11 04:12:34.955 2 DEBUG nova.network.os_vif_util [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:8a:69,bridge_name='br-int',has_traffic_filtering=True,id=f42f98ab-28b0-4e9f-897a-e8be0d64dc33,network=Network(69760b74-d690-4b6a-a64f-35ceb4582944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf42f98ab-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:12:34 compute-0 nova_compute[259850]: 2025-10-11 04:12:34.957 2 DEBUG nova.objects.instance [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lazy-loading 'pci_devices' on Instance uuid 673c41a0-97c6-4a8e-8f65-919ee9c38c79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:12:34 compute-0 nova_compute[259850]: 2025-10-11 04:12:34.981 2 DEBUG nova.virt.libvirt.driver [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:12:34 compute-0 nova_compute[259850]:   <uuid>673c41a0-97c6-4a8e-8f65-919ee9c38c79</uuid>
Oct 11 04:12:34 compute-0 nova_compute[259850]:   <name>instance-0000000f</name>
Oct 11 04:12:34 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:12:34 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:12:34 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <nova:name>tempest-TestStampPattern-server-1779017139</nova:name>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:12:33</nova:creationTime>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:12:34 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:12:34 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:12:34 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:12:34 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:12:34 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:12:34 compute-0 nova_compute[259850]:         <nova:user uuid="ba6ea3b0ff9d4fee8a80f308d0493954">tempest-TestStampPattern-137571922-project-member</nova:user>
Oct 11 04:12:34 compute-0 nova_compute[259850]:         <nova:project uuid="7ff14cec1ef04fa2a41f6d226bc99518">tempest-TestStampPattern-137571922</nova:project>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <nova:root type="image" uuid="f94d6b77-1844-4032-bf9d-644d95696add"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <nova:ports>
Oct 11 04:12:34 compute-0 nova_compute[259850]:         <nova:port uuid="f42f98ab-28b0-4e9f-897a-e8be0d64dc33">
Oct 11 04:12:34 compute-0 nova_compute[259850]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:         </nova:port>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       </nova:ports>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:12:34 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:12:34 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <system>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <entry name="serial">673c41a0-97c6-4a8e-8f65-919ee9c38c79</entry>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <entry name="uuid">673c41a0-97c6-4a8e-8f65-919ee9c38c79</entry>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     </system>
Oct 11 04:12:34 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:12:34 compute-0 nova_compute[259850]:   <os>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:   </os>
Oct 11 04:12:34 compute-0 nova_compute[259850]:   <features>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:   </features>
Oct 11 04:12:34 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:12:34 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:12:34 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/673c41a0-97c6-4a8e-8f65-919ee9c38c79_disk">
Oct 11 04:12:34 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       </source>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:12:34 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/673c41a0-97c6-4a8e-8f65-919ee9c38c79_disk.config">
Oct 11 04:12:34 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       </source>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:12:34 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <interface type="ethernet">
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <mac address="fa:16:3e:07:8a:69"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <mtu size="1442"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <target dev="tapf42f98ab-28"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     </interface>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/673c41a0-97c6-4a8e-8f65-919ee9c38c79/console.log" append="off"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <video>
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     </video>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <input type="keyboard" bus="usb"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:12:34 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:12:34 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:12:34 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:12:34 compute-0 nova_compute[259850]: </domain>
Oct 11 04:12:34 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:12:34 compute-0 nova_compute[259850]: 2025-10-11 04:12:34.984 2 DEBUG nova.compute.manager [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Preparing to wait for external event network-vif-plugged-f42f98ab-28b0-4e9f-897a-e8be0d64dc33 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:12:34 compute-0 nova_compute[259850]: 2025-10-11 04:12:34.985 2 DEBUG oslo_concurrency.lockutils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Acquiring lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:12:34 compute-0 nova_compute[259850]: 2025-10-11 04:12:34.985 2 DEBUG oslo_concurrency.lockutils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:12:34 compute-0 nova_compute[259850]: 2025-10-11 04:12:34.986 2 DEBUG oslo_concurrency.lockutils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:12:34 compute-0 nova_compute[259850]: 2025-10-11 04:12:34.986 2 DEBUG nova.virt.libvirt.vif [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:12:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1779017139',display_name='tempest-TestStampPattern-server-1779017139',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1779017139',id=15,image_ref='f94d6b77-1844-4032-bf9d-644d95696add',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ/2RgkZKOpdewTMCUJ4lxqFHaHkNK2WJjvE3lEkA/Q9gA0jTZZ1SFFzP17eZUjXJUtu1TcmHAM4LPuQ7VsHIzZ1pEO3yPeDhFw+/dw5yXiw9mrTEISzDMcxVMFVOX8L1w==',key_name='tempest-TestStampPattern-1075063988',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ff14cec1ef04fa2a41f6d226bc99518',ramdisk_id='',reservation_id='r-mwbhdwjj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='6f74cee5-3bb9-44f0-9a21-d6e5c1475419',image_min_disk='1',image_min_ram='0',image_owner_id='7ff14cec1ef04fa2a41f6d226bc99518',image_owner_project_name='tempest-TestStampPattern-137571922',image_owner_user_name='tempest-TestStampPattern-137571922-project-member',image_user_id='ba6ea3b0ff9d4fee8a80f308d0493954',network_allocated='True',owner_project_name='tempest-TestStampPattern-137571922',owner_user_name='tempest-TestStampPattern-137571922-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:12:30Z,user_data=None,user_id='ba6ea3b0ff9d4fee8a80f308d0493954',uuid=673c41a0-97c6-4a8e-8f65-919ee9c38c79
,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f42f98ab-28b0-4e9f-897a-e8be0d64dc33", "address": "fa:16:3e:07:8a:69", "network": {"id": "69760b74-d690-4b6a-a64f-35ceb4582944", "bridge": "br-int", "label": "tempest-TestStampPattern-334026573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff14cec1ef04fa2a41f6d226bc99518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf42f98ab-28", "ovs_interfaceid": "f42f98ab-28b0-4e9f-897a-e8be0d64dc33", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:12:34 compute-0 nova_compute[259850]: 2025-10-11 04:12:34.987 2 DEBUG nova.network.os_vif_util [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Converting VIF {"id": "f42f98ab-28b0-4e9f-897a-e8be0d64dc33", "address": "fa:16:3e:07:8a:69", "network": {"id": "69760b74-d690-4b6a-a64f-35ceb4582944", "bridge": "br-int", "label": "tempest-TestStampPattern-334026573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff14cec1ef04fa2a41f6d226bc99518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf42f98ab-28", "ovs_interfaceid": "f42f98ab-28b0-4e9f-897a-e8be0d64dc33", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:12:34 compute-0 nova_compute[259850]: 2025-10-11 04:12:34.987 2 DEBUG nova.network.os_vif_util [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:8a:69,bridge_name='br-int',has_traffic_filtering=True,id=f42f98ab-28b0-4e9f-897a-e8be0d64dc33,network=Network(69760b74-d690-4b6a-a64f-35ceb4582944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf42f98ab-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:12:34 compute-0 nova_compute[259850]: 2025-10-11 04:12:34.988 2 DEBUG os_vif [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:8a:69,bridge_name='br-int',has_traffic_filtering=True,id=f42f98ab-28b0-4e9f-897a-e8be0d64dc33,network=Network(69760b74-d690-4b6a-a64f-35ceb4582944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf42f98ab-28') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:12:34 compute-0 nova_compute[259850]: 2025-10-11 04:12:34.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:34 compute-0 nova_compute[259850]: 2025-10-11 04:12:34.989 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:12:34 compute-0 nova_compute[259850]: 2025-10-11 04:12:34.990 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:12:34 compute-0 nova_compute[259850]: 2025-10-11 04:12:34.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:34 compute-0 nova_compute[259850]: 2025-10-11 04:12:34.995 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf42f98ab-28, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:12:34 compute-0 nova_compute[259850]: 2025-10-11 04:12:34.997 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf42f98ab-28, col_values=(('external_ids', {'iface-id': 'f42f98ab-28b0-4e9f-897a-e8be0d64dc33', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:07:8a:69', 'vm-uuid': '673c41a0-97c6-4a8e-8f65-919ee9c38c79'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:12:35 compute-0 ceph-mon[74273]: pgmap v1419: 305 pgs: 305 active+clean; 248 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 197 KiB/s rd, 9.4 KiB/s wr, 259 op/s
Oct 11 04:12:35 compute-0 ceph-mon[74273]: osdmap e343: 3 total, 3 up, 3 in
Oct 11 04:12:35 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3695716118' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:35 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/418917986' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:35 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/418917986' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:35 compute-0 ceph-mon[74273]: osdmap e344: 3 total, 3 up, 3 in
Oct 11 04:12:35 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3427554313' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:35 compute-0 NetworkManager[44920]: <info>  [1760155955.0554] manager: (tapf42f98ab-28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Oct 11 04:12:35 compute-0 nova_compute[259850]: 2025-10-11 04:12:35.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:35 compute-0 nova_compute[259850]: 2025-10-11 04:12:35.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:12:35 compute-0 nova_compute[259850]: 2025-10-11 04:12:35.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:35 compute-0 nova_compute[259850]: 2025-10-11 04:12:35.069 2 INFO os_vif [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:8a:69,bridge_name='br-int',has_traffic_filtering=True,id=f42f98ab-28b0-4e9f-897a-e8be0d64dc33,network=Network(69760b74-d690-4b6a-a64f-35ceb4582944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf42f98ab-28')
Oct 11 04:12:35 compute-0 nova_compute[259850]: 2025-10-11 04:12:35.131 2 DEBUG nova.virt.libvirt.driver [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:12:35 compute-0 nova_compute[259850]: 2025-10-11 04:12:35.131 2 DEBUG nova.virt.libvirt.driver [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:12:35 compute-0 nova_compute[259850]: 2025-10-11 04:12:35.131 2 DEBUG nova.virt.libvirt.driver [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] No VIF found with MAC fa:16:3e:07:8a:69, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:12:35 compute-0 nova_compute[259850]: 2025-10-11 04:12:35.132 2 INFO nova.virt.libvirt.driver [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Using config drive
Oct 11 04:12:35 compute-0 nova_compute[259850]: 2025-10-11 04:12:35.158 2 DEBUG nova.storage.rbd_utils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] rbd image 673c41a0-97c6-4a8e-8f65-919ee9c38c79_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:12:35 compute-0 nova_compute[259850]: 2025-10-11 04:12:35.240 2 DEBUG nova.network.neutron [req-7286355b-f6fc-4a91-8b74-cdb8aff5055d req-32726bd3-50f9-43ec-aa36-f9ffd8693387 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Updated VIF entry in instance network info cache for port f42f98ab-28b0-4e9f-897a-e8be0d64dc33. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:12:35 compute-0 nova_compute[259850]: 2025-10-11 04:12:35.240 2 DEBUG nova.network.neutron [req-7286355b-f6fc-4a91-8b74-cdb8aff5055d req-32726bd3-50f9-43ec-aa36-f9ffd8693387 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Updating instance_info_cache with network_info: [{"id": "f42f98ab-28b0-4e9f-897a-e8be0d64dc33", "address": "fa:16:3e:07:8a:69", "network": {"id": "69760b74-d690-4b6a-a64f-35ceb4582944", "bridge": "br-int", "label": "tempest-TestStampPattern-334026573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff14cec1ef04fa2a41f6d226bc99518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf42f98ab-28", "ovs_interfaceid": "f42f98ab-28b0-4e9f-897a-e8be0d64dc33", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:12:35 compute-0 nova_compute[259850]: 2025-10-11 04:12:35.263 2 DEBUG oslo_concurrency.lockutils [req-7286355b-f6fc-4a91-8b74-cdb8aff5055d req-32726bd3-50f9-43ec-aa36-f9ffd8693387 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-673c41a0-97c6-4a8e-8f65-919ee9c38c79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:12:35 compute-0 nova_compute[259850]: 2025-10-11 04:12:35.489 2 INFO nova.virt.libvirt.driver [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Creating config drive at /var/lib/nova/instances/673c41a0-97c6-4a8e-8f65-919ee9c38c79/disk.config
Oct 11 04:12:35 compute-0 nova_compute[259850]: 2025-10-11 04:12:35.493 2 DEBUG oslo_concurrency.processutils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/673c41a0-97c6-4a8e-8f65-919ee9c38c79/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz8jnet8y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:12:35 compute-0 nova_compute[259850]: 2025-10-11 04:12:35.637 2 DEBUG oslo_concurrency.processutils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/673c41a0-97c6-4a8e-8f65-919ee9c38c79/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz8jnet8y" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:12:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 248 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 230 KiB/s rd, 11 KiB/s wr, 303 op/s
Oct 11 04:12:35 compute-0 nova_compute[259850]: 2025-10-11 04:12:35.667 2 DEBUG nova.storage.rbd_utils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] rbd image 673c41a0-97c6-4a8e-8f65-919ee9c38c79_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:12:35 compute-0 nova_compute[259850]: 2025-10-11 04:12:35.673 2 DEBUG oslo_concurrency.processutils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/673c41a0-97c6-4a8e-8f65-919ee9c38c79/disk.config 673c41a0-97c6-4a8e-8f65-919ee9c38c79_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:12:35 compute-0 nova_compute[259850]: 2025-10-11 04:12:35.851 2 DEBUG oslo_concurrency.processutils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/673c41a0-97c6-4a8e-8f65-919ee9c38c79/disk.config 673c41a0-97c6-4a8e-8f65-919ee9c38c79_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:12:35 compute-0 nova_compute[259850]: 2025-10-11 04:12:35.852 2 INFO nova.virt.libvirt.driver [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Deleting local config drive /var/lib/nova/instances/673c41a0-97c6-4a8e-8f65-919ee9c38c79/disk.config because it was imported into RBD.
Oct 11 04:12:35 compute-0 kernel: tapf42f98ab-28: entered promiscuous mode
Oct 11 04:12:35 compute-0 NetworkManager[44920]: <info>  [1760155955.9347] manager: (tapf42f98ab-28): new Tun device (/org/freedesktop/NetworkManager/Devices/87)
Oct 11 04:12:35 compute-0 ovn_controller[152025]: 2025-10-11T04:12:35Z|00150|binding|INFO|Claiming lport f42f98ab-28b0-4e9f-897a-e8be0d64dc33 for this chassis.
Oct 11 04:12:35 compute-0 ovn_controller[152025]: 2025-10-11T04:12:35Z|00151|binding|INFO|f42f98ab-28b0-4e9f-897a-e8be0d64dc33: Claiming fa:16:3e:07:8a:69 10.100.0.9
Oct 11 04:12:35 compute-0 nova_compute[259850]: 2025-10-11 04:12:35.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:35 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:12:35.951 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:8a:69 10.100.0.9'], port_security=['fa:16:3e:07:8a:69 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '673c41a0-97c6-4a8e-8f65-919ee9c38c79', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69760b74-d690-4b6a-a64f-35ceb4582944', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ff14cec1ef04fa2a41f6d226bc99518', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0b1fcf6f-b50b-44a2-814d-4972eb6e538b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24983540-db74-4f67-b9f8-811887ee0a83, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=f42f98ab-28b0-4e9f-897a-e8be0d64dc33) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:12:35 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:12:35.953 161902 INFO neutron.agent.ovn.metadata.agent [-] Port f42f98ab-28b0-4e9f-897a-e8be0d64dc33 in datapath 69760b74-d690-4b6a-a64f-35ceb4582944 bound to our chassis
Oct 11 04:12:35 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:12:35.956 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 69760b74-d690-4b6a-a64f-35ceb4582944
Oct 11 04:12:35 compute-0 ovn_controller[152025]: 2025-10-11T04:12:35Z|00152|binding|INFO|Setting lport f42f98ab-28b0-4e9f-897a-e8be0d64dc33 ovn-installed in OVS
Oct 11 04:12:35 compute-0 ovn_controller[152025]: 2025-10-11T04:12:35Z|00153|binding|INFO|Setting lport f42f98ab-28b0-4e9f-897a-e8be0d64dc33 up in Southbound
Oct 11 04:12:35 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:12:35.980 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[644fc208-3f01-4245-a9e7-fd4b1e711b5c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:12:35 compute-0 nova_compute[259850]: 2025-10-11 04:12:35.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:35 compute-0 nova_compute[259850]: 2025-10-11 04:12:35.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:35 compute-0 systemd-machined[214869]: New machine qemu-15-instance-0000000f.
Oct 11 04:12:35 compute-0 systemd-udevd[288125]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:12:36 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000f.
Oct 11 04:12:36 compute-0 NetworkManager[44920]: <info>  [1760155956.0079] device (tapf42f98ab-28): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:12:36 compute-0 NetworkManager[44920]: <info>  [1760155956.0098] device (tapf42f98ab-28): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:12:36 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:12:36.029 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[9041fe01-8820-442e-903c-c4c982b5393b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:12:36 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:12:36.035 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[c33351ad-d7d4-40f5-9321-ea3735ec0a5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:12:36 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:12:36.077 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[2965a74b-0449-43dd-8df2-73cb063ff1ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:12:36 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:12:36.105 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[87a23c81-8066-4cf8-b95e-861270ee5bf0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69760b74-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:85:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429926, 'reachable_time': 21479, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288137, 'error': None, 'target': 'ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:12:36 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:12:36.126 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[19537a77-937c-4325-934c-5aae0ab5880b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap69760b74-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 429939, 'tstamp': 429939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288139, 'error': None, 'target': 'ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap69760b74-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 429942, 'tstamp': 429942}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288139, 'error': None, 'target': 'ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:12:36 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:12:36.128 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69760b74-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:12:36 compute-0 nova_compute[259850]: 2025-10-11 04:12:36.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:36 compute-0 nova_compute[259850]: 2025-10-11 04:12:36.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:36 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:12:36.133 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69760b74-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:12:36 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:12:36.133 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:12:36 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:12:36.134 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap69760b74-d0, col_values=(('external_ids', {'iface-id': '1db9314a-9172-441f-a3d7-84ca9c891141'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:12:36 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:12:36.135 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:12:36 compute-0 nova_compute[259850]: 2025-10-11 04:12:36.397 2 DEBUG nova.compute.manager [req-d5f4a583-ffd0-4e39-bb81-6a48c788693c req-bc7e36bd-9445-4fc5-bef2-5e8af615c061 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Received event network-vif-plugged-f42f98ab-28b0-4e9f-897a-e8be0d64dc33 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:12:36 compute-0 nova_compute[259850]: 2025-10-11 04:12:36.398 2 DEBUG oslo_concurrency.lockutils [req-d5f4a583-ffd0-4e39-bb81-6a48c788693c req-bc7e36bd-9445-4fc5-bef2-5e8af615c061 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:12:36 compute-0 nova_compute[259850]: 2025-10-11 04:12:36.398 2 DEBUG oslo_concurrency.lockutils [req-d5f4a583-ffd0-4e39-bb81-6a48c788693c req-bc7e36bd-9445-4fc5-bef2-5e8af615c061 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:12:36 compute-0 nova_compute[259850]: 2025-10-11 04:12:36.398 2 DEBUG oslo_concurrency.lockutils [req-d5f4a583-ffd0-4e39-bb81-6a48c788693c req-bc7e36bd-9445-4fc5-bef2-5e8af615c061 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:12:36 compute-0 nova_compute[259850]: 2025-10-11 04:12:36.398 2 DEBUG nova.compute.manager [req-d5f4a583-ffd0-4e39-bb81-6a48c788693c req-bc7e36bd-9445-4fc5-bef2-5e8af615c061 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Processing event network-vif-plugged-f42f98ab-28b0-4e9f-897a-e8be0d64dc33 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:12:36 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:12:36.510 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:61:6f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '92:f1:b6:e4:f1:16'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:12:36 compute-0 nova_compute[259850]: 2025-10-11 04:12:36.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:36 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:12:36.511 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:12:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e344 do_prune osdmap full prune enabled
Oct 11 04:12:37 compute-0 ceph-mon[74273]: pgmap v1422: 305 pgs: 305 active+clean; 248 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 230 KiB/s rd, 11 KiB/s wr, 303 op/s
Oct 11 04:12:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e345 e345: 3 total, 3 up, 3 in
Oct 11 04:12:37 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e345: 3 total, 3 up, 3 in
Oct 11 04:12:37 compute-0 nova_compute[259850]: 2025-10-11 04:12:37.147 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155957.146807, 673c41a0-97c6-4a8e-8f65-919ee9c38c79 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:12:37 compute-0 nova_compute[259850]: 2025-10-11 04:12:37.147 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] VM Started (Lifecycle Event)
Oct 11 04:12:37 compute-0 nova_compute[259850]: 2025-10-11 04:12:37.149 2 DEBUG nova.compute.manager [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:12:37 compute-0 nova_compute[259850]: 2025-10-11 04:12:37.153 2 DEBUG nova.virt.libvirt.driver [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:12:37 compute-0 nova_compute[259850]: 2025-10-11 04:12:37.157 2 INFO nova.virt.libvirt.driver [-] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Instance spawned successfully.
Oct 11 04:12:37 compute-0 nova_compute[259850]: 2025-10-11 04:12:37.157 2 INFO nova.compute.manager [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Took 6.19 seconds to spawn the instance on the hypervisor.
Oct 11 04:12:37 compute-0 nova_compute[259850]: 2025-10-11 04:12:37.157 2 DEBUG nova.compute.manager [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:12:37 compute-0 nova_compute[259850]: 2025-10-11 04:12:37.307 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:12:37 compute-0 nova_compute[259850]: 2025-10-11 04:12:37.312 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:12:37 compute-0 nova_compute[259850]: 2025-10-11 04:12:37.353 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:12:37 compute-0 nova_compute[259850]: 2025-10-11 04:12:37.354 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155957.1469598, 673c41a0-97c6-4a8e-8f65-919ee9c38c79 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:12:37 compute-0 nova_compute[259850]: 2025-10-11 04:12:37.354 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] VM Paused (Lifecycle Event)
Oct 11 04:12:37 compute-0 nova_compute[259850]: 2025-10-11 04:12:37.358 2 INFO nova.compute.manager [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Took 7.37 seconds to build instance.
Oct 11 04:12:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:12:37 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3627866519' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:37 compute-0 nova_compute[259850]: 2025-10-11 04:12:37.373 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:12:37 compute-0 nova_compute[259850]: 2025-10-11 04:12:37.377 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760155957.1526046, 673c41a0-97c6-4a8e-8f65-919ee9c38c79 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:12:37 compute-0 nova_compute[259850]: 2025-10-11 04:12:37.377 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] VM Resumed (Lifecycle Event)
Oct 11 04:12:37 compute-0 nova_compute[259850]: 2025-10-11 04:12:37.381 2 DEBUG oslo_concurrency.lockutils [None req-f7c2a0d7-6121-45c7-ab46-55f07ad05f48 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.481s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:12:37 compute-0 nova_compute[259850]: 2025-10-11 04:12:37.391 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:12:37 compute-0 nova_compute[259850]: 2025-10-11 04:12:37.395 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:12:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 248 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.7 KiB/s wr, 47 op/s
Oct 11 04:12:37 compute-0 nova_compute[259850]: 2025-10-11 04:12:37.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e345 do_prune osdmap full prune enabled
Oct 11 04:12:38 compute-0 ceph-mon[74273]: osdmap e345: 3 total, 3 up, 3 in
Oct 11 04:12:38 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3627866519' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e346 e346: 3 total, 3 up, 3 in
Oct 11 04:12:38 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e346: 3 total, 3 up, 3 in
Oct 11 04:12:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:12:38 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1770337694' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:12:38 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1770337694' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:38 compute-0 nova_compute[259850]: 2025-10-11 04:12:38.471 2 DEBUG nova.compute.manager [req-4738c402-99a4-4075-9d78-6629017df39f req-a8691c3b-0abe-4439-9ab2-767726db8330 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Received event network-vif-plugged-f42f98ab-28b0-4e9f-897a-e8be0d64dc33 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:12:38 compute-0 nova_compute[259850]: 2025-10-11 04:12:38.472 2 DEBUG oslo_concurrency.lockutils [req-4738c402-99a4-4075-9d78-6629017df39f req-a8691c3b-0abe-4439-9ab2-767726db8330 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:12:38 compute-0 nova_compute[259850]: 2025-10-11 04:12:38.474 2 DEBUG oslo_concurrency.lockutils [req-4738c402-99a4-4075-9d78-6629017df39f req-a8691c3b-0abe-4439-9ab2-767726db8330 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:12:38 compute-0 nova_compute[259850]: 2025-10-11 04:12:38.474 2 DEBUG oslo_concurrency.lockutils [req-4738c402-99a4-4075-9d78-6629017df39f req-a8691c3b-0abe-4439-9ab2-767726db8330 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:12:38 compute-0 nova_compute[259850]: 2025-10-11 04:12:38.475 2 DEBUG nova.compute.manager [req-4738c402-99a4-4075-9d78-6629017df39f req-a8691c3b-0abe-4439-9ab2-767726db8330 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] No waiting events found dispatching network-vif-plugged-f42f98ab-28b0-4e9f-897a-e8be0d64dc33 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:12:38 compute-0 nova_compute[259850]: 2025-10-11 04:12:38.475 2 WARNING nova.compute.manager [req-4738c402-99a4-4075-9d78-6629017df39f req-a8691c3b-0abe-4439-9ab2-767726db8330 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Received unexpected event network-vif-plugged-f42f98ab-28b0-4e9f-897a-e8be0d64dc33 for instance with vm_state active and task_state None.
Oct 11 04:12:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e346 do_prune osdmap full prune enabled
Oct 11 04:12:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e347 e347: 3 total, 3 up, 3 in
Oct 11 04:12:39 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e347: 3 total, 3 up, 3 in
Oct 11 04:12:39 compute-0 ceph-mon[74273]: pgmap v1424: 305 pgs: 305 active+clean; 248 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.7 KiB/s wr, 47 op/s
Oct 11 04:12:39 compute-0 ceph-mon[74273]: osdmap e346: 3 total, 3 up, 3 in
Oct 11 04:12:39 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1770337694' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:39 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1770337694' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:39 compute-0 podman[288183]: 2025-10-11 04:12:39.399144453 +0000 UTC m=+0.102130603 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:12:39 compute-0 podman[288182]: 2025-10-11 04:12:39.415081986 +0000 UTC m=+0.109237575 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 04:12:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 248 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 726 KiB/s rd, 39 KiB/s wr, 228 op/s
Oct 11 04:12:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:12:40 compute-0 nova_compute[259850]: 2025-10-11 04:12:40.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:40 compute-0 ceph-mon[74273]: osdmap e347: 3 total, 3 up, 3 in
Oct 11 04:12:40 compute-0 nova_compute[259850]: 2025-10-11 04:12:40.762 2 DEBUG nova.compute.manager [req-72b0ce83-cb60-4cc4-8f7b-5b20a0963574 req-a6a2a6ee-4bea-4544-b6c4-126c0a976221 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Received event network-changed-f42f98ab-28b0-4e9f-897a-e8be0d64dc33 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:12:40 compute-0 nova_compute[259850]: 2025-10-11 04:12:40.763 2 DEBUG nova.compute.manager [req-72b0ce83-cb60-4cc4-8f7b-5b20a0963574 req-a6a2a6ee-4bea-4544-b6c4-126c0a976221 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Refreshing instance network info cache due to event network-changed-f42f98ab-28b0-4e9f-897a-e8be0d64dc33. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:12:40 compute-0 nova_compute[259850]: 2025-10-11 04:12:40.764 2 DEBUG oslo_concurrency.lockutils [req-72b0ce83-cb60-4cc4-8f7b-5b20a0963574 req-a6a2a6ee-4bea-4544-b6c4-126c0a976221 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-673c41a0-97c6-4a8e-8f65-919ee9c38c79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:12:40 compute-0 nova_compute[259850]: 2025-10-11 04:12:40.764 2 DEBUG oslo_concurrency.lockutils [req-72b0ce83-cb60-4cc4-8f7b-5b20a0963574 req-a6a2a6ee-4bea-4544-b6c4-126c0a976221 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-673c41a0-97c6-4a8e-8f65-919ee9c38c79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:12:40 compute-0 nova_compute[259850]: 2025-10-11 04:12:40.765 2 DEBUG nova.network.neutron [req-72b0ce83-cb60-4cc4-8f7b-5b20a0963574 req-a6a2a6ee-4bea-4544-b6c4-126c0a976221 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Refreshing network info cache for port f42f98ab-28b0-4e9f-897a-e8be0d64dc33 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:12:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e347 do_prune osdmap full prune enabled
Oct 11 04:12:41 compute-0 ceph-mon[74273]: pgmap v1427: 305 pgs: 305 active+clean; 248 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 726 KiB/s rd, 39 KiB/s wr, 228 op/s
Oct 11 04:12:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e348 e348: 3 total, 3 up, 3 in
Oct 11 04:12:41 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e348: 3 total, 3 up, 3 in
Oct 11 04:12:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:12:41 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/583642927' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:12:41 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/583642927' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 248 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 745 KiB/s rd, 40 KiB/s wr, 233 op/s
Oct 11 04:12:42 compute-0 ceph-mon[74273]: osdmap e348: 3 total, 3 up, 3 in
Oct 11 04:12:42 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/583642927' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:42 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/583642927' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:42 compute-0 nova_compute[259850]: 2025-10-11 04:12:42.750 2 DEBUG nova.network.neutron [req-72b0ce83-cb60-4cc4-8f7b-5b20a0963574 req-a6a2a6ee-4bea-4544-b6c4-126c0a976221 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Updated VIF entry in instance network info cache for port f42f98ab-28b0-4e9f-897a-e8be0d64dc33. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:12:42 compute-0 nova_compute[259850]: 2025-10-11 04:12:42.751 2 DEBUG nova.network.neutron [req-72b0ce83-cb60-4cc4-8f7b-5b20a0963574 req-a6a2a6ee-4bea-4544-b6c4-126c0a976221 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Updating instance_info_cache with network_info: [{"id": "f42f98ab-28b0-4e9f-897a-e8be0d64dc33", "address": "fa:16:3e:07:8a:69", "network": {"id": "69760b74-d690-4b6a-a64f-35ceb4582944", "bridge": "br-int", "label": "tempest-TestStampPattern-334026573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff14cec1ef04fa2a41f6d226bc99518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf42f98ab-28", "ovs_interfaceid": "f42f98ab-28b0-4e9f-897a-e8be0d64dc33", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:12:42 compute-0 nova_compute[259850]: 2025-10-11 04:12:42.774 2 DEBUG oslo_concurrency.lockutils [req-72b0ce83-cb60-4cc4-8f7b-5b20a0963574 req-a6a2a6ee-4bea-4544-b6c4-126c0a976221 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-673c41a0-97c6-4a8e-8f65-919ee9c38c79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:12:42 compute-0 nova_compute[259850]: 2025-10-11 04:12:42.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:43 compute-0 ceph-mon[74273]: pgmap v1429: 305 pgs: 305 active+clean; 248 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 745 KiB/s rd, 40 KiB/s wr, 233 op/s
Oct 11 04:12:43 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:12:43.514 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8a473e03-2208-47ae-afcd-05ad744a5969, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:12:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 248 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 35 KiB/s wr, 379 op/s
Oct 11 04:12:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:12:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/398377412' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/398377412' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:12:44 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4222077093' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:12:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e348 do_prune osdmap full prune enabled
Oct 11 04:12:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e349 e349: 3 total, 3 up, 3 in
Oct 11 04:12:44 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e349: 3 total, 3 up, 3 in
Oct 11 04:12:45 compute-0 nova_compute[259850]: 2025-10-11 04:12:45.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:45 compute-0 ceph-mon[74273]: pgmap v1430: 305 pgs: 305 active+clean; 248 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 35 KiB/s wr, 379 op/s
Oct 11 04:12:45 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4222077093' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:12:45 compute-0 ceph-mon[74273]: osdmap e349: 3 total, 3 up, 3 in
Oct 11 04:12:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 248 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 4.1 KiB/s wr, 182 op/s
Oct 11 04:12:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e349 do_prune osdmap full prune enabled
Oct 11 04:12:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e350 e350: 3 total, 3 up, 3 in
Oct 11 04:12:45 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e350: 3 total, 3 up, 3 in
Oct 11 04:12:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:12:46 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2324456511' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:12:46 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2324456511' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:46 compute-0 ceph-mon[74273]: pgmap v1432: 305 pgs: 305 active+clean; 248 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 4.1 KiB/s wr, 182 op/s
Oct 11 04:12:46 compute-0 ceph-mon[74273]: osdmap e350: 3 total, 3 up, 3 in
Oct 11 04:12:46 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2324456511' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:46 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2324456511' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e350 do_prune osdmap full prune enabled
Oct 11 04:12:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e351 e351: 3 total, 3 up, 3 in
Oct 11 04:12:47 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e351: 3 total, 3 up, 3 in
Oct 11 04:12:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 248 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.5 KiB/s wr, 200 op/s
Oct 11 04:12:47 compute-0 nova_compute[259850]: 2025-10-11 04:12:47.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e351 do_prune osdmap full prune enabled
Oct 11 04:12:48 compute-0 ceph-mon[74273]: osdmap e351: 3 total, 3 up, 3 in
Oct 11 04:12:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e352 e352: 3 total, 3 up, 3 in
Oct 11 04:12:48 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e352: 3 total, 3 up, 3 in
Oct 11 04:12:48 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 11 04:12:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e352 do_prune osdmap full prune enabled
Oct 11 04:12:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e353 e353: 3 total, 3 up, 3 in
Oct 11 04:12:49 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e353: 3 total, 3 up, 3 in
Oct 11 04:12:49 compute-0 ceph-mon[74273]: pgmap v1435: 305 pgs: 305 active+clean; 248 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.5 KiB/s wr, 200 op/s
Oct 11 04:12:49 compute-0 ceph-mon[74273]: osdmap e352: 3 total, 3 up, 3 in
Oct 11 04:12:49 compute-0 ovn_controller[152025]: 2025-10-11T04:12:49Z|00024|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.4 does not match offer 10.100.0.9
Oct 11 04:12:49 compute-0 ovn_controller[152025]: 2025-10-11T04:12:49Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:07:8a:69 10.100.0.9
Oct 11 04:12:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 248 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 173 KiB/s rd, 8.0 KiB/s wr, 231 op/s
Oct 11 04:12:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:12:50 compute-0 nova_compute[259850]: 2025-10-11 04:12:50.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e353 do_prune osdmap full prune enabled
Oct 11 04:12:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e354 e354: 3 total, 3 up, 3 in
Oct 11 04:12:50 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e354: 3 total, 3 up, 3 in
Oct 11 04:12:50 compute-0 ceph-mon[74273]: osdmap e353: 3 total, 3 up, 3 in
Oct 11 04:12:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:12:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/752874796' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:12:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/752874796' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:12:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2073519321' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:12:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2073519321' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:12:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:12:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:12:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:12:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:12:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:12:51 compute-0 ceph-mon[74273]: pgmap v1438: 305 pgs: 305 active+clean; 248 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 173 KiB/s rd, 8.0 KiB/s wr, 231 op/s
Oct 11 04:12:51 compute-0 ceph-mon[74273]: osdmap e354: 3 total, 3 up, 3 in
Oct 11 04:12:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/752874796' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/752874796' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2073519321' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2073519321' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e354 do_prune osdmap full prune enabled
Oct 11 04:12:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e355 e355: 3 total, 3 up, 3 in
Oct 11 04:12:51 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e355: 3 total, 3 up, 3 in
Oct 11 04:12:51 compute-0 podman[288224]: 2025-10-11 04:12:51.43608421 +0000 UTC m=+0.128690038 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:12:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 248 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 173 KiB/s rd, 8.0 KiB/s wr, 231 op/s
Oct 11 04:12:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e355 do_prune osdmap full prune enabled
Oct 11 04:12:52 compute-0 ceph-mon[74273]: osdmap e355: 3 total, 3 up, 3 in
Oct 11 04:12:52 compute-0 ceph-mon[74273]: pgmap v1441: 305 pgs: 305 active+clean; 248 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 173 KiB/s rd, 8.0 KiB/s wr, 231 op/s
Oct 11 04:12:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e356 e356: 3 total, 3 up, 3 in
Oct 11 04:12:52 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e356: 3 total, 3 up, 3 in
Oct 11 04:12:52 compute-0 nova_compute[259850]: 2025-10-11 04:12:52.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e356 do_prune osdmap full prune enabled
Oct 11 04:12:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e357 e357: 3 total, 3 up, 3 in
Oct 11 04:12:53 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e357: 3 total, 3 up, 3 in
Oct 11 04:12:53 compute-0 ceph-mon[74273]: osdmap e356: 3 total, 3 up, 3 in
Oct 11 04:12:53 compute-0 ovn_controller[152025]: 2025-10-11T04:12:53Z|00026|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.4 does not match offer 10.100.0.9
Oct 11 04:12:53 compute-0 ovn_controller[152025]: 2025-10-11T04:12:53Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:07:8a:69 10.100.0.9
Oct 11 04:12:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:12:53 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/212342097' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:12:53 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/212342097' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 262 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.5 MiB/s wr, 396 op/s
Oct 11 04:12:54 compute-0 ovn_controller[152025]: 2025-10-11T04:12:54Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:07:8a:69 10.100.0.9
Oct 11 04:12:54 compute-0 ovn_controller[152025]: 2025-10-11T04:12:54Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:07:8a:69 10.100.0.9
Oct 11 04:12:54 compute-0 ceph-mon[74273]: osdmap e357: 3 total, 3 up, 3 in
Oct 11 04:12:54 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/212342097' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:12:54 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/212342097' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:12:54 compute-0 ceph-mon[74273]: pgmap v1444: 305 pgs: 305 active+clean; 262 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.5 MiB/s wr, 396 op/s
Oct 11 04:12:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:12:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e357 do_prune osdmap full prune enabled
Oct 11 04:12:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e358 e358: 3 total, 3 up, 3 in
Oct 11 04:12:54 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e358: 3 total, 3 up, 3 in
Oct 11 04:12:55 compute-0 nova_compute[259850]: 2025-10-11 04:12:55.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:55 compute-0 podman[288251]: 2025-10-11 04:12:55.392485782 +0000 UTC m=+0.097038567 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:12:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 262 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.3 MiB/s wr, 357 op/s
Oct 11 04:12:55 compute-0 ceph-mon[74273]: osdmap e358: 3 total, 3 up, 3 in
Oct 11 04:12:56 compute-0 ceph-mon[74273]: pgmap v1446: 305 pgs: 305 active+clean; 262 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.3 MiB/s wr, 357 op/s
Oct 11 04:12:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 262 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1009 KiB/s wr, 264 op/s
Oct 11 04:12:57 compute-0 nova_compute[259850]: 2025-10-11 04:12:57.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:12:58 compute-0 sshd-session[288271]: Connection closed by 172.236.228.245 port 12768 [preauth]
Oct 11 04:12:58 compute-0 sshd-session[288273]: Connection closed by 172.236.228.245 port 12782 [preauth]
Oct 11 04:12:58 compute-0 sshd-session[288275]: Connection closed by 172.236.228.245 port 12788 [preauth]
Oct 11 04:12:58 compute-0 ceph-mon[74273]: pgmap v1447: 305 pgs: 305 active+clean; 262 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1009 KiB/s wr, 264 op/s
Oct 11 04:12:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 266 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 892 KiB/s wr, 239 op/s
Oct 11 04:12:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:13:00 compute-0 nova_compute[259850]: 2025-10-11 04:13:00.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:00 compute-0 ceph-mon[74273]: pgmap v1448: 305 pgs: 305 active+clean; 266 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 892 KiB/s wr, 239 op/s
Oct 11 04:13:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:13:01 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1143980850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:13:01 compute-0 sudo[288277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:13:01 compute-0 sudo[288277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:13:01 compute-0 sudo[288277]: pam_unix(sudo:session): session closed for user root
Oct 11 04:13:01 compute-0 sudo[288302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:13:01 compute-0 sudo[288302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:13:01 compute-0 sudo[288302]: pam_unix(sudo:session): session closed for user root
Oct 11 04:13:01 compute-0 sudo[288327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:13:01 compute-0 sudo[288327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:13:01 compute-0 sudo[288327]: pam_unix(sudo:session): session closed for user root
Oct 11 04:13:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 266 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 67 KiB/s wr, 22 op/s
Oct 11 04:13:01 compute-0 sudo[288352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:13:01 compute-0 sudo[288352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:13:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e358 do_prune osdmap full prune enabled
Oct 11 04:13:01 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1143980850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:13:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e359 e359: 3 total, 3 up, 3 in
Oct 11 04:13:01 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e359: 3 total, 3 up, 3 in
Oct 11 04:13:02 compute-0 sudo[288352]: pam_unix(sudo:session): session closed for user root
Oct 11 04:13:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:13:02 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:13:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:13:02 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:13:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:13:02 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:13:02 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev a3870e20-40fb-412b-bc18-e245d56fdda2 does not exist
Oct 11 04:13:02 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 8394ae59-cd1c-48ff-81a8-2aef92ac0719 does not exist
Oct 11 04:13:02 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 44af38b8-2b8d-4ddf-b860-770c991a9dcd does not exist
Oct 11 04:13:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:13:02 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:13:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:13:02 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:13:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:13:02 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:13:02 compute-0 sudo[288408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:13:02 compute-0 sudo[288408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:13:02 compute-0 sudo[288408]: pam_unix(sudo:session): session closed for user root
Oct 11 04:13:02 compute-0 sudo[288433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:13:02 compute-0 sudo[288433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:13:02 compute-0 sudo[288433]: pam_unix(sudo:session): session closed for user root
Oct 11 04:13:02 compute-0 sudo[288458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:13:02 compute-0 sudo[288458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:13:02 compute-0 sudo[288458]: pam_unix(sudo:session): session closed for user root
Oct 11 04:13:02 compute-0 sudo[288483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:13:02 compute-0 sudo[288483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:13:02 compute-0 nova_compute[259850]: 2025-10-11 04:13:02.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e359 do_prune osdmap full prune enabled
Oct 11 04:13:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e360 e360: 3 total, 3 up, 3 in
Oct 11 04:13:02 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e360: 3 total, 3 up, 3 in
Oct 11 04:13:02 compute-0 ceph-mon[74273]: pgmap v1449: 305 pgs: 305 active+clean; 266 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 67 KiB/s wr, 22 op/s
Oct 11 04:13:02 compute-0 ceph-mon[74273]: osdmap e359: 3 total, 3 up, 3 in
Oct 11 04:13:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:13:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:13:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:13:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:13:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:13:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:13:03 compute-0 podman[288550]: 2025-10-11 04:13:03.316362314 +0000 UTC m=+0.077137613 container create e01a8eba760866d1ffcec791de7550abf8ad113e0a8cfe2580c3ea4312cb9241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 04:13:03 compute-0 podman[288550]: 2025-10-11 04:13:03.285802075 +0000 UTC m=+0.046577424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:03 compute-0 systemd[1]: Started libpod-conmon-e01a8eba760866d1ffcec791de7550abf8ad113e0a8cfe2580c3ea4312cb9241.scope.
Oct 11 04:13:03 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:13:03 compute-0 podman[288550]: 2025-10-11 04:13:03.423336603 +0000 UTC m=+0.184111862 container init e01a8eba760866d1ffcec791de7550abf8ad113e0a8cfe2580c3ea4312cb9241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 04:13:03 compute-0 podman[288550]: 2025-10-11 04:13:03.429926211 +0000 UTC m=+0.190701460 container start e01a8eba760866d1ffcec791de7550abf8ad113e0a8cfe2580c3ea4312cb9241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_buck, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:13:03 compute-0 podman[288550]: 2025-10-11 04:13:03.432655648 +0000 UTC m=+0.193430997 container attach e01a8eba760866d1ffcec791de7550abf8ad113e0a8cfe2580c3ea4312cb9241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_buck, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:13:03 compute-0 dreamy_buck[288566]: 167 167
Oct 11 04:13:03 compute-0 systemd[1]: libpod-e01a8eba760866d1ffcec791de7550abf8ad113e0a8cfe2580c3ea4312cb9241.scope: Deactivated successfully.
Oct 11 04:13:03 compute-0 conmon[288566]: conmon e01a8eba760866d1ffce <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e01a8eba760866d1ffcec791de7550abf8ad113e0a8cfe2580c3ea4312cb9241.scope/container/memory.events
Oct 11 04:13:03 compute-0 podman[288550]: 2025-10-11 04:13:03.435925441 +0000 UTC m=+0.196700710 container died e01a8eba760866d1ffcec791de7550abf8ad113e0a8cfe2580c3ea4312cb9241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:13:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c243c289c93f5032ac4ad9c372d4f4e5d954e508bcbd3160cc7338825754f6e-merged.mount: Deactivated successfully.
Oct 11 04:13:03 compute-0 podman[288550]: 2025-10-11 04:13:03.472912952 +0000 UTC m=+0.233688211 container remove e01a8eba760866d1ffcec791de7550abf8ad113e0a8cfe2580c3ea4312cb9241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:13:03 compute-0 systemd[1]: libpod-conmon-e01a8eba760866d1ffcec791de7550abf8ad113e0a8cfe2580c3ea4312cb9241.scope: Deactivated successfully.
Oct 11 04:13:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 266 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 77 KiB/s wr, 49 op/s
Oct 11 04:13:03 compute-0 podman[288590]: 2025-10-11 04:13:03.702253459 +0000 UTC m=+0.061739105 container create 7fb940556e6a5283910765f733a0b971568b9dfe0432f83f5b3261f7ce793f03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 04:13:03 compute-0 systemd[1]: Started libpod-conmon-7fb940556e6a5283910765f733a0b971568b9dfe0432f83f5b3261f7ce793f03.scope.
Oct 11 04:13:03 compute-0 podman[288590]: 2025-10-11 04:13:03.68152607 +0000 UTC m=+0.041011756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:03 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:13:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d15caed4d9a9d706ea458d4ccd230fcb0e537acc605b67a9a9863b5cabe2a3a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d15caed4d9a9d706ea458d4ccd230fcb0e537acc605b67a9a9863b5cabe2a3a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d15caed4d9a9d706ea458d4ccd230fcb0e537acc605b67a9a9863b5cabe2a3a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d15caed4d9a9d706ea458d4ccd230fcb0e537acc605b67a9a9863b5cabe2a3a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d15caed4d9a9d706ea458d4ccd230fcb0e537acc605b67a9a9863b5cabe2a3a7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:03 compute-0 podman[288590]: 2025-10-11 04:13:03.806221703 +0000 UTC m=+0.165707399 container init 7fb940556e6a5283910765f733a0b971568b9dfe0432f83f5b3261f7ce793f03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sinoussi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:13:03 compute-0 podman[288590]: 2025-10-11 04:13:03.821977181 +0000 UTC m=+0.181462837 container start 7fb940556e6a5283910765f733a0b971568b9dfe0432f83f5b3261f7ce793f03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 04:13:03 compute-0 podman[288590]: 2025-10-11 04:13:03.825282695 +0000 UTC m=+0.184768381 container attach 7fb940556e6a5283910765f733a0b971568b9dfe0432f83f5b3261f7ce793f03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sinoussi, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:13:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e360 do_prune osdmap full prune enabled
Oct 11 04:13:03 compute-0 ceph-mon[74273]: osdmap e360: 3 total, 3 up, 3 in
Oct 11 04:13:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e361 e361: 3 total, 3 up, 3 in
Oct 11 04:13:04 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e361: 3 total, 3 up, 3 in
Oct 11 04:13:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:13:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e361 do_prune osdmap full prune enabled
Oct 11 04:13:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e362 e362: 3 total, 3 up, 3 in
Oct 11 04:13:04 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e362: 3 total, 3 up, 3 in
Oct 11 04:13:04 compute-0 competent_sinoussi[288606]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:13:04 compute-0 competent_sinoussi[288606]: --> relative data size: 1.0
Oct 11 04:13:04 compute-0 competent_sinoussi[288606]: --> All data devices are unavailable
Oct 11 04:13:05 compute-0 ceph-mon[74273]: pgmap v1452: 305 pgs: 305 active+clean; 266 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 77 KiB/s wr, 49 op/s
Oct 11 04:13:05 compute-0 ceph-mon[74273]: osdmap e361: 3 total, 3 up, 3 in
Oct 11 04:13:05 compute-0 ceph-mon[74273]: osdmap e362: 3 total, 3 up, 3 in
Oct 11 04:13:05 compute-0 systemd[1]: libpod-7fb940556e6a5283910765f733a0b971568b9dfe0432f83f5b3261f7ce793f03.scope: Deactivated successfully.
Oct 11 04:13:05 compute-0 systemd[1]: libpod-7fb940556e6a5283910765f733a0b971568b9dfe0432f83f5b3261f7ce793f03.scope: Consumed 1.124s CPU time.
Oct 11 04:13:05 compute-0 podman[288590]: 2025-10-11 04:13:05.017205304 +0000 UTC m=+1.376691000 container died 7fb940556e6a5283910765f733a0b971568b9dfe0432f83f5b3261f7ce793f03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:13:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-d15caed4d9a9d706ea458d4ccd230fcb0e537acc605b67a9a9863b5cabe2a3a7-merged.mount: Deactivated successfully.
Oct 11 04:13:05 compute-0 podman[288590]: 2025-10-11 04:13:05.09060715 +0000 UTC m=+1.450092786 container remove 7fb940556e6a5283910765f733a0b971568b9dfe0432f83f5b3261f7ce793f03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 04:13:05 compute-0 systemd[1]: libpod-conmon-7fb940556e6a5283910765f733a0b971568b9dfe0432f83f5b3261f7ce793f03.scope: Deactivated successfully.
Oct 11 04:13:05 compute-0 nova_compute[259850]: 2025-10-11 04:13:05.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:05 compute-0 sudo[288483]: pam_unix(sudo:session): session closed for user root
Oct 11 04:13:05 compute-0 sudo[288647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:13:05 compute-0 sudo[288647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:13:05 compute-0 sudo[288647]: pam_unix(sudo:session): session closed for user root
Oct 11 04:13:05 compute-0 sudo[288672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:13:05 compute-0 sudo[288672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:13:05 compute-0 sudo[288672]: pam_unix(sudo:session): session closed for user root
Oct 11 04:13:05 compute-0 sudo[288697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:13:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:13:05 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4021946279' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:13:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:13:05 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4021946279' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:13:05 compute-0 sudo[288697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:13:05 compute-0 sudo[288697]: pam_unix(sudo:session): session closed for user root
Oct 11 04:13:05 compute-0 sudo[288722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 04:13:05 compute-0 sudo[288722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:13:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 266 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 14 KiB/s wr, 50 op/s
Oct 11 04:13:05 compute-0 podman[288787]: 2025-10-11 04:13:05.87324895 +0000 UTC m=+0.042982183 container create 26bbfa406b63a42a1579bdd11f5b0cd95a8e295ec5d50de996c8f03c0d68624e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chaplygin, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:13:05 compute-0 systemd[1]: Started libpod-conmon-26bbfa406b63a42a1579bdd11f5b0cd95a8e295ec5d50de996c8f03c0d68624e.scope.
Oct 11 04:13:05 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:13:05 compute-0 podman[288787]: 2025-10-11 04:13:05.851710638 +0000 UTC m=+0.021443911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:05 compute-0 podman[288787]: 2025-10-11 04:13:05.973948381 +0000 UTC m=+0.143681644 container init 26bbfa406b63a42a1579bdd11f5b0cd95a8e295ec5d50de996c8f03c0d68624e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chaplygin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:13:05 compute-0 podman[288787]: 2025-10-11 04:13:05.98586396 +0000 UTC m=+0.155597223 container start 26bbfa406b63a42a1579bdd11f5b0cd95a8e295ec5d50de996c8f03c0d68624e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:13:05 compute-0 podman[288787]: 2025-10-11 04:13:05.99009718 +0000 UTC m=+0.159830453 container attach 26bbfa406b63a42a1579bdd11f5b0cd95a8e295ec5d50de996c8f03c0d68624e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chaplygin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 04:13:05 compute-0 dreamy_chaplygin[288803]: 167 167
Oct 11 04:13:05 compute-0 systemd[1]: libpod-26bbfa406b63a42a1579bdd11f5b0cd95a8e295ec5d50de996c8f03c0d68624e.scope: Deactivated successfully.
Oct 11 04:13:05 compute-0 podman[288787]: 2025-10-11 04:13:05.993346552 +0000 UTC m=+0.163079815 container died 26bbfa406b63a42a1579bdd11f5b0cd95a8e295ec5d50de996c8f03c0d68624e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 04:13:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e362 do_prune osdmap full prune enabled
Oct 11 04:13:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e363 e363: 3 total, 3 up, 3 in
Oct 11 04:13:06 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e363: 3 total, 3 up, 3 in
Oct 11 04:13:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d00598ea3d9c64620a0abfe5003a98a6bb6da18880ab08246810b0638806734-merged.mount: Deactivated successfully.
Oct 11 04:13:06 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4021946279' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:13:06 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4021946279' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:13:06 compute-0 podman[288787]: 2025-10-11 04:13:06.050349802 +0000 UTC m=+0.220083065 container remove 26bbfa406b63a42a1579bdd11f5b0cd95a8e295ec5d50de996c8f03c0d68624e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chaplygin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:13:06 compute-0 systemd[1]: libpod-conmon-26bbfa406b63a42a1579bdd11f5b0cd95a8e295ec5d50de996c8f03c0d68624e.scope: Deactivated successfully.
Oct 11 04:13:06 compute-0 podman[288827]: 2025-10-11 04:13:06.330912413 +0000 UTC m=+0.072590453 container create e3d6028714605d5e6e11c78c2af58ff060ee2c0e902090e6b50f5bf183e9e046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 04:13:06 compute-0 systemd[1]: Started libpod-conmon-e3d6028714605d5e6e11c78c2af58ff060ee2c0e902090e6b50f5bf183e9e046.scope.
Oct 11 04:13:06 compute-0 podman[288827]: 2025-10-11 04:13:06.301652452 +0000 UTC m=+0.043330552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:06 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:13:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6633418d0873bfa5a932551b76db00296c73a0ca452049dc24ff96f0a9819643/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6633418d0873bfa5a932551b76db00296c73a0ca452049dc24ff96f0a9819643/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:13:06 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1042434284' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:13:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6633418d0873bfa5a932551b76db00296c73a0ca452049dc24ff96f0a9819643/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6633418d0873bfa5a932551b76db00296c73a0ca452049dc24ff96f0a9819643/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:13:06 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1042434284' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:13:06 compute-0 podman[288827]: 2025-10-11 04:13:06.437743619 +0000 UTC m=+0.179421659 container init e3d6028714605d5e6e11c78c2af58ff060ee2c0e902090e6b50f5bf183e9e046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 04:13:06 compute-0 podman[288827]: 2025-10-11 04:13:06.448045142 +0000 UTC m=+0.189723152 container start e3d6028714605d5e6e11c78c2af58ff060ee2c0e902090e6b50f5bf183e9e046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:13:06 compute-0 podman[288827]: 2025-10-11 04:13:06.452688584 +0000 UTC m=+0.194366624 container attach e3d6028714605d5e6e11c78c2af58ff060ee2c0e902090e6b50f5bf183e9e046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:13:07 compute-0 ceph-mon[74273]: pgmap v1455: 305 pgs: 305 active+clean; 266 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 14 KiB/s wr, 50 op/s
Oct 11 04:13:07 compute-0 ceph-mon[74273]: osdmap e363: 3 total, 3 up, 3 in
Oct 11 04:13:07 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1042434284' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:13:07 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1042434284' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]: {
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:     "0": [
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:         {
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "devices": [
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "/dev/loop3"
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             ],
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "lv_name": "ceph_lv0",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "lv_size": "21470642176",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "name": "ceph_lv0",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "tags": {
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.cluster_name": "ceph",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.crush_device_class": "",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.encrypted": "0",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.osd_id": "0",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.type": "block",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.vdo": "0"
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             },
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "type": "block",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "vg_name": "ceph_vg0"
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:         }
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:     ],
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:     "1": [
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:         {
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "devices": [
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "/dev/loop4"
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             ],
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "lv_name": "ceph_lv1",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "lv_size": "21470642176",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "name": "ceph_lv1",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "tags": {
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.cluster_name": "ceph",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.crush_device_class": "",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.encrypted": "0",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.osd_id": "1",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.type": "block",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.vdo": "0"
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             },
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "type": "block",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "vg_name": "ceph_vg1"
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:         }
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:     ],
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:     "2": [
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:         {
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "devices": [
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "/dev/loop5"
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             ],
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "lv_name": "ceph_lv2",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "lv_size": "21470642176",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "name": "ceph_lv2",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "tags": {
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.cluster_name": "ceph",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.crush_device_class": "",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.encrypted": "0",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.osd_id": "2",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.type": "block",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:                 "ceph.vdo": "0"
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             },
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "type": "block",
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:             "vg_name": "ceph_vg2"
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:         }
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]:     ]
Oct 11 04:13:07 compute-0 xenodochial_kilby[288843]: }
Oct 11 04:13:07 compute-0 systemd[1]: libpod-e3d6028714605d5e6e11c78c2af58ff060ee2c0e902090e6b50f5bf183e9e046.scope: Deactivated successfully.
Oct 11 04:13:07 compute-0 podman[288827]: 2025-10-11 04:13:07.210547349 +0000 UTC m=+0.952225389 container died e3d6028714605d5e6e11c78c2af58ff060ee2c0e902090e6b50f5bf183e9e046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 04:13:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-6633418d0873bfa5a932551b76db00296c73a0ca452049dc24ff96f0a9819643-merged.mount: Deactivated successfully.
Oct 11 04:13:07 compute-0 podman[288827]: 2025-10-11 04:13:07.280301581 +0000 UTC m=+1.021979631 container remove e3d6028714605d5e6e11c78c2af58ff060ee2c0e902090e6b50f5bf183e9e046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:13:07 compute-0 systemd[1]: libpod-conmon-e3d6028714605d5e6e11c78c2af58ff060ee2c0e902090e6b50f5bf183e9e046.scope: Deactivated successfully.
Oct 11 04:13:07 compute-0 sudo[288722]: pam_unix(sudo:session): session closed for user root
Oct 11 04:13:07 compute-0 sudo[288864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:13:07 compute-0 sudo[288864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:13:07 compute-0 sudo[288864]: pam_unix(sudo:session): session closed for user root
Oct 11 04:13:07 compute-0 sudo[288889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:13:07 compute-0 sudo[288889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:13:07 compute-0 sudo[288889]: pam_unix(sudo:session): session closed for user root
Oct 11 04:13:07 compute-0 sudo[288914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:13:07 compute-0 sudo[288914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:13:07 compute-0 sudo[288914]: pam_unix(sudo:session): session closed for user root
Oct 11 04:13:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 266 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 1.5 KiB/s wr, 12 op/s
Oct 11 04:13:07 compute-0 sudo[288939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 04:13:07 compute-0 sudo[288939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:13:07 compute-0 nova_compute[259850]: 2025-10-11 04:13:07.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e363 do_prune osdmap full prune enabled
Oct 11 04:13:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e364 e364: 3 total, 3 up, 3 in
Oct 11 04:13:08 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e364: 3 total, 3 up, 3 in
Oct 11 04:13:08 compute-0 podman[289006]: 2025-10-11 04:13:08.155399698 +0000 UTC m=+0.076054913 container create 45294909f25c02e28c15991bc52c26ebabdc5d055247cdffee635d16cbebf4d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:13:08 compute-0 systemd[1]: Started libpod-conmon-45294909f25c02e28c15991bc52c26ebabdc5d055247cdffee635d16cbebf4d6.scope.
Oct 11 04:13:08 compute-0 podman[289006]: 2025-10-11 04:13:08.129672307 +0000 UTC m=+0.050327612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:08 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:13:08 compute-0 podman[289006]: 2025-10-11 04:13:08.264017864 +0000 UTC m=+0.184673169 container init 45294909f25c02e28c15991bc52c26ebabdc5d055247cdffee635d16cbebf4d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_chaum, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:13:08 compute-0 podman[289006]: 2025-10-11 04:13:08.277976681 +0000 UTC m=+0.198631906 container start 45294909f25c02e28c15991bc52c26ebabdc5d055247cdffee635d16cbebf4d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:13:08 compute-0 podman[289006]: 2025-10-11 04:13:08.282653074 +0000 UTC m=+0.203308389 container attach 45294909f25c02e28c15991bc52c26ebabdc5d055247cdffee635d16cbebf4d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_chaum, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:13:08 compute-0 priceless_chaum[289023]: 167 167
Oct 11 04:13:08 compute-0 systemd[1]: libpod-45294909f25c02e28c15991bc52c26ebabdc5d055247cdffee635d16cbebf4d6.scope: Deactivated successfully.
Oct 11 04:13:08 compute-0 podman[289006]: 2025-10-11 04:13:08.286522434 +0000 UTC m=+0.207177659 container died 45294909f25c02e28c15991bc52c26ebabdc5d055247cdffee635d16cbebf4d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_chaum, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 04:13:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-05a4632f033ad5a2c7b16076e9cfa4f273eedec463776247cb86c8b93944a5ed-merged.mount: Deactivated successfully.
Oct 11 04:13:08 compute-0 podman[289006]: 2025-10-11 04:13:08.347511577 +0000 UTC m=+0.268166792 container remove 45294909f25c02e28c15991bc52c26ebabdc5d055247cdffee635d16cbebf4d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 04:13:08 compute-0 systemd[1]: libpod-conmon-45294909f25c02e28c15991bc52c26ebabdc5d055247cdffee635d16cbebf4d6.scope: Deactivated successfully.
Oct 11 04:13:08 compute-0 podman[289048]: 2025-10-11 04:13:08.586095656 +0000 UTC m=+0.058215535 container create e08334bc1b227cc56c601346d58be948ff6ed97a0f170f80068c965b4325f564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_spence, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:13:08 compute-0 systemd[1]: Started libpod-conmon-e08334bc1b227cc56c601346d58be948ff6ed97a0f170f80068c965b4325f564.scope.
Oct 11 04:13:08 compute-0 podman[289048]: 2025-10-11 04:13:08.558506942 +0000 UTC m=+0.030626861 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:08 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:13:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f8c4ad9bf96471a1bd5c2265cccc231057c16693ded8fcdd97274dfeafd6c21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f8c4ad9bf96471a1bd5c2265cccc231057c16693ded8fcdd97274dfeafd6c21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f8c4ad9bf96471a1bd5c2265cccc231057c16693ded8fcdd97274dfeafd6c21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f8c4ad9bf96471a1bd5c2265cccc231057c16693ded8fcdd97274dfeafd6c21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:08 compute-0 podman[289048]: 2025-10-11 04:13:08.699467318 +0000 UTC m=+0.171587177 container init e08334bc1b227cc56c601346d58be948ff6ed97a0f170f80068c965b4325f564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:13:08 compute-0 podman[289048]: 2025-10-11 04:13:08.710742808 +0000 UTC m=+0.182862677 container start e08334bc1b227cc56c601346d58be948ff6ed97a0f170f80068c965b4325f564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_spence, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:13:08 compute-0 podman[289048]: 2025-10-11 04:13:08.717621134 +0000 UTC m=+0.189740973 container attach e08334bc1b227cc56c601346d58be948ff6ed97a0f170f80068c965b4325f564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:13:09 compute-0 ceph-mon[74273]: pgmap v1457: 305 pgs: 305 active+clean; 266 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 1.5 KiB/s wr, 12 op/s
Oct 11 04:13:09 compute-0 ceph-mon[74273]: osdmap e364: 3 total, 3 up, 3 in
Oct 11 04:13:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 266 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 11 KiB/s wr, 150 op/s
Oct 11 04:13:09 compute-0 strange_spence[289066]: {
Oct 11 04:13:09 compute-0 strange_spence[289066]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 04:13:09 compute-0 strange_spence[289066]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:13:09 compute-0 strange_spence[289066]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:13:09 compute-0 strange_spence[289066]:         "osd_id": 1,
Oct 11 04:13:09 compute-0 strange_spence[289066]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:13:09 compute-0 strange_spence[289066]:         "type": "bluestore"
Oct 11 04:13:09 compute-0 strange_spence[289066]:     },
Oct 11 04:13:09 compute-0 strange_spence[289066]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 04:13:09 compute-0 strange_spence[289066]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:13:09 compute-0 strange_spence[289066]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:13:09 compute-0 strange_spence[289066]:         "osd_id": 2,
Oct 11 04:13:09 compute-0 strange_spence[289066]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:13:09 compute-0 strange_spence[289066]:         "type": "bluestore"
Oct 11 04:13:09 compute-0 strange_spence[289066]:     },
Oct 11 04:13:09 compute-0 strange_spence[289066]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 04:13:09 compute-0 strange_spence[289066]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:13:09 compute-0 strange_spence[289066]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:13:09 compute-0 strange_spence[289066]:         "osd_id": 0,
Oct 11 04:13:09 compute-0 strange_spence[289066]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:13:09 compute-0 strange_spence[289066]:         "type": "bluestore"
Oct 11 04:13:09 compute-0 strange_spence[289066]:     }
Oct 11 04:13:09 compute-0 strange_spence[289066]: }
Oct 11 04:13:09 compute-0 systemd[1]: libpod-e08334bc1b227cc56c601346d58be948ff6ed97a0f170f80068c965b4325f564.scope: Deactivated successfully.
Oct 11 04:13:09 compute-0 podman[289048]: 2025-10-11 04:13:09.908745149 +0000 UTC m=+1.380865088 container died e08334bc1b227cc56c601346d58be948ff6ed97a0f170f80068c965b4325f564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_spence, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:13:09 compute-0 systemd[1]: libpod-e08334bc1b227cc56c601346d58be948ff6ed97a0f170f80068c965b4325f564.scope: Consumed 1.196s CPU time.
Oct 11 04:13:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:13:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f8c4ad9bf96471a1bd5c2265cccc231057c16693ded8fcdd97274dfeafd6c21-merged.mount: Deactivated successfully.
Oct 11 04:13:10 compute-0 podman[289048]: 2025-10-11 04:13:10.002412001 +0000 UTC m=+1.474531880 container remove e08334bc1b227cc56c601346d58be948ff6ed97a0f170f80068c965b4325f564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_spence, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:13:10 compute-0 systemd[1]: libpod-conmon-e08334bc1b227cc56c601346d58be948ff6ed97a0f170f80068c965b4325f564.scope: Deactivated successfully.
Oct 11 04:13:10 compute-0 sudo[288939]: pam_unix(sudo:session): session closed for user root
Oct 11 04:13:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:13:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e364 do_prune osdmap full prune enabled
Oct 11 04:13:10 compute-0 podman[289107]: 2025-10-11 04:13:10.057580039 +0000 UTC m=+0.102575206 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid)
Oct 11 04:13:10 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:13:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:13:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e365 e365: 3 total, 3 up, 3 in
Oct 11 04:13:10 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e365: 3 total, 3 up, 3 in
Oct 11 04:13:10 compute-0 podman[289100]: 2025-10-11 04:13:10.06711209 +0000 UTC m=+0.111456779 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:13:10 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:13:10 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 0c410cf8-a612-4bfd-b441-db1ac557b649 does not exist
Oct 11 04:13:10 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 112ac23e-e70d-42fc-9b12-670b158971bd does not exist
Oct 11 04:13:10 compute-0 nova_compute[259850]: 2025-10-11 04:13:10.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:10 compute-0 sudo[289147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:13:10 compute-0 sudo[289147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:13:10 compute-0 sudo[289147]: pam_unix(sudo:session): session closed for user root
Oct 11 04:13:10 compute-0 sudo[289172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:13:10 compute-0 sudo[289172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:13:10 compute-0 sudo[289172]: pam_unix(sudo:session): session closed for user root
Oct 11 04:13:10 compute-0 ovn_controller[152025]: 2025-10-11T04:13:10Z|00154|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Oct 11 04:13:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:13:10 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3234066762' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:13:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:13:10 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3234066762' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:13:11 compute-0 ceph-mon[74273]: pgmap v1459: 305 pgs: 305 active+clean; 266 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 11 KiB/s wr, 150 op/s
Oct 11 04:13:11 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:13:11 compute-0 ceph-mon[74273]: osdmap e365: 3 total, 3 up, 3 in
Oct 11 04:13:11 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:13:11 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3234066762' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:13:11 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3234066762' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:13:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 266 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 10 KiB/s wr, 141 op/s
Oct 11 04:13:12 compute-0 nova_compute[259850]: 2025-10-11 04:13:12.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:13 compute-0 ceph-mon[74273]: pgmap v1461: 305 pgs: 305 active+clean; 266 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 10 KiB/s wr, 141 op/s
Oct 11 04:13:13 compute-0 nova_compute[259850]: 2025-10-11 04:13:13.344 2 DEBUG oslo_concurrency.lockutils [None req-680293e7-d5a2-4a13-82c0-0b6946e46339 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Acquiring lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:13:13 compute-0 nova_compute[259850]: 2025-10-11 04:13:13.345 2 DEBUG oslo_concurrency.lockutils [None req-680293e7-d5a2-4a13-82c0-0b6946e46339 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:13:13 compute-0 nova_compute[259850]: 2025-10-11 04:13:13.372 2 DEBUG nova.objects.instance [None req-680293e7-d5a2-4a13-82c0-0b6946e46339 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lazy-loading 'flavor' on Instance uuid 673c41a0-97c6-4a8e-8f65-919ee9c38c79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:13:13 compute-0 nova_compute[259850]: 2025-10-11 04:13:13.416 2 DEBUG oslo_concurrency.lockutils [None req-680293e7-d5a2-4a13-82c0-0b6946e46339 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:13:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:13:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1249153231' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:13:13 compute-0 nova_compute[259850]: 2025-10-11 04:13:13.635 2 DEBUG oslo_concurrency.lockutils [None req-680293e7-d5a2-4a13-82c0-0b6946e46339 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Acquiring lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:13:13 compute-0 nova_compute[259850]: 2025-10-11 04:13:13.636 2 DEBUG oslo_concurrency.lockutils [None req-680293e7-d5a2-4a13-82c0-0b6946e46339 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:13:13 compute-0 nova_compute[259850]: 2025-10-11 04:13:13.636 2 INFO nova.compute.manager [None req-680293e7-d5a2-4a13-82c0-0b6946e46339 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Attaching volume 5db26d05-d683-46d1-8641-6d7904a53a59 to /dev/vdb
Oct 11 04:13:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 266 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 115 KiB/s rd, 11 KiB/s wr, 155 op/s
Oct 11 04:13:13 compute-0 nova_compute[259850]: 2025-10-11 04:13:13.805 2 DEBUG os_brick.utils [None req-680293e7-d5a2-4a13-82c0-0b6946e46339 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:13:13 compute-0 nova_compute[259850]: 2025-10-11 04:13:13.807 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:13:13 compute-0 nova_compute[259850]: 2025-10-11 04:13:13.819 675 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:13:13 compute-0 nova_compute[259850]: 2025-10-11 04:13:13.819 675 DEBUG oslo.privsep.daemon [-] privsep: reply[43d93dd2-06ea-41d8-8f82-fa0c095148ca]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:13:13 compute-0 nova_compute[259850]: 2025-10-11 04:13:13.820 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:13:13 compute-0 nova_compute[259850]: 2025-10-11 04:13:13.830 675 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:13:13 compute-0 nova_compute[259850]: 2025-10-11 04:13:13.830 675 DEBUG oslo.privsep.daemon [-] privsep: reply[0317431e-eb7e-45db-b94c-f946caae3002]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e727c2bd432c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:13:13 compute-0 nova_compute[259850]: 2025-10-11 04:13:13.831 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:13:13 compute-0 nova_compute[259850]: 2025-10-11 04:13:13.841 675 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:13:13 compute-0 nova_compute[259850]: 2025-10-11 04:13:13.842 675 DEBUG oslo.privsep.daemon [-] privsep: reply[c663375b-dc8e-4add-a8ef-e21ae132d00d]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:13:13 compute-0 nova_compute[259850]: 2025-10-11 04:13:13.843 675 DEBUG oslo.privsep.daemon [-] privsep: reply[f311669c-b608-40d1-a01b-44866be087f5]: (4, 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:13:13 compute-0 nova_compute[259850]: 2025-10-11 04:13:13.843 2 DEBUG oslo_concurrency.processutils [None req-680293e7-d5a2-4a13-82c0-0b6946e46339 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:13:13 compute-0 nova_compute[259850]: 2025-10-11 04:13:13.869 2 DEBUG oslo_concurrency.processutils [None req-680293e7-d5a2-4a13-82c0-0b6946e46339 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:13:13 compute-0 nova_compute[259850]: 2025-10-11 04:13:13.871 2 DEBUG os_brick.initiator.connectors.lightos [None req-680293e7-d5a2-4a13-82c0-0b6946e46339 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:13:13 compute-0 nova_compute[259850]: 2025-10-11 04:13:13.871 2 DEBUG os_brick.initiator.connectors.lightos [None req-680293e7-d5a2-4a13-82c0-0b6946e46339 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:13:13 compute-0 nova_compute[259850]: 2025-10-11 04:13:13.872 2 DEBUG os_brick.initiator.connectors.lightos [None req-680293e7-d5a2-4a13-82c0-0b6946e46339 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:13:13 compute-0 nova_compute[259850]: 2025-10-11 04:13:13.872 2 DEBUG os_brick.utils [None req-680293e7-d5a2-4a13-82c0-0b6946e46339 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] <== get_connector_properties: return (65ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e727c2bd432c', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:13:13 compute-0 nova_compute[259850]: 2025-10-11 04:13:13.873 2 DEBUG nova.virt.block_device [None req-680293e7-d5a2-4a13-82c0-0b6946e46339 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Updating existing volume attachment record: d8a8f091-a7dc-4847-93ef-6ca24e66fc59 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:13:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e365 do_prune osdmap full prune enabled
Oct 11 04:13:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e366 e366: 3 total, 3 up, 3 in
Oct 11 04:13:14 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e366: 3 total, 3 up, 3 in
Oct 11 04:13:14 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1249153231' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:13:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:13:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2491852078' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:13:14 compute-0 nova_compute[259850]: 2025-10-11 04:13:14.788 2 DEBUG nova.objects.instance [None req-680293e7-d5a2-4a13-82c0-0b6946e46339 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lazy-loading 'flavor' on Instance uuid 673c41a0-97c6-4a8e-8f65-919ee9c38c79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:13:14 compute-0 nova_compute[259850]: 2025-10-11 04:13:14.810 2 DEBUG nova.virt.libvirt.driver [None req-680293e7-d5a2-4a13-82c0-0b6946e46339 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Attempting to attach volume 5db26d05-d683-46d1-8641-6d7904a53a59 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 11 04:13:14 compute-0 nova_compute[259850]: 2025-10-11 04:13:14.815 2 DEBUG nova.virt.libvirt.guest [None req-680293e7-d5a2-4a13-82c0-0b6946e46339 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] attach device xml: <disk type="network" device="disk">
Oct 11 04:13:14 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:13:14 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-5db26d05-d683-46d1-8641-6d7904a53a59">
Oct 11 04:13:14 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:13:14 compute-0 nova_compute[259850]:   </source>
Oct 11 04:13:14 compute-0 nova_compute[259850]:   <auth username="openstack">
Oct 11 04:13:14 compute-0 nova_compute[259850]:     <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:13:14 compute-0 nova_compute[259850]:   </auth>
Oct 11 04:13:14 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:13:14 compute-0 nova_compute[259850]:   <serial>5db26d05-d683-46d1-8641-6d7904a53a59</serial>
Oct 11 04:13:14 compute-0 nova_compute[259850]: </disk>
Oct 11 04:13:14 compute-0 nova_compute[259850]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 11 04:13:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:13:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e366 do_prune osdmap full prune enabled
Oct 11 04:13:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e367 e367: 3 total, 3 up, 3 in
Oct 11 04:13:14 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e367: 3 total, 3 up, 3 in
Oct 11 04:13:14 compute-0 nova_compute[259850]: 2025-10-11 04:13:14.991 2 DEBUG nova.virt.libvirt.driver [None req-680293e7-d5a2-4a13-82c0-0b6946e46339 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:13:14 compute-0 nova_compute[259850]: 2025-10-11 04:13:14.992 2 DEBUG nova.virt.libvirt.driver [None req-680293e7-d5a2-4a13-82c0-0b6946e46339 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:13:14 compute-0 nova_compute[259850]: 2025-10-11 04:13:14.993 2 DEBUG nova.virt.libvirt.driver [None req-680293e7-d5a2-4a13-82c0-0b6946e46339 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:13:14 compute-0 nova_compute[259850]: 2025-10-11 04:13:14.993 2 DEBUG nova.virt.libvirt.driver [None req-680293e7-d5a2-4a13-82c0-0b6946e46339 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] No VIF found with MAC fa:16:3e:07:8a:69, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:13:15 compute-0 ceph-mon[74273]: pgmap v1462: 305 pgs: 305 active+clean; 266 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 115 KiB/s rd, 11 KiB/s wr, 155 op/s
Oct 11 04:13:15 compute-0 ceph-mon[74273]: osdmap e366: 3 total, 3 up, 3 in
Oct 11 04:13:15 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2491852078' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:13:15 compute-0 ceph-mon[74273]: osdmap e367: 3 total, 3 up, 3 in
Oct 11 04:13:15 compute-0 nova_compute[259850]: 2025-10-11 04:13:15.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:15 compute-0 nova_compute[259850]: 2025-10-11 04:13:15.301 2 DEBUG oslo_concurrency.lockutils [None req-680293e7-d5a2-4a13-82c0-0b6946e46339 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:13:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 266 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.5 KiB/s wr, 56 op/s
Oct 11 04:13:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:13:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1340327760' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:13:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:13:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1340327760' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:13:17 compute-0 ceph-mon[74273]: pgmap v1465: 305 pgs: 305 active+clean; 266 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.5 KiB/s wr, 56 op/s
Oct 11 04:13:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1340327760' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:13:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1340327760' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:13:17 compute-0 nova_compute[259850]: 2025-10-11 04:13:17.441 2 DEBUG oslo_concurrency.lockutils [None req-8d02225e-c813-416d-9865-0e6fc92c80fd ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Acquiring lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:13:17 compute-0 nova_compute[259850]: 2025-10-11 04:13:17.441 2 DEBUG oslo_concurrency.lockutils [None req-8d02225e-c813-416d-9865-0e6fc92c80fd ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:13:17 compute-0 nova_compute[259850]: 2025-10-11 04:13:17.456 2 INFO nova.compute.manager [None req-8d02225e-c813-416d-9865-0e6fc92c80fd ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Detaching volume 5db26d05-d683-46d1-8641-6d7904a53a59
Oct 11 04:13:17 compute-0 nova_compute[259850]: 2025-10-11 04:13:17.578 2 INFO nova.virt.block_device [None req-8d02225e-c813-416d-9865-0e6fc92c80fd ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Attempting to driver detach volume 5db26d05-d683-46d1-8641-6d7904a53a59 from mountpoint /dev/vdb
Oct 11 04:13:17 compute-0 nova_compute[259850]: 2025-10-11 04:13:17.586 2 DEBUG nova.virt.libvirt.driver [None req-8d02225e-c813-416d-9865-0e6fc92c80fd ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Attempting to detach device vdb from instance 673c41a0-97c6-4a8e-8f65-919ee9c38c79 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 11 04:13:17 compute-0 nova_compute[259850]: 2025-10-11 04:13:17.587 2 DEBUG nova.virt.libvirt.guest [None req-8d02225e-c813-416d-9865-0e6fc92c80fd ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 04:13:17 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:13:17 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-5db26d05-d683-46d1-8641-6d7904a53a59">
Oct 11 04:13:17 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:13:17 compute-0 nova_compute[259850]:   </source>
Oct 11 04:13:17 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:13:17 compute-0 nova_compute[259850]:   <serial>5db26d05-d683-46d1-8641-6d7904a53a59</serial>
Oct 11 04:13:17 compute-0 nova_compute[259850]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 04:13:17 compute-0 nova_compute[259850]: </disk>
Oct 11 04:13:17 compute-0 nova_compute[259850]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:13:17 compute-0 nova_compute[259850]: 2025-10-11 04:13:17.596 2 INFO nova.virt.libvirt.driver [None req-8d02225e-c813-416d-9865-0e6fc92c80fd ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Successfully detached device vdb from instance 673c41a0-97c6-4a8e-8f65-919ee9c38c79 from the persistent domain config.
Oct 11 04:13:17 compute-0 nova_compute[259850]: 2025-10-11 04:13:17.596 2 DEBUG nova.virt.libvirt.driver [None req-8d02225e-c813-416d-9865-0e6fc92c80fd ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 673c41a0-97c6-4a8e-8f65-919ee9c38c79 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 11 04:13:17 compute-0 nova_compute[259850]: 2025-10-11 04:13:17.596 2 DEBUG nova.virt.libvirt.guest [None req-8d02225e-c813-416d-9865-0e6fc92c80fd ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 04:13:17 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:13:17 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-5db26d05-d683-46d1-8641-6d7904a53a59">
Oct 11 04:13:17 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:13:17 compute-0 nova_compute[259850]:   </source>
Oct 11 04:13:17 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:13:17 compute-0 nova_compute[259850]:   <serial>5db26d05-d683-46d1-8641-6d7904a53a59</serial>
Oct 11 04:13:17 compute-0 nova_compute[259850]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 04:13:17 compute-0 nova_compute[259850]: </disk>
Oct 11 04:13:17 compute-0 nova_compute[259850]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:13:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 266 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 2.8 KiB/s wr, 45 op/s
Oct 11 04:13:17 compute-0 nova_compute[259850]: 2025-10-11 04:13:17.714 2 DEBUG nova.virt.libvirt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Received event <DeviceRemovedEvent: 1760155997.713686, 673c41a0-97c6-4a8e-8f65-919ee9c38c79 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 11 04:13:17 compute-0 nova_compute[259850]: 2025-10-11 04:13:17.717 2 DEBUG nova.virt.libvirt.driver [None req-8d02225e-c813-416d-9865-0e6fc92c80fd ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 673c41a0-97c6-4a8e-8f65-919ee9c38c79 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 11 04:13:17 compute-0 nova_compute[259850]: 2025-10-11 04:13:17.719 2 INFO nova.virt.libvirt.driver [None req-8d02225e-c813-416d-9865-0e6fc92c80fd ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Successfully detached device vdb from instance 673c41a0-97c6-4a8e-8f65-919ee9c38c79 from the live domain config.
Oct 11 04:13:17 compute-0 nova_compute[259850]: 2025-10-11 04:13:17.864 2 DEBUG nova.objects.instance [None req-8d02225e-c813-416d-9865-0e6fc92c80fd ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lazy-loading 'flavor' on Instance uuid 673c41a0-97c6-4a8e-8f65-919ee9c38c79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:13:17 compute-0 nova_compute[259850]: 2025-10-11 04:13:17.905 2 DEBUG oslo_concurrency.lockutils [None req-8d02225e-c813-416d-9865-0e6fc92c80fd ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.464s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:13:17 compute-0 nova_compute[259850]: 2025-10-11 04:13:17.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:13:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3727153839' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:13:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e367 do_prune osdmap full prune enabled
Oct 11 04:13:19 compute-0 ceph-mon[74273]: pgmap v1466: 305 pgs: 305 active+clean; 266 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 2.8 KiB/s wr, 45 op/s
Oct 11 04:13:19 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3727153839' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:13:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e368 e368: 3 total, 3 up, 3 in
Oct 11 04:13:19 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e368: 3 total, 3 up, 3 in
Oct 11 04:13:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 268 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 840 KiB/s rd, 348 KiB/s wr, 120 op/s
Oct 11 04:13:19 compute-0 nova_compute[259850]: 2025-10-11 04:13:19.685 2 DEBUG nova.compute.manager [req-4b9a5f15-07f0-4769-a3f2-ee2ac439863f req-9a3aec27-695d-4743-aae8-f7fdbca1b0fc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Received event network-changed-f42f98ab-28b0-4e9f-897a-e8be0d64dc33 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:13:19 compute-0 nova_compute[259850]: 2025-10-11 04:13:19.686 2 DEBUG nova.compute.manager [req-4b9a5f15-07f0-4769-a3f2-ee2ac439863f req-9a3aec27-695d-4743-aae8-f7fdbca1b0fc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Refreshing instance network info cache due to event network-changed-f42f98ab-28b0-4e9f-897a-e8be0d64dc33. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:13:19 compute-0 nova_compute[259850]: 2025-10-11 04:13:19.686 2 DEBUG oslo_concurrency.lockutils [req-4b9a5f15-07f0-4769-a3f2-ee2ac439863f req-9a3aec27-695d-4743-aae8-f7fdbca1b0fc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-673c41a0-97c6-4a8e-8f65-919ee9c38c79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:13:19 compute-0 nova_compute[259850]: 2025-10-11 04:13:19.687 2 DEBUG oslo_concurrency.lockutils [req-4b9a5f15-07f0-4769-a3f2-ee2ac439863f req-9a3aec27-695d-4743-aae8-f7fdbca1b0fc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-673c41a0-97c6-4a8e-8f65-919ee9c38c79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:13:19 compute-0 nova_compute[259850]: 2025-10-11 04:13:19.687 2 DEBUG nova.network.neutron [req-4b9a5f15-07f0-4769-a3f2-ee2ac439863f req-9a3aec27-695d-4743-aae8-f7fdbca1b0fc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Refreshing network info cache for port f42f98ab-28b0-4e9f-897a-e8be0d64dc33 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:13:19 compute-0 nova_compute[259850]: 2025-10-11 04:13:19.746 2 DEBUG oslo_concurrency.lockutils [None req-0c4fedbf-649c-48b0-8ccf-ae37e484c38e ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Acquiring lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:13:19 compute-0 nova_compute[259850]: 2025-10-11 04:13:19.746 2 DEBUG oslo_concurrency.lockutils [None req-0c4fedbf-649c-48b0-8ccf-ae37e484c38e ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:13:19 compute-0 nova_compute[259850]: 2025-10-11 04:13:19.747 2 DEBUG oslo_concurrency.lockutils [None req-0c4fedbf-649c-48b0-8ccf-ae37e484c38e ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Acquiring lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:13:19 compute-0 nova_compute[259850]: 2025-10-11 04:13:19.747 2 DEBUG oslo_concurrency.lockutils [None req-0c4fedbf-649c-48b0-8ccf-ae37e484c38e ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:13:19 compute-0 nova_compute[259850]: 2025-10-11 04:13:19.747 2 DEBUG oslo_concurrency.lockutils [None req-0c4fedbf-649c-48b0-8ccf-ae37e484c38e ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:13:19 compute-0 nova_compute[259850]: 2025-10-11 04:13:19.749 2 INFO nova.compute.manager [None req-0c4fedbf-649c-48b0-8ccf-ae37e484c38e ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Terminating instance
Oct 11 04:13:19 compute-0 nova_compute[259850]: 2025-10-11 04:13:19.751 2 DEBUG nova.compute.manager [None req-0c4fedbf-649c-48b0-8ccf-ae37e484c38e ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:13:19 compute-0 kernel: tapf42f98ab-28 (unregistering): left promiscuous mode
Oct 11 04:13:19 compute-0 NetworkManager[44920]: <info>  [1760155999.8094] device (tapf42f98ab-28): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:13:19 compute-0 ovn_controller[152025]: 2025-10-11T04:13:19Z|00155|binding|INFO|Releasing lport f42f98ab-28b0-4e9f-897a-e8be0d64dc33 from this chassis (sb_readonly=0)
Oct 11 04:13:19 compute-0 nova_compute[259850]: 2025-10-11 04:13:19.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:19 compute-0 ovn_controller[152025]: 2025-10-11T04:13:19Z|00156|binding|INFO|Setting lport f42f98ab-28b0-4e9f-897a-e8be0d64dc33 down in Southbound
Oct 11 04:13:19 compute-0 ovn_controller[152025]: 2025-10-11T04:13:19Z|00157|binding|INFO|Removing iface tapf42f98ab-28 ovn-installed in OVS
Oct 11 04:13:19 compute-0 nova_compute[259850]: 2025-10-11 04:13:19.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:19.834 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:8a:69 10.100.0.9'], port_security=['fa:16:3e:07:8a:69 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '673c41a0-97c6-4a8e-8f65-919ee9c38c79', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69760b74-d690-4b6a-a64f-35ceb4582944', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ff14cec1ef04fa2a41f6d226bc99518', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0b1fcf6f-b50b-44a2-814d-4972eb6e538b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24983540-db74-4f67-b9f8-811887ee0a83, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=f42f98ab-28b0-4e9f-897a-e8be0d64dc33) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:13:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:19.836 161902 INFO neutron.agent.ovn.metadata.agent [-] Port f42f98ab-28b0-4e9f-897a-e8be0d64dc33 in datapath 69760b74-d690-4b6a-a64f-35ceb4582944 unbound from our chassis
Oct 11 04:13:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:19.838 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 69760b74-d690-4b6a-a64f-35ceb4582944
Oct 11 04:13:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:19.863 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[fb7b5c5e-25b9-472f-bc13-2c02973988f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:13:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:19.895 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[10f40221-3293-4f75-b221-c568b82b0280]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:13:19 compute-0 nova_compute[259850]: 2025-10-11 04:13:19.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:19.900 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[a7889906-6744-42a0-8f2e-c0657554e352]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:13:19 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Oct 11 04:13:19 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Consumed 15.700s CPU time.
Oct 11 04:13:19 compute-0 systemd-machined[214869]: Machine qemu-15-instance-0000000f terminated.
Oct 11 04:13:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:19.934 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[ff53e0af-e481-4df4-8509-58486a19fc1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:13:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:13:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:19.963 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[d29b5d98-3b03-421e-9b85-2f8245a4ce3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69760b74-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:85:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429926, 'reachable_time': 21479, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289240, 'error': None, 'target': 'ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:13:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:19.986 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[9d4dd682-8bd7-4f5c-a70e-9b815d6a6338]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap69760b74-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 429939, 'tstamp': 429939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289243, 'error': None, 'target': 'ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap69760b74-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 429942, 'tstamp': 429942}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289243, 'error': None, 'target': 'ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:13:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:19.989 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69760b74-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:13:19 compute-0 nova_compute[259850]: 2025-10-11 04:13:19.991 2 INFO nova.virt.libvirt.driver [-] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Instance destroyed successfully.
Oct 11 04:13:19 compute-0 nova_compute[259850]: 2025-10-11 04:13:19.991 2 DEBUG nova.objects.instance [None req-0c4fedbf-649c-48b0-8ccf-ae37e484c38e ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lazy-loading 'resources' on Instance uuid 673c41a0-97c6-4a8e-8f65-919ee9c38c79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:13:19 compute-0 nova_compute[259850]: 2025-10-11 04:13:19.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:19 compute-0 nova_compute[259850]: 2025-10-11 04:13:19.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:19.997 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69760b74-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:13:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:19.998 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:13:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:19.999 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap69760b74-d0, col_values=(('external_ids', {'iface-id': '1db9314a-9172-441f-a3d7-84ca9c891141'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:13:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:19.999 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:13:20 compute-0 nova_compute[259850]: 2025-10-11 04:13:20.006 2 DEBUG nova.virt.libvirt.vif [None req-0c4fedbf-649c-48b0-8ccf-ae37e484c38e ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:12:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-1779017139',display_name='tempest-TestStampPattern-server-1779017139',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1779017139',id=15,image_ref='f94d6b77-1844-4032-bf9d-644d95696add',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ/2RgkZKOpdewTMCUJ4lxqFHaHkNK2WJjvE3lEkA/Q9gA0jTZZ1SFFzP17eZUjXJUtu1TcmHAM4LPuQ7VsHIzZ1pEO3yPeDhFw+/dw5yXiw9mrTEISzDMcxVMFVOX8L1w==',key_name='tempest-TestStampPattern-1075063988',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:12:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7ff14cec1ef04fa2a41f6d226bc99518',ramdisk_id='',reservation_id='r-mwbhdwjj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='6f74cee5-3bb9-44f0-9a21-d6e5c1475419',image_min_disk='1',image_min_ram='0',image_owner_id='7ff14cec1ef04fa2a41f6d226bc99518',image_owner_project_name='tempest-TestStampPattern-137571922',image_owner_user_name='tempest-TestStampPattern-137571922-project-member',image_user_id='ba6ea3b0ff9d4fee8a80f308d0493954',owner_project_name='tempest-TestStampPattern-137571922',owner_user_name='tempest-TestStampPattern-137571922-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:12:37Z,user_data=None,user_id='ba6ea3b0ff9d4fee8a80f308d0493954',uuid=673c41a0-97c6-4a8e-8f65-919ee9c38c79,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='ac
tive') vif={"id": "f42f98ab-28b0-4e9f-897a-e8be0d64dc33", "address": "fa:16:3e:07:8a:69", "network": {"id": "69760b74-d690-4b6a-a64f-35ceb4582944", "bridge": "br-int", "label": "tempest-TestStampPattern-334026573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff14cec1ef04fa2a41f6d226bc99518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf42f98ab-28", "ovs_interfaceid": "f42f98ab-28b0-4e9f-897a-e8be0d64dc33", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:13:20 compute-0 nova_compute[259850]: 2025-10-11 04:13:20.006 2 DEBUG nova.network.os_vif_util [None req-0c4fedbf-649c-48b0-8ccf-ae37e484c38e ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Converting VIF {"id": "f42f98ab-28b0-4e9f-897a-e8be0d64dc33", "address": "fa:16:3e:07:8a:69", "network": {"id": "69760b74-d690-4b6a-a64f-35ceb4582944", "bridge": "br-int", "label": "tempest-TestStampPattern-334026573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff14cec1ef04fa2a41f6d226bc99518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf42f98ab-28", "ovs_interfaceid": "f42f98ab-28b0-4e9f-897a-e8be0d64dc33", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:13:20 compute-0 nova_compute[259850]: 2025-10-11 04:13:20.007 2 DEBUG nova.network.os_vif_util [None req-0c4fedbf-649c-48b0-8ccf-ae37e484c38e ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:07:8a:69,bridge_name='br-int',has_traffic_filtering=True,id=f42f98ab-28b0-4e9f-897a-e8be0d64dc33,network=Network(69760b74-d690-4b6a-a64f-35ceb4582944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf42f98ab-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:13:20 compute-0 nova_compute[259850]: 2025-10-11 04:13:20.007 2 DEBUG os_vif [None req-0c4fedbf-649c-48b0-8ccf-ae37e484c38e ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:8a:69,bridge_name='br-int',has_traffic_filtering=True,id=f42f98ab-28b0-4e9f-897a-e8be0d64dc33,network=Network(69760b74-d690-4b6a-a64f-35ceb4582944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf42f98ab-28') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:13:20 compute-0 nova_compute[259850]: 2025-10-11 04:13:20.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:20 compute-0 nova_compute[259850]: 2025-10-11 04:13:20.009 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf42f98ab-28, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:13:20 compute-0 nova_compute[259850]: 2025-10-11 04:13:20.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:20 compute-0 nova_compute[259850]: 2025-10-11 04:13:20.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:13:20 compute-0 nova_compute[259850]: 2025-10-11 04:13:20.015 2 INFO os_vif [None req-0c4fedbf-649c-48b0-8ccf-ae37e484c38e ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:8a:69,bridge_name='br-int',has_traffic_filtering=True,id=f42f98ab-28b0-4e9f-897a-e8be0d64dc33,network=Network(69760b74-d690-4b6a-a64f-35ceb4582944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf42f98ab-28')
Oct 11 04:13:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e368 do_prune osdmap full prune enabled
Oct 11 04:13:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e369 e369: 3 total, 3 up, 3 in
Oct 11 04:13:20 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e369: 3 total, 3 up, 3 in
Oct 11 04:13:20 compute-0 ceph-mon[74273]: osdmap e368: 3 total, 3 up, 3 in
Oct 11 04:13:20 compute-0 nova_compute[259850]: 2025-10-11 04:13:20.426 2 INFO nova.virt.libvirt.driver [None req-0c4fedbf-649c-48b0-8ccf-ae37e484c38e ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Deleting instance files /var/lib/nova/instances/673c41a0-97c6-4a8e-8f65-919ee9c38c79_del
Oct 11 04:13:20 compute-0 nova_compute[259850]: 2025-10-11 04:13:20.427 2 INFO nova.virt.libvirt.driver [None req-0c4fedbf-649c-48b0-8ccf-ae37e484c38e ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Deletion of /var/lib/nova/instances/673c41a0-97c6-4a8e-8f65-919ee9c38c79_del complete
Oct 11 04:13:20 compute-0 nova_compute[259850]: 2025-10-11 04:13:20.494 2 INFO nova.compute.manager [None req-0c4fedbf-649c-48b0-8ccf-ae37e484c38e ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Took 0.74 seconds to destroy the instance on the hypervisor.
Oct 11 04:13:20 compute-0 nova_compute[259850]: 2025-10-11 04:13:20.495 2 DEBUG oslo.service.loopingcall [None req-0c4fedbf-649c-48b0-8ccf-ae37e484c38e ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:13:20 compute-0 nova_compute[259850]: 2025-10-11 04:13:20.495 2 DEBUG nova.compute.manager [-] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:13:20 compute-0 nova_compute[259850]: 2025-10-11 04:13:20.496 2 DEBUG nova.network.neutron [-] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:13:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:13:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:13:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:13:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:13:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:13:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:13:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:13:20
Oct 11 04:13:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:13:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:13:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'default.rgw.meta', 'images', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'vms']
Oct 11 04:13:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:13:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:13:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:13:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:13:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:13:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:13:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:13:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:13:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:13:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:13:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:13:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e369 do_prune osdmap full prune enabled
Oct 11 04:13:21 compute-0 ceph-mon[74273]: pgmap v1468: 305 pgs: 305 active+clean; 268 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 840 KiB/s rd, 348 KiB/s wr, 120 op/s
Oct 11 04:13:21 compute-0 ceph-mon[74273]: osdmap e369: 3 total, 3 up, 3 in
Oct 11 04:13:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e370 e370: 3 total, 3 up, 3 in
Oct 11 04:13:21 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e370: 3 total, 3 up, 3 in
Oct 11 04:13:21 compute-0 nova_compute[259850]: 2025-10-11 04:13:21.533 2 DEBUG nova.network.neutron [req-4b9a5f15-07f0-4769-a3f2-ee2ac439863f req-9a3aec27-695d-4743-aae8-f7fdbca1b0fc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Updated VIF entry in instance network info cache for port f42f98ab-28b0-4e9f-897a-e8be0d64dc33. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:13:21 compute-0 nova_compute[259850]: 2025-10-11 04:13:21.534 2 DEBUG nova.network.neutron [req-4b9a5f15-07f0-4769-a3f2-ee2ac439863f req-9a3aec27-695d-4743-aae8-f7fdbca1b0fc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Updating instance_info_cache with network_info: [{"id": "f42f98ab-28b0-4e9f-897a-e8be0d64dc33", "address": "fa:16:3e:07:8a:69", "network": {"id": "69760b74-d690-4b6a-a64f-35ceb4582944", "bridge": "br-int", "label": "tempest-TestStampPattern-334026573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff14cec1ef04fa2a41f6d226bc99518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf42f98ab-28", "ovs_interfaceid": "f42f98ab-28b0-4e9f-897a-e8be0d64dc33", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:13:21 compute-0 nova_compute[259850]: 2025-10-11 04:13:21.562 2 DEBUG oslo_concurrency.lockutils [req-4b9a5f15-07f0-4769-a3f2-ee2ac439863f req-9a3aec27-695d-4743-aae8-f7fdbca1b0fc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-673c41a0-97c6-4a8e-8f65-919ee9c38c79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:13:21 compute-0 nova_compute[259850]: 2025-10-11 04:13:21.612 2 DEBUG nova.network.neutron [-] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:13:21 compute-0 nova_compute[259850]: 2025-10-11 04:13:21.628 2 INFO nova.compute.manager [-] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Took 1.13 seconds to deallocate network for instance.
Oct 11 04:13:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 268 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 840 KiB/s rd, 348 KiB/s wr, 120 op/s
Oct 11 04:13:21 compute-0 nova_compute[259850]: 2025-10-11 04:13:21.677 2 DEBUG oslo_concurrency.lockutils [None req-0c4fedbf-649c-48b0-8ccf-ae37e484c38e ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:13:21 compute-0 nova_compute[259850]: 2025-10-11 04:13:21.679 2 DEBUG oslo_concurrency.lockutils [None req-0c4fedbf-649c-48b0-8ccf-ae37e484c38e ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:13:21 compute-0 nova_compute[259850]: 2025-10-11 04:13:21.759 2 DEBUG oslo_concurrency.processutils [None req-0c4fedbf-649c-48b0-8ccf-ae37e484c38e ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:13:21 compute-0 nova_compute[259850]: 2025-10-11 04:13:21.804 2 DEBUG nova.compute.manager [req-0d63f03a-3353-4866-9f99-686732ad579a req-480c7931-ac9b-4263-b05e-747ee1a2cb90 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Received event network-vif-unplugged-f42f98ab-28b0-4e9f-897a-e8be0d64dc33 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:13:21 compute-0 nova_compute[259850]: 2025-10-11 04:13:21.805 2 DEBUG oslo_concurrency.lockutils [req-0d63f03a-3353-4866-9f99-686732ad579a req-480c7931-ac9b-4263-b05e-747ee1a2cb90 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:13:21 compute-0 nova_compute[259850]: 2025-10-11 04:13:21.806 2 DEBUG oslo_concurrency.lockutils [req-0d63f03a-3353-4866-9f99-686732ad579a req-480c7931-ac9b-4263-b05e-747ee1a2cb90 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:13:21 compute-0 nova_compute[259850]: 2025-10-11 04:13:21.807 2 DEBUG oslo_concurrency.lockutils [req-0d63f03a-3353-4866-9f99-686732ad579a req-480c7931-ac9b-4263-b05e-747ee1a2cb90 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:13:21 compute-0 nova_compute[259850]: 2025-10-11 04:13:21.807 2 DEBUG nova.compute.manager [req-0d63f03a-3353-4866-9f99-686732ad579a req-480c7931-ac9b-4263-b05e-747ee1a2cb90 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] No waiting events found dispatching network-vif-unplugged-f42f98ab-28b0-4e9f-897a-e8be0d64dc33 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:13:21 compute-0 nova_compute[259850]: 2025-10-11 04:13:21.808 2 WARNING nova.compute.manager [req-0d63f03a-3353-4866-9f99-686732ad579a req-480c7931-ac9b-4263-b05e-747ee1a2cb90 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Received unexpected event network-vif-unplugged-f42f98ab-28b0-4e9f-897a-e8be0d64dc33 for instance with vm_state deleted and task_state None.
Oct 11 04:13:21 compute-0 nova_compute[259850]: 2025-10-11 04:13:21.808 2 DEBUG nova.compute.manager [req-0d63f03a-3353-4866-9f99-686732ad579a req-480c7931-ac9b-4263-b05e-747ee1a2cb90 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Received event network-vif-plugged-f42f98ab-28b0-4e9f-897a-e8be0d64dc33 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:13:21 compute-0 nova_compute[259850]: 2025-10-11 04:13:21.809 2 DEBUG oslo_concurrency.lockutils [req-0d63f03a-3353-4866-9f99-686732ad579a req-480c7931-ac9b-4263-b05e-747ee1a2cb90 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:13:21 compute-0 nova_compute[259850]: 2025-10-11 04:13:21.810 2 DEBUG oslo_concurrency.lockutils [req-0d63f03a-3353-4866-9f99-686732ad579a req-480c7931-ac9b-4263-b05e-747ee1a2cb90 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:13:21 compute-0 nova_compute[259850]: 2025-10-11 04:13:21.810 2 DEBUG oslo_concurrency.lockutils [req-0d63f03a-3353-4866-9f99-686732ad579a req-480c7931-ac9b-4263-b05e-747ee1a2cb90 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:13:21 compute-0 nova_compute[259850]: 2025-10-11 04:13:21.811 2 DEBUG nova.compute.manager [req-0d63f03a-3353-4866-9f99-686732ad579a req-480c7931-ac9b-4263-b05e-747ee1a2cb90 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] No waiting events found dispatching network-vif-plugged-f42f98ab-28b0-4e9f-897a-e8be0d64dc33 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:13:21 compute-0 nova_compute[259850]: 2025-10-11 04:13:21.811 2 WARNING nova.compute.manager [req-0d63f03a-3353-4866-9f99-686732ad579a req-480c7931-ac9b-4263-b05e-747ee1a2cb90 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Received unexpected event network-vif-plugged-f42f98ab-28b0-4e9f-897a-e8be0d64dc33 for instance with vm_state deleted and task_state None.
Oct 11 04:13:21 compute-0 nova_compute[259850]: 2025-10-11 04:13:21.812 2 DEBUG nova.compute.manager [req-0d63f03a-3353-4866-9f99-686732ad579a req-480c7931-ac9b-4263-b05e-747ee1a2cb90 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Received event network-vif-deleted-f42f98ab-28b0-4e9f-897a-e8be0d64dc33 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:13:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:13:21 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2790236095' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:13:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:13:21 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2790236095' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:13:22 compute-0 nova_compute[259850]: 2025-10-11 04:13:22.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:13:22 compute-0 nova_compute[259850]: 2025-10-11 04:13:22.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:13:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e370 do_prune osdmap full prune enabled
Oct 11 04:13:22 compute-0 ceph-mon[74273]: osdmap e370: 3 total, 3 up, 3 in
Oct 11 04:13:22 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2790236095' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:13:22 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2790236095' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:13:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e371 e371: 3 total, 3 up, 3 in
Oct 11 04:13:22 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e371: 3 total, 3 up, 3 in
Oct 11 04:13:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:13:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1230861983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:13:22 compute-0 nova_compute[259850]: 2025-10-11 04:13:22.230 2 DEBUG oslo_concurrency.processutils [None req-0c4fedbf-649c-48b0-8ccf-ae37e484c38e ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:13:22 compute-0 nova_compute[259850]: 2025-10-11 04:13:22.239 2 DEBUG nova.compute.provider_tree [None req-0c4fedbf-649c-48b0-8ccf-ae37e484c38e ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:13:22 compute-0 nova_compute[259850]: 2025-10-11 04:13:22.258 2 DEBUG nova.scheduler.client.report [None req-0c4fedbf-649c-48b0-8ccf-ae37e484c38e ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:13:22 compute-0 nova_compute[259850]: 2025-10-11 04:13:22.284 2 DEBUG oslo_concurrency.lockutils [None req-0c4fedbf-649c-48b0-8ccf-ae37e484c38e ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:13:22 compute-0 nova_compute[259850]: 2025-10-11 04:13:22.326 2 INFO nova.scheduler.client.report [None req-0c4fedbf-649c-48b0-8ccf-ae37e484c38e ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Deleted allocations for instance 673c41a0-97c6-4a8e-8f65-919ee9c38c79
Oct 11 04:13:22 compute-0 nova_compute[259850]: 2025-10-11 04:13:22.412 2 DEBUG oslo_concurrency.lockutils [None req-0c4fedbf-649c-48b0-8ccf-ae37e484c38e ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "673c41a0-97c6-4a8e-8f65-919ee9c38c79" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:13:22 compute-0 podman[289295]: 2025-10-11 04:13:22.429489455 +0000 UTC m=+0.128918094 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:13:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:13:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/90284547' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:13:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:13:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/90284547' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:13:22 compute-0 nova_compute[259850]: 2025-10-11 04:13:22.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:22.962 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:13:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:22.962 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:13:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:22.963 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:13:23 compute-0 nova_compute[259850]: 2025-10-11 04:13:23.058 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:13:23 compute-0 nova_compute[259850]: 2025-10-11 04:13:23.059 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:13:23 compute-0 nova_compute[259850]: 2025-10-11 04:13:23.059 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:13:23 compute-0 ceph-mon[74273]: pgmap v1471: 305 pgs: 305 active+clean; 268 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 840 KiB/s rd, 348 KiB/s wr, 120 op/s
Oct 11 04:13:23 compute-0 ceph-mon[74273]: osdmap e371: 3 total, 3 up, 3 in
Oct 11 04:13:23 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1230861983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:13:23 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/90284547' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:13:23 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/90284547' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:13:23 compute-0 nova_compute[259850]: 2025-10-11 04:13:23.263 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "refresh_cache-6f74cee5-3bb9-44f0-9a21-d6e5c1475419" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:13:23 compute-0 nova_compute[259850]: 2025-10-11 04:13:23.263 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquired lock "refresh_cache-6f74cee5-3bb9-44f0-9a21-d6e5c1475419" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:13:23 compute-0 nova_compute[259850]: 2025-10-11 04:13:23.264 2 DEBUG nova.network.neutron [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 04:13:23 compute-0 nova_compute[259850]: 2025-10-11 04:13:23.264 2 DEBUG nova.objects.instance [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6f74cee5-3bb9-44f0-9a21-d6e5c1475419 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:13:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:13:23 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3139491880' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:13:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:13:23 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3139491880' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:13:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 250 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 663 KiB/s rd, 512 KiB/s wr, 244 op/s
Oct 11 04:13:24 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3139491880' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:13:24 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3139491880' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:13:24 compute-0 nova_compute[259850]: 2025-10-11 04:13:24.612 2 DEBUG nova.network.neutron [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Updating instance_info_cache with network_info: [{"id": "46432b1a-fa02-4a02-9c8f-d607c2cd820c", "address": "fa:16:3e:2e:cd:1e", "network": {"id": "69760b74-d690-4b6a-a64f-35ceb4582944", "bridge": "br-int", "label": "tempest-TestStampPattern-334026573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff14cec1ef04fa2a41f6d226bc99518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46432b1a-fa", "ovs_interfaceid": "46432b1a-fa02-4a02-9c8f-d607c2cd820c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:13:24 compute-0 nova_compute[259850]: 2025-10-11 04:13:24.636 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Releasing lock "refresh_cache-6f74cee5-3bb9-44f0-9a21-d6e5c1475419" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:13:24 compute-0 nova_compute[259850]: 2025-10-11 04:13:24.636 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 04:13:24 compute-0 nova_compute[259850]: 2025-10-11 04:13:24.637 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:13:24 compute-0 nova_compute[259850]: 2025-10-11 04:13:24.638 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:13:24 compute-0 nova_compute[259850]: 2025-10-11 04:13:24.638 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:13:24 compute-0 nova_compute[259850]: 2025-10-11 04:13:24.661 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:13:24 compute-0 nova_compute[259850]: 2025-10-11 04:13:24.662 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:13:24 compute-0 nova_compute[259850]: 2025-10-11 04:13:24.663 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:13:24 compute-0 nova_compute[259850]: 2025-10-11 04:13:24.663 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:13:24 compute-0 nova_compute[259850]: 2025-10-11 04:13:24.664 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:13:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:13:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e371 do_prune osdmap full prune enabled
Oct 11 04:13:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e372 e372: 3 total, 3 up, 3 in
Oct 11 04:13:24 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e372: 3 total, 3 up, 3 in
Oct 11 04:13:25 compute-0 nova_compute[259850]: 2025-10-11 04:13:25.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:13:25 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2960747237' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:13:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:13:25 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3376882980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:13:25 compute-0 nova_compute[259850]: 2025-10-11 04:13:25.149 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:13:25 compute-0 ceph-mon[74273]: pgmap v1473: 305 pgs: 305 active+clean; 250 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 663 KiB/s rd, 512 KiB/s wr, 244 op/s
Oct 11 04:13:25 compute-0 ceph-mon[74273]: osdmap e372: 3 total, 3 up, 3 in
Oct 11 04:13:25 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2960747237' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:13:25 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3376882980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:13:25 compute-0 nova_compute[259850]: 2025-10-11 04:13:25.406 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:13:25 compute-0 nova_compute[259850]: 2025-10-11 04:13:25.407 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:13:25 compute-0 nova_compute[259850]: 2025-10-11 04:13:25.625 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:13:25 compute-0 nova_compute[259850]: 2025-10-11 04:13:25.626 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4218MB free_disk=59.94263458251953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:13:25 compute-0 nova_compute[259850]: 2025-10-11 04:13:25.627 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:13:25 compute-0 nova_compute[259850]: 2025-10-11 04:13:25.627 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:13:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 250 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 544 KiB/s rd, 420 KiB/s wr, 200 op/s
Oct 11 04:13:25 compute-0 nova_compute[259850]: 2025-10-11 04:13:25.706 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Instance 6f74cee5-3bb9-44f0-9a21-d6e5c1475419 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 04:13:25 compute-0 nova_compute[259850]: 2025-10-11 04:13:25.706 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:13:25 compute-0 nova_compute[259850]: 2025-10-11 04:13:25.706 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:13:25 compute-0 nova_compute[259850]: 2025-10-11 04:13:25.754 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:13:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e372 do_prune osdmap full prune enabled
Oct 11 04:13:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e373 e373: 3 total, 3 up, 3 in
Oct 11 04:13:26 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e373: 3 total, 3 up, 3 in
Oct 11 04:13:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:13:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1055399005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:13:26 compute-0 nova_compute[259850]: 2025-10-11 04:13:26.254 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:13:26 compute-0 nova_compute[259850]: 2025-10-11 04:13:26.259 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:13:26 compute-0 nova_compute[259850]: 2025-10-11 04:13:26.282 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:13:26 compute-0 nova_compute[259850]: 2025-10-11 04:13:26.312 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:13:26 compute-0 nova_compute[259850]: 2025-10-11 04:13:26.312 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:13:26 compute-0 podman[289366]: 2025-10-11 04:13:26.388544304 +0000 UTC m=+0.099218990 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 11 04:13:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e373 do_prune osdmap full prune enabled
Oct 11 04:13:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e374 e374: 3 total, 3 up, 3 in
Oct 11 04:13:27 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e374: 3 total, 3 up, 3 in
Oct 11 04:13:27 compute-0 ceph-mon[74273]: pgmap v1475: 305 pgs: 305 active+clean; 250 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 544 KiB/s rd, 420 KiB/s wr, 200 op/s
Oct 11 04:13:27 compute-0 ceph-mon[74273]: osdmap e373: 3 total, 3 up, 3 in
Oct 11 04:13:27 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1055399005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:13:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:13:27 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/655029162' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:13:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:13:27 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/655029162' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:13:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 250 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 547 KiB/s rd, 423 KiB/s wr, 201 op/s
Oct 11 04:13:27 compute-0 nova_compute[259850]: 2025-10-11 04:13:27.734 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:13:27 compute-0 nova_compute[259850]: 2025-10-11 04:13:27.735 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:13:27 compute-0 nova_compute[259850]: 2025-10-11 04:13:27.735 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:13:27 compute-0 nova_compute[259850]: 2025-10-11 04:13:27.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.063 2 DEBUG nova.compute.manager [req-8674c3f6-2827-46be-80c1-4222105f8438 req-8785f023-d67f-4510-bd8c-9369e4b389d3 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Received event network-changed-46432b1a-fa02-4a02-9c8f-d607c2cd820c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.063 2 DEBUG nova.compute.manager [req-8674c3f6-2827-46be-80c1-4222105f8438 req-8785f023-d67f-4510-bd8c-9369e4b389d3 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Refreshing instance network info cache due to event network-changed-46432b1a-fa02-4a02-9c8f-d607c2cd820c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.064 2 DEBUG oslo_concurrency.lockutils [req-8674c3f6-2827-46be-80c1-4222105f8438 req-8785f023-d67f-4510-bd8c-9369e4b389d3 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-6f74cee5-3bb9-44f0-9a21-d6e5c1475419" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.065 2 DEBUG oslo_concurrency.lockutils [req-8674c3f6-2827-46be-80c1-4222105f8438 req-8785f023-d67f-4510-bd8c-9369e4b389d3 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-6f74cee5-3bb9-44f0-9a21-d6e5c1475419" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.065 2 DEBUG nova.network.neutron [req-8674c3f6-2827-46be-80c1-4222105f8438 req-8785f023-d67f-4510-bd8c-9369e4b389d3 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Refreshing network info cache for port 46432b1a-fa02-4a02-9c8f-d607c2cd820c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.169 2 DEBUG oslo_concurrency.lockutils [None req-166813f2-7c46-47ed-b534-a8e970604205 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Acquiring lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.169 2 DEBUG oslo_concurrency.lockutils [None req-166813f2-7c46-47ed-b534-a8e970604205 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.170 2 DEBUG oslo_concurrency.lockutils [None req-166813f2-7c46-47ed-b534-a8e970604205 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Acquiring lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.171 2 DEBUG oslo_concurrency.lockutils [None req-166813f2-7c46-47ed-b534-a8e970604205 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.171 2 DEBUG oslo_concurrency.lockutils [None req-166813f2-7c46-47ed-b534-a8e970604205 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.173 2 INFO nova.compute.manager [None req-166813f2-7c46-47ed-b534-a8e970604205 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Terminating instance
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.176 2 DEBUG nova.compute.manager [None req-166813f2-7c46-47ed-b534-a8e970604205 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:13:28 compute-0 ceph-mon[74273]: osdmap e374: 3 total, 3 up, 3 in
Oct 11 04:13:28 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/655029162' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:13:28 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/655029162' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:13:28 compute-0 ceph-mon[74273]: pgmap v1478: 305 pgs: 305 active+clean; 250 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 547 KiB/s rd, 423 KiB/s wr, 201 op/s
Oct 11 04:13:28 compute-0 kernel: tap46432b1a-fa (unregistering): left promiscuous mode
Oct 11 04:13:28 compute-0 NetworkManager[44920]: <info>  [1760156008.2369] device (tap46432b1a-fa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:13:28 compute-0 ovn_controller[152025]: 2025-10-11T04:13:28Z|00158|binding|INFO|Releasing lport 46432b1a-fa02-4a02-9c8f-d607c2cd820c from this chassis (sb_readonly=0)
Oct 11 04:13:28 compute-0 ovn_controller[152025]: 2025-10-11T04:13:28Z|00159|binding|INFO|Setting lport 46432b1a-fa02-4a02-9c8f-d607c2cd820c down in Southbound
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:28 compute-0 ovn_controller[152025]: 2025-10-11T04:13:28Z|00160|binding|INFO|Removing iface tap46432b1a-fa ovn-installed in OVS
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:28.256 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:cd:1e 10.100.0.4'], port_security=['fa:16:3e:2e:cd:1e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '6f74cee5-3bb9-44f0-9a21-d6e5c1475419', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69760b74-d690-4b6a-a64f-35ceb4582944', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ff14cec1ef04fa2a41f6d226bc99518', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0b1fcf6f-b50b-44a2-814d-4972eb6e538b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24983540-db74-4f67-b9f8-811887ee0a83, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=46432b1a-fa02-4a02-9c8f-d607c2cd820c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:13:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:28.258 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 46432b1a-fa02-4a02-9c8f-d607c2cd820c in datapath 69760b74-d690-4b6a-a64f-35ceb4582944 unbound from our chassis
Oct 11 04:13:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:28.260 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 69760b74-d690-4b6a-a64f-35ceb4582944, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:13:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:28.261 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[7d9a9de0-9c44-440c-b047-7fb50006f614]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:13:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:28.262 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944 namespace which is not needed anymore
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:28 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Oct 11 04:13:28 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Consumed 16.928s CPU time.
Oct 11 04:13:28 compute-0 systemd-machined[214869]: Machine qemu-14-instance-0000000e terminated.
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.420 2 INFO nova.virt.libvirt.driver [-] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Instance destroyed successfully.
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.421 2 DEBUG nova.objects.instance [None req-166813f2-7c46-47ed-b534-a8e970604205 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lazy-loading 'resources' on Instance uuid 6f74cee5-3bb9-44f0-9a21-d6e5c1475419 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.437 2 DEBUG nova.virt.libvirt.vif [None req-166813f2-7c46-47ed-b534-a8e970604205 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:11:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-1339296303',display_name='tempest-TestStampPattern-server-1339296303',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1339296303',id=14,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ/2RgkZKOpdewTMCUJ4lxqFHaHkNK2WJjvE3lEkA/Q9gA0jTZZ1SFFzP17eZUjXJUtu1TcmHAM4LPuQ7VsHIzZ1pEO3yPeDhFw+/dw5yXiw9mrTEISzDMcxVMFVOX8L1w==',key_name='tempest-TestStampPattern-1075063988',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:11:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7ff14cec1ef04fa2a41f6d226bc99518',ramdisk_id='',reservation_id='r-ktl2buu1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestStampPattern-137571922',owner_user_name='tempest-TestStampPattern-137571922-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:12:26Z,user_data=None,user_id='ba6ea3b0ff9d4fee8a80f308d0493954',uuid=6f74cee5-3bb9-44f0-9a21-d6e5c1475419,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "46432b1a-fa02-4a02-9c8f-d607c2cd820c", "address": "fa:16:3e:2e:cd:1e", "network": {"id": "69760b74-d690-4b6a-a64f-35ceb4582944", "bridge": "br-int", "label": "tempest-TestStampPattern-334026573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff14cec1ef04fa2a41f6d226bc99518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46432b1a-fa", "ovs_interfaceid": "46432b1a-fa02-4a02-9c8f-d607c2cd820c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.440 2 DEBUG nova.network.os_vif_util [None req-166813f2-7c46-47ed-b534-a8e970604205 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Converting VIF {"id": "46432b1a-fa02-4a02-9c8f-d607c2cd820c", "address": "fa:16:3e:2e:cd:1e", "network": {"id": "69760b74-d690-4b6a-a64f-35ceb4582944", "bridge": "br-int", "label": "tempest-TestStampPattern-334026573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff14cec1ef04fa2a41f6d226bc99518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46432b1a-fa", "ovs_interfaceid": "46432b1a-fa02-4a02-9c8f-d607c2cd820c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.442 2 DEBUG nova.network.os_vif_util [None req-166813f2-7c46-47ed-b534-a8e970604205 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2e:cd:1e,bridge_name='br-int',has_traffic_filtering=True,id=46432b1a-fa02-4a02-9c8f-d607c2cd820c,network=Network(69760b74-d690-4b6a-a64f-35ceb4582944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46432b1a-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.442 2 DEBUG os_vif [None req-166813f2-7c46-47ed-b534-a8e970604205 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:cd:1e,bridge_name='br-int',has_traffic_filtering=True,id=46432b1a-fa02-4a02-9c8f-d607c2cd820c,network=Network(69760b74-d690-4b6a-a64f-35ceb4582944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46432b1a-fa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.446 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap46432b1a-fa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:28 compute-0 neutron-haproxy-ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944[286899]: [NOTICE]   (286920) : haproxy version is 2.8.14-c23fe91
Oct 11 04:13:28 compute-0 neutron-haproxy-ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944[286899]: [NOTICE]   (286920) : path to executable is /usr/sbin/haproxy
Oct 11 04:13:28 compute-0 neutron-haproxy-ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944[286899]: [WARNING]  (286920) : Exiting Master process...
Oct 11 04:13:28 compute-0 neutron-haproxy-ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944[286899]: [WARNING]  (286920) : Exiting Master process...
Oct 11 04:13:28 compute-0 neutron-haproxy-ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944[286899]: [ALERT]    (286920) : Current worker (286924) exited with code 143 (Terminated)
Oct 11 04:13:28 compute-0 neutron-haproxy-ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944[286899]: [WARNING]  (286920) : All workers exited. Exiting... (0)
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:13:28 compute-0 systemd[1]: libpod-e2c1c1385ce1a242c91f8f257f9c488523f09cf17a3f4cae6cc39386ad85b87e.scope: Deactivated successfully.
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.456 2 INFO os_vif [None req-166813f2-7c46-47ed-b534-a8e970604205 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:cd:1e,bridge_name='br-int',has_traffic_filtering=True,id=46432b1a-fa02-4a02-9c8f-d607c2cd820c,network=Network(69760b74-d690-4b6a-a64f-35ceb4582944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46432b1a-fa')
Oct 11 04:13:28 compute-0 podman[289410]: 2025-10-11 04:13:28.463082014 +0000 UTC m=+0.076028091 container died e2c1c1385ce1a242c91f8f257f9c488523f09cf17a3f4cae6cc39386ad85b87e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 11 04:13:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e2c1c1385ce1a242c91f8f257f9c488523f09cf17a3f4cae6cc39386ad85b87e-userdata-shm.mount: Deactivated successfully.
Oct 11 04:13:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d29e2e58c75e07fd5a341ec93ed2396e302c7335665708d6060cbb024b9512c-merged.mount: Deactivated successfully.
Oct 11 04:13:28 compute-0 podman[289410]: 2025-10-11 04:13:28.510567063 +0000 UTC m=+0.123513140 container cleanup e2c1c1385ce1a242c91f8f257f9c488523f09cf17a3f4cae6cc39386ad85b87e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 04:13:28 compute-0 systemd[1]: libpod-conmon-e2c1c1385ce1a242c91f8f257f9c488523f09cf17a3f4cae6cc39386ad85b87e.scope: Deactivated successfully.
Oct 11 04:13:28 compute-0 podman[289468]: 2025-10-11 04:13:28.594805347 +0000 UTC m=+0.053556973 container remove e2c1c1385ce1a242c91f8f257f9c488523f09cf17a3f4cae6cc39386ad85b87e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 04:13:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:28.601 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[08ee5563-f319-45bd-9bef-8af4c643910b]: (4, ('Sat Oct 11 04:13:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944 (e2c1c1385ce1a242c91f8f257f9c488523f09cf17a3f4cae6cc39386ad85b87e)\ne2c1c1385ce1a242c91f8f257f9c488523f09cf17a3f4cae6cc39386ad85b87e\nSat Oct 11 04:13:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944 (e2c1c1385ce1a242c91f8f257f9c488523f09cf17a3f4cae6cc39386ad85b87e)\ne2c1c1385ce1a242c91f8f257f9c488523f09cf17a3f4cae6cc39386ad85b87e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:13:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:28.603 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[e50251b8-c93b-4e6a-a6cb-799d239946d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:13:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:28.604 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69760b74-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:28 compute-0 kernel: tap69760b74-d0: left promiscuous mode
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:28.632 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[e6be9805-4eb2-4468-9fb5-0d7c1d2c893d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:13:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:28.662 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[b3a23bfe-698a-4eac-8bd1-78e34065869b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:13:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:28.664 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[289a303a-d3bb-4b3c-b830-0fcb8ab3a13a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:13:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:28.679 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[f14640cb-59ee-4298-855c-911be9794f9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429918, 'reachable_time': 34168, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289483, 'error': None, 'target': 'ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:13:28 compute-0 systemd[1]: run-netns-ovnmeta\x2d69760b74\x2dd690\x2d4b6a\x2da64f\x2d35ceb4582944.mount: Deactivated successfully.
Oct 11 04:13:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:28.683 162015 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-69760b74-d690-4b6a-a64f-35ceb4582944 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 04:13:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:28.683 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[2dbc1aa2-74ca-4aa7-8fb3-9692ff2f67ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.880 2 INFO nova.virt.libvirt.driver [None req-166813f2-7c46-47ed-b534-a8e970604205 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Deleting instance files /var/lib/nova/instances/6f74cee5-3bb9-44f0-9a21-d6e5c1475419_del
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.881 2 INFO nova.virt.libvirt.driver [None req-166813f2-7c46-47ed-b534-a8e970604205 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Deletion of /var/lib/nova/instances/6f74cee5-3bb9-44f0-9a21-d6e5c1475419_del complete
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.925 2 INFO nova.compute.manager [None req-166813f2-7c46-47ed-b534-a8e970604205 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Took 0.75 seconds to destroy the instance on the hypervisor.
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.926 2 DEBUG oslo.service.loopingcall [None req-166813f2-7c46-47ed-b534-a8e970604205 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.927 2 DEBUG nova.compute.manager [-] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:13:28 compute-0 nova_compute[259850]: 2025-10-11 04:13:28.927 2 DEBUG nova.network.neutron [-] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:13:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e374 do_prune osdmap full prune enabled
Oct 11 04:13:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e375 e375: 3 total, 3 up, 3 in
Oct 11 04:13:29 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e375: 3 total, 3 up, 3 in
Oct 11 04:13:29 compute-0 nova_compute[259850]: 2025-10-11 04:13:29.480 2 DEBUG nova.network.neutron [-] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:13:29 compute-0 nova_compute[259850]: 2025-10-11 04:13:29.496 2 INFO nova.compute.manager [-] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Took 0.57 seconds to deallocate network for instance.
Oct 11 04:13:29 compute-0 nova_compute[259850]: 2025-10-11 04:13:29.563 2 DEBUG oslo_concurrency.lockutils [None req-166813f2-7c46-47ed-b534-a8e970604205 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:13:29 compute-0 nova_compute[259850]: 2025-10-11 04:13:29.564 2 DEBUG oslo_concurrency.lockutils [None req-166813f2-7c46-47ed-b534-a8e970604205 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:13:29 compute-0 nova_compute[259850]: 2025-10-11 04:13:29.607 2 DEBUG oslo_concurrency.processutils [None req-166813f2-7c46-47ed-b534-a8e970604205 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:13:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 169 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 176 KiB/s rd, 13 KiB/s wr, 246 op/s
Oct 11 04:13:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:13:29 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1876029385' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:13:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:13:29 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1876029385' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:13:29 compute-0 nova_compute[259850]: 2025-10-11 04:13:29.731 2 DEBUG nova.network.neutron [req-8674c3f6-2827-46be-80c1-4222105f8438 req-8785f023-d67f-4510-bd8c-9369e4b389d3 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Updated VIF entry in instance network info cache for port 46432b1a-fa02-4a02-9c8f-d607c2cd820c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:13:29 compute-0 nova_compute[259850]: 2025-10-11 04:13:29.732 2 DEBUG nova.network.neutron [req-8674c3f6-2827-46be-80c1-4222105f8438 req-8785f023-d67f-4510-bd8c-9369e4b389d3 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Updating instance_info_cache with network_info: [{"id": "46432b1a-fa02-4a02-9c8f-d607c2cd820c", "address": "fa:16:3e:2e:cd:1e", "network": {"id": "69760b74-d690-4b6a-a64f-35ceb4582944", "bridge": "br-int", "label": "tempest-TestStampPattern-334026573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff14cec1ef04fa2a41f6d226bc99518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46432b1a-fa", "ovs_interfaceid": "46432b1a-fa02-4a02-9c8f-d607c2cd820c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:13:29 compute-0 nova_compute[259850]: 2025-10-11 04:13:29.753 2 DEBUG oslo_concurrency.lockutils [req-8674c3f6-2827-46be-80c1-4222105f8438 req-8785f023-d67f-4510-bd8c-9369e4b389d3 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-6f74cee5-3bb9-44f0-9a21-d6e5c1475419" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:13:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:13:30 compute-0 nova_compute[259850]: 2025-10-11 04:13:30.058 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:13:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:13:30 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/969022167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:13:30 compute-0 nova_compute[259850]: 2025-10-11 04:13:30.162 2 DEBUG oslo_concurrency.processutils [None req-166813f2-7c46-47ed-b534-a8e970604205 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:13:30 compute-0 nova_compute[259850]: 2025-10-11 04:13:30.169 2 DEBUG nova.compute.provider_tree [None req-166813f2-7c46-47ed-b534-a8e970604205 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:13:30 compute-0 nova_compute[259850]: 2025-10-11 04:13:30.188 2 DEBUG nova.scheduler.client.report [None req-166813f2-7c46-47ed-b534-a8e970604205 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:13:30 compute-0 ceph-mon[74273]: osdmap e375: 3 total, 3 up, 3 in
Oct 11 04:13:30 compute-0 ceph-mon[74273]: pgmap v1480: 305 pgs: 305 active+clean; 169 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 176 KiB/s rd, 13 KiB/s wr, 246 op/s
Oct 11 04:13:30 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1876029385' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:13:30 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1876029385' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:13:30 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/969022167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:13:30 compute-0 nova_compute[259850]: 2025-10-11 04:13:30.235 2 DEBUG oslo_concurrency.lockutils [None req-166813f2-7c46-47ed-b534-a8e970604205 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:13:30 compute-0 nova_compute[259850]: 2025-10-11 04:13:30.257 2 DEBUG nova.compute.manager [req-296f0d2c-ee06-4846-beae-7668112cd08c req-6391f75a-a99b-4ca5-aa2d-ac8bad33cf71 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Received event network-vif-unplugged-46432b1a-fa02-4a02-9c8f-d607c2cd820c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:13:30 compute-0 nova_compute[259850]: 2025-10-11 04:13:30.258 2 DEBUG oslo_concurrency.lockutils [req-296f0d2c-ee06-4846-beae-7668112cd08c req-6391f75a-a99b-4ca5-aa2d-ac8bad33cf71 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:13:30 compute-0 nova_compute[259850]: 2025-10-11 04:13:30.258 2 DEBUG oslo_concurrency.lockutils [req-296f0d2c-ee06-4846-beae-7668112cd08c req-6391f75a-a99b-4ca5-aa2d-ac8bad33cf71 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:13:30 compute-0 nova_compute[259850]: 2025-10-11 04:13:30.258 2 DEBUG oslo_concurrency.lockutils [req-296f0d2c-ee06-4846-beae-7668112cd08c req-6391f75a-a99b-4ca5-aa2d-ac8bad33cf71 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:13:30 compute-0 nova_compute[259850]: 2025-10-11 04:13:30.259 2 DEBUG nova.compute.manager [req-296f0d2c-ee06-4846-beae-7668112cd08c req-6391f75a-a99b-4ca5-aa2d-ac8bad33cf71 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] No waiting events found dispatching network-vif-unplugged-46432b1a-fa02-4a02-9c8f-d607c2cd820c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:13:30 compute-0 nova_compute[259850]: 2025-10-11 04:13:30.259 2 WARNING nova.compute.manager [req-296f0d2c-ee06-4846-beae-7668112cd08c req-6391f75a-a99b-4ca5-aa2d-ac8bad33cf71 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Received unexpected event network-vif-unplugged-46432b1a-fa02-4a02-9c8f-d607c2cd820c for instance with vm_state deleted and task_state None.
Oct 11 04:13:30 compute-0 nova_compute[259850]: 2025-10-11 04:13:30.259 2 DEBUG nova.compute.manager [req-296f0d2c-ee06-4846-beae-7668112cd08c req-6391f75a-a99b-4ca5-aa2d-ac8bad33cf71 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Received event network-vif-plugged-46432b1a-fa02-4a02-9c8f-d607c2cd820c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:13:30 compute-0 nova_compute[259850]: 2025-10-11 04:13:30.259 2 DEBUG oslo_concurrency.lockutils [req-296f0d2c-ee06-4846-beae-7668112cd08c req-6391f75a-a99b-4ca5-aa2d-ac8bad33cf71 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:13:30 compute-0 nova_compute[259850]: 2025-10-11 04:13:30.260 2 DEBUG oslo_concurrency.lockutils [req-296f0d2c-ee06-4846-beae-7668112cd08c req-6391f75a-a99b-4ca5-aa2d-ac8bad33cf71 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:13:30 compute-0 nova_compute[259850]: 2025-10-11 04:13:30.260 2 DEBUG oslo_concurrency.lockutils [req-296f0d2c-ee06-4846-beae-7668112cd08c req-6391f75a-a99b-4ca5-aa2d-ac8bad33cf71 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:13:30 compute-0 nova_compute[259850]: 2025-10-11 04:13:30.260 2 DEBUG nova.compute.manager [req-296f0d2c-ee06-4846-beae-7668112cd08c req-6391f75a-a99b-4ca5-aa2d-ac8bad33cf71 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] No waiting events found dispatching network-vif-plugged-46432b1a-fa02-4a02-9c8f-d607c2cd820c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:13:30 compute-0 nova_compute[259850]: 2025-10-11 04:13:30.260 2 WARNING nova.compute.manager [req-296f0d2c-ee06-4846-beae-7668112cd08c req-6391f75a-a99b-4ca5-aa2d-ac8bad33cf71 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Received unexpected event network-vif-plugged-46432b1a-fa02-4a02-9c8f-d607c2cd820c for instance with vm_state deleted and task_state None.
Oct 11 04:13:30 compute-0 nova_compute[259850]: 2025-10-11 04:13:30.260 2 DEBUG nova.compute.manager [req-296f0d2c-ee06-4846-beae-7668112cd08c req-6391f75a-a99b-4ca5-aa2d-ac8bad33cf71 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Received event network-vif-deleted-46432b1a-fa02-4a02-9c8f-d607c2cd820c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:13:30 compute-0 nova_compute[259850]: 2025-10-11 04:13:30.261 2 INFO nova.compute.manager [req-296f0d2c-ee06-4846-beae-7668112cd08c req-6391f75a-a99b-4ca5-aa2d-ac8bad33cf71 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Neutron deleted interface 46432b1a-fa02-4a02-9c8f-d607c2cd820c; detaching it from the instance and deleting it from the info cache
Oct 11 04:13:30 compute-0 nova_compute[259850]: 2025-10-11 04:13:30.261 2 DEBUG nova.network.neutron [req-296f0d2c-ee06-4846-beae-7668112cd08c req-6391f75a-a99b-4ca5-aa2d-ac8bad33cf71 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:13:30 compute-0 nova_compute[259850]: 2025-10-11 04:13:30.280 2 INFO nova.scheduler.client.report [None req-166813f2-7c46-47ed-b534-a8e970604205 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Deleted allocations for instance 6f74cee5-3bb9-44f0-9a21-d6e5c1475419
Oct 11 04:13:30 compute-0 nova_compute[259850]: 2025-10-11 04:13:30.302 2 DEBUG nova.compute.manager [req-296f0d2c-ee06-4846-beae-7668112cd08c req-6391f75a-a99b-4ca5-aa2d-ac8bad33cf71 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Detach interface failed, port_id=46432b1a-fa02-4a02-9c8f-d607c2cd820c, reason: Instance 6f74cee5-3bb9-44f0-9a21-d6e5c1475419 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 11 04:13:30 compute-0 nova_compute[259850]: 2025-10-11 04:13:30.351 2 DEBUG oslo_concurrency.lockutils [None req-166813f2-7c46-47ed-b534-a8e970604205 ba6ea3b0ff9d4fee8a80f308d0493954 7ff14cec1ef04fa2a41f6d226bc99518 - - default default] Lock "6f74cee5-3bb9-44f0-9a21-d6e5c1475419" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:13:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e375 do_prune osdmap full prune enabled
Oct 11 04:13:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e376 e376: 3 total, 3 up, 3 in
Oct 11 04:13:31 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e376: 3 total, 3 up, 3 in
Oct 11 04:13:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:13:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:13:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:13:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:13:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000760926409780556 of space, bias 1.0, pg target 0.22827792293416682 quantized to 32 (current 32)
Oct 11 04:13:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:13:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003800180699355343 of space, bias 1.0, pg target 0.11400542098066029 quantized to 32 (current 32)
Oct 11 04:13:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:13:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 8.266792016669923e-07 of space, bias 1.0, pg target 0.0002480037605000977 quantized to 32 (current 32)
Oct 11 04:13:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:13:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct 11 04:13:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:13:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:13:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:13:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:13:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:13:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:13:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:13:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:13:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:13:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:13:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:13:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:13:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 169 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 152 KiB/s rd, 12 KiB/s wr, 212 op/s
Oct 11 04:13:32 compute-0 ceph-mon[74273]: osdmap e376: 3 total, 3 up, 3 in
Oct 11 04:13:32 compute-0 ceph-mon[74273]: pgmap v1482: 305 pgs: 305 active+clean; 169 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 152 KiB/s rd, 12 KiB/s wr, 212 op/s
Oct 11 04:13:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:13:32 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3013291408' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:13:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:13:32 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3013291408' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:13:32 compute-0 nova_compute[259850]: 2025-10-11 04:13:32.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:33 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3013291408' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:13:33 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3013291408' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:13:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e376 do_prune osdmap full prune enabled
Oct 11 04:13:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e377 e377: 3 total, 3 up, 3 in
Oct 11 04:13:33 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e377: 3 total, 3 up, 3 in
Oct 11 04:13:33 compute-0 nova_compute[259850]: 2025-10-11 04:13:33.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 255 KiB/s rd, 16 KiB/s wr, 354 op/s
Oct 11 04:13:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e377 do_prune osdmap full prune enabled
Oct 11 04:13:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e378 e378: 3 total, 3 up, 3 in
Oct 11 04:13:34 compute-0 ceph-mon[74273]: osdmap e377: 3 total, 3 up, 3 in
Oct 11 04:13:34 compute-0 ceph-mon[74273]: pgmap v1484: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 255 KiB/s rd, 16 KiB/s wr, 354 op/s
Oct 11 04:13:34 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e378: 3 total, 3 up, 3 in
Oct 11 04:13:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:13:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e378 do_prune osdmap full prune enabled
Oct 11 04:13:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e379 e379: 3 total, 3 up, 3 in
Oct 11 04:13:34 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e379: 3 total, 3 up, 3 in
Oct 11 04:13:34 compute-0 nova_compute[259850]: 2025-10-11 04:13:34.990 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760155999.988981, 673c41a0-97c6-4a8e-8f65-919ee9c38c79 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:13:34 compute-0 nova_compute[259850]: 2025-10-11 04:13:34.990 2 INFO nova.compute.manager [-] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] VM Stopped (Lifecycle Event)
Oct 11 04:13:35 compute-0 nova_compute[259850]: 2025-10-11 04:13:35.015 2 DEBUG nova.compute.manager [None req-da9b0ee5-141e-42cf-98db-628070cd8c8c - - - - - -] [instance: 673c41a0-97c6-4a8e-8f65-919ee9c38c79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:13:35 compute-0 ceph-mon[74273]: osdmap e378: 3 total, 3 up, 3 in
Oct 11 04:13:35 compute-0 ceph-mon[74273]: osdmap e379: 3 total, 3 up, 3 in
Oct 11 04:13:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 158 KiB/s rd, 7.9 KiB/s wr, 218 op/s
Oct 11 04:13:35 compute-0 nova_compute[259850]: 2025-10-11 04:13:35.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:36 compute-0 nova_compute[259850]: 2025-10-11 04:13:36.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e379 do_prune osdmap full prune enabled
Oct 11 04:13:36 compute-0 ceph-mon[74273]: pgmap v1487: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 158 KiB/s rd, 7.9 KiB/s wr, 218 op/s
Oct 11 04:13:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e380 e380: 3 total, 3 up, 3 in
Oct 11 04:13:36 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e380: 3 total, 3 up, 3 in
Oct 11 04:13:36 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:36.631 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:61:6f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '92:f1:b6:e4:f1:16'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:13:36 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:36.633 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:13:36 compute-0 nova_compute[259850]: 2025-10-11 04:13:36.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:13:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2186530583' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:13:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:13:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2186530583' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:13:37 compute-0 ceph-mon[74273]: osdmap e380: 3 total, 3 up, 3 in
Oct 11 04:13:37 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2186530583' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:13:37 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2186530583' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:13:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:13:37 compute-0 nova_compute[259850]: 2025-10-11 04:13:37.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:38 compute-0 ceph-mon[74273]: pgmap v1489: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:13:38 compute-0 nova_compute[259850]: 2025-10-11 04:13:38.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:39 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:13:39.635 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8a473e03-2208-47ae-afcd-05ad744a5969, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:13:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 3.5 KiB/s wr, 108 op/s
Oct 11 04:13:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:13:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e380 do_prune osdmap full prune enabled
Oct 11 04:13:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e381 e381: 3 total, 3 up, 3 in
Oct 11 04:13:39 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e381: 3 total, 3 up, 3 in
Oct 11 04:13:40 compute-0 podman[289509]: 2025-10-11 04:13:40.372060806 +0000 UTC m=+0.073305624 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 04:13:40 compute-0 podman[289508]: 2025-10-11 04:13:40.3887243 +0000 UTC m=+0.090061221 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 11 04:13:40 compute-0 ceph-mon[74273]: pgmap v1490: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 3.5 KiB/s wr, 108 op/s
Oct 11 04:13:40 compute-0 ceph-mon[74273]: osdmap e381: 3 total, 3 up, 3 in
Oct 11 04:13:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 3.1 KiB/s wr, 96 op/s
Oct 11 04:13:42 compute-0 nova_compute[259850]: 2025-10-11 04:13:42.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:42 compute-0 ceph-mon[74273]: pgmap v1492: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 3.1 KiB/s wr, 96 op/s
Oct 11 04:13:43 compute-0 nova_compute[259850]: 2025-10-11 04:13:43.419 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760156008.4177682, 6f74cee5-3bb9-44f0-9a21-d6e5c1475419 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:13:43 compute-0 nova_compute[259850]: 2025-10-11 04:13:43.419 2 INFO nova.compute.manager [-] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] VM Stopped (Lifecycle Event)
Oct 11 04:13:43 compute-0 nova_compute[259850]: 2025-10-11 04:13:43.443 2 DEBUG nova.compute.manager [None req-a0efac39-880c-42eb-868f-2099cdaae10b - - - - - -] [instance: 6f74cee5-3bb9-44f0-9a21-d6e5c1475419] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:13:43 compute-0 nova_compute[259850]: 2025-10-11 04:13:43.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 2.6 KiB/s wr, 81 op/s
Oct 11 04:13:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:13:44 compute-0 ceph-mon[74273]: pgmap v1493: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 2.6 KiB/s wr, 81 op/s
Oct 11 04:13:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 2.2 KiB/s wr, 69 op/s
Oct 11 04:13:46 compute-0 ceph-mon[74273]: pgmap v1494: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 2.2 KiB/s wr, 69 op/s
Oct 11 04:13:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 2.1 KiB/s wr, 64 op/s
Oct 11 04:13:47 compute-0 nova_compute[259850]: 2025-10-11 04:13:47.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:48 compute-0 nova_compute[259850]: 2025-10-11 04:13:48.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:48 compute-0 ceph-mon[74273]: pgmap v1495: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 2.1 KiB/s wr, 64 op/s
Oct 11 04:13:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:13:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:13:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:13:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3862586216' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:13:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:13:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3862586216' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:13:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:13:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:13:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:13:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:13:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:13:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:13:50 compute-0 ceph-mon[74273]: pgmap v1496: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:13:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3862586216' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:13:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3862586216' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:13:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:13:52 compute-0 nova_compute[259850]: 2025-10-11 04:13:52.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:53 compute-0 ceph-mon[74273]: pgmap v1497: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:13:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:13:53 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2906201131' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:13:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:13:53 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2906201131' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:13:53 compute-0 podman[289545]: 2025-10-11 04:13:53.458075772 +0000 UTC m=+0.163194458 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2)
Oct 11 04:13:53 compute-0 nova_compute[259850]: 2025-10-11 04:13:53.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 511 B/s wr, 12 op/s
Oct 11 04:13:54 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2906201131' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:13:54 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2906201131' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:13:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:13:55 compute-0 ceph-mon[74273]: pgmap v1498: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 511 B/s wr, 12 op/s
Oct 11 04:13:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 511 B/s wr, 12 op/s
Oct 11 04:13:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:13:56 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1247197398' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:13:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:13:56 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1247197398' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:13:57 compute-0 ceph-mon[74273]: pgmap v1499: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 511 B/s wr, 12 op/s
Oct 11 04:13:57 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1247197398' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:13:57 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1247197398' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:13:57 compute-0 podman[289571]: 2025-10-11 04:13:57.384621155 +0000 UTC m=+0.079699666 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 11 04:13:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 511 B/s wr, 12 op/s
Oct 11 04:13:57 compute-0 nova_compute[259850]: 2025-10-11 04:13:57.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:58 compute-0 nova_compute[259850]: 2025-10-11 04:13:58.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:13:59 compute-0 ceph-mon[74273]: pgmap v1500: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 511 B/s wr, 12 op/s
Oct 11 04:13:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 11 04:13:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:14:01 compute-0 ceph-mon[74273]: pgmap v1501: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 11 04:14:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 11 04:14:02 compute-0 nova_compute[259850]: 2025-10-11 04:14:02.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:03 compute-0 ceph-mon[74273]: pgmap v1502: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 11 04:14:03 compute-0 nova_compute[259850]: 2025-10-11 04:14:03.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 11 04:14:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:14:05 compute-0 ceph-mon[74273]: pgmap v1503: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 11 04:14:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 682 B/s wr, 16 op/s
Oct 11 04:14:07 compute-0 ceph-mon[74273]: pgmap v1504: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 682 B/s wr, 16 op/s
Oct 11 04:14:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 682 B/s wr, 16 op/s
Oct 11 04:14:07 compute-0 nova_compute[259850]: 2025-10-11 04:14:07.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:08 compute-0 nova_compute[259850]: 2025-10-11 04:14:08.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:09 compute-0 ceph-mon[74273]: pgmap v1505: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 682 B/s wr, 16 op/s
Oct 11 04:14:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 682 B/s wr, 16 op/s
Oct 11 04:14:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:14:10 compute-0 sudo[289590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:14:10 compute-0 sudo[289590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:14:10 compute-0 sudo[289590]: pam_unix(sudo:session): session closed for user root
Oct 11 04:14:10 compute-0 sudo[289615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:14:10 compute-0 sudo[289615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:14:10 compute-0 sudo[289615]: pam_unix(sudo:session): session closed for user root
Oct 11 04:14:10 compute-0 sudo[289653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:14:10 compute-0 sudo[289653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:14:10 compute-0 sudo[289653]: pam_unix(sudo:session): session closed for user root
Oct 11 04:14:10 compute-0 podman[289639]: 2025-10-11 04:14:10.535527236 +0000 UTC m=+0.087382694 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Oct 11 04:14:10 compute-0 podman[289640]: 2025-10-11 04:14:10.575953874 +0000 UTC m=+0.109888063 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 11 04:14:10 compute-0 sudo[289705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:14:10 compute-0 sudo[289705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:14:11 compute-0 ceph-mon[74273]: pgmap v1506: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 682 B/s wr, 16 op/s
Oct 11 04:14:11 compute-0 sudo[289705]: pam_unix(sudo:session): session closed for user root
Oct 11 04:14:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:14:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:14:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:14:11 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:14:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:14:11 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:14:11 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 3bb9ff09-487c-4eb8-9bb9-6297ff374d9f does not exist
Oct 11 04:14:11 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev e74521b7-9bbf-4164-8f3b-e434289f3a8d does not exist
Oct 11 04:14:11 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 426b6eaa-0dd1-4d65-9301-00a6a6bd3c6d does not exist
Oct 11 04:14:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:14:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:14:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:14:11 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:14:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:14:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:14:11 compute-0 sudo[289763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:14:11 compute-0 sudo[289763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:14:11 compute-0 sudo[289763]: pam_unix(sudo:session): session closed for user root
Oct 11 04:14:11 compute-0 sudo[289788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:14:11 compute-0 sudo[289788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:14:11 compute-0 sudo[289788]: pam_unix(sudo:session): session closed for user root
Oct 11 04:14:11 compute-0 sudo[289813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:14:11 compute-0 sudo[289813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:14:11 compute-0 sudo[289813]: pam_unix(sudo:session): session closed for user root
Oct 11 04:14:11 compute-0 sudo[289838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:14:11 compute-0 sudo[289838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:14:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:14:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:14:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:14:12 compute-0 podman[289903]: 2025-10-11 04:14:12.11442007 +0000 UTC m=+0.062995841 container create 47d81c8a2055cfffea734bc840ce4b89d39f58d88c66597e0fbb2b0fc83a5b89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bartik, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:14:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:14:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:14:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:14:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:14:12 compute-0 systemd[1]: Started libpod-conmon-47d81c8a2055cfffea734bc840ce4b89d39f58d88c66597e0fbb2b0fc83a5b89.scope.
Oct 11 04:14:12 compute-0 podman[289903]: 2025-10-11 04:14:12.091674734 +0000 UTC m=+0.040250525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:14:12 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:14:12 compute-0 podman[289903]: 2025-10-11 04:14:12.215953216 +0000 UTC m=+0.164529057 container init 47d81c8a2055cfffea734bc840ce4b89d39f58d88c66597e0fbb2b0fc83a5b89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:14:12 compute-0 podman[289903]: 2025-10-11 04:14:12.228485632 +0000 UTC m=+0.177061383 container start 47d81c8a2055cfffea734bc840ce4b89d39f58d88c66597e0fbb2b0fc83a5b89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:14:12 compute-0 podman[289903]: 2025-10-11 04:14:12.232397243 +0000 UTC m=+0.180973074 container attach 47d81c8a2055cfffea734bc840ce4b89d39f58d88c66597e0fbb2b0fc83a5b89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:14:12 compute-0 ecstatic_bartik[289920]: 167 167
Oct 11 04:14:12 compute-0 systemd[1]: libpod-47d81c8a2055cfffea734bc840ce4b89d39f58d88c66597e0fbb2b0fc83a5b89.scope: Deactivated successfully.
Oct 11 04:14:12 compute-0 podman[289903]: 2025-10-11 04:14:12.237896179 +0000 UTC m=+0.186471960 container died 47d81c8a2055cfffea734bc840ce4b89d39f58d88c66597e0fbb2b0fc83a5b89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:14:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-a36385d99c0a8e9e279aa65bbd8a84ca46363599bb1f67c04648fa62e848be8c-merged.mount: Deactivated successfully.
Oct 11 04:14:12 compute-0 podman[289903]: 2025-10-11 04:14:12.289144515 +0000 UTC m=+0.237720266 container remove 47d81c8a2055cfffea734bc840ce4b89d39f58d88c66597e0fbb2b0fc83a5b89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:14:12 compute-0 systemd[1]: libpod-conmon-47d81c8a2055cfffea734bc840ce4b89d39f58d88c66597e0fbb2b0fc83a5b89.scope: Deactivated successfully.
Oct 11 04:14:12 compute-0 podman[289943]: 2025-10-11 04:14:12.507612133 +0000 UTC m=+0.071510173 container create 4850cc60e208740ed7ddf95b4a9ff708b6fdbb0fad69b847fe286d750041373d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cohen, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:14:12 compute-0 systemd[1]: Started libpod-conmon-4850cc60e208740ed7ddf95b4a9ff708b6fdbb0fad69b847fe286d750041373d.scope.
Oct 11 04:14:12 compute-0 podman[289943]: 2025-10-11 04:14:12.4800545 +0000 UTC m=+0.043952560 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:14:12 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ce0bbccedc1c8408edd7075aad4fac6214d25d1a2ae0af8edb21ce3ef7a5fa9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ce0bbccedc1c8408edd7075aad4fac6214d25d1a2ae0af8edb21ce3ef7a5fa9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ce0bbccedc1c8408edd7075aad4fac6214d25d1a2ae0af8edb21ce3ef7a5fa9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ce0bbccedc1c8408edd7075aad4fac6214d25d1a2ae0af8edb21ce3ef7a5fa9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ce0bbccedc1c8408edd7075aad4fac6214d25d1a2ae0af8edb21ce3ef7a5fa9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:12 compute-0 podman[289943]: 2025-10-11 04:14:12.623589929 +0000 UTC m=+0.187487999 container init 4850cc60e208740ed7ddf95b4a9ff708b6fdbb0fad69b847fe286d750041373d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 04:14:12 compute-0 podman[289943]: 2025-10-11 04:14:12.644806552 +0000 UTC m=+0.208704582 container start 4850cc60e208740ed7ddf95b4a9ff708b6fdbb0fad69b847fe286d750041373d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:14:12 compute-0 podman[289943]: 2025-10-11 04:14:12.649426063 +0000 UTC m=+0.213324163 container attach 4850cc60e208740ed7ddf95b4a9ff708b6fdbb0fad69b847fe286d750041373d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cohen, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:14:12 compute-0 nova_compute[259850]: 2025-10-11 04:14:12.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:13 compute-0 ceph-mon[74273]: pgmap v1507: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:14:13 compute-0 nova_compute[259850]: 2025-10-11 04:14:13.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 22 KiB/s wr, 5 op/s
Oct 11 04:14:13 compute-0 magical_cohen[289959]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:14:13 compute-0 magical_cohen[289959]: --> relative data size: 1.0
Oct 11 04:14:13 compute-0 magical_cohen[289959]: --> All data devices are unavailable
Oct 11 04:14:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:14:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3853759907' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:14:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:14:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3853759907' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:14:13 compute-0 systemd[1]: libpod-4850cc60e208740ed7ddf95b4a9ff708b6fdbb0fad69b847fe286d750041373d.scope: Deactivated successfully.
Oct 11 04:14:13 compute-0 systemd[1]: libpod-4850cc60e208740ed7ddf95b4a9ff708b6fdbb0fad69b847fe286d750041373d.scope: Consumed 1.147s CPU time.
Oct 11 04:14:13 compute-0 conmon[289959]: conmon 4850cc60e208740ed7dd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4850cc60e208740ed7ddf95b4a9ff708b6fdbb0fad69b847fe286d750041373d.scope/container/memory.events
Oct 11 04:14:13 compute-0 podman[289943]: 2025-10-11 04:14:13.853872658 +0000 UTC m=+1.417770698 container died 4850cc60e208740ed7ddf95b4a9ff708b6fdbb0fad69b847fe286d750041373d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cohen, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:14:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ce0bbccedc1c8408edd7075aad4fac6214d25d1a2ae0af8edb21ce3ef7a5fa9-merged.mount: Deactivated successfully.
Oct 11 04:14:13 compute-0 podman[289943]: 2025-10-11 04:14:13.923053594 +0000 UTC m=+1.486951604 container remove 4850cc60e208740ed7ddf95b4a9ff708b6fdbb0fad69b847fe286d750041373d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cohen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:14:13 compute-0 systemd[1]: libpod-conmon-4850cc60e208740ed7ddf95b4a9ff708b6fdbb0fad69b847fe286d750041373d.scope: Deactivated successfully.
Oct 11 04:14:13 compute-0 sudo[289838]: pam_unix(sudo:session): session closed for user root
Oct 11 04:14:14 compute-0 sudo[290001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:14:14 compute-0 sudo[290001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:14:14 compute-0 sudo[290001]: pam_unix(sudo:session): session closed for user root
Oct 11 04:14:14 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3853759907' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:14:14 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3853759907' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:14:14 compute-0 sudo[290026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:14:14 compute-0 sudo[290026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:14:14 compute-0 sudo[290026]: pam_unix(sudo:session): session closed for user root
Oct 11 04:14:14 compute-0 sudo[290051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:14:14 compute-0 sudo[290051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:14:14 compute-0 sudo[290051]: pam_unix(sudo:session): session closed for user root
Oct 11 04:14:14 compute-0 sudo[290076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 04:14:14 compute-0 sudo[290076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:14:14 compute-0 nova_compute[259850]: 2025-10-11 04:14:14.543 2 DEBUG oslo_concurrency.lockutils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "1b88f60c-1027-4734-a7c1-3dd966e8db2c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:14:14 compute-0 nova_compute[259850]: 2025-10-11 04:14:14.546 2 DEBUG oslo_concurrency.lockutils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "1b88f60c-1027-4734-a7c1-3dd966e8db2c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:14:14 compute-0 nova_compute[259850]: 2025-10-11 04:14:14.570 2 DEBUG nova.compute.manager [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:14:14 compute-0 nova_compute[259850]: 2025-10-11 04:14:14.664 2 DEBUG oslo_concurrency.lockutils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:14:14 compute-0 nova_compute[259850]: 2025-10-11 04:14:14.665 2 DEBUG oslo_concurrency.lockutils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:14:14 compute-0 nova_compute[259850]: 2025-10-11 04:14:14.675 2 DEBUG nova.virt.hardware [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:14:14 compute-0 nova_compute[259850]: 2025-10-11 04:14:14.675 2 INFO nova.compute.claims [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:14:14 compute-0 nova_compute[259850]: 2025-10-11 04:14:14.780 2 DEBUG nova.scheduler.client.report [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Refreshing inventories for resource provider 108a560b-89c0-4926-a2fc-cb749a6f8386 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 04:14:14 compute-0 nova_compute[259850]: 2025-10-11 04:14:14.815 2 DEBUG nova.scheduler.client.report [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Updating ProviderTree inventory for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 04:14:14 compute-0 nova_compute[259850]: 2025-10-11 04:14:14.816 2 DEBUG nova.compute.provider_tree [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Updating inventory in ProviderTree for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 04:14:14 compute-0 podman[290142]: 2025-10-11 04:14:14.834593355 +0000 UTC m=+0.067857109 container create b2b77a0cde39bf42d509c3ae4ac6b92353ff744b221d4cc4a77353b6d1175405 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 04:14:14 compute-0 nova_compute[259850]: 2025-10-11 04:14:14.841 2 DEBUG nova.scheduler.client.report [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Refreshing aggregate associations for resource provider 108a560b-89c0-4926-a2fc-cb749a6f8386, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 04:14:14 compute-0 nova_compute[259850]: 2025-10-11 04:14:14.877 2 DEBUG nova.scheduler.client.report [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Refreshing trait associations for resource provider 108a560b-89c0-4926-a2fc-cb749a6f8386, traits: COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE41,COMPUTE_NODE,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_BMI2,HW_CPU_X86_MMX,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_ABM,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 04:14:14 compute-0 systemd[1]: Started libpod-conmon-b2b77a0cde39bf42d509c3ae4ac6b92353ff744b221d4cc4a77353b6d1175405.scope.
Oct 11 04:14:14 compute-0 podman[290142]: 2025-10-11 04:14:14.806830826 +0000 UTC m=+0.040094620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:14:14 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:14:14 compute-0 nova_compute[259850]: 2025-10-11 04:14:14.933 2 DEBUG oslo_concurrency.processutils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:14:14 compute-0 podman[290142]: 2025-10-11 04:14:14.940447123 +0000 UTC m=+0.173710927 container init b2b77a0cde39bf42d509c3ae4ac6b92353ff744b221d4cc4a77353b6d1175405 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wilson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:14:14 compute-0 podman[290142]: 2025-10-11 04:14:14.954030259 +0000 UTC m=+0.187293963 container start b2b77a0cde39bf42d509c3ae4ac6b92353ff744b221d4cc4a77353b6d1175405 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wilson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:14:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:14:14 compute-0 podman[290142]: 2025-10-11 04:14:14.958860606 +0000 UTC m=+0.192124330 container attach b2b77a0cde39bf42d509c3ae4ac6b92353ff744b221d4cc4a77353b6d1175405 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wilson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:14:14 compute-0 cool_wilson[290158]: 167 167
Oct 11 04:14:14 compute-0 systemd[1]: libpod-b2b77a0cde39bf42d509c3ae4ac6b92353ff744b221d4cc4a77353b6d1175405.scope: Deactivated successfully.
Oct 11 04:14:14 compute-0 podman[290142]: 2025-10-11 04:14:14.962688465 +0000 UTC m=+0.195952199 container died b2b77a0cde39bf42d509c3ae4ac6b92353ff744b221d4cc4a77353b6d1175405 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wilson, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 04:14:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f5a8d0d10d2da1747a52a853d0dc3fc4e037b55b2fc663d4b3fecdbd374acb4-merged.mount: Deactivated successfully.
Oct 11 04:14:15 compute-0 podman[290142]: 2025-10-11 04:14:15.011939305 +0000 UTC m=+0.245203019 container remove b2b77a0cde39bf42d509c3ae4ac6b92353ff744b221d4cc4a77353b6d1175405 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wilson, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 04:14:15 compute-0 systemd[1]: libpod-conmon-b2b77a0cde39bf42d509c3ae4ac6b92353ff744b221d4cc4a77353b6d1175405.scope: Deactivated successfully.
Oct 11 04:14:15 compute-0 ceph-mon[74273]: pgmap v1508: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 22 KiB/s wr, 5 op/s
Oct 11 04:14:15 compute-0 podman[290201]: 2025-10-11 04:14:15.24335302 +0000 UTC m=+0.056919948 container create 7591031f1831214506e6bbc117d85a5c8103fa35d4b24919fd7fb827d7ec6c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_blackwell, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 04:14:15 compute-0 systemd[1]: Started libpod-conmon-7591031f1831214506e6bbc117d85a5c8103fa35d4b24919fd7fb827d7ec6c88.scope.
Oct 11 04:14:15 compute-0 podman[290201]: 2025-10-11 04:14:15.215372735 +0000 UTC m=+0.028939703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:14:15 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd29cfb884f800b7a4361fc8ce0e7f8286d03be306349d76fe1a17bd9d08358a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd29cfb884f800b7a4361fc8ce0e7f8286d03be306349d76fe1a17bd9d08358a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd29cfb884f800b7a4361fc8ce0e7f8286d03be306349d76fe1a17bd9d08358a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd29cfb884f800b7a4361fc8ce0e7f8286d03be306349d76fe1a17bd9d08358a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:15 compute-0 podman[290201]: 2025-10-11 04:14:15.363761252 +0000 UTC m=+0.177328170 container init 7591031f1831214506e6bbc117d85a5c8103fa35d4b24919fd7fb827d7ec6c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_blackwell, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 04:14:15 compute-0 podman[290201]: 2025-10-11 04:14:15.376620957 +0000 UTC m=+0.190187885 container start 7591031f1831214506e6bbc117d85a5c8103fa35d4b24919fd7fb827d7ec6c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_blackwell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:14:15 compute-0 podman[290201]: 2025-10-11 04:14:15.380655762 +0000 UTC m=+0.194222660 container attach 7591031f1831214506e6bbc117d85a5c8103fa35d4b24919fd7fb827d7ec6c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:14:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:14:15 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3778433870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:14:15 compute-0 nova_compute[259850]: 2025-10-11 04:14:15.445 2 DEBUG oslo_concurrency.processutils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:14:15 compute-0 nova_compute[259850]: 2025-10-11 04:14:15.454 2 DEBUG nova.compute.provider_tree [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:14:15 compute-0 nova_compute[259850]: 2025-10-11 04:14:15.471 2 DEBUG nova.scheduler.client.report [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:14:15 compute-0 nova_compute[259850]: 2025-10-11 04:14:15.492 2 DEBUG oslo_concurrency.lockutils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:14:15 compute-0 nova_compute[259850]: 2025-10-11 04:14:15.492 2 DEBUG nova.compute.manager [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:14:15 compute-0 nova_compute[259850]: 2025-10-11 04:14:15.534 2 DEBUG nova.compute.manager [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:14:15 compute-0 nova_compute[259850]: 2025-10-11 04:14:15.535 2 DEBUG nova.network.neutron [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:14:15 compute-0 nova_compute[259850]: 2025-10-11 04:14:15.560 2 INFO nova.virt.libvirt.driver [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:14:15 compute-0 nova_compute[259850]: 2025-10-11 04:14:15.582 2 DEBUG nova.compute.manager [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:14:15 compute-0 nova_compute[259850]: 2025-10-11 04:14:15.626 2 INFO nova.virt.block_device [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Booting with volume adf9c68b-f22b-415c-977e-607efc15e747 at /dev/vda
Oct 11 04:14:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 22 KiB/s wr, 5 op/s
Oct 11 04:14:15 compute-0 nova_compute[259850]: 2025-10-11 04:14:15.759 2 DEBUG nova.policy [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2a330a845d62440c871f80eda2546881', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '09ba33ef4bd447699d74946c58839b2d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:14:15 compute-0 nova_compute[259850]: 2025-10-11 04:14:15.835 2 DEBUG os_brick.utils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:14:15 compute-0 nova_compute[259850]: 2025-10-11 04:14:15.838 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:14:15 compute-0 nova_compute[259850]: 2025-10-11 04:14:15.858 675 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:14:15 compute-0 nova_compute[259850]: 2025-10-11 04:14:15.859 675 DEBUG oslo.privsep.daemon [-] privsep: reply[376fd98b-b201-4f1b-a768-05cd04a77933]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:15 compute-0 nova_compute[259850]: 2025-10-11 04:14:15.861 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:14:15 compute-0 nova_compute[259850]: 2025-10-11 04:14:15.873 675 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:14:15 compute-0 nova_compute[259850]: 2025-10-11 04:14:15.874 675 DEBUG oslo.privsep.daemon [-] privsep: reply[3076804e-ae1b-4704-9805-5210224829e5]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e727c2bd432c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:15 compute-0 nova_compute[259850]: 2025-10-11 04:14:15.876 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:14:15 compute-0 nova_compute[259850]: 2025-10-11 04:14:15.891 675 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:14:15 compute-0 nova_compute[259850]: 2025-10-11 04:14:15.891 675 DEBUG oslo.privsep.daemon [-] privsep: reply[e721a493-ff03-4ec3-aa59-54af7fbe8eca]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:15 compute-0 nova_compute[259850]: 2025-10-11 04:14:15.893 675 DEBUG oslo.privsep.daemon [-] privsep: reply[74e5c134-c8d5-466f-a9f8-1f05655bdf37]: (4, 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:15 compute-0 nova_compute[259850]: 2025-10-11 04:14:15.893 2 DEBUG oslo_concurrency.processutils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:14:15 compute-0 nova_compute[259850]: 2025-10-11 04:14:15.930 2 DEBUG oslo_concurrency.processutils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "nvme version" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:14:15 compute-0 nova_compute[259850]: 2025-10-11 04:14:15.935 2 DEBUG os_brick.initiator.connectors.lightos [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:14:15 compute-0 nova_compute[259850]: 2025-10-11 04:14:15.935 2 DEBUG os_brick.initiator.connectors.lightos [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:14:15 compute-0 nova_compute[259850]: 2025-10-11 04:14:15.935 2 DEBUG os_brick.initiator.connectors.lightos [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:14:15 compute-0 nova_compute[259850]: 2025-10-11 04:14:15.936 2 DEBUG os_brick.utils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] <== get_connector_properties: return (100ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e727c2bd432c', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:14:15 compute-0 nova_compute[259850]: 2025-10-11 04:14:15.936 2 DEBUG nova.virt.block_device [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Updating existing volume attachment record: 1a301ba2-1c27-4b12-a028-01fac867dfe6 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:14:16 compute-0 sad_blackwell[290218]: {
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:     "0": [
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:         {
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "devices": [
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "/dev/loop3"
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             ],
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "lv_name": "ceph_lv0",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "lv_size": "21470642176",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "name": "ceph_lv0",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "tags": {
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.cluster_name": "ceph",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.crush_device_class": "",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.encrypted": "0",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.osd_id": "0",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.type": "block",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.vdo": "0"
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             },
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "type": "block",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "vg_name": "ceph_vg0"
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:         }
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:     ],
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:     "1": [
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:         {
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "devices": [
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "/dev/loop4"
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             ],
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "lv_name": "ceph_lv1",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "lv_size": "21470642176",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "name": "ceph_lv1",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "tags": {
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.cluster_name": "ceph",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.crush_device_class": "",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.encrypted": "0",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.osd_id": "1",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.type": "block",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.vdo": "0"
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             },
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "type": "block",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "vg_name": "ceph_vg1"
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:         }
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:     ],
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:     "2": [
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:         {
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "devices": [
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "/dev/loop5"
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             ],
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "lv_name": "ceph_lv2",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "lv_size": "21470642176",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "name": "ceph_lv2",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "tags": {
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.cluster_name": "ceph",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.crush_device_class": "",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.encrypted": "0",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.osd_id": "2",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.type": "block",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:                 "ceph.vdo": "0"
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             },
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "type": "block",
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:             "vg_name": "ceph_vg2"
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:         }
Oct 11 04:14:16 compute-0 sad_blackwell[290218]:     ]
Oct 11 04:14:16 compute-0 sad_blackwell[290218]: }
Oct 11 04:14:16 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3778433870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:14:16 compute-0 systemd[1]: libpod-7591031f1831214506e6bbc117d85a5c8103fa35d4b24919fd7fb827d7ec6c88.scope: Deactivated successfully.
Oct 11 04:14:16 compute-0 podman[290201]: 2025-10-11 04:14:16.161361036 +0000 UTC m=+0.974927954 container died 7591031f1831214506e6bbc117d85a5c8103fa35d4b24919fd7fb827d7ec6c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_blackwell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:14:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd29cfb884f800b7a4361fc8ce0e7f8286d03be306349d76fe1a17bd9d08358a-merged.mount: Deactivated successfully.
Oct 11 04:14:16 compute-0 podman[290201]: 2025-10-11 04:14:16.240280369 +0000 UTC m=+1.053847287 container remove 7591031f1831214506e6bbc117d85a5c8103fa35d4b24919fd7fb827d7ec6c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_blackwell, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 04:14:16 compute-0 systemd[1]: libpod-conmon-7591031f1831214506e6bbc117d85a5c8103fa35d4b24919fd7fb827d7ec6c88.scope: Deactivated successfully.
Oct 11 04:14:16 compute-0 sudo[290076]: pam_unix(sudo:session): session closed for user root
Oct 11 04:14:16 compute-0 sudo[290247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:14:16 compute-0 sudo[290247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:14:16 compute-0 sudo[290247]: pam_unix(sudo:session): session closed for user root
Oct 11 04:14:16 compute-0 sudo[290272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:14:16 compute-0 sudo[290272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:14:16 compute-0 sudo[290272]: pam_unix(sudo:session): session closed for user root
Oct 11 04:14:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:14:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/968464313' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:14:16 compute-0 sudo[290297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:14:16 compute-0 sudo[290297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:14:16 compute-0 sudo[290297]: pam_unix(sudo:session): session closed for user root
Oct 11 04:14:16 compute-0 nova_compute[259850]: 2025-10-11 04:14:16.666 2 DEBUG nova.network.neutron [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Successfully created port: 14f93f3e-13f2-4745-8cb2-687295f0fc23 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:14:16 compute-0 sudo[290322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 04:14:16 compute-0 sudo[290322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:14:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:14:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2618749882' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:14:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:14:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2618749882' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:14:16 compute-0 sshd-session[290347]: Connection closed by 66.228.53.46 port 23152
Oct 11 04:14:16 compute-0 nova_compute[259850]: 2025-10-11 04:14:16.893 2 DEBUG nova.compute.manager [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:14:16 compute-0 nova_compute[259850]: 2025-10-11 04:14:16.895 2 DEBUG nova.virt.libvirt.driver [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:14:16 compute-0 nova_compute[259850]: 2025-10-11 04:14:16.895 2 INFO nova.virt.libvirt.driver [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Creating image(s)
Oct 11 04:14:16 compute-0 nova_compute[259850]: 2025-10-11 04:14:16.896 2 DEBUG nova.virt.libvirt.driver [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 11 04:14:16 compute-0 nova_compute[259850]: 2025-10-11 04:14:16.897 2 DEBUG nova.virt.libvirt.driver [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Ensure instance console log exists: /var/lib/nova/instances/1b88f60c-1027-4734-a7c1-3dd966e8db2c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:14:16 compute-0 nova_compute[259850]: 2025-10-11 04:14:16.897 2 DEBUG oslo_concurrency.lockutils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:14:16 compute-0 nova_compute[259850]: 2025-10-11 04:14:16.898 2 DEBUG oslo_concurrency.lockutils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:14:16 compute-0 nova_compute[259850]: 2025-10-11 04:14:16.898 2 DEBUG oslo_concurrency.lockutils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:14:16 compute-0 sshd-session[290359]: banner exchange: Connection from 66.228.53.46 port 23162: invalid format
Oct 11 04:14:17 compute-0 ceph-mon[74273]: pgmap v1509: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 22 KiB/s wr, 5 op/s
Oct 11 04:14:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/968464313' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:14:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2618749882' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:14:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2618749882' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:14:17 compute-0 podman[290390]: 2025-10-11 04:14:17.192196938 +0000 UTC m=+0.073949232 container create 011e190b06f59f45d8c5a5c8f56ec4eedd9a2da5e855bfaf03818cc157306d7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_antonelli, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 04:14:17 compute-0 systemd[1]: Started libpod-conmon-011e190b06f59f45d8c5a5c8f56ec4eedd9a2da5e855bfaf03818cc157306d7a.scope.
Oct 11 04:14:17 compute-0 podman[290390]: 2025-10-11 04:14:17.165916582 +0000 UTC m=+0.047668956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:14:17 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:14:17 compute-0 podman[290390]: 2025-10-11 04:14:17.307982779 +0000 UTC m=+0.189735123 container init 011e190b06f59f45d8c5a5c8f56ec4eedd9a2da5e855bfaf03818cc157306d7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_antonelli, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:14:17 compute-0 podman[290390]: 2025-10-11 04:14:17.320414542 +0000 UTC m=+0.202166856 container start 011e190b06f59f45d8c5a5c8f56ec4eedd9a2da5e855bfaf03818cc157306d7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_antonelli, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:14:17 compute-0 podman[290390]: 2025-10-11 04:14:17.32669132 +0000 UTC m=+0.208443634 container attach 011e190b06f59f45d8c5a5c8f56ec4eedd9a2da5e855bfaf03818cc157306d7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 04:14:17 compute-0 thirsty_antonelli[290406]: 167 167
Oct 11 04:14:17 compute-0 systemd[1]: libpod-011e190b06f59f45d8c5a5c8f56ec4eedd9a2da5e855bfaf03818cc157306d7a.scope: Deactivated successfully.
Oct 11 04:14:17 compute-0 podman[290390]: 2025-10-11 04:14:17.343839807 +0000 UTC m=+0.225592111 container died 011e190b06f59f45d8c5a5c8f56ec4eedd9a2da5e855bfaf03818cc157306d7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:14:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-98204176b7d34617d4b6f5041ed47d436f2292c285bd276bfe5dae8fdfa1a1d0-merged.mount: Deactivated successfully.
Oct 11 04:14:17 compute-0 podman[290390]: 2025-10-11 04:14:17.392588073 +0000 UTC m=+0.274340377 container remove 011e190b06f59f45d8c5a5c8f56ec4eedd9a2da5e855bfaf03818cc157306d7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:14:17 compute-0 systemd[1]: libpod-conmon-011e190b06f59f45d8c5a5c8f56ec4eedd9a2da5e855bfaf03818cc157306d7a.scope: Deactivated successfully.
Oct 11 04:14:17 compute-0 nova_compute[259850]: 2025-10-11 04:14:17.569 2 DEBUG nova.network.neutron [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Successfully updated port: 14f93f3e-13f2-4745-8cb2-687295f0fc23 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:14:17 compute-0 nova_compute[259850]: 2025-10-11 04:14:17.587 2 DEBUG oslo_concurrency.lockutils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "refresh_cache-1b88f60c-1027-4734-a7c1-3dd966e8db2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:14:17 compute-0 nova_compute[259850]: 2025-10-11 04:14:17.587 2 DEBUG oslo_concurrency.lockutils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquired lock "refresh_cache-1b88f60c-1027-4734-a7c1-3dd966e8db2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:14:17 compute-0 nova_compute[259850]: 2025-10-11 04:14:17.588 2 DEBUG nova.network.neutron [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:14:17 compute-0 podman[290430]: 2025-10-11 04:14:17.598880985 +0000 UTC m=+0.068106027 container create 547311c7bd61bb65e01331652aa148a2bfcf5e831af2d815c7215977cdd5bc18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:14:17 compute-0 systemd[1]: Started libpod-conmon-547311c7bd61bb65e01331652aa148a2bfcf5e831af2d815c7215977cdd5bc18.scope.
Oct 11 04:14:17 compute-0 podman[290430]: 2025-10-11 04:14:17.576566861 +0000 UTC m=+0.045791923 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:14:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 22 KiB/s wr, 5 op/s
Oct 11 04:14:17 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42e8cf034ec72f80d403ea5305a780b4d084529f57663c21b86249bd75a131e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42e8cf034ec72f80d403ea5305a780b4d084529f57663c21b86249bd75a131e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:17 compute-0 nova_compute[259850]: 2025-10-11 04:14:17.690 2 DEBUG nova.compute.manager [req-dab271f6-34e3-4d53-a850-8648e473672d req-0e51e526-4483-4ed8-a4c0-f07023f26f47 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Received event network-changed-14f93f3e-13f2-4745-8cb2-687295f0fc23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42e8cf034ec72f80d403ea5305a780b4d084529f57663c21b86249bd75a131e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42e8cf034ec72f80d403ea5305a780b4d084529f57663c21b86249bd75a131e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:17 compute-0 nova_compute[259850]: 2025-10-11 04:14:17.691 2 DEBUG nova.compute.manager [req-dab271f6-34e3-4d53-a850-8648e473672d req-0e51e526-4483-4ed8-a4c0-f07023f26f47 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Refreshing instance network info cache due to event network-changed-14f93f3e-13f2-4745-8cb2-687295f0fc23. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:14:17 compute-0 nova_compute[259850]: 2025-10-11 04:14:17.691 2 DEBUG oslo_concurrency.lockutils [req-dab271f6-34e3-4d53-a850-8648e473672d req-0e51e526-4483-4ed8-a4c0-f07023f26f47 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-1b88f60c-1027-4734-a7c1-3dd966e8db2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:14:17 compute-0 podman[290430]: 2025-10-11 04:14:17.70218288 +0000 UTC m=+0.171407942 container init 547311c7bd61bb65e01331652aa148a2bfcf5e831af2d815c7215977cdd5bc18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:14:17 compute-0 podman[290430]: 2025-10-11 04:14:17.717207696 +0000 UTC m=+0.186432738 container start 547311c7bd61bb65e01331652aa148a2bfcf5e831af2d815c7215977cdd5bc18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_colden, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:14:17 compute-0 podman[290430]: 2025-10-11 04:14:17.720641594 +0000 UTC m=+0.189866636 container attach 547311c7bd61bb65e01331652aa148a2bfcf5e831af2d815c7215977cdd5bc18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_colden, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:14:17 compute-0 nova_compute[259850]: 2025-10-11 04:14:17.752 2 DEBUG nova.network.neutron [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:14:17 compute-0 nova_compute[259850]: 2025-10-11 04:14:17.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:18 compute-0 nova_compute[259850]: 2025-10-11 04:14:18.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:18 compute-0 nova_compute[259850]: 2025-10-11 04:14:18.612 2 DEBUG nova.network.neutron [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Updating instance_info_cache with network_info: [{"id": "14f93f3e-13f2-4745-8cb2-687295f0fc23", "address": "fa:16:3e:dd:44:08", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14f93f3e-13", "ovs_interfaceid": "14f93f3e-13f2-4745-8cb2-687295f0fc23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:14:18 compute-0 nova_compute[259850]: 2025-10-11 04:14:18.642 2 DEBUG oslo_concurrency.lockutils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Releasing lock "refresh_cache-1b88f60c-1027-4734-a7c1-3dd966e8db2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:14:18 compute-0 nova_compute[259850]: 2025-10-11 04:14:18.642 2 DEBUG nova.compute.manager [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Instance network_info: |[{"id": "14f93f3e-13f2-4745-8cb2-687295f0fc23", "address": "fa:16:3e:dd:44:08", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14f93f3e-13", "ovs_interfaceid": "14f93f3e-13f2-4745-8cb2-687295f0fc23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:14:18 compute-0 nova_compute[259850]: 2025-10-11 04:14:18.643 2 DEBUG oslo_concurrency.lockutils [req-dab271f6-34e3-4d53-a850-8648e473672d req-0e51e526-4483-4ed8-a4c0-f07023f26f47 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-1b88f60c-1027-4734-a7c1-3dd966e8db2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:14:18 compute-0 nova_compute[259850]: 2025-10-11 04:14:18.644 2 DEBUG nova.network.neutron [req-dab271f6-34e3-4d53-a850-8648e473672d req-0e51e526-4483-4ed8-a4c0-f07023f26f47 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Refreshing network info cache for port 14f93f3e-13f2-4745-8cb2-687295f0fc23 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:14:18 compute-0 nova_compute[259850]: 2025-10-11 04:14:18.650 2 DEBUG nova.virt.libvirt.driver [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Start _get_guest_xml network_info=[{"id": "14f93f3e-13f2-4745-8cb2-687295f0fc23", "address": "fa:16:3e:dd:44:08", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14f93f3e-13", "ovs_interfaceid": "14f93f3e-13f2-4745-8cb2-687295f0fc23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-adf9c68b-f22b-415c-977e-607efc15e747', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'adf9c68b-f22b-415c-977e-607efc15e747', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '1b88f60c-1027-4734-a7c1-3dd966e8db2c', 'attached_at': '', 'detached_at': '', 'volume_id': 'adf9c68b-f22b-415c-977e-607efc15e747', 'serial': 'adf9c68b-f22b-415c-977e-607efc15e747'}, 'boot_index': 0, 'guest_format': None, 'attachment_id': '1a301ba2-1c27-4b12-a028-01fac867dfe6', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:14:18 compute-0 nova_compute[259850]: 2025-10-11 04:14:18.659 2 WARNING nova.virt.libvirt.driver [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:14:18 compute-0 nova_compute[259850]: 2025-10-11 04:14:18.668 2 DEBUG nova.virt.libvirt.host [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:14:18 compute-0 nova_compute[259850]: 2025-10-11 04:14:18.669 2 DEBUG nova.virt.libvirt.host [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:14:18 compute-0 nova_compute[259850]: 2025-10-11 04:14:18.672 2 DEBUG nova.virt.libvirt.host [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:14:18 compute-0 nova_compute[259850]: 2025-10-11 04:14:18.673 2 DEBUG nova.virt.libvirt.host [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:14:18 compute-0 nova_compute[259850]: 2025-10-11 04:14:18.673 2 DEBUG nova.virt.libvirt.driver [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:14:18 compute-0 nova_compute[259850]: 2025-10-11 04:14:18.673 2 DEBUG nova.virt.hardware [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:14:18 compute-0 nova_compute[259850]: 2025-10-11 04:14:18.674 2 DEBUG nova.virt.hardware [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:14:18 compute-0 nova_compute[259850]: 2025-10-11 04:14:18.674 2 DEBUG nova.virt.hardware [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:14:18 compute-0 nova_compute[259850]: 2025-10-11 04:14:18.674 2 DEBUG nova.virt.hardware [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:14:18 compute-0 nova_compute[259850]: 2025-10-11 04:14:18.674 2 DEBUG nova.virt.hardware [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:14:18 compute-0 nova_compute[259850]: 2025-10-11 04:14:18.674 2 DEBUG nova.virt.hardware [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:14:18 compute-0 nova_compute[259850]: 2025-10-11 04:14:18.675 2 DEBUG nova.virt.hardware [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:14:18 compute-0 nova_compute[259850]: 2025-10-11 04:14:18.675 2 DEBUG nova.virt.hardware [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:14:18 compute-0 nova_compute[259850]: 2025-10-11 04:14:18.675 2 DEBUG nova.virt.hardware [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:14:18 compute-0 nova_compute[259850]: 2025-10-11 04:14:18.675 2 DEBUG nova.virt.hardware [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:14:18 compute-0 nova_compute[259850]: 2025-10-11 04:14:18.676 2 DEBUG nova.virt.hardware [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:14:18 compute-0 nova_compute[259850]: 2025-10-11 04:14:18.699 2 DEBUG nova.storage.rbd_utils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] rbd image 1b88f60c-1027-4734-a7c1-3dd966e8db2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:14:18 compute-0 nova_compute[259850]: 2025-10-11 04:14:18.703 2 DEBUG oslo_concurrency.processutils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:14:18 compute-0 intelligent_colden[290446]: {
Oct 11 04:14:18 compute-0 intelligent_colden[290446]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 04:14:18 compute-0 intelligent_colden[290446]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:14:18 compute-0 intelligent_colden[290446]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:14:18 compute-0 intelligent_colden[290446]:         "osd_id": 1,
Oct 11 04:14:18 compute-0 intelligent_colden[290446]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:14:18 compute-0 intelligent_colden[290446]:         "type": "bluestore"
Oct 11 04:14:18 compute-0 intelligent_colden[290446]:     },
Oct 11 04:14:18 compute-0 intelligent_colden[290446]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 04:14:18 compute-0 intelligent_colden[290446]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:14:18 compute-0 intelligent_colden[290446]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:14:18 compute-0 intelligent_colden[290446]:         "osd_id": 2,
Oct 11 04:14:18 compute-0 intelligent_colden[290446]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:14:18 compute-0 intelligent_colden[290446]:         "type": "bluestore"
Oct 11 04:14:18 compute-0 intelligent_colden[290446]:     },
Oct 11 04:14:18 compute-0 intelligent_colden[290446]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 04:14:18 compute-0 intelligent_colden[290446]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:14:18 compute-0 intelligent_colden[290446]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:14:18 compute-0 intelligent_colden[290446]:         "osd_id": 0,
Oct 11 04:14:18 compute-0 intelligent_colden[290446]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:14:18 compute-0 intelligent_colden[290446]:         "type": "bluestore"
Oct 11 04:14:18 compute-0 intelligent_colden[290446]:     }
Oct 11 04:14:18 compute-0 intelligent_colden[290446]: }
Oct 11 04:14:18 compute-0 systemd[1]: libpod-547311c7bd61bb65e01331652aa148a2bfcf5e831af2d815c7215977cdd5bc18.scope: Deactivated successfully.
Oct 11 04:14:18 compute-0 systemd[1]: libpod-547311c7bd61bb65e01331652aa148a2bfcf5e831af2d815c7215977cdd5bc18.scope: Consumed 1.107s CPU time.
Oct 11 04:14:18 compute-0 conmon[290446]: conmon 547311c7bd61bb65e013 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-547311c7bd61bb65e01331652aa148a2bfcf5e831af2d815c7215977cdd5bc18.scope/container/memory.events
Oct 11 04:14:18 compute-0 podman[290430]: 2025-10-11 04:14:18.824218042 +0000 UTC m=+1.293443114 container died 547311c7bd61bb65e01331652aa148a2bfcf5e831af2d815c7215977cdd5bc18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_colden, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 04:14:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-42e8cf034ec72f80d403ea5305a780b4d084529f57663c21b86249bd75a131e4-merged.mount: Deactivated successfully.
Oct 11 04:14:18 compute-0 podman[290430]: 2025-10-11 04:14:18.891929817 +0000 UTC m=+1.361154869 container remove 547311c7bd61bb65e01331652aa148a2bfcf5e831af2d815c7215977cdd5bc18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_colden, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 04:14:18 compute-0 systemd[1]: libpod-conmon-547311c7bd61bb65e01331652aa148a2bfcf5e831af2d815c7215977cdd5bc18.scope: Deactivated successfully.
Oct 11 04:14:18 compute-0 sudo[290322]: pam_unix(sudo:session): session closed for user root
Oct 11 04:14:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:14:18 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:14:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:14:18 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:14:18 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 76e27bcd-bb4c-46b8-b45d-d6dfda319213 does not exist
Oct 11 04:14:18 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 0c005461-557b-4f67-a232-ba70437c472f does not exist
Oct 11 04:14:19 compute-0 sudo[290531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:14:19 compute-0 sudo[290531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:14:19 compute-0 sudo[290531]: pam_unix(sudo:session): session closed for user root
Oct 11 04:14:19 compute-0 sudo[290556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:14:19 compute-0 sudo[290556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:14:19 compute-0 sudo[290556]: pam_unix(sudo:session): session closed for user root
Oct 11 04:14:19 compute-0 ceph-mon[74273]: pgmap v1510: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 22 KiB/s wr, 5 op/s
Oct 11 04:14:19 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:14:19 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:14:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:14:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4194430217' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.232 2 DEBUG oslo_concurrency.processutils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.353 2 DEBUG os_brick.encryptors [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Using volume encryption metadata '{'encryption_key_id': '13ac8882-f233-46d9-9657-13dbe681200e', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-adf9c68b-f22b-415c-977e-607efc15e747', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'adf9c68b-f22b-415c-977e-607efc15e747', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '1b88f60c-1027-4734-a7c1-3dd966e8db2c', 'attached_at': '', 'detached_at': '', 'volume_id': 'adf9c68b-f22b-415c-977e-607efc15e747', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.358 2 DEBUG barbicanclient.client [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.380 2 DEBUG barbicanclient.v1.secrets [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/13ac8882-f233-46d9-9657-13dbe681200e get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.381 2 INFO barbicanclient.base [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Calculated Secrets uuid ref: secrets/13ac8882-f233-46d9-9657-13dbe681200e
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.405 2 DEBUG barbicanclient.client [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.405 2 INFO barbicanclient.base [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Calculated Secrets uuid ref: secrets/13ac8882-f233-46d9-9657-13dbe681200e
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.436 2 DEBUG barbicanclient.client [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.437 2 INFO barbicanclient.base [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Calculated Secrets uuid ref: secrets/13ac8882-f233-46d9-9657-13dbe681200e
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.485 2 DEBUG barbicanclient.client [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.486 2 INFO barbicanclient.base [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Calculated Secrets uuid ref: secrets/13ac8882-f233-46d9-9657-13dbe681200e
Oct 11 04:14:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:14:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3112595758' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:14:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:14:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3112595758' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.518 2 DEBUG barbicanclient.client [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.519 2 INFO barbicanclient.base [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Calculated Secrets uuid ref: secrets/13ac8882-f233-46d9-9657-13dbe681200e
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.541 2 DEBUG barbicanclient.client [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.542 2 INFO barbicanclient.base [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Calculated Secrets uuid ref: secrets/13ac8882-f233-46d9-9657-13dbe681200e
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.567 2 DEBUG barbicanclient.client [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.568 2 INFO barbicanclient.base [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Calculated Secrets uuid ref: secrets/13ac8882-f233-46d9-9657-13dbe681200e
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.591 2 DEBUG barbicanclient.client [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.591 2 INFO barbicanclient.base [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Calculated Secrets uuid ref: secrets/13ac8882-f233-46d9-9657-13dbe681200e
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.611 2 DEBUG barbicanclient.client [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.611 2 INFO barbicanclient.base [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Calculated Secrets uuid ref: secrets/13ac8882-f233-46d9-9657-13dbe681200e
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.632 2 DEBUG barbicanclient.client [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.633 2 INFO barbicanclient.base [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Calculated Secrets uuid ref: secrets/13ac8882-f233-46d9-9657-13dbe681200e
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.675 2 DEBUG barbicanclient.client [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.675 2 INFO barbicanclient.base [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Calculated Secrets uuid ref: secrets/13ac8882-f233-46d9-9657-13dbe681200e
Oct 11 04:14:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 23 KiB/s wr, 34 op/s
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.698 2 DEBUG barbicanclient.client [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.699 2 INFO barbicanclient.base [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Calculated Secrets uuid ref: secrets/13ac8882-f233-46d9-9657-13dbe681200e
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.718 2 DEBUG barbicanclient.client [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.719 2 INFO barbicanclient.base [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Calculated Secrets uuid ref: secrets/13ac8882-f233-46d9-9657-13dbe681200e
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.753 2 DEBUG barbicanclient.client [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.754 2 INFO barbicanclient.base [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Calculated Secrets uuid ref: secrets/13ac8882-f233-46d9-9657-13dbe681200e
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.787 2 DEBUG barbicanclient.client [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.788 2 INFO barbicanclient.base [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Calculated Secrets uuid ref: secrets/13ac8882-f233-46d9-9657-13dbe681200e
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.810 2 DEBUG barbicanclient.client [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.811 2 DEBUG nova.virt.libvirt.host [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct 11 04:14:19 compute-0 nova_compute[259850]:   <usage type="volume">
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <volume>adf9c68b-f22b-415c-977e-607efc15e747</volume>
Oct 11 04:14:19 compute-0 nova_compute[259850]:   </usage>
Oct 11 04:14:19 compute-0 nova_compute[259850]: </secret>
Oct 11 04:14:19 compute-0 nova_compute[259850]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.847 2 DEBUG nova.virt.libvirt.vif [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:14:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-733896192',display_name='tempest-TestVolumeBootPattern-server-733896192',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-733896192',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09ba33ef4bd447699d74946c58839b2d',ramdisk_id='',reservation_id='r-m03z6tvj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-771726270',owner_user_name='tempest-TestVolumeBootPattern-771726270-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:14:15Z,user_data=None,user_id='2a330a845d
62440c871f80eda2546881',uuid=1b88f60c-1027-4734-a7c1-3dd966e8db2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "14f93f3e-13f2-4745-8cb2-687295f0fc23", "address": "fa:16:3e:dd:44:08", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14f93f3e-13", "ovs_interfaceid": "14f93f3e-13f2-4745-8cb2-687295f0fc23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.848 2 DEBUG nova.network.os_vif_util [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converting VIF {"id": "14f93f3e-13f2-4745-8cb2-687295f0fc23", "address": "fa:16:3e:dd:44:08", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14f93f3e-13", "ovs_interfaceid": "14f93f3e-13f2-4745-8cb2-687295f0fc23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.849 2 DEBUG nova.network.os_vif_util [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:44:08,bridge_name='br-int',has_traffic_filtering=True,id=14f93f3e-13f2-4745-8cb2-687295f0fc23,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14f93f3e-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.852 2 DEBUG nova.objects.instance [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lazy-loading 'pci_devices' on Instance uuid 1b88f60c-1027-4734-a7c1-3dd966e8db2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.858 2 DEBUG nova.network.neutron [req-dab271f6-34e3-4d53-a850-8648e473672d req-0e51e526-4483-4ed8-a4c0-f07023f26f47 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Updated VIF entry in instance network info cache for port 14f93f3e-13f2-4745-8cb2-687295f0fc23. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.859 2 DEBUG nova.network.neutron [req-dab271f6-34e3-4d53-a850-8648e473672d req-0e51e526-4483-4ed8-a4c0-f07023f26f47 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Updating instance_info_cache with network_info: [{"id": "14f93f3e-13f2-4745-8cb2-687295f0fc23", "address": "fa:16:3e:dd:44:08", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14f93f3e-13", "ovs_interfaceid": "14f93f3e-13f2-4745-8cb2-687295f0fc23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.878 2 DEBUG nova.virt.libvirt.driver [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:14:19 compute-0 nova_compute[259850]:   <uuid>1b88f60c-1027-4734-a7c1-3dd966e8db2c</uuid>
Oct 11 04:14:19 compute-0 nova_compute[259850]:   <name>instance-00000010</name>
Oct 11 04:14:19 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:14:19 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:14:19 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <nova:name>tempest-TestVolumeBootPattern-server-733896192</nova:name>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:14:18</nova:creationTime>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:14:19 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:14:19 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:14:19 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:14:19 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:14:19 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:14:19 compute-0 nova_compute[259850]:         <nova:user uuid="2a330a845d62440c871f80eda2546881">tempest-TestVolumeBootPattern-771726270-project-member</nova:user>
Oct 11 04:14:19 compute-0 nova_compute[259850]:         <nova:project uuid="09ba33ef4bd447699d74946c58839b2d">tempest-TestVolumeBootPattern-771726270</nova:project>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <nova:ports>
Oct 11 04:14:19 compute-0 nova_compute[259850]:         <nova:port uuid="14f93f3e-13f2-4745-8cb2-687295f0fc23">
Oct 11 04:14:19 compute-0 nova_compute[259850]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:         </nova:port>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       </nova:ports>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:14:19 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:14:19 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <system>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <entry name="serial">1b88f60c-1027-4734-a7c1-3dd966e8db2c</entry>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <entry name="uuid">1b88f60c-1027-4734-a7c1-3dd966e8db2c</entry>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     </system>
Oct 11 04:14:19 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:14:19 compute-0 nova_compute[259850]:   <os>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:   </os>
Oct 11 04:14:19 compute-0 nova_compute[259850]:   <features>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:   </features>
Oct 11 04:14:19 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:14:19 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:14:19 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/1b88f60c-1027-4734-a7c1-3dd966e8db2c_disk.config">
Oct 11 04:14:19 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       </source>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:14:19 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <source protocol="rbd" name="volumes/volume-adf9c68b-f22b-415c-977e-607efc15e747">
Oct 11 04:14:19 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       </source>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:14:19 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <serial>adf9c68b-f22b-415c-977e-607efc15e747</serial>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <encryption format="luks">
Oct 11 04:14:19 compute-0 nova_compute[259850]:         <secret type="passphrase" uuid="cbaa3f2a-599f-4758-a6e3-bd26db691368"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       </encryption>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <interface type="ethernet">
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <mac address="fa:16:3e:dd:44:08"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <mtu size="1442"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <target dev="tap14f93f3e-13"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     </interface>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/1b88f60c-1027-4734-a7c1-3dd966e8db2c/console.log" append="off"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <video>
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     </video>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:14:19 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:14:19 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:14:19 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:14:19 compute-0 nova_compute[259850]: </domain>
Oct 11 04:14:19 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.880 2 DEBUG nova.compute.manager [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Preparing to wait for external event network-vif-plugged-14f93f3e-13f2-4745-8cb2-687295f0fc23 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.881 2 DEBUG oslo_concurrency.lockutils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "1b88f60c-1027-4734-a7c1-3dd966e8db2c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.881 2 DEBUG oslo_concurrency.lockutils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "1b88f60c-1027-4734-a7c1-3dd966e8db2c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.882 2 DEBUG oslo_concurrency.lockutils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "1b88f60c-1027-4734-a7c1-3dd966e8db2c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.883 2 DEBUG nova.virt.libvirt.vif [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:14:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-733896192',display_name='tempest-TestVolumeBootPattern-server-733896192',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-733896192',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09ba33ef4bd447699d74946c58839b2d',ramdisk_id='',reservation_id='r-m03z6tvj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-771726270',owner_user_name='tempest-TestVolumeBootPattern-771726270-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:14:15Z,user_data=None,user_id='
2a330a845d62440c871f80eda2546881',uuid=1b88f60c-1027-4734-a7c1-3dd966e8db2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "14f93f3e-13f2-4745-8cb2-687295f0fc23", "address": "fa:16:3e:dd:44:08", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14f93f3e-13", "ovs_interfaceid": "14f93f3e-13f2-4745-8cb2-687295f0fc23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.883 2 DEBUG nova.network.os_vif_util [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converting VIF {"id": "14f93f3e-13f2-4745-8cb2-687295f0fc23", "address": "fa:16:3e:dd:44:08", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14f93f3e-13", "ovs_interfaceid": "14f93f3e-13f2-4745-8cb2-687295f0fc23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.884 2 DEBUG nova.network.os_vif_util [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:44:08,bridge_name='br-int',has_traffic_filtering=True,id=14f93f3e-13f2-4745-8cb2-687295f0fc23,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14f93f3e-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.885 2 DEBUG os_vif [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:44:08,bridge_name='br-int',has_traffic_filtering=True,id=14f93f3e-13f2-4745-8cb2-687295f0fc23,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14f93f3e-13') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.888 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.889 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.889 2 DEBUG oslo_concurrency.lockutils [req-dab271f6-34e3-4d53-a850-8648e473672d req-0e51e526-4483-4ed8-a4c0-f07023f26f47 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-1b88f60c-1027-4734-a7c1-3dd966e8db2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.895 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14f93f3e-13, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.896 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap14f93f3e-13, col_values=(('external_ids', {'iface-id': '14f93f3e-13f2-4745-8cb2-687295f0fc23', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:44:08', 'vm-uuid': '1b88f60c-1027-4734-a7c1-3dd966e8db2c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:19 compute-0 NetworkManager[44920]: <info>  [1760156059.8997] manager: (tap14f93f3e-13): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.912 2 INFO os_vif [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:44:08,bridge_name='br-int',has_traffic_filtering=True,id=14f93f3e-13f2-4745-8cb2-687295f0fc23,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14f93f3e-13')
Oct 11 04:14:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.984 2 DEBUG nova.virt.libvirt.driver [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.985 2 DEBUG nova.virt.libvirt.driver [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.985 2 DEBUG nova.virt.libvirt.driver [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] No VIF found with MAC fa:16:3e:dd:44:08, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:14:19 compute-0 nova_compute[259850]: 2025-10-11 04:14:19.987 2 INFO nova.virt.libvirt.driver [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Using config drive
Oct 11 04:14:20 compute-0 nova_compute[259850]: 2025-10-11 04:14:20.021 2 DEBUG nova.storage.rbd_utils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] rbd image 1b88f60c-1027-4734-a7c1-3dd966e8db2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:14:20 compute-0 ovn_controller[152025]: 2025-10-11T04:14:20Z|00161|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Oct 11 04:14:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4194430217' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:14:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3112595758' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:14:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3112595758' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:14:20 compute-0 nova_compute[259850]: 2025-10-11 04:14:20.438 2 INFO nova.virt.libvirt.driver [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Creating config drive at /var/lib/nova/instances/1b88f60c-1027-4734-a7c1-3dd966e8db2c/disk.config
Oct 11 04:14:20 compute-0 nova_compute[259850]: 2025-10-11 04:14:20.445 2 DEBUG oslo_concurrency.processutils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1b88f60c-1027-4734-a7c1-3dd966e8db2c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyvu9o_aj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:14:20 compute-0 nova_compute[259850]: 2025-10-11 04:14:20.591 2 DEBUG oslo_concurrency.processutils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1b88f60c-1027-4734-a7c1-3dd966e8db2c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyvu9o_aj" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:14:20 compute-0 nova_compute[259850]: 2025-10-11 04:14:20.633 2 DEBUG nova.storage.rbd_utils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] rbd image 1b88f60c-1027-4734-a7c1-3dd966e8db2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:14:20 compute-0 nova_compute[259850]: 2025-10-11 04:14:20.638 2 DEBUG oslo_concurrency.processutils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1b88f60c-1027-4734-a7c1-3dd966e8db2c/disk.config 1b88f60c-1027-4734-a7c1-3dd966e8db2c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:14:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:14:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:14:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:14:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:14:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:14:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:14:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:14:20
Oct 11 04:14:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:14:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:14:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'images', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'volumes']
Oct 11 04:14:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:14:20 compute-0 nova_compute[259850]: 2025-10-11 04:14:20.828 2 DEBUG oslo_concurrency.processutils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1b88f60c-1027-4734-a7c1-3dd966e8db2c/disk.config 1b88f60c-1027-4734-a7c1-3dd966e8db2c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:14:20 compute-0 nova_compute[259850]: 2025-10-11 04:14:20.829 2 INFO nova.virt.libvirt.driver [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Deleting local config drive /var/lib/nova/instances/1b88f60c-1027-4734-a7c1-3dd966e8db2c/disk.config because it was imported into RBD.
Oct 11 04:14:20 compute-0 kernel: tap14f93f3e-13: entered promiscuous mode
Oct 11 04:14:20 compute-0 NetworkManager[44920]: <info>  [1760156060.9099] manager: (tap14f93f3e-13): new Tun device (/org/freedesktop/NetworkManager/Devices/89)
Oct 11 04:14:20 compute-0 nova_compute[259850]: 2025-10-11 04:14:20.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:20 compute-0 ovn_controller[152025]: 2025-10-11T04:14:20Z|00162|binding|INFO|Claiming lport 14f93f3e-13f2-4745-8cb2-687295f0fc23 for this chassis.
Oct 11 04:14:20 compute-0 ovn_controller[152025]: 2025-10-11T04:14:20Z|00163|binding|INFO|14f93f3e-13f2-4745-8cb2-687295f0fc23: Claiming fa:16:3e:dd:44:08 10.100.0.14
Oct 11 04:14:20 compute-0 nova_compute[259850]: 2025-10-11 04:14:20.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:20.931 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:44:08 10.100.0.14'], port_security=['fa:16:3e:dd:44:08 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1b88f60c-1027-4734-a7c1-3dd966e8db2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09ba33ef4bd447699d74946c58839b2d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ad6cc707-9ce2-4240-811a-f6df84b349db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27b77226-c1f8-485e-969b-bae9a3bf7ceb, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=14f93f3e-13f2-4745-8cb2-687295f0fc23) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:14:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:20.933 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 14f93f3e-13f2-4745-8cb2-687295f0fc23 in datapath b6cd64a2-af0b-4f57-b84c-cbc9cde5251d bound to our chassis
Oct 11 04:14:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:20.934 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b6cd64a2-af0b-4f57-b84c-cbc9cde5251d
Oct 11 04:14:20 compute-0 systemd-machined[214869]: New machine qemu-16-instance-00000010.
Oct 11 04:14:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:20.951 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[9283d860-51e8-471a-a14d-2516c5bf8854]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:20.952 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb6cd64a2-a1 in ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 04:14:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:20.954 267637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb6cd64a2-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 04:14:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:20.954 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[6223e282-212e-483c-84d5-c6797b1a995b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:20.955 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[6bec4930-a5b2-4f06-bb2c-f17dc4a0c2cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:20 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:20.972 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[b5cd4aa4-01b2-4613-8586-a97f850b365e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:21.002 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[2d68b3db-a4eb-4d33-835f-93af21ad90d3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:21 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-00000010.
Oct 11 04:14:21 compute-0 ovn_controller[152025]: 2025-10-11T04:14:21Z|00164|binding|INFO|Setting lport 14f93f3e-13f2-4745-8cb2-687295f0fc23 ovn-installed in OVS
Oct 11 04:14:21 compute-0 ovn_controller[152025]: 2025-10-11T04:14:21Z|00165|binding|INFO|Setting lport 14f93f3e-13f2-4745-8cb2-687295f0fc23 up in Southbound
Oct 11 04:14:21 compute-0 nova_compute[259850]: 2025-10-11 04:14:21.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:21 compute-0 systemd-udevd[290661]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:21.036 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[1c893aef-cedb-40f2-875a-792a782cc4bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:21 compute-0 systemd-udevd[290663]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:21.042 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[57d0fd04-a891-41f7-93e7-1a0e6267ed4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:21 compute-0 NetworkManager[44920]: <info>  [1760156061.0445] manager: (tapb6cd64a2-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/90)
Oct 11 04:14:21 compute-0 NetworkManager[44920]: <info>  [1760156061.0503] device (tap14f93f3e-13): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:14:21 compute-0 NetworkManager[44920]: <info>  [1760156061.0527] device (tap14f93f3e-13): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:14:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:14:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:14:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:14:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:14:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:14:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:14:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:14:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:14:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:14:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:21.083 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[675f0870-2575-47aa-aa04-5e4b02ff932a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:21.088 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[349f5704-a063-4977-af58-7a23552ce237]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:21 compute-0 NetworkManager[44920]: <info>  [1760156061.1161] device (tapb6cd64a2-a0): carrier: link connected
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:21.119 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[ec4719dc-8f0d-4349-be05-36c7c696d1e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:21.136 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[fe3c3d0a-0c2f-4919-a721-fc128bf8cf8d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6cd64a2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:9f:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444615, 'reachable_time': 19091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290689, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:21.149 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[3b87bad5-ab95-4c5d-a5d5-eede9426a42e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:9f02'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444615, 'tstamp': 444615}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290690, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:21.170 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[4b399dc6-a8a1-46d0-8669-a8199ae18016]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6cd64a2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:9f:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444615, 'reachable_time': 19091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290691, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:21 compute-0 ceph-mon[74273]: pgmap v1511: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 23 KiB/s wr, 34 op/s
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:21.206 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[2560e92a-1a48-42af-a25b-fc22c27abf14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:21 compute-0 nova_compute[259850]: 2025-10-11 04:14:21.217 2 DEBUG nova.compute.manager [req-a3a7f15c-2e40-47a5-8875-33666daf1661 req-465d3a46-2819-441f-aef4-d87f471c5051 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Received event network-vif-plugged-14f93f3e-13f2-4745-8cb2-687295f0fc23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:14:21 compute-0 nova_compute[259850]: 2025-10-11 04:14:21.218 2 DEBUG oslo_concurrency.lockutils [req-a3a7f15c-2e40-47a5-8875-33666daf1661 req-465d3a46-2819-441f-aef4-d87f471c5051 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "1b88f60c-1027-4734-a7c1-3dd966e8db2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:14:21 compute-0 nova_compute[259850]: 2025-10-11 04:14:21.218 2 DEBUG oslo_concurrency.lockutils [req-a3a7f15c-2e40-47a5-8875-33666daf1661 req-465d3a46-2819-441f-aef4-d87f471c5051 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "1b88f60c-1027-4734-a7c1-3dd966e8db2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:14:21 compute-0 nova_compute[259850]: 2025-10-11 04:14:21.218 2 DEBUG oslo_concurrency.lockutils [req-a3a7f15c-2e40-47a5-8875-33666daf1661 req-465d3a46-2819-441f-aef4-d87f471c5051 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "1b88f60c-1027-4734-a7c1-3dd966e8db2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:14:21 compute-0 nova_compute[259850]: 2025-10-11 04:14:21.219 2 DEBUG nova.compute.manager [req-a3a7f15c-2e40-47a5-8875-33666daf1661 req-465d3a46-2819-441f-aef4-d87f471c5051 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Processing event network-vif-plugged-14f93f3e-13f2-4745-8cb2-687295f0fc23 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:21.279 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[2255e673-6b50-4aa1-926f-f884f3812a86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:21.280 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6cd64a2-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:21.281 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:21.281 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6cd64a2-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:14:21 compute-0 NetworkManager[44920]: <info>  [1760156061.2838] manager: (tapb6cd64a2-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Oct 11 04:14:21 compute-0 nova_compute[259850]: 2025-10-11 04:14:21.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:21 compute-0 kernel: tapb6cd64a2-a0: entered promiscuous mode
Oct 11 04:14:21 compute-0 nova_compute[259850]: 2025-10-11 04:14:21.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:21.288 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb6cd64a2-a0, col_values=(('external_ids', {'iface-id': 'c2cbaf15-a50c-40b8-9f65-12b11618e7fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:14:21 compute-0 ovn_controller[152025]: 2025-10-11T04:14:21Z|00166|binding|INFO|Releasing lport c2cbaf15-a50c-40b8-9f65-12b11618e7fc from this chassis (sb_readonly=0)
Oct 11 04:14:21 compute-0 nova_compute[259850]: 2025-10-11 04:14:21.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:21.324 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b6cd64a2-af0b-4f57-b84c-cbc9cde5251d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b6cd64a2-af0b-4f57-b84c-cbc9cde5251d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:21.326 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[563ee24e-5c35-4273-b7b6-7cf252f5fe0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:21.326 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]: global
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]:     log         /dev/log local0 debug
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]:     log-tag     haproxy-metadata-proxy-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]:     user        root
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]:     group       root
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]:     maxconn     1024
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]:     pidfile     /var/lib/neutron/external/pids/b6cd64a2-af0b-4f57-b84c-cbc9cde5251d.pid.haproxy
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]:     daemon
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]: defaults
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]:     log global
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]:     mode http
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]:     option httplog
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]:     option dontlognull
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]:     option http-server-close
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]:     option forwardfor
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]:     retries                 3
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]:     timeout http-request    30s
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]:     timeout connect         30s
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]:     timeout client          32s
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]:     timeout server          32s
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]:     timeout http-keep-alive 30s
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]: listen listener
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]:     bind 169.254.169.254:80
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]:     http-request add-header X-OVN-Network-ID b6cd64a2-af0b-4f57-b84c-cbc9cde5251d
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 04:14:21 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:21.327 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'env', 'PROCESS_TAG=haproxy-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b6cd64a2-af0b-4f57-b84c-cbc9cde5251d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 04:14:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 23 KiB/s wr, 34 op/s
Oct 11 04:14:21 compute-0 podman[290759]: 2025-10-11 04:14:21.775418712 +0000 UTC m=+0.082357681 container create 7106ccecf11d77670a573e0dad35162402fd95df5f3baa323904cfe1e8329df7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:14:21 compute-0 podman[290759]: 2025-10-11 04:14:21.725384901 +0000 UTC m=+0.032323910 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 04:14:21 compute-0 systemd[1]: Started libpod-conmon-7106ccecf11d77670a573e0dad35162402fd95df5f3baa323904cfe1e8329df7.scope.
Oct 11 04:14:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:14:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dac4183693c82b0bb3e7d6078a23fd66d44dec381bd0774e1b0ab5941a9ca72f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:21 compute-0 podman[290759]: 2025-10-11 04:14:21.893742465 +0000 UTC m=+0.200681484 container init 7106ccecf11d77670a573e0dad35162402fd95df5f3baa323904cfe1e8329df7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:14:21 compute-0 podman[290759]: 2025-10-11 04:14:21.907769563 +0000 UTC m=+0.214708532 container start 7106ccecf11d77670a573e0dad35162402fd95df5f3baa323904cfe1e8329df7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 11 04:14:21 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[290774]: [NOTICE]   (290778) : New worker (290780) forked
Oct 11 04:14:21 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[290774]: [NOTICE]   (290778) : Loading success.
Oct 11 04:14:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:14:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1786978900' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:14:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:14:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1786978900' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:14:22 compute-0 nova_compute[259850]: 2025-10-11 04:14:22.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:22.963 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:14:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:22.964 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:14:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:22.965 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:14:23 compute-0 ceph-mon[74273]: pgmap v1512: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 23 KiB/s wr, 34 op/s
Oct 11 04:14:23 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1786978900' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:14:23 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1786978900' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:14:23 compute-0 nova_compute[259850]: 2025-10-11 04:14:23.288 2 DEBUG nova.compute.manager [req-7821362a-0f06-44eb-ac6e-def393f82ebc req-f73ad722-18f9-40d4-80ea-b747f0b7f528 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Received event network-vif-plugged-14f93f3e-13f2-4745-8cb2-687295f0fc23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:14:23 compute-0 nova_compute[259850]: 2025-10-11 04:14:23.288 2 DEBUG oslo_concurrency.lockutils [req-7821362a-0f06-44eb-ac6e-def393f82ebc req-f73ad722-18f9-40d4-80ea-b747f0b7f528 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "1b88f60c-1027-4734-a7c1-3dd966e8db2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:14:23 compute-0 nova_compute[259850]: 2025-10-11 04:14:23.289 2 DEBUG oslo_concurrency.lockutils [req-7821362a-0f06-44eb-ac6e-def393f82ebc req-f73ad722-18f9-40d4-80ea-b747f0b7f528 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "1b88f60c-1027-4734-a7c1-3dd966e8db2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:14:23 compute-0 nova_compute[259850]: 2025-10-11 04:14:23.289 2 DEBUG oslo_concurrency.lockutils [req-7821362a-0f06-44eb-ac6e-def393f82ebc req-f73ad722-18f9-40d4-80ea-b747f0b7f528 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "1b88f60c-1027-4734-a7c1-3dd966e8db2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:14:23 compute-0 nova_compute[259850]: 2025-10-11 04:14:23.290 2 DEBUG nova.compute.manager [req-7821362a-0f06-44eb-ac6e-def393f82ebc req-f73ad722-18f9-40d4-80ea-b747f0b7f528 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] No waiting events found dispatching network-vif-plugged-14f93f3e-13f2-4745-8cb2-687295f0fc23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:14:23 compute-0 nova_compute[259850]: 2025-10-11 04:14:23.290 2 WARNING nova.compute.manager [req-7821362a-0f06-44eb-ac6e-def393f82ebc req-f73ad722-18f9-40d4-80ea-b747f0b7f528 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Received unexpected event network-vif-plugged-14f93f3e-13f2-4745-8cb2-687295f0fc23 for instance with vm_state building and task_state spawning.
Oct 11 04:14:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 37 KiB/s wr, 72 op/s
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.081 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.082 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.083 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.083 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.084 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.276 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156064.2761228, 1b88f60c-1027-4734-a7c1-3dd966e8db2c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.277 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] VM Started (Lifecycle Event)
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.281 2 DEBUG nova.compute.manager [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.283 2 DEBUG nova.virt.libvirt.driver [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.286 2 INFO nova.virt.libvirt.driver [-] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Instance spawned successfully.
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.286 2 DEBUG nova.virt.libvirt.driver [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.308 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.311 2 DEBUG nova.virt.libvirt.driver [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.312 2 DEBUG nova.virt.libvirt.driver [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.312 2 DEBUG nova.virt.libvirt.driver [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.312 2 DEBUG nova.virt.libvirt.driver [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.312 2 DEBUG nova.virt.libvirt.driver [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.313 2 DEBUG nova.virt.libvirt.driver [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.317 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.353 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.353 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156064.27703, 1b88f60c-1027-4734-a7c1-3dd966e8db2c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.353 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] VM Paused (Lifecycle Event)
Oct 11 04:14:24 compute-0 podman[290795]: 2025-10-11 04:14:24.373612612 +0000 UTC m=+0.093611231 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.384 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.388 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156064.2829134, 1b88f60c-1027-4734-a7c1-3dd966e8db2c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.388 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] VM Resumed (Lifecycle Event)
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.394 2 INFO nova.compute.manager [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Took 7.50 seconds to spawn the instance on the hypervisor.
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.394 2 DEBUG nova.compute.manager [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.424 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.427 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.472 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.510 2 INFO nova.compute.manager [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Took 9.89 seconds to build instance.
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.527 2 DEBUG oslo_concurrency.lockutils [None req-b6e63544-78f1-4705-b248-28579fa37926 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "1b88f60c-1027-4734-a7c1-3dd966e8db2c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:14:24 compute-0 nova_compute[259850]: 2025-10-11 04:14:24.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:14:25 compute-0 nova_compute[259850]: 2025-10-11 04:14:25.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:14:25 compute-0 nova_compute[259850]: 2025-10-11 04:14:25.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:14:25 compute-0 nova_compute[259850]: 2025-10-11 04:14:25.087 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:14:25 compute-0 nova_compute[259850]: 2025-10-11 04:14:25.088 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:14:25 compute-0 nova_compute[259850]: 2025-10-11 04:14:25.088 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:14:25 compute-0 nova_compute[259850]: 2025-10-11 04:14:25.089 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:14:25 compute-0 nova_compute[259850]: 2025-10-11 04:14:25.090 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:14:25 compute-0 ceph-mon[74273]: pgmap v1513: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 37 KiB/s wr, 72 op/s
Oct 11 04:14:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:14:25 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1075299708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:14:25 compute-0 nova_compute[259850]: 2025-10-11 04:14:25.580 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:14:25 compute-0 nova_compute[259850]: 2025-10-11 04:14:25.670 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:14:25 compute-0 nova_compute[259850]: 2025-10-11 04:14:25.671 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:14:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 14 KiB/s wr, 66 op/s
Oct 11 04:14:25 compute-0 nova_compute[259850]: 2025-10-11 04:14:25.893 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:14:25 compute-0 nova_compute[259850]: 2025-10-11 04:14:25.894 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4348MB free_disk=59.98813247680664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:14:25 compute-0 nova_compute[259850]: 2025-10-11 04:14:25.895 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:14:25 compute-0 nova_compute[259850]: 2025-10-11 04:14:25.895 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:14:25 compute-0 nova_compute[259850]: 2025-10-11 04:14:25.966 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Instance 1b88f60c-1027-4734-a7c1-3dd966e8db2c actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 04:14:25 compute-0 nova_compute[259850]: 2025-10-11 04:14:25.967 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:14:25 compute-0 nova_compute[259850]: 2025-10-11 04:14:25.967 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:14:26 compute-0 nova_compute[259850]: 2025-10-11 04:14:26.005 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:14:26 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1075299708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:14:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:14:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2620999848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:14:26 compute-0 nova_compute[259850]: 2025-10-11 04:14:26.431 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:14:26 compute-0 nova_compute[259850]: 2025-10-11 04:14:26.441 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:14:26 compute-0 nova_compute[259850]: 2025-10-11 04:14:26.468 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:14:26 compute-0 nova_compute[259850]: 2025-10-11 04:14:26.500 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:14:26 compute-0 nova_compute[259850]: 2025-10-11 04:14:26.500 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:14:27 compute-0 ceph-mon[74273]: pgmap v1514: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 14 KiB/s wr, 66 op/s
Oct 11 04:14:27 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2620999848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:14:27 compute-0 nova_compute[259850]: 2025-10-11 04:14:27.499 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:14:27 compute-0 nova_compute[259850]: 2025-10-11 04:14:27.500 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:14:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 14 KiB/s wr, 66 op/s
Oct 11 04:14:27 compute-0 nova_compute[259850]: 2025-10-11 04:14:27.736 2 DEBUG oslo_concurrency.lockutils [None req-0f039064-f166-49e1-953a-d676373a9045 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "1b88f60c-1027-4734-a7c1-3dd966e8db2c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:14:27 compute-0 nova_compute[259850]: 2025-10-11 04:14:27.737 2 DEBUG oslo_concurrency.lockutils [None req-0f039064-f166-49e1-953a-d676373a9045 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "1b88f60c-1027-4734-a7c1-3dd966e8db2c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:14:27 compute-0 nova_compute[259850]: 2025-10-11 04:14:27.737 2 DEBUG oslo_concurrency.lockutils [None req-0f039064-f166-49e1-953a-d676373a9045 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "1b88f60c-1027-4734-a7c1-3dd966e8db2c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:14:27 compute-0 nova_compute[259850]: 2025-10-11 04:14:27.738 2 DEBUG oslo_concurrency.lockutils [None req-0f039064-f166-49e1-953a-d676373a9045 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "1b88f60c-1027-4734-a7c1-3dd966e8db2c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:14:27 compute-0 nova_compute[259850]: 2025-10-11 04:14:27.738 2 DEBUG oslo_concurrency.lockutils [None req-0f039064-f166-49e1-953a-d676373a9045 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "1b88f60c-1027-4734-a7c1-3dd966e8db2c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:14:27 compute-0 nova_compute[259850]: 2025-10-11 04:14:27.740 2 INFO nova.compute.manager [None req-0f039064-f166-49e1-953a-d676373a9045 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Terminating instance
Oct 11 04:14:27 compute-0 nova_compute[259850]: 2025-10-11 04:14:27.743 2 DEBUG nova.compute.manager [None req-0f039064-f166-49e1-953a-d676373a9045 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:14:27 compute-0 kernel: tap14f93f3e-13 (unregistering): left promiscuous mode
Oct 11 04:14:27 compute-0 NetworkManager[44920]: <info>  [1760156067.7961] device (tap14f93f3e-13): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:14:27 compute-0 ovn_controller[152025]: 2025-10-11T04:14:27Z|00167|binding|INFO|Releasing lport 14f93f3e-13f2-4745-8cb2-687295f0fc23 from this chassis (sb_readonly=0)
Oct 11 04:14:27 compute-0 ovn_controller[152025]: 2025-10-11T04:14:27Z|00168|binding|INFO|Setting lport 14f93f3e-13f2-4745-8cb2-687295f0fc23 down in Southbound
Oct 11 04:14:27 compute-0 ovn_controller[152025]: 2025-10-11T04:14:27Z|00169|binding|INFO|Removing iface tap14f93f3e-13 ovn-installed in OVS
Oct 11 04:14:27 compute-0 nova_compute[259850]: 2025-10-11 04:14:27.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:27 compute-0 nova_compute[259850]: 2025-10-11 04:14:27.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:27.817 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:44:08 10.100.0.14'], port_security=['fa:16:3e:dd:44:08 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1b88f60c-1027-4734-a7c1-3dd966e8db2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09ba33ef4bd447699d74946c58839b2d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ad6cc707-9ce2-4240-811a-f6df84b349db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27b77226-c1f8-485e-969b-bae9a3bf7ceb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=14f93f3e-13f2-4745-8cb2-687295f0fc23) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:14:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:27.819 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 14f93f3e-13f2-4745-8cb2-687295f0fc23 in datapath b6cd64a2-af0b-4f57-b84c-cbc9cde5251d unbound from our chassis
Oct 11 04:14:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:27.823 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:14:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:27.826 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[714e2213-9a3e-43f1-8329-88ba676f920a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:27.827 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d namespace which is not needed anymore
Oct 11 04:14:27 compute-0 nova_compute[259850]: 2025-10-11 04:14:27.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:27 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Deactivated successfully.
Oct 11 04:14:27 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Consumed 3.633s CPU time.
Oct 11 04:14:27 compute-0 systemd-machined[214869]: Machine qemu-16-instance-00000010 terminated.
Oct 11 04:14:27 compute-0 podman[290866]: 2025-10-11 04:14:27.938708155 +0000 UTC m=+0.101251068 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009)
Oct 11 04:14:27 compute-0 nova_compute[259850]: 2025-10-11 04:14:27.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:27 compute-0 kernel: tap14f93f3e-13: entered promiscuous mode
Oct 11 04:14:27 compute-0 NetworkManager[44920]: <info>  [1760156067.9651] manager: (tap14f93f3e-13): new Tun device (/org/freedesktop/NetworkManager/Devices/92)
Oct 11 04:14:27 compute-0 systemd-udevd[290883]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:14:27 compute-0 kernel: tap14f93f3e-13 (unregistering): left promiscuous mode
Oct 11 04:14:27 compute-0 nova_compute[259850]: 2025-10-11 04:14:27.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:27 compute-0 ovn_controller[152025]: 2025-10-11T04:14:27Z|00170|binding|INFO|Claiming lport 14f93f3e-13f2-4745-8cb2-687295f0fc23 for this chassis.
Oct 11 04:14:27 compute-0 ovn_controller[152025]: 2025-10-11T04:14:27Z|00171|binding|INFO|14f93f3e-13f2-4745-8cb2-687295f0fc23: Claiming fa:16:3e:dd:44:08 10.100.0.14
Oct 11 04:14:27 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[290774]: [NOTICE]   (290778) : haproxy version is 2.8.14-c23fe91
Oct 11 04:14:27 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[290774]: [NOTICE]   (290778) : path to executable is /usr/sbin/haproxy
Oct 11 04:14:27 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[290774]: [WARNING]  (290778) : Exiting Master process...
Oct 11 04:14:27 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[290774]: [ALERT]    (290778) : Current worker (290780) exited with code 143 (Terminated)
Oct 11 04:14:27 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[290774]: [WARNING]  (290778) : All workers exited. Exiting... (0)
Oct 11 04:14:27 compute-0 systemd[1]: libpod-7106ccecf11d77670a573e0dad35162402fd95df5f3baa323904cfe1e8329df7.scope: Deactivated successfully.
Oct 11 04:14:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:27.982 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:44:08 10.100.0.14'], port_security=['fa:16:3e:dd:44:08 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1b88f60c-1027-4734-a7c1-3dd966e8db2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09ba33ef4bd447699d74946c58839b2d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ad6cc707-9ce2-4240-811a-f6df84b349db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27b77226-c1f8-485e-969b-bae9a3bf7ceb, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=14f93f3e-13f2-4745-8cb2-687295f0fc23) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:14:27 compute-0 podman[290908]: 2025-10-11 04:14:27.987978165 +0000 UTC m=+0.049082826 container died 7106ccecf11d77670a573e0dad35162402fd95df5f3baa323904cfe1e8329df7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 11 04:14:27 compute-0 nova_compute[259850]: 2025-10-11 04:14:27.995 2 INFO nova.virt.libvirt.driver [-] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Instance destroyed successfully.
Oct 11 04:14:27 compute-0 nova_compute[259850]: 2025-10-11 04:14:27.996 2 DEBUG nova.objects.instance [None req-0f039064-f166-49e1-953a-d676373a9045 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lazy-loading 'resources' on Instance uuid 1b88f60c-1027-4734-a7c1-3dd966e8db2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:14:27 compute-0 ovn_controller[152025]: 2025-10-11T04:14:27Z|00172|binding|INFO|Setting lport 14f93f3e-13f2-4745-8cb2-687295f0fc23 ovn-installed in OVS
Oct 11 04:14:27 compute-0 ovn_controller[152025]: 2025-10-11T04:14:27Z|00173|binding|INFO|Setting lport 14f93f3e-13f2-4745-8cb2-687295f0fc23 up in Southbound
Oct 11 04:14:27 compute-0 nova_compute[259850]: 2025-10-11 04:14:27.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:27 compute-0 ovn_controller[152025]: 2025-10-11T04:14:27Z|00174|binding|INFO|Releasing lport 14f93f3e-13f2-4745-8cb2-687295f0fc23 from this chassis (sb_readonly=1)
Oct 11 04:14:27 compute-0 ovn_controller[152025]: 2025-10-11T04:14:27Z|00175|if_status|INFO|Not setting lport 14f93f3e-13f2-4745-8cb2-687295f0fc23 down as sb is readonly
Oct 11 04:14:27 compute-0 ovn_controller[152025]: 2025-10-11T04:14:27Z|00176|binding|INFO|Removing iface tap14f93f3e-13 ovn-installed in OVS
Oct 11 04:14:27 compute-0 nova_compute[259850]: 2025-10-11 04:14:27.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:28 compute-0 ovn_controller[152025]: 2025-10-11T04:14:28Z|00177|binding|INFO|Releasing lport 14f93f3e-13f2-4745-8cb2-687295f0fc23 from this chassis (sb_readonly=1)
Oct 11 04:14:28 compute-0 ovn_controller[152025]: 2025-10-11T04:14:28Z|00178|binding|INFO|Setting lport 14f93f3e-13f2-4745-8cb2-687295f0fc23 down in Southbound
Oct 11 04:14:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:28.018 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:44:08 10.100.0.14'], port_security=['fa:16:3e:dd:44:08 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1b88f60c-1027-4734-a7c1-3dd966e8db2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09ba33ef4bd447699d74946c58839b2d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ad6cc707-9ce2-4240-811a-f6df84b349db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27b77226-c1f8-485e-969b-bae9a3bf7ceb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=14f93f3e-13f2-4745-8cb2-687295f0fc23) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:14:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7106ccecf11d77670a573e0dad35162402fd95df5f3baa323904cfe1e8329df7-userdata-shm.mount: Deactivated successfully.
Oct 11 04:14:28 compute-0 nova_compute[259850]: 2025-10-11 04:14:28.015 2 DEBUG nova.virt.libvirt.vif [None req-0f039064-f166-49e1-953a-d676373a9045 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:14:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-733896192',display_name='tempest-TestVolumeBootPattern-server-733896192',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-733896192',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:14:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='09ba33ef4bd447699d74946c58839b2d',ramdisk_id='',reservation_id='r-m03z6tvj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestVolumeBootPattern-771726270',owner_user_name='tempest-TestVolumeBootPattern-771726270-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:14:24Z,user_data=None,user_id='2a330a845d62440c871f80eda2546881',uuid=1b88f60c-1027-4734-a7c1-3dd966e8db2c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "14f93f3e-13f2-4745-8cb2-687295f0fc23", "address": "fa:16:3e:dd:44:08", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14f93f3e-13", "ovs_interfaceid": "14f93f3e-13f2-4745-8cb2-687295f0fc23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:14:28 compute-0 nova_compute[259850]: 2025-10-11 04:14:28.015 2 DEBUG nova.network.os_vif_util [None req-0f039064-f166-49e1-953a-d676373a9045 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converting VIF {"id": "14f93f3e-13f2-4745-8cb2-687295f0fc23", "address": "fa:16:3e:dd:44:08", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14f93f3e-13", "ovs_interfaceid": "14f93f3e-13f2-4745-8cb2-687295f0fc23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:14:28 compute-0 nova_compute[259850]: 2025-10-11 04:14:28.016 2 DEBUG nova.network.os_vif_util [None req-0f039064-f166-49e1-953a-d676373a9045 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:44:08,bridge_name='br-int',has_traffic_filtering=True,id=14f93f3e-13f2-4745-8cb2-687295f0fc23,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14f93f3e-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:14:28 compute-0 nova_compute[259850]: 2025-10-11 04:14:28.017 2 DEBUG os_vif [None req-0f039064-f166-49e1-953a-d676373a9045 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:44:08,bridge_name='br-int',has_traffic_filtering=True,id=14f93f3e-13f2-4745-8cb2-687295f0fc23,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14f93f3e-13') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:14:28 compute-0 nova_compute[259850]: 2025-10-11 04:14:28.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:28 compute-0 nova_compute[259850]: 2025-10-11 04:14:28.020 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14f93f3e-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:14:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-dac4183693c82b0bb3e7d6078a23fd66d44dec381bd0774e1b0ab5941a9ca72f-merged.mount: Deactivated successfully.
Oct 11 04:14:28 compute-0 nova_compute[259850]: 2025-10-11 04:14:28.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:28 compute-0 nova_compute[259850]: 2025-10-11 04:14:28.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:28 compute-0 nova_compute[259850]: 2025-10-11 04:14:28.025 2 INFO os_vif [None req-0f039064-f166-49e1-953a-d676373a9045 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:44:08,bridge_name='br-int',has_traffic_filtering=True,id=14f93f3e-13f2-4745-8cb2-687295f0fc23,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14f93f3e-13')
Oct 11 04:14:28 compute-0 podman[290908]: 2025-10-11 04:14:28.03354627 +0000 UTC m=+0.094650931 container cleanup 7106ccecf11d77670a573e0dad35162402fd95df5f3baa323904cfe1e8329df7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:14:28 compute-0 systemd[1]: libpod-conmon-7106ccecf11d77670a573e0dad35162402fd95df5f3baa323904cfe1e8329df7.scope: Deactivated successfully.
Oct 11 04:14:28 compute-0 podman[290950]: 2025-10-11 04:14:28.115128788 +0000 UTC m=+0.056524117 container remove 7106ccecf11d77670a573e0dad35162402fd95df5f3baa323904cfe1e8329df7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 04:14:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:28.120 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[8e31df02-ce5b-44a0-af76-d1e795e76d64]: (4, ('Sat Oct 11 04:14:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d (7106ccecf11d77670a573e0dad35162402fd95df5f3baa323904cfe1e8329df7)\n7106ccecf11d77670a573e0dad35162402fd95df5f3baa323904cfe1e8329df7\nSat Oct 11 04:14:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d (7106ccecf11d77670a573e0dad35162402fd95df5f3baa323904cfe1e8329df7)\n7106ccecf11d77670a573e0dad35162402fd95df5f3baa323904cfe1e8329df7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:28.121 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[4f386f1b-5c28-4b61-9b6f-cf1ca8f12a1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:28.122 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6cd64a2-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:14:28 compute-0 nova_compute[259850]: 2025-10-11 04:14:28.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:28 compute-0 kernel: tapb6cd64a2-a0: left promiscuous mode
Oct 11 04:14:28 compute-0 nova_compute[259850]: 2025-10-11 04:14:28.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:28.185 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[c17e2484-b4a7-48ea-8241-9ed71f966dab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:28.217 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[8861072f-b0a2-420c-99ef-ab5f7e77a37c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:28.219 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[2ec67699-1e9a-4f2d-91a9-b7e654c1ece7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:28 compute-0 nova_compute[259850]: 2025-10-11 04:14:28.222 2 INFO nova.virt.libvirt.driver [None req-0f039064-f166-49e1-953a-d676373a9045 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Deleting instance files /var/lib/nova/instances/1b88f60c-1027-4734-a7c1-3dd966e8db2c_del
Oct 11 04:14:28 compute-0 nova_compute[259850]: 2025-10-11 04:14:28.223 2 INFO nova.virt.libvirt.driver [None req-0f039064-f166-49e1-953a-d676373a9045 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Deletion of /var/lib/nova/instances/1b88f60c-1027-4734-a7c1-3dd966e8db2c_del complete
Oct 11 04:14:28 compute-0 ceph-mon[74273]: pgmap v1515: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 14 KiB/s wr, 66 op/s
Oct 11 04:14:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:28.233 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[88281c0b-72d3-429e-b1f9-51f807a1f7a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444606, 'reachable_time': 30451, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290975, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:28 compute-0 systemd[1]: run-netns-ovnmeta\x2db6cd64a2\x2daf0b\x2d4f57\x2db84c\x2dcbc9cde5251d.mount: Deactivated successfully.
Oct 11 04:14:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:28.236 162015 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 04:14:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:28.237 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[8529eca3-763f-4382-baca-9c8c450b4285]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:28.237 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 14f93f3e-13f2-4745-8cb2-687295f0fc23 in datapath b6cd64a2-af0b-4f57-b84c-cbc9cde5251d unbound from our chassis
Oct 11 04:14:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:28.238 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:14:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:28.239 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[138f35af-d71f-4245-a022-767c16b748ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:28.239 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 14f93f3e-13f2-4745-8cb2-687295f0fc23 in datapath b6cd64a2-af0b-4f57-b84c-cbc9cde5251d unbound from our chassis
Oct 11 04:14:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:28.240 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:14:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:28.240 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[5944dd47-1f22-46e7-894a-69ec9d913d0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:28 compute-0 nova_compute[259850]: 2025-10-11 04:14:28.278 2 INFO nova.compute.manager [None req-0f039064-f166-49e1-953a-d676373a9045 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Took 0.53 seconds to destroy the instance on the hypervisor.
Oct 11 04:14:28 compute-0 nova_compute[259850]: 2025-10-11 04:14:28.278 2 DEBUG oslo.service.loopingcall [None req-0f039064-f166-49e1-953a-d676373a9045 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:14:28 compute-0 nova_compute[259850]: 2025-10-11 04:14:28.279 2 DEBUG nova.compute.manager [-] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:14:28 compute-0 nova_compute[259850]: 2025-10-11 04:14:28.279 2 DEBUG nova.network.neutron [-] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:14:28 compute-0 nova_compute[259850]: 2025-10-11 04:14:28.306 2 DEBUG nova.compute.manager [req-f934fa0b-ee6d-45b9-90a6-20a54806ff3b req-8c382780-b937-4385-abc1-1d25de8ffc13 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Received event network-vif-unplugged-14f93f3e-13f2-4745-8cb2-687295f0fc23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:14:28 compute-0 nova_compute[259850]: 2025-10-11 04:14:28.307 2 DEBUG oslo_concurrency.lockutils [req-f934fa0b-ee6d-45b9-90a6-20a54806ff3b req-8c382780-b937-4385-abc1-1d25de8ffc13 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "1b88f60c-1027-4734-a7c1-3dd966e8db2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:14:28 compute-0 nova_compute[259850]: 2025-10-11 04:14:28.307 2 DEBUG oslo_concurrency.lockutils [req-f934fa0b-ee6d-45b9-90a6-20a54806ff3b req-8c382780-b937-4385-abc1-1d25de8ffc13 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "1b88f60c-1027-4734-a7c1-3dd966e8db2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:14:28 compute-0 nova_compute[259850]: 2025-10-11 04:14:28.308 2 DEBUG oslo_concurrency.lockutils [req-f934fa0b-ee6d-45b9-90a6-20a54806ff3b req-8c382780-b937-4385-abc1-1d25de8ffc13 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "1b88f60c-1027-4734-a7c1-3dd966e8db2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:14:28 compute-0 nova_compute[259850]: 2025-10-11 04:14:28.308 2 DEBUG nova.compute.manager [req-f934fa0b-ee6d-45b9-90a6-20a54806ff3b req-8c382780-b937-4385-abc1-1d25de8ffc13 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] No waiting events found dispatching network-vif-unplugged-14f93f3e-13f2-4745-8cb2-687295f0fc23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:14:28 compute-0 nova_compute[259850]: 2025-10-11 04:14:28.309 2 DEBUG nova.compute.manager [req-f934fa0b-ee6d-45b9-90a6-20a54806ff3b req-8c382780-b937-4385-abc1-1d25de8ffc13 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Received event network-vif-unplugged-14f93f3e-13f2-4745-8cb2-687295f0fc23 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 04:14:28 compute-0 nova_compute[259850]: 2025-10-11 04:14:28.784 2 DEBUG nova.network.neutron [-] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:14:28 compute-0 nova_compute[259850]: 2025-10-11 04:14:28.801 2 INFO nova.compute.manager [-] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Took 0.52 seconds to deallocate network for instance.
Oct 11 04:14:28 compute-0 nova_compute[259850]: 2025-10-11 04:14:28.899 2 DEBUG nova.compute.manager [req-fc3c5f0c-9ecc-4283-9d4f-82946013afa6 req-669d0b82-a135-4f64-971e-a02720ae0eab f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Received event network-vif-deleted-14f93f3e-13f2-4745-8cb2-687295f0fc23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:14:29 compute-0 nova_compute[259850]: 2025-10-11 04:14:29.047 2 INFO nova.compute.manager [None req-0f039064-f166-49e1-953a-d676373a9045 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Took 0.25 seconds to detach 1 volumes for instance.
Oct 11 04:14:29 compute-0 nova_compute[259850]: 2025-10-11 04:14:29.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:14:29 compute-0 nova_compute[259850]: 2025-10-11 04:14:29.111 2 DEBUG oslo_concurrency.lockutils [None req-0f039064-f166-49e1-953a-d676373a9045 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:14:29 compute-0 nova_compute[259850]: 2025-10-11 04:14:29.111 2 DEBUG oslo_concurrency.lockutils [None req-0f039064-f166-49e1-953a-d676373a9045 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:14:29 compute-0 nova_compute[259850]: 2025-10-11 04:14:29.156 2 DEBUG oslo_concurrency.processutils [None req-0f039064-f166-49e1-953a-d676373a9045 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:14:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:14:29 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/217752227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:14:29 compute-0 nova_compute[259850]: 2025-10-11 04:14:29.642 2 DEBUG oslo_concurrency.processutils [None req-0f039064-f166-49e1-953a-d676373a9045 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:14:29 compute-0 nova_compute[259850]: 2025-10-11 04:14:29.653 2 DEBUG nova.compute.provider_tree [None req-0f039064-f166-49e1-953a-d676373a9045 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:14:29 compute-0 nova_compute[259850]: 2025-10-11 04:14:29.670 2 DEBUG nova.scheduler.client.report [None req-0f039064-f166-49e1-953a-d676373a9045 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:14:29 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/217752227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:14:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 14 KiB/s wr, 66 op/s
Oct 11 04:14:29 compute-0 nova_compute[259850]: 2025-10-11 04:14:29.695 2 DEBUG oslo_concurrency.lockutils [None req-0f039064-f166-49e1-953a-d676373a9045 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:14:29 compute-0 nova_compute[259850]: 2025-10-11 04:14:29.721 2 INFO nova.scheduler.client.report [None req-0f039064-f166-49e1-953a-d676373a9045 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Deleted allocations for instance 1b88f60c-1027-4734-a7c1-3dd966e8db2c
Oct 11 04:14:29 compute-0 nova_compute[259850]: 2025-10-11 04:14:29.773 2 DEBUG oslo_concurrency.lockutils [None req-0f039064-f166-49e1-953a-d676373a9045 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "1b88f60c-1027-4734-a7c1-3dd966e8db2c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:14:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:14:30 compute-0 nova_compute[259850]: 2025-10-11 04:14:30.451 2 DEBUG nova.compute.manager [req-df22f488-8218-4a3c-86be-1a77725c5b53 req-b14fb1b0-ac50-4611-97e1-52d43af42beb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Received event network-vif-plugged-14f93f3e-13f2-4745-8cb2-687295f0fc23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:14:30 compute-0 nova_compute[259850]: 2025-10-11 04:14:30.451 2 DEBUG oslo_concurrency.lockutils [req-df22f488-8218-4a3c-86be-1a77725c5b53 req-b14fb1b0-ac50-4611-97e1-52d43af42beb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "1b88f60c-1027-4734-a7c1-3dd966e8db2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:14:30 compute-0 nova_compute[259850]: 2025-10-11 04:14:30.451 2 DEBUG oslo_concurrency.lockutils [req-df22f488-8218-4a3c-86be-1a77725c5b53 req-b14fb1b0-ac50-4611-97e1-52d43af42beb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "1b88f60c-1027-4734-a7c1-3dd966e8db2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:14:30 compute-0 nova_compute[259850]: 2025-10-11 04:14:30.452 2 DEBUG oslo_concurrency.lockutils [req-df22f488-8218-4a3c-86be-1a77725c5b53 req-b14fb1b0-ac50-4611-97e1-52d43af42beb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "1b88f60c-1027-4734-a7c1-3dd966e8db2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:14:30 compute-0 nova_compute[259850]: 2025-10-11 04:14:30.452 2 DEBUG nova.compute.manager [req-df22f488-8218-4a3c-86be-1a77725c5b53 req-b14fb1b0-ac50-4611-97e1-52d43af42beb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] No waiting events found dispatching network-vif-plugged-14f93f3e-13f2-4745-8cb2-687295f0fc23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:14:30 compute-0 nova_compute[259850]: 2025-10-11 04:14:30.452 2 WARNING nova.compute.manager [req-df22f488-8218-4a3c-86be-1a77725c5b53 req-b14fb1b0-ac50-4611-97e1-52d43af42beb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Received unexpected event network-vif-plugged-14f93f3e-13f2-4745-8cb2-687295f0fc23 for instance with vm_state deleted and task_state None.
Oct 11 04:14:30 compute-0 nova_compute[259850]: 2025-10-11 04:14:30.452 2 DEBUG nova.compute.manager [req-df22f488-8218-4a3c-86be-1a77725c5b53 req-b14fb1b0-ac50-4611-97e1-52d43af42beb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Received event network-vif-plugged-14f93f3e-13f2-4745-8cb2-687295f0fc23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:14:30 compute-0 nova_compute[259850]: 2025-10-11 04:14:30.452 2 DEBUG oslo_concurrency.lockutils [req-df22f488-8218-4a3c-86be-1a77725c5b53 req-b14fb1b0-ac50-4611-97e1-52d43af42beb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "1b88f60c-1027-4734-a7c1-3dd966e8db2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:14:30 compute-0 nova_compute[259850]: 2025-10-11 04:14:30.452 2 DEBUG oslo_concurrency.lockutils [req-df22f488-8218-4a3c-86be-1a77725c5b53 req-b14fb1b0-ac50-4611-97e1-52d43af42beb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "1b88f60c-1027-4734-a7c1-3dd966e8db2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:14:30 compute-0 nova_compute[259850]: 2025-10-11 04:14:30.453 2 DEBUG oslo_concurrency.lockutils [req-df22f488-8218-4a3c-86be-1a77725c5b53 req-b14fb1b0-ac50-4611-97e1-52d43af42beb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "1b88f60c-1027-4734-a7c1-3dd966e8db2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:14:30 compute-0 nova_compute[259850]: 2025-10-11 04:14:30.453 2 DEBUG nova.compute.manager [req-df22f488-8218-4a3c-86be-1a77725c5b53 req-b14fb1b0-ac50-4611-97e1-52d43af42beb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] No waiting events found dispatching network-vif-plugged-14f93f3e-13f2-4745-8cb2-687295f0fc23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:14:30 compute-0 nova_compute[259850]: 2025-10-11 04:14:30.453 2 WARNING nova.compute.manager [req-df22f488-8218-4a3c-86be-1a77725c5b53 req-b14fb1b0-ac50-4611-97e1-52d43af42beb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Received unexpected event network-vif-plugged-14f93f3e-13f2-4745-8cb2-687295f0fc23 for instance with vm_state deleted and task_state None.
Oct 11 04:14:30 compute-0 ceph-mon[74273]: pgmap v1516: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 14 KiB/s wr, 66 op/s
Oct 11 04:14:31 compute-0 nova_compute[259850]: 2025-10-11 04:14:31.058 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:14:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:14:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:14:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:14:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:14:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.480037605000977e-06 of space, bias 1.0, pg target 0.0007440112815002931 quantized to 32 (current 32)
Oct 11 04:14:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:14:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.000351020707169369 of space, bias 1.0, pg target 0.10530621215081071 quantized to 32 (current 32)
Oct 11 04:14:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:14:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:14:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:14:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct 11 04:14:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:14:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:14:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:14:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:14:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:14:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:14:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:14:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:14:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:14:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:14:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:14:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:14:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:14:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/165936324' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:14:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:14:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/165936324' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:14:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 13 KiB/s wr, 38 op/s
Oct 11 04:14:31 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/165936324' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:14:31 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/165936324' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:14:32 compute-0 ceph-mon[74273]: pgmap v1517: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 13 KiB/s wr, 38 op/s
Oct 11 04:14:32 compute-0 nova_compute[259850]: 2025-10-11 04:14:32.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:33 compute-0 nova_compute[259850]: 2025-10-11 04:14:33.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 15 KiB/s wr, 66 op/s
Oct 11 04:14:34 compute-0 ceph-mon[74273]: pgmap v1518: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 15 KiB/s wr, 66 op/s
Oct 11 04:14:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:14:35 compute-0 nova_compute[259850]: 2025-10-11 04:14:35.055 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:14:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 04:14:36 compute-0 ceph-mon[74273]: pgmap v1519: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 04:14:37 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:37.678 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:61:6f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '92:f1:b6:e4:f1:16'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:14:37 compute-0 nova_compute[259850]: 2025-10-11 04:14:37.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:37 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:37.679 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:14:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 04:14:37 compute-0 nova_compute[259850]: 2025-10-11 04:14:37.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:38 compute-0 nova_compute[259850]: 2025-10-11 04:14:38.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:38 compute-0 ceph-mon[74273]: pgmap v1520: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 04:14:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.6 KiB/s wr, 35 op/s
Oct 11 04:14:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:14:40 compute-0 ceph-mon[74273]: pgmap v1521: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.6 KiB/s wr, 35 op/s
Oct 11 04:14:41 compute-0 podman[291000]: 2025-10-11 04:14:41.368434817 +0000 UTC m=+0.075491856 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 04:14:41 compute-0 podman[290999]: 2025-10-11 04:14:41.389613099 +0000 UTC m=+0.093516279 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:14:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.6 KiB/s wr, 35 op/s
Oct 11 04:14:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e381 do_prune osdmap full prune enabled
Oct 11 04:14:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e382 e382: 3 total, 3 up, 3 in
Oct 11 04:14:41 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e382: 3 total, 3 up, 3 in
Oct 11 04:14:42 compute-0 ceph-mon[74273]: pgmap v1522: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.6 KiB/s wr, 35 op/s
Oct 11 04:14:42 compute-0 ceph-mon[74273]: osdmap e382: 3 total, 3 up, 3 in
Oct 11 04:14:42 compute-0 nova_compute[259850]: 2025-10-11 04:14:42.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:42 compute-0 nova_compute[259850]: 2025-10-11 04:14:42.992 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760156067.9906762, 1b88f60c-1027-4734-a7c1-3dd966e8db2c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:14:42 compute-0 nova_compute[259850]: 2025-10-11 04:14:42.992 2 INFO nova.compute.manager [-] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] VM Stopped (Lifecycle Event)
Oct 11 04:14:43 compute-0 nova_compute[259850]: 2025-10-11 04:14:43.015 2 DEBUG nova.compute.manager [None req-1322bd74-180b-46e4-997d-f9d4506a81a0 - - - - - -] [instance: 1b88f60c-1027-4734-a7c1-3dd966e8db2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:14:43 compute-0 nova_compute[259850]: 2025-10-11 04:14:43.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:43 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:43.681 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8a473e03-2208-47ae-afcd-05ad744a5969, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:14:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 57 op/s
Oct 11 04:14:44 compute-0 ceph-mon[74273]: pgmap v1524: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 57 op/s
Oct 11 04:14:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:14:45 compute-0 nova_compute[259850]: 2025-10-11 04:14:45.091 2 DEBUG oslo_concurrency.lockutils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "86c48ed0-a954-49c0-bdcc-c312bcf59248" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:14:45 compute-0 nova_compute[259850]: 2025-10-11 04:14:45.092 2 DEBUG oslo_concurrency.lockutils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "86c48ed0-a954-49c0-bdcc-c312bcf59248" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:14:45 compute-0 nova_compute[259850]: 2025-10-11 04:14:45.120 2 DEBUG nova.compute.manager [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:14:45 compute-0 nova_compute[259850]: 2025-10-11 04:14:45.227 2 DEBUG oslo_concurrency.lockutils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:14:45 compute-0 nova_compute[259850]: 2025-10-11 04:14:45.227 2 DEBUG oslo_concurrency.lockutils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:14:45 compute-0 nova_compute[259850]: 2025-10-11 04:14:45.235 2 DEBUG nova.virt.hardware [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:14:45 compute-0 nova_compute[259850]: 2025-10-11 04:14:45.235 2 INFO nova.compute.claims [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:14:45 compute-0 nova_compute[259850]: 2025-10-11 04:14:45.330 2 DEBUG oslo_concurrency.processutils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:14:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 57 op/s
Oct 11 04:14:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:14:45 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2030910487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:14:45 compute-0 nova_compute[259850]: 2025-10-11 04:14:45.764 2 DEBUG oslo_concurrency.processutils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:14:45 compute-0 nova_compute[259850]: 2025-10-11 04:14:45.774 2 DEBUG nova.compute.provider_tree [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:14:45 compute-0 nova_compute[259850]: 2025-10-11 04:14:45.803 2 DEBUG nova.scheduler.client.report [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:14:45 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2030910487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:14:45 compute-0 nova_compute[259850]: 2025-10-11 04:14:45.894 2 DEBUG oslo_concurrency.lockutils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:14:45 compute-0 nova_compute[259850]: 2025-10-11 04:14:45.895 2 DEBUG nova.compute.manager [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:14:45 compute-0 nova_compute[259850]: 2025-10-11 04:14:45.965 2 DEBUG nova.compute.manager [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:14:45 compute-0 nova_compute[259850]: 2025-10-11 04:14:45.966 2 DEBUG nova.network.neutron [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:14:46 compute-0 nova_compute[259850]: 2025-10-11 04:14:46.001 2 INFO nova.virt.libvirt.driver [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:14:46 compute-0 nova_compute[259850]: 2025-10-11 04:14:46.024 2 DEBUG nova.compute.manager [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:14:46 compute-0 nova_compute[259850]: 2025-10-11 04:14:46.112 2 INFO nova.virt.block_device [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Booting with volume snapshot 8582e8a2-d2ee-4699-8ad6-39998450f2b4 at /dev/vda
Oct 11 04:14:46 compute-0 nova_compute[259850]: 2025-10-11 04:14:46.353 2 DEBUG nova.policy [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2a330a845d62440c871f80eda2546881', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '09ba33ef4bd447699d74946c58839b2d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:14:46 compute-0 ceph-mon[74273]: pgmap v1525: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 57 op/s
Oct 11 04:14:47 compute-0 nova_compute[259850]: 2025-10-11 04:14:47.637 2 DEBUG nova.network.neutron [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Successfully created port: 38533cc0-07ff-4685-a074-b75317ab358c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:14:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 57 op/s
Oct 11 04:14:47 compute-0 nova_compute[259850]: 2025-10-11 04:14:47.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:48 compute-0 nova_compute[259850]: 2025-10-11 04:14:48.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:48 compute-0 ceph-mon[74273]: pgmap v1526: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 57 op/s
Oct 11 04:14:49 compute-0 nova_compute[259850]: 2025-10-11 04:14:49.369 2 DEBUG nova.network.neutron [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Successfully updated port: 38533cc0-07ff-4685-a074-b75317ab358c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:14:49 compute-0 nova_compute[259850]: 2025-10-11 04:14:49.386 2 DEBUG oslo_concurrency.lockutils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "refresh_cache-86c48ed0-a954-49c0-bdcc-c312bcf59248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:14:49 compute-0 nova_compute[259850]: 2025-10-11 04:14:49.387 2 DEBUG oslo_concurrency.lockutils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquired lock "refresh_cache-86c48ed0-a954-49c0-bdcc-c312bcf59248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:14:49 compute-0 nova_compute[259850]: 2025-10-11 04:14:49.387 2 DEBUG nova.network.neutron [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:14:49 compute-0 nova_compute[259850]: 2025-10-11 04:14:49.489 2 DEBUG nova.compute.manager [req-bb654be5-3c22-474d-90fd-f1eaeb5588ed req-079ae271-b3a0-4ad1-9d6d-d1c2774f521c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Received event network-changed-38533cc0-07ff-4685-a074-b75317ab358c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:14:49 compute-0 nova_compute[259850]: 2025-10-11 04:14:49.490 2 DEBUG nova.compute.manager [req-bb654be5-3c22-474d-90fd-f1eaeb5588ed req-079ae271-b3a0-4ad1-9d6d-d1c2774f521c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Refreshing instance network info cache due to event network-changed-38533cc0-07ff-4685-a074-b75317ab358c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:14:49 compute-0 nova_compute[259850]: 2025-10-11 04:14:49.490 2 DEBUG oslo_concurrency.lockutils [req-bb654be5-3c22-474d-90fd-f1eaeb5588ed req-079ae271-b3a0-4ad1-9d6d-d1c2774f521c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-86c48ed0-a954-49c0-bdcc-c312bcf59248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:14:49 compute-0 nova_compute[259850]: 2025-10-11 04:14:49.625 2 DEBUG nova.network.neutron [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:14:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1527: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 11 04:14:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:14:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:14:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/360554547' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:14:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:14:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/360554547' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:14:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:14:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:14:50 compute-0 nova_compute[259850]: 2025-10-11 04:14:50.778 2 DEBUG os_brick.utils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:14:50 compute-0 nova_compute[259850]: 2025-10-11 04:14:50.779 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:14:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:14:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:14:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:14:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:14:50 compute-0 nova_compute[259850]: 2025-10-11 04:14:50.788 675 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:14:50 compute-0 nova_compute[259850]: 2025-10-11 04:14:50.789 675 DEBUG oslo.privsep.daemon [-] privsep: reply[e7a8157d-0a81-465b-b251-009fd120be00]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:50 compute-0 nova_compute[259850]: 2025-10-11 04:14:50.790 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:14:50 compute-0 nova_compute[259850]: 2025-10-11 04:14:50.799 675 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:14:50 compute-0 nova_compute[259850]: 2025-10-11 04:14:50.799 675 DEBUG oslo.privsep.daemon [-] privsep: reply[144897e3-5a27-43a3-8d0f-3baa425f1cfe]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e727c2bd432c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:50 compute-0 nova_compute[259850]: 2025-10-11 04:14:50.800 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:14:50 compute-0 nova_compute[259850]: 2025-10-11 04:14:50.807 675 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:14:50 compute-0 nova_compute[259850]: 2025-10-11 04:14:50.808 675 DEBUG oslo.privsep.daemon [-] privsep: reply[17742583-26e9-4bde-a93d-058c82a68b9e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:50 compute-0 nova_compute[259850]: 2025-10-11 04:14:50.809 675 DEBUG oslo.privsep.daemon [-] privsep: reply[0bf247ee-3c1c-4ad7-abef-5b7ff3a65ed6]: (4, 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:50 compute-0 nova_compute[259850]: 2025-10-11 04:14:50.809 2 DEBUG oslo_concurrency.processutils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:14:50 compute-0 ceph-mon[74273]: pgmap v1527: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 11 04:14:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/360554547' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:14:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/360554547' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:14:50 compute-0 nova_compute[259850]: 2025-10-11 04:14:50.844 2 DEBUG oslo_concurrency.processutils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:14:50 compute-0 nova_compute[259850]: 2025-10-11 04:14:50.846 2 DEBUG os_brick.initiator.connectors.lightos [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:14:50 compute-0 nova_compute[259850]: 2025-10-11 04:14:50.846 2 DEBUG os_brick.initiator.connectors.lightos [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:14:50 compute-0 nova_compute[259850]: 2025-10-11 04:14:50.846 2 DEBUG os_brick.initiator.connectors.lightos [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:14:50 compute-0 nova_compute[259850]: 2025-10-11 04:14:50.847 2 DEBUG os_brick.utils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] <== get_connector_properties: return (68ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e727c2bd432c', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:14:50 compute-0 nova_compute[259850]: 2025-10-11 04:14:50.847 2 DEBUG nova.virt.block_device [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Updating existing volume attachment record: ced1c68f-09c3-4ad3-bcd9-fe3bc005e5e8 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:14:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:14:51 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3356625302' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.611 2 DEBUG nova.network.neutron [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Updating instance_info_cache with network_info: [{"id": "38533cc0-07ff-4685-a074-b75317ab358c", "address": "fa:16:3e:3d:83:26", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38533cc0-07", "ovs_interfaceid": "38533cc0-07ff-4685-a074-b75317ab358c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.629 2 DEBUG oslo_concurrency.lockutils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Releasing lock "refresh_cache-86c48ed0-a954-49c0-bdcc-c312bcf59248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.630 2 DEBUG nova.compute.manager [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Instance network_info: |[{"id": "38533cc0-07ff-4685-a074-b75317ab358c", "address": "fa:16:3e:3d:83:26", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38533cc0-07", "ovs_interfaceid": "38533cc0-07ff-4685-a074-b75317ab358c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.630 2 DEBUG oslo_concurrency.lockutils [req-bb654be5-3c22-474d-90fd-f1eaeb5588ed req-079ae271-b3a0-4ad1-9d6d-d1c2774f521c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-86c48ed0-a954-49c0-bdcc-c312bcf59248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.630 2 DEBUG nova.network.neutron [req-bb654be5-3c22-474d-90fd-f1eaeb5588ed req-079ae271-b3a0-4ad1-9d6d-d1c2774f521c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Refreshing network info cache for port 38533cc0-07ff-4685-a074-b75317ab358c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:14:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 11 04:14:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3356625302' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.921 2 DEBUG nova.compute.manager [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.926 2 DEBUG nova.virt.libvirt.driver [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.927 2 INFO nova.virt.libvirt.driver [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Creating image(s)
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.928 2 DEBUG nova.virt.libvirt.driver [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.928 2 DEBUG nova.virt.libvirt.driver [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Ensure instance console log exists: /var/lib/nova/instances/86c48ed0-a954-49c0-bdcc-c312bcf59248/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.928 2 DEBUG oslo_concurrency.lockutils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.929 2 DEBUG oslo_concurrency.lockutils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.929 2 DEBUG oslo_concurrency.lockutils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.931 2 DEBUG nova.virt.libvirt.driver [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Start _get_guest_xml network_info=[{"id": "38533cc0-07ff-4685-a074-b75317ab358c", "address": "fa:16:3e:3d:83:26", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38533cc0-07", "ovs_interfaceid": "38533cc0-07ff-4685-a074-b75317ab358c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': True, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-69400be7-7fce-4f2b-b9c7-4588caea0a97', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '69400be7-7fce-4f2b-b9c7-4588caea0a97', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '86c48ed0-a954-49c0-bdcc-c312bcf59248', 'attached_at': '', 'detached_at': '', 'volume_id': '69400be7-7fce-4f2b-b9c7-4588caea0a97', 'serial': '69400be7-7fce-4f2b-b9c7-4588caea0a97'}, 'boot_index': 0, 'guest_format': None, 'attachment_id': 'ced1c68f-09c3-4ad3-bcd9-fe3bc005e5e8', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.936 2 WARNING nova.virt.libvirt.driver [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.942 2 DEBUG nova.virt.libvirt.host [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.942 2 DEBUG nova.virt.libvirt.host [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.945 2 DEBUG nova.virt.libvirt.host [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.946 2 DEBUG nova.virt.libvirt.host [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.946 2 DEBUG nova.virt.libvirt.driver [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.947 2 DEBUG nova.virt.hardware [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.947 2 DEBUG nova.virt.hardware [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.947 2 DEBUG nova.virt.hardware [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.947 2 DEBUG nova.virt.hardware [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.948 2 DEBUG nova.virt.hardware [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.948 2 DEBUG nova.virt.hardware [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.948 2 DEBUG nova.virt.hardware [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.948 2 DEBUG nova.virt.hardware [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.948 2 DEBUG nova.virt.hardware [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.949 2 DEBUG nova.virt.hardware [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.949 2 DEBUG nova.virt.hardware [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.971 2 DEBUG nova.storage.rbd_utils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] rbd image 86c48ed0-a954-49c0-bdcc-c312bcf59248_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:14:51 compute-0 nova_compute[259850]: 2025-10-11 04:14:51.976 2 DEBUG oslo_concurrency.processutils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:14:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:14:52 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1155418233' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.427 2 DEBUG oslo_concurrency.processutils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.454 2 DEBUG nova.virt.libvirt.vif [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:14:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1794916287',display_name='tempest-TestVolumeBootPattern-server-1794916287',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1794916287',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09ba33ef4bd447699d74946c58839b2d',ramdisk_id='',reservation_id='r-8n7qwdm4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-771726270',owner_user_name='tempest-TestVolumeBootPattern-771726270-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:14:46Z,user_data=None,user_id='2a330a845d62440c871f80eda2546881',uuid=86c48ed0-a954-49c0-bdcc-c312bcf59248,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "38533cc0-07ff-4685-a074-b75317ab358c", "address": "fa:16:3e:3d:83:26", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38533cc0-07", "ovs_interfaceid": "38533cc0-07ff-4685-a074-b75317ab358c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.455 2 DEBUG nova.network.os_vif_util [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converting VIF {"id": "38533cc0-07ff-4685-a074-b75317ab358c", "address": "fa:16:3e:3d:83:26", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38533cc0-07", "ovs_interfaceid": "38533cc0-07ff-4685-a074-b75317ab358c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.456 2 DEBUG nova.network.os_vif_util [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:83:26,bridge_name='br-int',has_traffic_filtering=True,id=38533cc0-07ff-4685-a074-b75317ab358c,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38533cc0-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.457 2 DEBUG nova.objects.instance [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lazy-loading 'pci_devices' on Instance uuid 86c48ed0-a954-49c0-bdcc-c312bcf59248 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.472 2 DEBUG nova.virt.libvirt.driver [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:14:52 compute-0 nova_compute[259850]:   <uuid>86c48ed0-a954-49c0-bdcc-c312bcf59248</uuid>
Oct 11 04:14:52 compute-0 nova_compute[259850]:   <name>instance-00000011</name>
Oct 11 04:14:52 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:14:52 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:14:52 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <nova:name>tempest-TestVolumeBootPattern-server-1794916287</nova:name>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:14:51</nova:creationTime>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:14:52 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:14:52 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:14:52 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:14:52 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:14:52 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:14:52 compute-0 nova_compute[259850]:         <nova:user uuid="2a330a845d62440c871f80eda2546881">tempest-TestVolumeBootPattern-771726270-project-member</nova:user>
Oct 11 04:14:52 compute-0 nova_compute[259850]:         <nova:project uuid="09ba33ef4bd447699d74946c58839b2d">tempest-TestVolumeBootPattern-771726270</nova:project>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <nova:ports>
Oct 11 04:14:52 compute-0 nova_compute[259850]:         <nova:port uuid="38533cc0-07ff-4685-a074-b75317ab358c">
Oct 11 04:14:52 compute-0 nova_compute[259850]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:         </nova:port>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       </nova:ports>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:14:52 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:14:52 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <system>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <entry name="serial">86c48ed0-a954-49c0-bdcc-c312bcf59248</entry>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <entry name="uuid">86c48ed0-a954-49c0-bdcc-c312bcf59248</entry>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     </system>
Oct 11 04:14:52 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:14:52 compute-0 nova_compute[259850]:   <os>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:   </os>
Oct 11 04:14:52 compute-0 nova_compute[259850]:   <features>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:   </features>
Oct 11 04:14:52 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:14:52 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:14:52 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/86c48ed0-a954-49c0-bdcc-c312bcf59248_disk.config">
Oct 11 04:14:52 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       </source>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:14:52 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <source protocol="rbd" name="volumes/volume-69400be7-7fce-4f2b-b9c7-4588caea0a97">
Oct 11 04:14:52 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       </source>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:14:52 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <serial>69400be7-7fce-4f2b-b9c7-4588caea0a97</serial>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <interface type="ethernet">
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <mac address="fa:16:3e:3d:83:26"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <mtu size="1442"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <target dev="tap38533cc0-07"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     </interface>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/86c48ed0-a954-49c0-bdcc-c312bcf59248/console.log" append="off"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <video>
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     </video>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:14:52 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:14:52 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:14:52 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:14:52 compute-0 nova_compute[259850]: </domain>
Oct 11 04:14:52 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.474 2 DEBUG nova.compute.manager [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Preparing to wait for external event network-vif-plugged-38533cc0-07ff-4685-a074-b75317ab358c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.474 2 DEBUG oslo_concurrency.lockutils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "86c48ed0-a954-49c0-bdcc-c312bcf59248-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.475 2 DEBUG oslo_concurrency.lockutils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "86c48ed0-a954-49c0-bdcc-c312bcf59248-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.475 2 DEBUG oslo_concurrency.lockutils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "86c48ed0-a954-49c0-bdcc-c312bcf59248-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.476 2 DEBUG nova.virt.libvirt.vif [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:14:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1794916287',display_name='tempest-TestVolumeBootPattern-server-1794916287',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1794916287',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09ba33ef4bd447699d74946c58839b2d',ramdisk_id='',reservation_id='r-8n7qwdm4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-771726270',owner_user_name='tempest-TestVolumeBootPattern-771726270-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:14:46Z,user_data=None,user_id='2a330a845d62440c871f80eda2546881',uuid=86c48ed0-a954-49c0-bdcc-c312bcf59248,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "38533cc0-07ff-4685-a074-b75317ab358c", "address": "fa:16:3e:3d:83:26", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38533cc0-07", "ovs_interfaceid": "38533cc0-07ff-4685-a074-b75317ab358c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.476 2 DEBUG nova.network.os_vif_util [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converting VIF {"id": "38533cc0-07ff-4685-a074-b75317ab358c", "address": "fa:16:3e:3d:83:26", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38533cc0-07", "ovs_interfaceid": "38533cc0-07ff-4685-a074-b75317ab358c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.477 2 DEBUG nova.network.os_vif_util [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:83:26,bridge_name='br-int',has_traffic_filtering=True,id=38533cc0-07ff-4685-a074-b75317ab358c,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38533cc0-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.477 2 DEBUG os_vif [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:83:26,bridge_name='br-int',has_traffic_filtering=True,id=38533cc0-07ff-4685-a074-b75317ab358c,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38533cc0-07') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.478 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.479 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.486 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap38533cc0-07, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.487 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap38533cc0-07, col_values=(('external_ids', {'iface-id': '38533cc0-07ff-4685-a074-b75317ab358c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:83:26', 'vm-uuid': '86c48ed0-a954-49c0-bdcc-c312bcf59248'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:52 compute-0 NetworkManager[44920]: <info>  [1760156092.4920] manager: (tap38533cc0-07): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.499 2 INFO os_vif [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:83:26,bridge_name='br-int',has_traffic_filtering=True,id=38533cc0-07ff-4685-a074-b75317ab358c,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38533cc0-07')
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.567 2 DEBUG nova.virt.libvirt.driver [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.569 2 DEBUG nova.virt.libvirt.driver [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.569 2 DEBUG nova.virt.libvirt.driver [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] No VIF found with MAC fa:16:3e:3d:83:26, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.570 2 INFO nova.virt.libvirt.driver [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Using config drive
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.601 2 DEBUG nova.storage.rbd_utils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] rbd image 86c48ed0-a954-49c0-bdcc-c312bcf59248_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:14:52 compute-0 ceph-mon[74273]: pgmap v1528: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 11 04:14:52 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1155418233' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.883 2 DEBUG nova.network.neutron [req-bb654be5-3c22-474d-90fd-f1eaeb5588ed req-079ae271-b3a0-4ad1-9d6d-d1c2774f521c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Updated VIF entry in instance network info cache for port 38533cc0-07ff-4685-a074-b75317ab358c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.884 2 DEBUG nova.network.neutron [req-bb654be5-3c22-474d-90fd-f1eaeb5588ed req-079ae271-b3a0-4ad1-9d6d-d1c2774f521c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Updating instance_info_cache with network_info: [{"id": "38533cc0-07ff-4685-a074-b75317ab358c", "address": "fa:16:3e:3d:83:26", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38533cc0-07", "ovs_interfaceid": "38533cc0-07ff-4685-a074-b75317ab358c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.907 2 DEBUG oslo_concurrency.lockutils [req-bb654be5-3c22-474d-90fd-f1eaeb5588ed req-079ae271-b3a0-4ad1-9d6d-d1c2774f521c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-86c48ed0-a954-49c0-bdcc-c312bcf59248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.985 2 INFO nova.virt.libvirt.driver [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Creating config drive at /var/lib/nova/instances/86c48ed0-a954-49c0-bdcc-c312bcf59248/disk.config
Oct 11 04:14:52 compute-0 nova_compute[259850]: 2025-10-11 04:14:52.993 2 DEBUG oslo_concurrency.processutils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/86c48ed0-a954-49c0-bdcc-c312bcf59248/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9giowc56 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:14:53 compute-0 nova_compute[259850]: 2025-10-11 04:14:53.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:53 compute-0 nova_compute[259850]: 2025-10-11 04:14:53.142 2 DEBUG oslo_concurrency.processutils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/86c48ed0-a954-49c0-bdcc-c312bcf59248/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9giowc56" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:14:53 compute-0 nova_compute[259850]: 2025-10-11 04:14:53.178 2 DEBUG nova.storage.rbd_utils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] rbd image 86c48ed0-a954-49c0-bdcc-c312bcf59248_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:14:53 compute-0 nova_compute[259850]: 2025-10-11 04:14:53.182 2 DEBUG oslo_concurrency.processutils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/86c48ed0-a954-49c0-bdcc-c312bcf59248/disk.config 86c48ed0-a954-49c0-bdcc-c312bcf59248_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:14:53 compute-0 nova_compute[259850]: 2025-10-11 04:14:53.363 2 DEBUG oslo_concurrency.processutils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/86c48ed0-a954-49c0-bdcc-c312bcf59248/disk.config 86c48ed0-a954-49c0-bdcc-c312bcf59248_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:14:53 compute-0 nova_compute[259850]: 2025-10-11 04:14:53.365 2 INFO nova.virt.libvirt.driver [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Deleting local config drive /var/lib/nova/instances/86c48ed0-a954-49c0-bdcc-c312bcf59248/disk.config because it was imported into RBD.
Oct 11 04:14:53 compute-0 kernel: tap38533cc0-07: entered promiscuous mode
Oct 11 04:14:53 compute-0 NetworkManager[44920]: <info>  [1760156093.4166] manager: (tap38533cc0-07): new Tun device (/org/freedesktop/NetworkManager/Devices/94)
Oct 11 04:14:53 compute-0 ovn_controller[152025]: 2025-10-11T04:14:53Z|00179|binding|INFO|Claiming lport 38533cc0-07ff-4685-a074-b75317ab358c for this chassis.
Oct 11 04:14:53 compute-0 nova_compute[259850]: 2025-10-11 04:14:53.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:53 compute-0 ovn_controller[152025]: 2025-10-11T04:14:53Z|00180|binding|INFO|38533cc0-07ff-4685-a074-b75317ab358c: Claiming fa:16:3e:3d:83:26 10.100.0.3
Oct 11 04:14:53 compute-0 nova_compute[259850]: 2025-10-11 04:14:53.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:53.433 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:83:26 10.100.0.3'], port_security=['fa:16:3e:3d:83:26 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '86c48ed0-a954-49c0-bdcc-c312bcf59248', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09ba33ef4bd447699d74946c58839b2d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ad6cc707-9ce2-4240-811a-f6df84b349db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27b77226-c1f8-485e-969b-bae9a3bf7ceb, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=38533cc0-07ff-4685-a074-b75317ab358c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:53.435 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 38533cc0-07ff-4685-a074-b75317ab358c in datapath b6cd64a2-af0b-4f57-b84c-cbc9cde5251d bound to our chassis
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:53.436 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b6cd64a2-af0b-4f57-b84c-cbc9cde5251d
Oct 11 04:14:53 compute-0 ovn_controller[152025]: 2025-10-11T04:14:53Z|00181|binding|INFO|Setting lport 38533cc0-07ff-4685-a074-b75317ab358c ovn-installed in OVS
Oct 11 04:14:53 compute-0 ovn_controller[152025]: 2025-10-11T04:14:53Z|00182|binding|INFO|Setting lport 38533cc0-07ff-4685-a074-b75317ab358c up in Southbound
Oct 11 04:14:53 compute-0 nova_compute[259850]: 2025-10-11 04:14:53.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:53.449 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[e064209e-1e86-43c8-8497-a6e11ffcb8de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:53.451 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb6cd64a2-a1 in ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:53.454 267637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb6cd64a2-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 04:14:53 compute-0 systemd-machined[214869]: New machine qemu-17-instance-00000011.
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:53.454 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[0e13aaa7-4a7e-4891-b9f4-bca46d3b62c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:53.457 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[5dc322e2-87fc-45ed-813f-ff74aa909c2d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:53 compute-0 systemd[1]: Started Virtual Machine qemu-17-instance-00000011.
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:53.473 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[2f0a8db7-2e6f-47ad-a54d-aa5aa02ca04a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:53.485 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[3f5082f5-93c5-4831-a08a-391336c0f6be]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:53 compute-0 systemd-udevd[291185]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:14:53 compute-0 NetworkManager[44920]: <info>  [1760156093.5118] device (tap38533cc0-07): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:14:53 compute-0 NetworkManager[44920]: <info>  [1760156093.5135] device (tap38533cc0-07): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:53.522 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[74a88527-b7a6-415a-9a96-f6795b9d69d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:53.527 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[069a712b-167c-4846-b5a8-30ea9cc78648]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:53 compute-0 systemd-udevd[291189]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:14:53 compute-0 NetworkManager[44920]: <info>  [1760156093.5289] manager: (tapb6cd64a2-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/95)
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:53.563 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[659cc1a2-cc75-494a-8c33-574bfc233d7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:53.567 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[564e6c12-fe1e-45a4-83e5-b5aeca02ccfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:53 compute-0 NetworkManager[44920]: <info>  [1760156093.5903] device (tapb6cd64a2-a0): carrier: link connected
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:53.595 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[4ac74f72-c258-4c3e-bf40-4fc28e3816f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:53.611 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[2acb8cff-df4a-4592-8418-6acb3872ba01]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6cd64a2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:9f:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447862, 'reachable_time': 27871, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291214, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:53.625 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[6effcba9-dae4-409a-92e8-6592282a7252]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:9f02'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447862, 'tstamp': 447862}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291215, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:53.640 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[f1ce3c48-b0de-4a42-bcfe-be55829f0461]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6cd64a2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:9f:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447862, 'reachable_time': 27871, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291216, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:53.676 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[e4c476ba-738d-40ed-bf8c-7d6a921d84f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:53.740 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[e6c779ee-f875-4fe4-a839-9b36f0df19e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:53.742 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6cd64a2-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:53.742 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:53.744 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6cd64a2-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:14:53 compute-0 nova_compute[259850]: 2025-10-11 04:14:53.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:53 compute-0 NetworkManager[44920]: <info>  [1760156093.7466] manager: (tapb6cd64a2-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Oct 11 04:14:53 compute-0 kernel: tapb6cd64a2-a0: entered promiscuous mode
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:53.755 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb6cd64a2-a0, col_values=(('external_ids', {'iface-id': 'c2cbaf15-a50c-40b8-9f65-12b11618e7fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:14:53 compute-0 ovn_controller[152025]: 2025-10-11T04:14:53Z|00183|binding|INFO|Releasing lport c2cbaf15-a50c-40b8-9f65-12b11618e7fc from this chassis (sb_readonly=0)
Oct 11 04:14:53 compute-0 nova_compute[259850]: 2025-10-11 04:14:53.758 2 DEBUG nova.compute.manager [req-8a307313-4b11-4379-8a8b-2c2abd7d5519 req-86b50872-041c-4b49-aea4-dd629851624b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Received event network-vif-plugged-38533cc0-07ff-4685-a074-b75317ab358c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:14:53 compute-0 nova_compute[259850]: 2025-10-11 04:14:53.758 2 DEBUG oslo_concurrency.lockutils [req-8a307313-4b11-4379-8a8b-2c2abd7d5519 req-86b50872-041c-4b49-aea4-dd629851624b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "86c48ed0-a954-49c0-bdcc-c312bcf59248-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:14:53 compute-0 nova_compute[259850]: 2025-10-11 04:14:53.759 2 DEBUG oslo_concurrency.lockutils [req-8a307313-4b11-4379-8a8b-2c2abd7d5519 req-86b50872-041c-4b49-aea4-dd629851624b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "86c48ed0-a954-49c0-bdcc-c312bcf59248-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:53.759 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b6cd64a2-af0b-4f57-b84c-cbc9cde5251d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b6cd64a2-af0b-4f57-b84c-cbc9cde5251d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 04:14:53 compute-0 nova_compute[259850]: 2025-10-11 04:14:53.760 2 DEBUG oslo_concurrency.lockutils [req-8a307313-4b11-4379-8a8b-2c2abd7d5519 req-86b50872-041c-4b49-aea4-dd629851624b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "86c48ed0-a954-49c0-bdcc-c312bcf59248-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:14:53 compute-0 nova_compute[259850]: 2025-10-11 04:14:53.760 2 DEBUG nova.compute.manager [req-8a307313-4b11-4379-8a8b-2c2abd7d5519 req-86b50872-041c-4b49-aea4-dd629851624b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Processing event network-vif-plugged-38533cc0-07ff-4685-a074-b75317ab358c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:14:53 compute-0 nova_compute[259850]: 2025-10-11 04:14:53.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:53 compute-0 nova_compute[259850]: 2025-10-11 04:14:53.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:53.772 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[ef44667e-bbe4-43fc-b34e-7e66ea328d4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:53.773 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: global
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]:     log         /dev/log local0 debug
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]:     log-tag     haproxy-metadata-proxy-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]:     user        root
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]:     group       root
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]:     maxconn     1024
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]:     pidfile     /var/lib/neutron/external/pids/b6cd64a2-af0b-4f57-b84c-cbc9cde5251d.pid.haproxy
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]:     daemon
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: defaults
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]:     log global
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]:     mode http
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]:     option httplog
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]:     option dontlognull
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]:     option http-server-close
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]:     option forwardfor
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]:     retries                 3
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]:     timeout http-request    30s
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]:     timeout connect         30s
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]:     timeout client          32s
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]:     timeout server          32s
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]:     timeout http-keep-alive 30s
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: listen listener
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]:     bind 169.254.169.254:80
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]:     http-request add-header X-OVN-Network-ID b6cd64a2-af0b-4f57-b84c-cbc9cde5251d
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 04:14:53 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:53.773 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'env', 'PROCESS_TAG=haproxy-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b6cd64a2-af0b-4f57-b84c-cbc9cde5251d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 04:14:54 compute-0 podman[291289]: 2025-10-11 04:14:54.218756964 +0000 UTC m=+0.068117957 container create ada98f0c47a865744bb4c62fb8468ad5010e3170e3f2c2f6a12c1774c9fcdd88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 11 04:14:54 compute-0 systemd[1]: Started libpod-conmon-ada98f0c47a865744bb4c62fb8468ad5010e3170e3f2c2f6a12c1774c9fcdd88.scope.
Oct 11 04:14:54 compute-0 podman[291289]: 2025-10-11 04:14:54.191998643 +0000 UTC m=+0.041359646 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 04:14:54 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:14:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/075b8fc0a1f1a3da2adc843171ac43dfe2298221fa908ebd442b1293a5207a6a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:54 compute-0 podman[291289]: 2025-10-11 04:14:54.322758199 +0000 UTC m=+0.172119292 container init ada98f0c47a865744bb4c62fb8468ad5010e3170e3f2c2f6a12c1774c9fcdd88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 11 04:14:54 compute-0 podman[291289]: 2025-10-11 04:14:54.329430358 +0000 UTC m=+0.178791392 container start ada98f0c47a865744bb4c62fb8468ad5010e3170e3f2c2f6a12c1774c9fcdd88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 11 04:14:54 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[291304]: [NOTICE]   (291310) : New worker (291312) forked
Oct 11 04:14:54 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[291304]: [NOTICE]   (291310) : Loading success.
Oct 11 04:14:54 compute-0 nova_compute[259850]: 2025-10-11 04:14:54.621 2 DEBUG nova.compute.manager [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:14:54 compute-0 nova_compute[259850]: 2025-10-11 04:14:54.622 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156094.6212173, 86c48ed0-a954-49c0-bdcc-c312bcf59248 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:14:54 compute-0 nova_compute[259850]: 2025-10-11 04:14:54.622 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] VM Started (Lifecycle Event)
Oct 11 04:14:54 compute-0 nova_compute[259850]: 2025-10-11 04:14:54.625 2 DEBUG nova.virt.libvirt.driver [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:14:54 compute-0 nova_compute[259850]: 2025-10-11 04:14:54.628 2 INFO nova.virt.libvirt.driver [-] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Instance spawned successfully.
Oct 11 04:14:54 compute-0 nova_compute[259850]: 2025-10-11 04:14:54.628 2 DEBUG nova.virt.libvirt.driver [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:14:54 compute-0 nova_compute[259850]: 2025-10-11 04:14:54.646 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:14:54 compute-0 nova_compute[259850]: 2025-10-11 04:14:54.654 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:14:54 compute-0 nova_compute[259850]: 2025-10-11 04:14:54.662 2 DEBUG nova.virt.libvirt.driver [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:14:54 compute-0 nova_compute[259850]: 2025-10-11 04:14:54.663 2 DEBUG nova.virt.libvirt.driver [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:14:54 compute-0 nova_compute[259850]: 2025-10-11 04:14:54.664 2 DEBUG nova.virt.libvirt.driver [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:14:54 compute-0 nova_compute[259850]: 2025-10-11 04:14:54.664 2 DEBUG nova.virt.libvirt.driver [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:14:54 compute-0 nova_compute[259850]: 2025-10-11 04:14:54.665 2 DEBUG nova.virt.libvirt.driver [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:14:54 compute-0 nova_compute[259850]: 2025-10-11 04:14:54.666 2 DEBUG nova.virt.libvirt.driver [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:14:54 compute-0 nova_compute[259850]: 2025-10-11 04:14:54.709 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:14:54 compute-0 nova_compute[259850]: 2025-10-11 04:14:54.710 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156094.6225572, 86c48ed0-a954-49c0-bdcc-c312bcf59248 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:14:54 compute-0 nova_compute[259850]: 2025-10-11 04:14:54.710 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] VM Paused (Lifecycle Event)
Oct 11 04:14:54 compute-0 nova_compute[259850]: 2025-10-11 04:14:54.747 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:14:54 compute-0 nova_compute[259850]: 2025-10-11 04:14:54.750 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156094.6246474, 86c48ed0-a954-49c0-bdcc-c312bcf59248 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:14:54 compute-0 nova_compute[259850]: 2025-10-11 04:14:54.751 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] VM Resumed (Lifecycle Event)
Oct 11 04:14:54 compute-0 nova_compute[259850]: 2025-10-11 04:14:54.755 2 INFO nova.compute.manager [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Took 2.83 seconds to spawn the instance on the hypervisor.
Oct 11 04:14:54 compute-0 nova_compute[259850]: 2025-10-11 04:14:54.756 2 DEBUG nova.compute.manager [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:14:54 compute-0 nova_compute[259850]: 2025-10-11 04:14:54.766 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:14:54 compute-0 nova_compute[259850]: 2025-10-11 04:14:54.769 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:14:54 compute-0 nova_compute[259850]: 2025-10-11 04:14:54.792 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:14:54 compute-0 nova_compute[259850]: 2025-10-11 04:14:54.813 2 INFO nova.compute.manager [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Took 9.64 seconds to build instance.
Oct 11 04:14:54 compute-0 nova_compute[259850]: 2025-10-11 04:14:54.831 2 DEBUG oslo_concurrency.lockutils [None req-b70eb092-7cdc-4bd6-8e86-5451a1048501 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "86c48ed0-a954-49c0-bdcc-c312bcf59248" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:14:54 compute-0 ceph-mon[74273]: pgmap v1529: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Oct 11 04:14:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:14:55 compute-0 podman[291321]: 2025-10-11 04:14:55.453348306 +0000 UTC m=+0.139465754 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 11 04:14:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 938 B/s wr, 10 op/s
Oct 11 04:14:55 compute-0 nova_compute[259850]: 2025-10-11 04:14:55.878 2 DEBUG nova.compute.manager [req-01c35ad8-0088-446b-87bb-8520416cefd1 req-7230a618-e660-4c95-a7ec-13c587ca0cc7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Received event network-vif-plugged-38533cc0-07ff-4685-a074-b75317ab358c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:14:55 compute-0 nova_compute[259850]: 2025-10-11 04:14:55.880 2 DEBUG oslo_concurrency.lockutils [req-01c35ad8-0088-446b-87bb-8520416cefd1 req-7230a618-e660-4c95-a7ec-13c587ca0cc7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "86c48ed0-a954-49c0-bdcc-c312bcf59248-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:14:55 compute-0 nova_compute[259850]: 2025-10-11 04:14:55.881 2 DEBUG oslo_concurrency.lockutils [req-01c35ad8-0088-446b-87bb-8520416cefd1 req-7230a618-e660-4c95-a7ec-13c587ca0cc7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "86c48ed0-a954-49c0-bdcc-c312bcf59248-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:14:55 compute-0 nova_compute[259850]: 2025-10-11 04:14:55.881 2 DEBUG oslo_concurrency.lockutils [req-01c35ad8-0088-446b-87bb-8520416cefd1 req-7230a618-e660-4c95-a7ec-13c587ca0cc7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "86c48ed0-a954-49c0-bdcc-c312bcf59248-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:14:55 compute-0 nova_compute[259850]: 2025-10-11 04:14:55.882 2 DEBUG nova.compute.manager [req-01c35ad8-0088-446b-87bb-8520416cefd1 req-7230a618-e660-4c95-a7ec-13c587ca0cc7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] No waiting events found dispatching network-vif-plugged-38533cc0-07ff-4685-a074-b75317ab358c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:14:55 compute-0 nova_compute[259850]: 2025-10-11 04:14:55.883 2 WARNING nova.compute.manager [req-01c35ad8-0088-446b-87bb-8520416cefd1 req-7230a618-e660-4c95-a7ec-13c587ca0cc7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Received unexpected event network-vif-plugged-38533cc0-07ff-4685-a074-b75317ab358c for instance with vm_state active and task_state None.
Oct 11 04:14:56 compute-0 ceph-mon[74273]: pgmap v1530: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 938 B/s wr, 10 op/s
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.282 2 DEBUG oslo_concurrency.lockutils [None req-55866910-3fb6-4451-88d4-03a9147a1e5f 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "86c48ed0-a954-49c0-bdcc-c312bcf59248" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.283 2 DEBUG oslo_concurrency.lockutils [None req-55866910-3fb6-4451-88d4-03a9147a1e5f 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "86c48ed0-a954-49c0-bdcc-c312bcf59248" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.283 2 DEBUG oslo_concurrency.lockutils [None req-55866910-3fb6-4451-88d4-03a9147a1e5f 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "86c48ed0-a954-49c0-bdcc-c312bcf59248-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.283 2 DEBUG oslo_concurrency.lockutils [None req-55866910-3fb6-4451-88d4-03a9147a1e5f 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "86c48ed0-a954-49c0-bdcc-c312bcf59248-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.284 2 DEBUG oslo_concurrency.lockutils [None req-55866910-3fb6-4451-88d4-03a9147a1e5f 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "86c48ed0-a954-49c0-bdcc-c312bcf59248-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.285 2 INFO nova.compute.manager [None req-55866910-3fb6-4451-88d4-03a9147a1e5f 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Terminating instance
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.287 2 DEBUG nova.compute.manager [None req-55866910-3fb6-4451-88d4-03a9147a1e5f 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:14:57 compute-0 kernel: tap38533cc0-07 (unregistering): left promiscuous mode
Oct 11 04:14:57 compute-0 NetworkManager[44920]: <info>  [1760156097.3393] device (tap38533cc0-07): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:14:57 compute-0 ovn_controller[152025]: 2025-10-11T04:14:57Z|00184|binding|INFO|Releasing lport 38533cc0-07ff-4685-a074-b75317ab358c from this chassis (sb_readonly=0)
Oct 11 04:14:57 compute-0 ovn_controller[152025]: 2025-10-11T04:14:57Z|00185|binding|INFO|Setting lport 38533cc0-07ff-4685-a074-b75317ab358c down in Southbound
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:57 compute-0 ovn_controller[152025]: 2025-10-11T04:14:57Z|00186|binding|INFO|Removing iface tap38533cc0-07 ovn-installed in OVS
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:57 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:57.360 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:83:26 10.100.0.3'], port_security=['fa:16:3e:3d:83:26 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '86c48ed0-a954-49c0-bdcc-c312bcf59248', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09ba33ef4bd447699d74946c58839b2d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ad6cc707-9ce2-4240-811a-f6df84b349db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27b77226-c1f8-485e-969b-bae9a3bf7ceb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=38533cc0-07ff-4685-a074-b75317ab358c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:14:57 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:57.362 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 38533cc0-07ff-4685-a074-b75317ab358c in datapath b6cd64a2-af0b-4f57-b84c-cbc9cde5251d unbound from our chassis
Oct 11 04:14:57 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:57.365 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:14:57 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:57.366 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[0788b997-364e-4dd3-8c6f-00a37bcb6614]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:57 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:57.367 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d namespace which is not needed anymore
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:57 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Deactivated successfully.
Oct 11 04:14:57 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Consumed 4.048s CPU time.
Oct 11 04:14:57 compute-0 systemd-machined[214869]: Machine qemu-17-instance-00000011 terminated.
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.529 2 INFO nova.virt.libvirt.driver [-] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Instance destroyed successfully.
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.530 2 DEBUG nova.objects.instance [None req-55866910-3fb6-4451-88d4-03a9147a1e5f 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lazy-loading 'resources' on Instance uuid 86c48ed0-a954-49c0-bdcc-c312bcf59248 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.544 2 DEBUG nova.virt.libvirt.vif [None req-55866910-3fb6-4451-88d4-03a9147a1e5f 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:14:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1794916287',display_name='tempest-TestVolumeBootPattern-server-1794916287',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1794916287',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:14:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='09ba33ef4bd447699d74946c58839b2d',ramdisk_id='',reservation_id='r-8n7qwdm4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-771726270',owner_user_
name='tempest-TestVolumeBootPattern-771726270-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:14:54Z,user_data=None,user_id='2a330a845d62440c871f80eda2546881',uuid=86c48ed0-a954-49c0-bdcc-c312bcf59248,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "38533cc0-07ff-4685-a074-b75317ab358c", "address": "fa:16:3e:3d:83:26", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38533cc0-07", "ovs_interfaceid": "38533cc0-07ff-4685-a074-b75317ab358c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.545 2 DEBUG nova.network.os_vif_util [None req-55866910-3fb6-4451-88d4-03a9147a1e5f 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converting VIF {"id": "38533cc0-07ff-4685-a074-b75317ab358c", "address": "fa:16:3e:3d:83:26", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38533cc0-07", "ovs_interfaceid": "38533cc0-07ff-4685-a074-b75317ab358c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.546 2 DEBUG nova.network.os_vif_util [None req-55866910-3fb6-4451-88d4-03a9147a1e5f 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:83:26,bridge_name='br-int',has_traffic_filtering=True,id=38533cc0-07ff-4685-a074-b75317ab358c,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38533cc0-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.546 2 DEBUG os_vif [None req-55866910-3fb6-4451-88d4-03a9147a1e5f 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:83:26,bridge_name='br-int',has_traffic_filtering=True,id=38533cc0-07ff-4685-a074-b75317ab358c,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38533cc0-07') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.547 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38533cc0-07, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.559 2 INFO os_vif [None req-55866910-3fb6-4451-88d4-03a9147a1e5f 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:83:26,bridge_name='br-int',has_traffic_filtering=True,id=38533cc0-07ff-4685-a074-b75317ab358c,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38533cc0-07')
Oct 11 04:14:57 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[291304]: [NOTICE]   (291310) : haproxy version is 2.8.14-c23fe91
Oct 11 04:14:57 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[291304]: [NOTICE]   (291310) : path to executable is /usr/sbin/haproxy
Oct 11 04:14:57 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[291304]: [WARNING]  (291310) : Exiting Master process...
Oct 11 04:14:57 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[291304]: [ALERT]    (291310) : Current worker (291312) exited with code 143 (Terminated)
Oct 11 04:14:57 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[291304]: [WARNING]  (291310) : All workers exited. Exiting... (0)
Oct 11 04:14:57 compute-0 systemd[1]: libpod-ada98f0c47a865744bb4c62fb8468ad5010e3170e3f2c2f6a12c1774c9fcdd88.scope: Deactivated successfully.
Oct 11 04:14:57 compute-0 podman[291372]: 2025-10-11 04:14:57.577405781 +0000 UTC m=+0.072135780 container died ada98f0c47a865744bb4c62fb8468ad5010e3170e3f2c2f6a12c1774c9fcdd88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:14:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ada98f0c47a865744bb4c62fb8468ad5010e3170e3f2c2f6a12c1774c9fcdd88-userdata-shm.mount: Deactivated successfully.
Oct 11 04:14:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-075b8fc0a1f1a3da2adc843171ac43dfe2298221fa908ebd442b1293a5207a6a-merged.mount: Deactivated successfully.
Oct 11 04:14:57 compute-0 podman[291372]: 2025-10-11 04:14:57.640403681 +0000 UTC m=+0.135133650 container cleanup ada98f0c47a865744bb4c62fb8468ad5010e3170e3f2c2f6a12c1774c9fcdd88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:14:57 compute-0 systemd[1]: libpod-conmon-ada98f0c47a865744bb4c62fb8468ad5010e3170e3f2c2f6a12c1774c9fcdd88.scope: Deactivated successfully.
Oct 11 04:14:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 938 B/s wr, 10 op/s
Oct 11 04:14:57 compute-0 podman[291425]: 2025-10-11 04:14:57.739305092 +0000 UTC m=+0.062943110 container remove ada98f0c47a865744bb4c62fb8468ad5010e3170e3f2c2f6a12c1774c9fcdd88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:14:57 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:57.750 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[b8603420-7939-4953-8f7b-dd9ae8f8b339]: (4, ('Sat Oct 11 04:14:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d (ada98f0c47a865744bb4c62fb8468ad5010e3170e3f2c2f6a12c1774c9fcdd88)\nada98f0c47a865744bb4c62fb8468ad5010e3170e3f2c2f6a12c1774c9fcdd88\nSat Oct 11 04:14:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d (ada98f0c47a865744bb4c62fb8468ad5010e3170e3f2c2f6a12c1774c9fcdd88)\nada98f0c47a865744bb4c62fb8468ad5010e3170e3f2c2f6a12c1774c9fcdd88\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:57 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:57.752 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[fa9d5e8e-0bc5-4032-bb4c-e8fccbedd038]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:57 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:57.753 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6cd64a2-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:14:57 compute-0 kernel: tapb6cd64a2-a0: left promiscuous mode
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:57 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:57.784 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[c95610e9-da3d-4b22-9dc0-9f96ae7f3b1f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.807 2 INFO nova.virt.libvirt.driver [None req-55866910-3fb6-4451-88d4-03a9147a1e5f 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Deleting instance files /var/lib/nova/instances/86c48ed0-a954-49c0-bdcc-c312bcf59248_del
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.808 2 INFO nova.virt.libvirt.driver [None req-55866910-3fb6-4451-88d4-03a9147a1e5f 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Deletion of /var/lib/nova/instances/86c48ed0-a954-49c0-bdcc-c312bcf59248_del complete
Oct 11 04:14:57 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:57.823 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[0042f0c4-d51f-4d42-8f8c-3775c3f629e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:57 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:57.825 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[bbe53abb-d8f2-4605-b09a-2f961fefc4dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:57 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:57.851 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[26539fbd-f52c-4675-a7be-01e7603ff813]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447855, 'reachable_time': 15556, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291439, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:57 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:57.855 162015 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 04:14:57 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:14:57.855 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[042b00d4-9880-45b1-a3f0-c5aa55ecb8ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:14:57 compute-0 systemd[1]: run-netns-ovnmeta\x2db6cd64a2\x2daf0b\x2d4f57\x2db84c\x2dcbc9cde5251d.mount: Deactivated successfully.
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.868 2 INFO nova.compute.manager [None req-55866910-3fb6-4451-88d4-03a9147a1e5f 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Took 0.58 seconds to destroy the instance on the hypervisor.
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.868 2 DEBUG oslo.service.loopingcall [None req-55866910-3fb6-4451-88d4-03a9147a1e5f 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.869 2 DEBUG nova.compute.manager [-] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.870 2 DEBUG nova.network.neutron [-] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.957 2 DEBUG nova.compute.manager [req-ca5fbbbb-49d3-4126-b9d2-3755bae59b2f req-16b2578f-6aba-4114-a400-f4e9bad7e10b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Received event network-vif-unplugged-38533cc0-07ff-4685-a074-b75317ab358c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.957 2 DEBUG oslo_concurrency.lockutils [req-ca5fbbbb-49d3-4126-b9d2-3755bae59b2f req-16b2578f-6aba-4114-a400-f4e9bad7e10b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "86c48ed0-a954-49c0-bdcc-c312bcf59248-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.958 2 DEBUG oslo_concurrency.lockutils [req-ca5fbbbb-49d3-4126-b9d2-3755bae59b2f req-16b2578f-6aba-4114-a400-f4e9bad7e10b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "86c48ed0-a954-49c0-bdcc-c312bcf59248-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.958 2 DEBUG oslo_concurrency.lockutils [req-ca5fbbbb-49d3-4126-b9d2-3755bae59b2f req-16b2578f-6aba-4114-a400-f4e9bad7e10b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "86c48ed0-a954-49c0-bdcc-c312bcf59248-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.958 2 DEBUG nova.compute.manager [req-ca5fbbbb-49d3-4126-b9d2-3755bae59b2f req-16b2578f-6aba-4114-a400-f4e9bad7e10b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] No waiting events found dispatching network-vif-unplugged-38533cc0-07ff-4685-a074-b75317ab358c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:14:57 compute-0 nova_compute[259850]: 2025-10-11 04:14:57.958 2 DEBUG nova.compute.manager [req-ca5fbbbb-49d3-4126-b9d2-3755bae59b2f req-16b2578f-6aba-4114-a400-f4e9bad7e10b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Received event network-vif-unplugged-38533cc0-07ff-4685-a074-b75317ab358c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 04:14:58 compute-0 nova_compute[259850]: 2025-10-11 04:14:58.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:14:58 compute-0 podman[291440]: 2025-10-11 04:14:58.363756896 +0000 UTC m=+0.070018950 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:14:58 compute-0 nova_compute[259850]: 2025-10-11 04:14:58.421 2 DEBUG nova.network.neutron [-] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:14:58 compute-0 nova_compute[259850]: 2025-10-11 04:14:58.441 2 INFO nova.compute.manager [-] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Took 0.57 seconds to deallocate network for instance.
Oct 11 04:14:58 compute-0 nova_compute[259850]: 2025-10-11 04:14:58.640 2 INFO nova.compute.manager [None req-55866910-3fb6-4451-88d4-03a9147a1e5f 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Took 0.20 seconds to detach 1 volumes for instance.
Oct 11 04:14:58 compute-0 nova_compute[259850]: 2025-10-11 04:14:58.642 2 DEBUG nova.compute.manager [None req-55866910-3fb6-4451-88d4-03a9147a1e5f 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Deleting volume: 69400be7-7fce-4f2b-b9c7-4588caea0a97 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Oct 11 04:14:58 compute-0 nova_compute[259850]: 2025-10-11 04:14:58.809 2 DEBUG oslo_concurrency.lockutils [None req-55866910-3fb6-4451-88d4-03a9147a1e5f 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:14:58 compute-0 nova_compute[259850]: 2025-10-11 04:14:58.810 2 DEBUG oslo_concurrency.lockutils [None req-55866910-3fb6-4451-88d4-03a9147a1e5f 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:14:58 compute-0 ceph-mon[74273]: pgmap v1531: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 938 B/s wr, 10 op/s
Oct 11 04:14:58 compute-0 nova_compute[259850]: 2025-10-11 04:14:58.906 2 DEBUG oslo_concurrency.processutils [None req-55866910-3fb6-4451-88d4-03a9147a1e5f 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:14:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:14:59 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3080688717' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:14:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:14:59 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3080688717' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:14:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:14:59 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2753527487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:14:59 compute-0 nova_compute[259850]: 2025-10-11 04:14:59.333 2 DEBUG oslo_concurrency.processutils [None req-55866910-3fb6-4451-88d4-03a9147a1e5f 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:14:59 compute-0 nova_compute[259850]: 2025-10-11 04:14:59.340 2 DEBUG nova.compute.provider_tree [None req-55866910-3fb6-4451-88d4-03a9147a1e5f 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:14:59 compute-0 nova_compute[259850]: 2025-10-11 04:14:59.358 2 DEBUG nova.scheduler.client.report [None req-55866910-3fb6-4451-88d4-03a9147a1e5f 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:14:59 compute-0 nova_compute[259850]: 2025-10-11 04:14:59.383 2 DEBUG oslo_concurrency.lockutils [None req-55866910-3fb6-4451-88d4-03a9147a1e5f 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:14:59 compute-0 nova_compute[259850]: 2025-10-11 04:14:59.404 2 INFO nova.scheduler.client.report [None req-55866910-3fb6-4451-88d4-03a9147a1e5f 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Deleted allocations for instance 86c48ed0-a954-49c0-bdcc-c312bcf59248
Oct 11 04:14:59 compute-0 nova_compute[259850]: 2025-10-11 04:14:59.463 2 DEBUG oslo_concurrency.lockutils [None req-55866910-3fb6-4451-88d4-03a9147a1e5f 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "86c48ed0-a954-49c0-bdcc-c312bcf59248" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:14:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 14 KiB/s wr, 106 op/s
Oct 11 04:14:59 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3080688717' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:14:59 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3080688717' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:14:59 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2753527487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:14:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:15:00 compute-0 nova_compute[259850]: 2025-10-11 04:15:00.068 2 DEBUG nova.compute.manager [req-69b31637-a3c4-47bf-b1d4-f7e45f6ba98d req-5f22f88f-9bbb-46c2-9dd4-5bd73fff1940 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Received event network-vif-plugged-38533cc0-07ff-4685-a074-b75317ab358c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:15:00 compute-0 nova_compute[259850]: 2025-10-11 04:15:00.068 2 DEBUG oslo_concurrency.lockutils [req-69b31637-a3c4-47bf-b1d4-f7e45f6ba98d req-5f22f88f-9bbb-46c2-9dd4-5bd73fff1940 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "86c48ed0-a954-49c0-bdcc-c312bcf59248-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:15:00 compute-0 nova_compute[259850]: 2025-10-11 04:15:00.069 2 DEBUG oslo_concurrency.lockutils [req-69b31637-a3c4-47bf-b1d4-f7e45f6ba98d req-5f22f88f-9bbb-46c2-9dd4-5bd73fff1940 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "86c48ed0-a954-49c0-bdcc-c312bcf59248-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:15:00 compute-0 nova_compute[259850]: 2025-10-11 04:15:00.069 2 DEBUG oslo_concurrency.lockutils [req-69b31637-a3c4-47bf-b1d4-f7e45f6ba98d req-5f22f88f-9bbb-46c2-9dd4-5bd73fff1940 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "86c48ed0-a954-49c0-bdcc-c312bcf59248-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:15:00 compute-0 nova_compute[259850]: 2025-10-11 04:15:00.069 2 DEBUG nova.compute.manager [req-69b31637-a3c4-47bf-b1d4-f7e45f6ba98d req-5f22f88f-9bbb-46c2-9dd4-5bd73fff1940 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] No waiting events found dispatching network-vif-plugged-38533cc0-07ff-4685-a074-b75317ab358c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:15:00 compute-0 nova_compute[259850]: 2025-10-11 04:15:00.069 2 WARNING nova.compute.manager [req-69b31637-a3c4-47bf-b1d4-f7e45f6ba98d req-5f22f88f-9bbb-46c2-9dd4-5bd73fff1940 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Received unexpected event network-vif-plugged-38533cc0-07ff-4685-a074-b75317ab358c for instance with vm_state deleted and task_state None.
Oct 11 04:15:00 compute-0 nova_compute[259850]: 2025-10-11 04:15:00.070 2 DEBUG nova.compute.manager [req-69b31637-a3c4-47bf-b1d4-f7e45f6ba98d req-5f22f88f-9bbb-46c2-9dd4-5bd73fff1940 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Received event network-vif-deleted-38533cc0-07ff-4685-a074-b75317ab358c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:15:00 compute-0 ceph-mon[74273]: pgmap v1532: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 14 KiB/s wr, 106 op/s
Oct 11 04:15:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1533: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 13 KiB/s wr, 95 op/s
Oct 11 04:15:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e382 do_prune osdmap full prune enabled
Oct 11 04:15:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e383 e383: 3 total, 3 up, 3 in
Oct 11 04:15:01 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e383: 3 total, 3 up, 3 in
Oct 11 04:15:02 compute-0 nova_compute[259850]: 2025-10-11 04:15:02.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:15:02 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4172123217' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:15:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:15:02 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4172123217' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:15:02 compute-0 ceph-mon[74273]: pgmap v1533: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 13 KiB/s wr, 95 op/s
Oct 11 04:15:02 compute-0 ceph-mon[74273]: osdmap e383: 3 total, 3 up, 3 in
Oct 11 04:15:02 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4172123217' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:15:02 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4172123217' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:15:03 compute-0 nova_compute[259850]: 2025-10-11 04:15:03.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1535: 305 pgs: 305 active+clean; 88 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 18 KiB/s wr, 166 op/s
Oct 11 04:15:04 compute-0 ceph-mon[74273]: pgmap v1535: 305 pgs: 305 active+clean; 88 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 18 KiB/s wr, 166 op/s
Oct 11 04:15:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:15:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e383 do_prune osdmap full prune enabled
Oct 11 04:15:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e384 e384: 3 total, 3 up, 3 in
Oct 11 04:15:04 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e384: 3 total, 3 up, 3 in
Oct 11 04:15:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 305 active+clean; 88 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 22 KiB/s wr, 207 op/s
Oct 11 04:15:05 compute-0 ceph-mon[74273]: osdmap e384: 3 total, 3 up, 3 in
Oct 11 04:15:06 compute-0 ceph-mon[74273]: pgmap v1537: 305 pgs: 305 active+clean; 88 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 22 KiB/s wr, 207 op/s
Oct 11 04:15:06 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Oct 11 04:15:06 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:15:06.995055) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:15:06 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Oct 11 04:15:06 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156106995106, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2731, "num_deletes": 550, "total_data_size": 3481397, "memory_usage": 3547568, "flush_reason": "Manual Compaction"}
Oct 11 04:15:06 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156107012695, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3420853, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29003, "largest_seqno": 31733, "table_properties": {"data_size": 3408783, "index_size": 7412, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3589, "raw_key_size": 29671, "raw_average_key_size": 21, "raw_value_size": 3382251, "raw_average_value_size": 2397, "num_data_blocks": 320, "num_entries": 1411, "num_filter_entries": 1411, "num_deletions": 550, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760155938, "oldest_key_time": 1760155938, "file_creation_time": 1760156106, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 17709 microseconds, and 9639 cpu microseconds.
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:15:07.012759) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3420853 bytes OK
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:15:07.012786) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:15:07.015176) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:15:07.015204) EVENT_LOG_v1 {"time_micros": 1760156107015196, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:15:07.015229) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3468402, prev total WAL file size 3468402, number of live WAL files 2.
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:15:07.016642) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3340KB)], [62(8750KB)]
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156107016733, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 12380917, "oldest_snapshot_seqno": -1}
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 6034 keys, 10563352 bytes, temperature: kUnknown
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156107082651, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10563352, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10516544, "index_size": 30634, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15109, "raw_key_size": 151910, "raw_average_key_size": 25, "raw_value_size": 10401575, "raw_average_value_size": 1723, "num_data_blocks": 1234, "num_entries": 6034, "num_filter_entries": 6034, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153731, "oldest_key_time": 0, "file_creation_time": 1760156107, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:15:07.082995) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10563352 bytes
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:15:07.084526) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.5 rd, 160.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 8.5 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 7114, records dropped: 1080 output_compression: NoCompression
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:15:07.084557) EVENT_LOG_v1 {"time_micros": 1760156107084542, "job": 34, "event": "compaction_finished", "compaction_time_micros": 66024, "compaction_time_cpu_micros": 47037, "output_level": 6, "num_output_files": 1, "total_output_size": 10563352, "num_input_records": 7114, "num_output_records": 6034, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156107085762, "job": 34, "event": "table_file_deletion", "file_number": 64}
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156107088657, "job": 34, "event": "table_file_deletion", "file_number": 62}
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:15:07.016500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:15:07.088923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:15:07.088931) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:15:07.088933) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:15:07.088935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:15:07 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:15:07.088937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:15:07 compute-0 nova_compute[259850]: 2025-10-11 04:15:07.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 305 active+clean; 88 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 2.4 KiB/s wr, 63 op/s
Oct 11 04:15:08 compute-0 nova_compute[259850]: 2025-10-11 04:15:08.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:09 compute-0 ceph-mon[74273]: pgmap v1538: 305 pgs: 305 active+clean; 88 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 2.4 KiB/s wr, 63 op/s
Oct 11 04:15:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 35 KiB/s wr, 81 op/s
Oct 11 04:15:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:15:11 compute-0 ceph-mon[74273]: pgmap v1539: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 35 KiB/s wr, 81 op/s
Oct 11 04:15:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1540: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 29 KiB/s wr, 66 op/s
Oct 11 04:15:12 compute-0 podman[291484]: 2025-10-11 04:15:12.377818764 +0000 UTC m=+0.076698161 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:15:12 compute-0 podman[291483]: 2025-10-11 04:15:12.394000784 +0000 UTC m=+0.096117933 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS)
Oct 11 04:15:12 compute-0 nova_compute[259850]: 2025-10-11 04:15:12.526 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760156097.523913, 86c48ed0-a954-49c0-bdcc-c312bcf59248 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:15:12 compute-0 nova_compute[259850]: 2025-10-11 04:15:12.527 2 INFO nova.compute.manager [-] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] VM Stopped (Lifecycle Event)
Oct 11 04:15:12 compute-0 nova_compute[259850]: 2025-10-11 04:15:12.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:12 compute-0 nova_compute[259850]: 2025-10-11 04:15:12.564 2 DEBUG nova.compute.manager [None req-8b9ea3c6-7abd-49e3-af7a-b8a5e9d825f8 - - - - - -] [instance: 86c48ed0-a954-49c0-bdcc-c312bcf59248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:15:13 compute-0 ceph-mon[74273]: pgmap v1540: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 29 KiB/s wr, 66 op/s
Oct 11 04:15:13 compute-0 nova_compute[259850]: 2025-10-11 04:15:13.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 55 op/s
Oct 11 04:15:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:15:15 compute-0 ceph-mon[74273]: pgmap v1541: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 55 op/s
Oct 11 04:15:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1542: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 51 op/s
Oct 11 04:15:17 compute-0 ceph-mon[74273]: pgmap v1542: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 51 op/s
Oct 11 04:15:17 compute-0 nova_compute[259850]: 2025-10-11 04:15:17.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 46 op/s
Oct 11 04:15:18 compute-0 nova_compute[259850]: 2025-10-11 04:15:18.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:19 compute-0 ceph-mon[74273]: pgmap v1543: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 46 op/s
Oct 11 04:15:19 compute-0 sudo[291523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:15:19 compute-0 sudo[291523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:15:19 compute-0 sudo[291523]: pam_unix(sudo:session): session closed for user root
Oct 11 04:15:19 compute-0 sudo[291548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:15:19 compute-0 sudo[291548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:15:19 compute-0 sudo[291548]: pam_unix(sudo:session): session closed for user root
Oct 11 04:15:19 compute-0 sudo[291573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:15:19 compute-0 sudo[291573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:15:19 compute-0 sudo[291573]: pam_unix(sudo:session): session closed for user root
Oct 11 04:15:19 compute-0 sudo[291598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:15:19 compute-0 nova_compute[259850]: 2025-10-11 04:15:19.513 2 DEBUG oslo_concurrency.lockutils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "f4568c68-41ba-4de0-a607-76bf5907f37c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:15:19 compute-0 nova_compute[259850]: 2025-10-11 04:15:19.515 2 DEBUG oslo_concurrency.lockutils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "f4568c68-41ba-4de0-a607-76bf5907f37c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:15:19 compute-0 sudo[291598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:15:19 compute-0 nova_compute[259850]: 2025-10-11 04:15:19.531 2 DEBUG nova.compute.manager [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:15:19 compute-0 nova_compute[259850]: 2025-10-11 04:15:19.606 2 DEBUG oslo_concurrency.lockutils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:15:19 compute-0 nova_compute[259850]: 2025-10-11 04:15:19.607 2 DEBUG oslo_concurrency.lockutils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:15:19 compute-0 nova_compute[259850]: 2025-10-11 04:15:19.617 2 DEBUG nova.virt.hardware [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:15:19 compute-0 nova_compute[259850]: 2025-10-11 04:15:19.618 2 INFO nova.compute.claims [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:15:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1544: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 46 op/s
Oct 11 04:15:19 compute-0 nova_compute[259850]: 2025-10-11 04:15:19.768 2 DEBUG oslo_concurrency.processutils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:15:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:15:20 compute-0 sudo[291598]: pam_unix(sudo:session): session closed for user root
Oct 11 04:15:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:15:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:15:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:15:20 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:15:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:15:20 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:15:20 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev a33430fa-9edb-48fb-b0e4-ca7ed6a061d6 does not exist
Oct 11 04:15:20 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 205df274-db35-4124-994f-b3e2597d59a1 does not exist
Oct 11 04:15:20 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 5fa724e9-36a2-4b63-85bd-d899b4eaa775 does not exist
Oct 11 04:15:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:15:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:15:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:15:20 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:15:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:15:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:15:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:15:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1186065594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:15:20 compute-0 nova_compute[259850]: 2025-10-11 04:15:20.290 2 DEBUG oslo_concurrency.processutils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:15:20 compute-0 nova_compute[259850]: 2025-10-11 04:15:20.301 2 DEBUG nova.compute.provider_tree [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:15:20 compute-0 sudo[291673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:15:20 compute-0 sudo[291673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:15:20 compute-0 sudo[291673]: pam_unix(sudo:session): session closed for user root
Oct 11 04:15:20 compute-0 nova_compute[259850]: 2025-10-11 04:15:20.341 2 DEBUG nova.scheduler.client.report [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:15:20 compute-0 nova_compute[259850]: 2025-10-11 04:15:20.380 2 DEBUG oslo_concurrency.lockutils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:15:20 compute-0 nova_compute[259850]: 2025-10-11 04:15:20.381 2 DEBUG nova.compute.manager [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:15:20 compute-0 sudo[291700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:15:20 compute-0 sudo[291700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:15:20 compute-0 sudo[291700]: pam_unix(sudo:session): session closed for user root
Oct 11 04:15:20 compute-0 nova_compute[259850]: 2025-10-11 04:15:20.445 2 DEBUG nova.compute.manager [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:15:20 compute-0 nova_compute[259850]: 2025-10-11 04:15:20.446 2 DEBUG nova.network.neutron [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:15:20 compute-0 nova_compute[259850]: 2025-10-11 04:15:20.474 2 INFO nova.virt.libvirt.driver [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:15:20 compute-0 nova_compute[259850]: 2025-10-11 04:15:20.499 2 DEBUG nova.compute.manager [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:15:20 compute-0 sudo[291725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:15:20 compute-0 sudo[291725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:15:20 compute-0 sudo[291725]: pam_unix(sudo:session): session closed for user root
Oct 11 04:15:20 compute-0 nova_compute[259850]: 2025-10-11 04:15:20.555 2 INFO nova.virt.block_device [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Booting with volume d0a276fd-ac37-4f51-aa93-2a88fc08b739 at /dev/vda
Oct 11 04:15:20 compute-0 sudo[291750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:15:20 compute-0 sudo[291750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:15:20 compute-0 nova_compute[259850]: 2025-10-11 04:15:20.612 2 DEBUG nova.policy [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2a330a845d62440c871f80eda2546881', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '09ba33ef4bd447699d74946c58839b2d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:15:20 compute-0 nova_compute[259850]: 2025-10-11 04:15:20.715 2 DEBUG os_brick.utils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:15:20 compute-0 nova_compute[259850]: 2025-10-11 04:15:20.716 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:15:20 compute-0 nova_compute[259850]: 2025-10-11 04:15:20.735 675 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:15:20 compute-0 nova_compute[259850]: 2025-10-11 04:15:20.736 675 DEBUG oslo.privsep.daemon [-] privsep: reply[8efd53d3-8387-4e97-8586-64300da61abb]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:20 compute-0 nova_compute[259850]: 2025-10-11 04:15:20.737 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:15:20 compute-0 nova_compute[259850]: 2025-10-11 04:15:20.750 675 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:15:20 compute-0 nova_compute[259850]: 2025-10-11 04:15:20.751 675 DEBUG oslo.privsep.daemon [-] privsep: reply[1866978b-62b7-4dd9-bb8a-77461d3d1881]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e727c2bd432c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:20 compute-0 nova_compute[259850]: 2025-10-11 04:15:20.754 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:15:20 compute-0 nova_compute[259850]: 2025-10-11 04:15:20.770 675 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:15:20 compute-0 nova_compute[259850]: 2025-10-11 04:15:20.771 675 DEBUG oslo.privsep.daemon [-] privsep: reply[7be8536e-4ee9-47e4-8608-87b180519d9c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:20 compute-0 nova_compute[259850]: 2025-10-11 04:15:20.772 675 DEBUG oslo.privsep.daemon [-] privsep: reply[6f058855-d1a6-4dc1-8eee-f618ffba521a]: (4, 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:20 compute-0 nova_compute[259850]: 2025-10-11 04:15:20.774 2 DEBUG oslo_concurrency.processutils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:15:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:15:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:15:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:15:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:15:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:15:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:15:20 compute-0 nova_compute[259850]: 2025-10-11 04:15:20.809 2 DEBUG oslo_concurrency.processutils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:15:20 compute-0 nova_compute[259850]: 2025-10-11 04:15:20.814 2 DEBUG os_brick.initiator.connectors.lightos [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:15:20 compute-0 nova_compute[259850]: 2025-10-11 04:15:20.814 2 DEBUG os_brick.initiator.connectors.lightos [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:15:20 compute-0 nova_compute[259850]: 2025-10-11 04:15:20.815 2 DEBUG os_brick.initiator.connectors.lightos [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:15:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:15:20
Oct 11 04:15:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:15:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:15:20 compute-0 nova_compute[259850]: 2025-10-11 04:15:20.816 2 DEBUG os_brick.utils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] <== get_connector_properties: return (99ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e727c2bd432c', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:15:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['backups', 'vms', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'images', 'default.rgw.log', 'default.rgw.meta']
Oct 11 04:15:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:15:20 compute-0 nova_compute[259850]: 2025-10-11 04:15:20.816 2 DEBUG nova.virt.block_device [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Updating existing volume attachment record: 3cdd1485-57e3-4393-8579-08d700c05610 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:15:21 compute-0 ceph-mon[74273]: pgmap v1544: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 46 op/s
Oct 11 04:15:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:15:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:15:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:15:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:15:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:15:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:15:21 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1186065594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:15:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:15:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:15:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:15:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:15:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:15:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:15:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:15:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:15:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:15:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:15:21 compute-0 podman[291825]: 2025-10-11 04:15:21.074402065 +0000 UTC m=+0.075689522 container create e76057a7d6e8b0c44606a4e8f51bccc4e21db1f78bdb2b9aae19674f9b569bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_pascal, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:15:21 compute-0 systemd[1]: Started libpod-conmon-e76057a7d6e8b0c44606a4e8f51bccc4e21db1f78bdb2b9aae19674f9b569bac.scope.
Oct 11 04:15:21 compute-0 podman[291825]: 2025-10-11 04:15:21.041072088 +0000 UTC m=+0.042359605 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:15:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:15:21 compute-0 podman[291825]: 2025-10-11 04:15:21.182120076 +0000 UTC m=+0.183407573 container init e76057a7d6e8b0c44606a4e8f51bccc4e21db1f78bdb2b9aae19674f9b569bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:15:21 compute-0 podman[291825]: 2025-10-11 04:15:21.194523928 +0000 UTC m=+0.195811345 container start e76057a7d6e8b0c44606a4e8f51bccc4e21db1f78bdb2b9aae19674f9b569bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_pascal, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:15:21 compute-0 podman[291825]: 2025-10-11 04:15:21.197702288 +0000 UTC m=+0.198989745 container attach e76057a7d6e8b0c44606a4e8f51bccc4e21db1f78bdb2b9aae19674f9b569bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_pascal, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:15:21 compute-0 ecstatic_pascal[291841]: 167 167
Oct 11 04:15:21 compute-0 systemd[1]: libpod-e76057a7d6e8b0c44606a4e8f51bccc4e21db1f78bdb2b9aae19674f9b569bac.scope: Deactivated successfully.
Oct 11 04:15:21 compute-0 podman[291825]: 2025-10-11 04:15:21.204372138 +0000 UTC m=+0.205659565 container died e76057a7d6e8b0c44606a4e8f51bccc4e21db1f78bdb2b9aae19674f9b569bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_pascal, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:15:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-51f256e441b060402bf7c99e5806d117160b7e1ec0550a986d12112845f0bd14-merged.mount: Deactivated successfully.
Oct 11 04:15:21 compute-0 podman[291825]: 2025-10-11 04:15:21.256341495 +0000 UTC m=+0.257628952 container remove e76057a7d6e8b0c44606a4e8f51bccc4e21db1f78bdb2b9aae19674f9b569bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 04:15:21 compute-0 systemd[1]: libpod-conmon-e76057a7d6e8b0c44606a4e8f51bccc4e21db1f78bdb2b9aae19674f9b569bac.scope: Deactivated successfully.
Oct 11 04:15:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:15:21 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2610306899' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:15:21 compute-0 nova_compute[259850]: 2025-10-11 04:15:21.497 2 DEBUG nova.network.neutron [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Successfully created port: 7a1af6b7-a442-4ea8-beca-2843ffb42e3c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:15:21 compute-0 podman[291865]: 2025-10-11 04:15:21.500676508 +0000 UTC m=+0.055856389 container create 4d1b0cda1efa8de489c2853f0b4a2e9b6ad1bf574511a2576ac0c2ebe150b56d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lichterman, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:15:21 compute-0 systemd[1]: Started libpod-conmon-4d1b0cda1efa8de489c2853f0b4a2e9b6ad1bf574511a2576ac0c2ebe150b56d.scope.
Oct 11 04:15:21 compute-0 podman[291865]: 2025-10-11 04:15:21.485181097 +0000 UTC m=+0.040361078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:15:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:15:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41695f3f945e838c57e93fdbe0d10720530c884d8cd3b2c391d738ba37aae113/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:15:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41695f3f945e838c57e93fdbe0d10720530c884d8cd3b2c391d738ba37aae113/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:15:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41695f3f945e838c57e93fdbe0d10720530c884d8cd3b2c391d738ba37aae113/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:15:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41695f3f945e838c57e93fdbe0d10720530c884d8cd3b2c391d738ba37aae113/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:15:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41695f3f945e838c57e93fdbe0d10720530c884d8cd3b2c391d738ba37aae113/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:15:21 compute-0 podman[291865]: 2025-10-11 04:15:21.626660427 +0000 UTC m=+0.181840408 container init 4d1b0cda1efa8de489c2853f0b4a2e9b6ad1bf574511a2576ac0c2ebe150b56d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lichterman, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 04:15:21 compute-0 podman[291865]: 2025-10-11 04:15:21.644122884 +0000 UTC m=+0.199302765 container start 4d1b0cda1efa8de489c2853f0b4a2e9b6ad1bf574511a2576ac0c2ebe150b56d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lichterman, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:15:21 compute-0 podman[291865]: 2025-10-11 04:15:21.647819859 +0000 UTC m=+0.202999750 container attach 4d1b0cda1efa8de489c2853f0b4a2e9b6ad1bf574511a2576ac0c2ebe150b56d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lichterman, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:15:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 11 04:15:21 compute-0 nova_compute[259850]: 2025-10-11 04:15:21.851 2 DEBUG nova.compute.manager [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:15:21 compute-0 nova_compute[259850]: 2025-10-11 04:15:21.855 2 DEBUG nova.virt.libvirt.driver [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:15:21 compute-0 nova_compute[259850]: 2025-10-11 04:15:21.856 2 INFO nova.virt.libvirt.driver [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Creating image(s)
Oct 11 04:15:21 compute-0 nova_compute[259850]: 2025-10-11 04:15:21.857 2 DEBUG nova.virt.libvirt.driver [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 11 04:15:21 compute-0 nova_compute[259850]: 2025-10-11 04:15:21.857 2 DEBUG nova.virt.libvirt.driver [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Ensure instance console log exists: /var/lib/nova/instances/f4568c68-41ba-4de0-a607-76bf5907f37c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:15:21 compute-0 nova_compute[259850]: 2025-10-11 04:15:21.858 2 DEBUG oslo_concurrency.lockutils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:15:21 compute-0 nova_compute[259850]: 2025-10-11 04:15:21.859 2 DEBUG oslo_concurrency.lockutils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:15:21 compute-0 nova_compute[259850]: 2025-10-11 04:15:21.860 2 DEBUG oslo_concurrency.lockutils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:15:22 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2610306899' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:15:22 compute-0 nova_compute[259850]: 2025-10-11 04:15:22.118 2 DEBUG nova.network.neutron [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Successfully updated port: 7a1af6b7-a442-4ea8-beca-2843ffb42e3c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:15:22 compute-0 nova_compute[259850]: 2025-10-11 04:15:22.136 2 DEBUG oslo_concurrency.lockutils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "refresh_cache-f4568c68-41ba-4de0-a607-76bf5907f37c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:15:22 compute-0 nova_compute[259850]: 2025-10-11 04:15:22.137 2 DEBUG oslo_concurrency.lockutils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquired lock "refresh_cache-f4568c68-41ba-4de0-a607-76bf5907f37c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:15:22 compute-0 nova_compute[259850]: 2025-10-11 04:15:22.137 2 DEBUG nova.network.neutron [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:15:22 compute-0 nova_compute[259850]: 2025-10-11 04:15:22.202 2 DEBUG nova.compute.manager [req-765d83c2-94ff-4b0c-a5ac-d3c266a5b842 req-8b3af623-f2f7-4b9b-837a-16c8956cd3c1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Received event network-changed-7a1af6b7-a442-4ea8-beca-2843ffb42e3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:15:22 compute-0 nova_compute[259850]: 2025-10-11 04:15:22.203 2 DEBUG nova.compute.manager [req-765d83c2-94ff-4b0c-a5ac-d3c266a5b842 req-8b3af623-f2f7-4b9b-837a-16c8956cd3c1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Refreshing instance network info cache due to event network-changed-7a1af6b7-a442-4ea8-beca-2843ffb42e3c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:15:22 compute-0 nova_compute[259850]: 2025-10-11 04:15:22.204 2 DEBUG oslo_concurrency.lockutils [req-765d83c2-94ff-4b0c-a5ac-d3c266a5b842 req-8b3af623-f2f7-4b9b-837a-16c8956cd3c1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-f4568c68-41ba-4de0-a607-76bf5907f37c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:15:22 compute-0 nova_compute[259850]: 2025-10-11 04:15:22.299 2 DEBUG nova.network.neutron [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:15:22 compute-0 nova_compute[259850]: 2025-10-11 04:15:22.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:22 compute-0 jovial_lichterman[291881]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:15:22 compute-0 jovial_lichterman[291881]: --> relative data size: 1.0
Oct 11 04:15:22 compute-0 jovial_lichterman[291881]: --> All data devices are unavailable
Oct 11 04:15:22 compute-0 systemd[1]: libpod-4d1b0cda1efa8de489c2853f0b4a2e9b6ad1bf574511a2576ac0c2ebe150b56d.scope: Deactivated successfully.
Oct 11 04:15:22 compute-0 systemd[1]: libpod-4d1b0cda1efa8de489c2853f0b4a2e9b6ad1bf574511a2576ac0c2ebe150b56d.scope: Consumed 1.096s CPU time.
Oct 11 04:15:22 compute-0 podman[291865]: 2025-10-11 04:15:22.800823361 +0000 UTC m=+1.356003252 container died 4d1b0cda1efa8de489c2853f0b4a2e9b6ad1bf574511a2576ac0c2ebe150b56d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lichterman, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:15:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-41695f3f945e838c57e93fdbe0d10720530c884d8cd3b2c391d738ba37aae113-merged.mount: Deactivated successfully.
Oct 11 04:15:22 compute-0 podman[291865]: 2025-10-11 04:15:22.875256836 +0000 UTC m=+1.430436727 container remove 4d1b0cda1efa8de489c2853f0b4a2e9b6ad1bf574511a2576ac0c2ebe150b56d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lichterman, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 04:15:22 compute-0 systemd[1]: libpod-conmon-4d1b0cda1efa8de489c2853f0b4a2e9b6ad1bf574511a2576ac0c2ebe150b56d.scope: Deactivated successfully.
Oct 11 04:15:22 compute-0 sudo[291750]: pam_unix(sudo:session): session closed for user root
Oct 11 04:15:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:22.964 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:15:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:22.966 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:15:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:22.966 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:15:23 compute-0 sudo[291922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:15:23 compute-0 sudo[291922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:15:23 compute-0 sudo[291922]: pam_unix(sudo:session): session closed for user root
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:23 compute-0 ceph-mon[74273]: pgmap v1545: 305 pgs: 305 active+clean; 134 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.132 2 DEBUG oslo_concurrency.lockutils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "ab2c9a76-86d0-4cca-92b5-ae402fda2905" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.133 2 DEBUG oslo_concurrency.lockutils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "ab2c9a76-86d0-4cca-92b5-ae402fda2905" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:15:23 compute-0 sudo[291947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:15:23 compute-0 sudo[291947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:15:23 compute-0 sudo[291947]: pam_unix(sudo:session): session closed for user root
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.162 2 DEBUG nova.compute.manager [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:15:23 compute-0 sudo[291972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:15:23 compute-0 sudo[291972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:15:23 compute-0 sudo[291972]: pam_unix(sudo:session): session closed for user root
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.232 2 DEBUG oslo_concurrency.lockutils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.232 2 DEBUG oslo_concurrency.lockutils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.242 2 DEBUG nova.virt.hardware [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.242 2 INFO nova.compute.claims [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:15:23 compute-0 sudo[291997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 04:15:23 compute-0 sudo[291997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.354 2 DEBUG oslo_concurrency.processutils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.398 2 DEBUG nova.network.neutron [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Updating instance_info_cache with network_info: [{"id": "7a1af6b7-a442-4ea8-beca-2843ffb42e3c", "address": "fa:16:3e:e8:e3:04", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a1af6b7-a4", "ovs_interfaceid": "7a1af6b7-a442-4ea8-beca-2843ffb42e3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.420 2 DEBUG oslo_concurrency.lockutils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Releasing lock "refresh_cache-f4568c68-41ba-4de0-a607-76bf5907f37c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.421 2 DEBUG nova.compute.manager [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Instance network_info: |[{"id": "7a1af6b7-a442-4ea8-beca-2843ffb42e3c", "address": "fa:16:3e:e8:e3:04", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a1af6b7-a4", "ovs_interfaceid": "7a1af6b7-a442-4ea8-beca-2843ffb42e3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.422 2 DEBUG oslo_concurrency.lockutils [req-765d83c2-94ff-4b0c-a5ac-d3c266a5b842 req-8b3af623-f2f7-4b9b-837a-16c8956cd3c1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-f4568c68-41ba-4de0-a607-76bf5907f37c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.423 2 DEBUG nova.network.neutron [req-765d83c2-94ff-4b0c-a5ac-d3c266a5b842 req-8b3af623-f2f7-4b9b-837a-16c8956cd3c1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Refreshing network info cache for port 7a1af6b7-a442-4ea8-beca-2843ffb42e3c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.429 2 DEBUG nova.virt.libvirt.driver [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Start _get_guest_xml network_info=[{"id": "7a1af6b7-a442-4ea8-beca-2843ffb42e3c", "address": "fa:16:3e:e8:e3:04", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a1af6b7-a4", "ovs_interfaceid": "7a1af6b7-a442-4ea8-beca-2843ffb42e3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': True, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-d0a276fd-ac37-4f51-aa93-2a88fc08b739', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'd0a276fd-ac37-4f51-aa93-2a88fc08b739', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'f4568c68-41ba-4de0-a607-76bf5907f37c', 'attached_at': '', 'detached_at': '', 'volume_id': 'd0a276fd-ac37-4f51-aa93-2a88fc08b739', 'serial': 'd0a276fd-ac37-4f51-aa93-2a88fc08b739'}, 'boot_index': 0, 'guest_format': None, 'attachment_id': '3cdd1485-57e3-4393-8579-08d700c05610', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.436 2 WARNING nova.virt.libvirt.driver [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.450 2 DEBUG nova.virt.libvirt.host [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.451 2 DEBUG nova.virt.libvirt.host [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.458 2 DEBUG nova.virt.libvirt.host [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.459 2 DEBUG nova.virt.libvirt.host [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.460 2 DEBUG nova.virt.libvirt.driver [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.460 2 DEBUG nova.virt.hardware [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.461 2 DEBUG nova.virt.hardware [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.462 2 DEBUG nova.virt.hardware [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.462 2 DEBUG nova.virt.hardware [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.463 2 DEBUG nova.virt.hardware [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.463 2 DEBUG nova.virt.hardware [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.464 2 DEBUG nova.virt.hardware [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.464 2 DEBUG nova.virt.hardware [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.465 2 DEBUG nova.virt.hardware [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.465 2 DEBUG nova.virt.hardware [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.466 2 DEBUG nova.virt.hardware [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.506 2 DEBUG nova.storage.rbd_utils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] rbd image f4568c68-41ba-4de0-a607-76bf5907f37c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.511 2 DEBUG oslo_concurrency.processutils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:15:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 305 active+clean; 248 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 11 MiB/s wr, 78 op/s
Oct 11 04:15:23 compute-0 podman[292102]: 2025-10-11 04:15:23.727560535 +0000 UTC m=+0.054569961 container create 9a51162ce92bdd97fac88138c28f8882ad6d29b2d5c22f516a7d3d74bbd9c0ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_buck, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Oct 11 04:15:23 compute-0 systemd[1]: Started libpod-conmon-9a51162ce92bdd97fac88138c28f8882ad6d29b2d5c22f516a7d3d74bbd9c0ee.scope.
Oct 11 04:15:23 compute-0 podman[292102]: 2025-10-11 04:15:23.702510703 +0000 UTC m=+0.029520139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:15:23 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:15:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:15:23 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2275738536' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:15:23 compute-0 podman[292102]: 2025-10-11 04:15:23.827520366 +0000 UTC m=+0.154529872 container init 9a51162ce92bdd97fac88138c28f8882ad6d29b2d5c22f516a7d3d74bbd9c0ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 04:15:23 compute-0 podman[292102]: 2025-10-11 04:15:23.835696708 +0000 UTC m=+0.162706164 container start 9a51162ce92bdd97fac88138c28f8882ad6d29b2d5c22f516a7d3d74bbd9c0ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:15:23 compute-0 jovial_buck[292135]: 167 167
Oct 11 04:15:23 compute-0 systemd[1]: libpod-9a51162ce92bdd97fac88138c28f8882ad6d29b2d5c22f516a7d3d74bbd9c0ee.scope: Deactivated successfully.
Oct 11 04:15:23 compute-0 podman[292102]: 2025-10-11 04:15:23.8417345 +0000 UTC m=+0.168743986 container attach 9a51162ce92bdd97fac88138c28f8882ad6d29b2d5c22f516a7d3d74bbd9c0ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 04:15:23 compute-0 podman[292102]: 2025-10-11 04:15:23.842550713 +0000 UTC m=+0.169560169 container died 9a51162ce92bdd97fac88138c28f8882ad6d29b2d5c22f516a7d3d74bbd9c0ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.843 2 DEBUG oslo_concurrency.processutils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.850 2 DEBUG nova.compute.provider_tree [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.869 2 DEBUG nova.scheduler.client.report [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:15:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3e00e30aaa30d260d181162e504291390ee86ef91ebe3da562023a71421af8e-merged.mount: Deactivated successfully.
Oct 11 04:15:23 compute-0 podman[292102]: 2025-10-11 04:15:23.888272942 +0000 UTC m=+0.215282368 container remove 9a51162ce92bdd97fac88138c28f8882ad6d29b2d5c22f516a7d3d74bbd9c0ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_buck, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.892 2 DEBUG oslo_concurrency.lockutils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.893 2 DEBUG nova.compute.manager [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:15:23 compute-0 systemd[1]: libpod-conmon-9a51162ce92bdd97fac88138c28f8882ad6d29b2d5c22f516a7d3d74bbd9c0ee.scope: Deactivated successfully.
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.945 2 DEBUG nova.compute.manager [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.946 2 DEBUG nova.network.neutron [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.961 2 INFO nova.virt.libvirt.driver [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:15:23 compute-0 nova_compute[259850]: 2025-10-11 04:15:23.976 2 DEBUG nova.compute.manager [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.019 2 INFO nova.virt.block_device [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Booting with volume c358a4fb-dbe4-4873-963a-9b4d3369e2f4 at /dev/vda
Oct 11 04:15:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:15:24 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3551747099' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.039 2 DEBUG oslo_concurrency.processutils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.059 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.063 2 DEBUG nova.virt.libvirt.vif [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:15:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-735375716',display_name='tempest-TestVolumeBootPattern-volume-backed-server-735375716',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-735375716',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJlNFjANMenAUMjm3c+Yt/pV1YDteEbOrKj8pDNXp+AZ2bzyNSZQdsoCOqS2FJ+bZXXJhyzIuhHoqTJa3/aEXpu3IGJyP1VFFF028Wsjb+CD09ZVWGqe9jlbmQCXenrv1g==',key_name='tempest-keypair-338328634',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09ba33ef4bd447699d74946c58839b2d',ramdisk_id='',reservation_id='r-ggvrzzwl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-771726270',owner_user_name='tempest-TestVolumeBootPattern-771726270-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:15:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2a330a845d62440c871f80eda2546881',uuid=f4568c68-41ba-4de0-a607-76bf5907f37c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7a1af6b7-a442-4ea8-beca-2843ffb42e3c", "address": "fa:16:3e:e8:e3:04", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": 
[], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a1af6b7-a4", "ovs_interfaceid": "7a1af6b7-a442-4ea8-beca-2843ffb42e3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.065 2 DEBUG nova.network.os_vif_util [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converting VIF {"id": "7a1af6b7-a442-4ea8-beca-2843ffb42e3c", "address": "fa:16:3e:e8:e3:04", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a1af6b7-a4", "ovs_interfaceid": "7a1af6b7-a442-4ea8-beca-2843ffb42e3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.067 2 DEBUG nova.network.os_vif_util [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:e3:04,bridge_name='br-int',has_traffic_filtering=True,id=7a1af6b7-a442-4ea8-beca-2843ffb42e3c,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a1af6b7-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.069 2 DEBUG nova.objects.instance [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lazy-loading 'pci_devices' on Instance uuid f4568c68-41ba-4de0-a607-76bf5907f37c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.093 2 DEBUG nova.virt.libvirt.driver [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:15:24 compute-0 nova_compute[259850]:   <uuid>f4568c68-41ba-4de0-a607-76bf5907f37c</uuid>
Oct 11 04:15:24 compute-0 nova_compute[259850]:   <name>instance-00000012</name>
Oct 11 04:15:24 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:15:24 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:15:24 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <nova:name>tempest-TestVolumeBootPattern-volume-backed-server-735375716</nova:name>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:15:23</nova:creationTime>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:15:24 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:15:24 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:15:24 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:15:24 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:15:24 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:15:24 compute-0 nova_compute[259850]:         <nova:user uuid="2a330a845d62440c871f80eda2546881">tempest-TestVolumeBootPattern-771726270-project-member</nova:user>
Oct 11 04:15:24 compute-0 nova_compute[259850]:         <nova:project uuid="09ba33ef4bd447699d74946c58839b2d">tempest-TestVolumeBootPattern-771726270</nova:project>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <nova:ports>
Oct 11 04:15:24 compute-0 nova_compute[259850]:         <nova:port uuid="7a1af6b7-a442-4ea8-beca-2843ffb42e3c">
Oct 11 04:15:24 compute-0 nova_compute[259850]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:         </nova:port>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       </nova:ports>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:15:24 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:15:24 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <system>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <entry name="serial">f4568c68-41ba-4de0-a607-76bf5907f37c</entry>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <entry name="uuid">f4568c68-41ba-4de0-a607-76bf5907f37c</entry>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     </system>
Oct 11 04:15:24 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:15:24 compute-0 nova_compute[259850]:   <os>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:   </os>
Oct 11 04:15:24 compute-0 nova_compute[259850]:   <features>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:   </features>
Oct 11 04:15:24 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:15:24 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:15:24 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/f4568c68-41ba-4de0-a607-76bf5907f37c_disk.config">
Oct 11 04:15:24 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       </source>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:15:24 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <source protocol="rbd" name="volumes/volume-d0a276fd-ac37-4f51-aa93-2a88fc08b739">
Oct 11 04:15:24 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       </source>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:15:24 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <serial>d0a276fd-ac37-4f51-aa93-2a88fc08b739</serial>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <interface type="ethernet">
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <mac address="fa:16:3e:e8:e3:04"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <mtu size="1442"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <target dev="tap7a1af6b7-a4"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     </interface>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/f4568c68-41ba-4de0-a607-76bf5907f37c/console.log" append="off"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <video>
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     </video>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:15:24 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:15:24 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:15:24 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:15:24 compute-0 nova_compute[259850]: </domain>
Oct 11 04:15:24 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.095 2 DEBUG nova.compute.manager [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Preparing to wait for external event network-vif-plugged-7a1af6b7-a442-4ea8-beca-2843ffb42e3c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.096 2 DEBUG oslo_concurrency.lockutils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "f4568c68-41ba-4de0-a607-76bf5907f37c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.096 2 DEBUG oslo_concurrency.lockutils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "f4568c68-41ba-4de0-a607-76bf5907f37c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.097 2 DEBUG oslo_concurrency.lockutils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "f4568c68-41ba-4de0-a607-76bf5907f37c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.098 2 DEBUG nova.virt.libvirt.vif [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:15:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-735375716',display_name='tempest-TestVolumeBootPattern-volume-backed-server-735375716',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-735375716',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJlNFjANMenAUMjm3c+Yt/pV1YDteEbOrKj8pDNXp+AZ2bzyNSZQdsoCOqS2FJ+bZXXJhyzIuhHoqTJa3/aEXpu3IGJyP1VFFF028Wsjb+CD09ZVWGqe9jlbmQCXenrv1g==',key_name='tempest-keypair-338328634',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09ba33ef4bd447699d74946c58839b2d',ramdisk_id='',reservation_id='r-ggvrzzwl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-771726270',owner_user_name='tempest-TestVolumeBootPattern-771726270-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:15:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2a330a845d62440c871f80eda2546881',uuid=f4568c68-41ba-4de0-a607-76bf5907f37c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7a1af6b7-a442-4ea8-beca-2843ffb42e3c", "address": "fa:16:3e:e8:e3:04", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a1af6b7-a4", "ovs_interfaceid": "7a1af6b7-a442-4ea8-beca-2843ffb42e3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.099 2 DEBUG nova.network.os_vif_util [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converting VIF {"id": "7a1af6b7-a442-4ea8-beca-2843ffb42e3c", "address": "fa:16:3e:e8:e3:04", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a1af6b7-a4", "ovs_interfaceid": "7a1af6b7-a442-4ea8-beca-2843ffb42e3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.100 2 DEBUG nova.network.os_vif_util [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:e3:04,bridge_name='br-int',has_traffic_filtering=True,id=7a1af6b7-a442-4ea8-beca-2843ffb42e3c,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a1af6b7-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.101 2 DEBUG os_vif [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:e3:04,bridge_name='br-int',has_traffic_filtering=True,id=7a1af6b7-a442-4ea8-beca-2843ffb42e3c,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a1af6b7-a4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.102 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.103 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.111 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a1af6b7-a4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.112 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7a1af6b7-a4, col_values=(('external_ids', {'iface-id': '7a1af6b7-a442-4ea8-beca-2843ffb42e3c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:e3:04', 'vm-uuid': 'f4568c68-41ba-4de0-a607-76bf5907f37c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:15:24 compute-0 NetworkManager[44920]: <info>  [1760156124.1156] manager: (tap7a1af6b7-a4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.123 2 INFO os_vif [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:e3:04,bridge_name='br-int',has_traffic_filtering=True,id=7a1af6b7-a442-4ea8-beca-2843ffb42e3c,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a1af6b7-a4')
Oct 11 04:15:24 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2275738536' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:15:24 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3551747099' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:15:24 compute-0 podman[292163]: 2025-10-11 04:15:24.139524882 +0000 UTC m=+0.078478091 container create c64111dd2b18b693cf809c14b8c9fd3b1880d5d600a76b3e592a8d56e0d62064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mccarthy, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.149 2 DEBUG nova.policy [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '77d11e860ca1460cab1c20bca4d4c0ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bfcc78a613a4442d88231798d10634c9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.156 2 DEBUG os_brick.utils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.157 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.172 675 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.173 675 DEBUG oslo.privsep.daemon [-] privsep: reply[21d946a2-717f-45c3-940f-a5017df335f8]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.177 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.189 675 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.189 675 DEBUG oslo.privsep.daemon [-] privsep: reply[336487b4-c06e-4662-b815-ce36d3eff065]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e727c2bd432c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:24 compute-0 systemd[1]: Started libpod-conmon-c64111dd2b18b693cf809c14b8c9fd3b1880d5d600a76b3e592a8d56e0d62064.scope.
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.191 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:15:24 compute-0 podman[292163]: 2025-10-11 04:15:24.107893563 +0000 UTC m=+0.046846862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.196 2 DEBUG nova.virt.libvirt.driver [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.196 2 DEBUG nova.virt.libvirt.driver [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.197 2 DEBUG nova.virt.libvirt.driver [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] No VIF found with MAC fa:16:3e:e8:e3:04, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.198 2 INFO nova.virt.libvirt.driver [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Using config drive
Oct 11 04:15:24 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:15:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8843c08255fbf111b988454d1feb93317f76bf8cd807765179698a3fb9361553/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:15:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8843c08255fbf111b988454d1feb93317f76bf8cd807765179698a3fb9361553/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:15:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8843c08255fbf111b988454d1feb93317f76bf8cd807765179698a3fb9361553/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:15:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8843c08255fbf111b988454d1feb93317f76bf8cd807765179698a3fb9361553/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.244 2 DEBUG nova.storage.rbd_utils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] rbd image f4568c68-41ba-4de0-a607-76bf5907f37c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:15:24 compute-0 podman[292163]: 2025-10-11 04:15:24.252259585 +0000 UTC m=+0.191212824 container init c64111dd2b18b693cf809c14b8c9fd3b1880d5d600a76b3e592a8d56e0d62064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.202 675 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.203 675 DEBUG oslo.privsep.daemon [-] privsep: reply[a5575c10-fcfe-4139-975c-4701baca225b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.261 675 DEBUG oslo.privsep.daemon [-] privsep: reply[32d2c3cc-df6f-46c0-be3f-a67ddd19f02b]: (4, 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.263 2 DEBUG oslo_concurrency.processutils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:15:24 compute-0 podman[292163]: 2025-10-11 04:15:24.269605568 +0000 UTC m=+0.208558817 container start c64111dd2b18b693cf809c14b8c9fd3b1880d5d600a76b3e592a8d56e0d62064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mccarthy, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 04:15:24 compute-0 podman[292163]: 2025-10-11 04:15:24.273715265 +0000 UTC m=+0.212668574 container attach c64111dd2b18b693cf809c14b8c9fd3b1880d5d600a76b3e592a8d56e0d62064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.309 2 DEBUG oslo_concurrency.processutils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CMD "nvme version" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.313 2 DEBUG os_brick.initiator.connectors.lightos [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.314 2 DEBUG os_brick.initiator.connectors.lightos [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.314 2 DEBUG os_brick.initiator.connectors.lightos [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.315 2 DEBUG os_brick.utils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] <== get_connector_properties: return (157ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e727c2bd432c', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.315 2 DEBUG nova.virt.block_device [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Updating existing volume attachment record: e12510a9-6ec0-4cad-a671-279953259e45 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.865 2 INFO nova.virt.libvirt.driver [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Creating config drive at /var/lib/nova/instances/f4568c68-41ba-4de0-a607-76bf5907f37c/disk.config
Oct 11 04:15:24 compute-0 nova_compute[259850]: 2025-10-11 04:15:24.878 2 DEBUG oslo_concurrency.processutils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f4568c68-41ba-4de0-a607-76bf5907f37c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphf7spl_s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:15:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:15:24 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4206500645' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:15:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]: {
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:     "0": [
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:         {
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "devices": [
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "/dev/loop3"
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             ],
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "lv_name": "ceph_lv0",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "lv_size": "21470642176",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "name": "ceph_lv0",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "tags": {
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.cluster_name": "ceph",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.crush_device_class": "",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.encrypted": "0",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.osd_id": "0",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.type": "block",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.vdo": "0"
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             },
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "type": "block",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "vg_name": "ceph_vg0"
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:         }
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:     ],
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:     "1": [
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:         {
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "devices": [
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "/dev/loop4"
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             ],
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "lv_name": "ceph_lv1",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "lv_size": "21470642176",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "name": "ceph_lv1",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "tags": {
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.cluster_name": "ceph",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.crush_device_class": "",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.encrypted": "0",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.osd_id": "1",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.type": "block",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.vdo": "0"
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             },
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "type": "block",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "vg_name": "ceph_vg1"
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:         }
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:     ],
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:     "2": [
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:         {
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "devices": [
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "/dev/loop5"
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             ],
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "lv_name": "ceph_lv2",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "lv_size": "21470642176",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "name": "ceph_lv2",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "tags": {
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.cluster_name": "ceph",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.crush_device_class": "",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.encrypted": "0",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.osd_id": "2",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.type": "block",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:                 "ceph.vdo": "0"
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             },
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "type": "block",
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:             "vg_name": "ceph_vg2"
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:         }
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]:     ]
Oct 11 04:15:25 compute-0 quirky_mccarthy[292187]: }
Oct 11 04:15:25 compute-0 systemd[1]: libpod-c64111dd2b18b693cf809c14b8c9fd3b1880d5d600a76b3e592a8d56e0d62064.scope: Deactivated successfully.
Oct 11 04:15:25 compute-0 podman[292163]: 2025-10-11 04:15:25.03406186 +0000 UTC m=+0.973015069 container died c64111dd2b18b693cf809c14b8c9fd3b1880d5d600a76b3e592a8d56e0d62064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mccarthy, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 04:15:25 compute-0 nova_compute[259850]: 2025-10-11 04:15:25.034 2 DEBUG oslo_concurrency.processutils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f4568c68-41ba-4de0-a607-76bf5907f37c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphf7spl_s" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:15:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-8843c08255fbf111b988454d1feb93317f76bf8cd807765179698a3fb9361553-merged.mount: Deactivated successfully.
Oct 11 04:15:25 compute-0 nova_compute[259850]: 2025-10-11 04:15:25.082 2 DEBUG nova.storage.rbd_utils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] rbd image f4568c68-41ba-4de0-a607-76bf5907f37c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:15:25 compute-0 podman[292163]: 2025-10-11 04:15:25.086662775 +0000 UTC m=+1.025615994 container remove c64111dd2b18b693cf809c14b8c9fd3b1880d5d600a76b3e592a8d56e0d62064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:15:25 compute-0 nova_compute[259850]: 2025-10-11 04:15:25.086 2 DEBUG oslo_concurrency.processutils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f4568c68-41ba-4de0-a607-76bf5907f37c/disk.config f4568c68-41ba-4de0-a607-76bf5907f37c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:15:25 compute-0 systemd[1]: libpod-conmon-c64111dd2b18b693cf809c14b8c9fd3b1880d5d600a76b3e592a8d56e0d62064.scope: Deactivated successfully.
Oct 11 04:15:25 compute-0 nova_compute[259850]: 2025-10-11 04:15:25.120 2 DEBUG nova.network.neutron [req-765d83c2-94ff-4b0c-a5ac-d3c266a5b842 req-8b3af623-f2f7-4b9b-837a-16c8956cd3c1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Updated VIF entry in instance network info cache for port 7a1af6b7-a442-4ea8-beca-2843ffb42e3c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:15:25 compute-0 nova_compute[259850]: 2025-10-11 04:15:25.122 2 DEBUG nova.network.neutron [req-765d83c2-94ff-4b0c-a5ac-d3c266a5b842 req-8b3af623-f2f7-4b9b-837a-16c8956cd3c1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Updating instance_info_cache with network_info: [{"id": "7a1af6b7-a442-4ea8-beca-2843ffb42e3c", "address": "fa:16:3e:e8:e3:04", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a1af6b7-a4", "ovs_interfaceid": "7a1af6b7-a442-4ea8-beca-2843ffb42e3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:15:25 compute-0 sudo[291997]: pam_unix(sudo:session): session closed for user root
Oct 11 04:15:25 compute-0 ceph-mon[74273]: pgmap v1546: 305 pgs: 305 active+clean; 248 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 11 MiB/s wr, 78 op/s
Oct 11 04:15:25 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4206500645' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:15:25 compute-0 nova_compute[259850]: 2025-10-11 04:15:25.167 2 DEBUG oslo_concurrency.lockutils [req-765d83c2-94ff-4b0c-a5ac-d3c266a5b842 req-8b3af623-f2f7-4b9b-837a-16c8956cd3c1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-f4568c68-41ba-4de0-a607-76bf5907f37c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:15:25 compute-0 sudo[292250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:15:25 compute-0 sudo[292250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:15:25 compute-0 sudo[292250]: pam_unix(sudo:session): session closed for user root
Oct 11 04:15:25 compute-0 nova_compute[259850]: 2025-10-11 04:15:25.258 2 DEBUG nova.network.neutron [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Successfully created port: a1f30276-c6ab-493e-9be5-8e3baf249a38 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:15:25 compute-0 nova_compute[259850]: 2025-10-11 04:15:25.267 2 DEBUG oslo_concurrency.processutils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f4568c68-41ba-4de0-a607-76bf5907f37c/disk.config f4568c68-41ba-4de0-a607-76bf5907f37c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:15:25 compute-0 nova_compute[259850]: 2025-10-11 04:15:25.267 2 INFO nova.virt.libvirt.driver [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Deleting local config drive /var/lib/nova/instances/f4568c68-41ba-4de0-a607-76bf5907f37c/disk.config because it was imported into RBD.
Oct 11 04:15:25 compute-0 sudo[292293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:15:25 compute-0 sudo[292293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:15:25 compute-0 sudo[292293]: pam_unix(sudo:session): session closed for user root
Oct 11 04:15:25 compute-0 kernel: tap7a1af6b7-a4: entered promiscuous mode
Oct 11 04:15:25 compute-0 NetworkManager[44920]: <info>  [1760156125.3251] manager: (tap7a1af6b7-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/98)
Oct 11 04:15:25 compute-0 ovn_controller[152025]: 2025-10-11T04:15:25Z|00187|binding|INFO|Claiming lport 7a1af6b7-a442-4ea8-beca-2843ffb42e3c for this chassis.
Oct 11 04:15:25 compute-0 ovn_controller[152025]: 2025-10-11T04:15:25Z|00188|binding|INFO|7a1af6b7-a442-4ea8-beca-2843ffb42e3c: Claiming fa:16:3e:e8:e3:04 10.100.0.6
Oct 11 04:15:25 compute-0 nova_compute[259850]: 2025-10-11 04:15:25.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:25.332 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:e3:04 10.100.0.6'], port_security=['fa:16:3e:e8:e3:04 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f4568c68-41ba-4de0-a607-76bf5907f37c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09ba33ef4bd447699d74946c58839b2d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3ee4b1ef-419d-44da-a657-f91e5ccf3725', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27b77226-c1f8-485e-969b-bae9a3bf7ceb, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=7a1af6b7-a442-4ea8-beca-2843ffb42e3c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:25.333 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 7a1af6b7-a442-4ea8-beca-2843ffb42e3c in datapath b6cd64a2-af0b-4f57-b84c-cbc9cde5251d bound to our chassis
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:25.334 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b6cd64a2-af0b-4f57-b84c-cbc9cde5251d
Oct 11 04:15:25 compute-0 ovn_controller[152025]: 2025-10-11T04:15:25Z|00189|binding|INFO|Setting lport 7a1af6b7-a442-4ea8-beca-2843ffb42e3c ovn-installed in OVS
Oct 11 04:15:25 compute-0 ovn_controller[152025]: 2025-10-11T04:15:25Z|00190|binding|INFO|Setting lport 7a1af6b7-a442-4ea8-beca-2843ffb42e3c up in Southbound
Oct 11 04:15:25 compute-0 nova_compute[259850]: 2025-10-11 04:15:25.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:25.353 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[caf3b532-e93a-4fa0-833a-e66bc764d457]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:25.355 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb6cd64a2-a1 in ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:25.358 267637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb6cd64a2-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:25.358 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[63b44d0b-abad-4846-9bd9-8861769ac946]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:25.360 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[b618daf8-bfe7-44fc-a301-cb2a86db89ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:25 compute-0 sudo[292322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:15:25 compute-0 systemd-udevd[292355]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:15:25 compute-0 sudo[292322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:15:25 compute-0 sudo[292322]: pam_unix(sudo:session): session closed for user root
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:25.378 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[3b9ff1c2-ac3d-41f2-a558-5fd1efdcbb34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:25 compute-0 NetworkManager[44920]: <info>  [1760156125.3880] device (tap7a1af6b7-a4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:15:25 compute-0 NetworkManager[44920]: <info>  [1760156125.3889] device (tap7a1af6b7-a4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:15:25 compute-0 systemd-machined[214869]: New machine qemu-18-instance-00000012.
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:25.402 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[378ae84b-b1a4-4acb-b8c3-14f1c44e9da6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:25 compute-0 systemd[1]: Started Virtual Machine qemu-18-instance-00000012.
Oct 11 04:15:25 compute-0 nova_compute[259850]: 2025-10-11 04:15:25.417 2 DEBUG nova.compute.manager [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:15:25 compute-0 nova_compute[259850]: 2025-10-11 04:15:25.419 2 DEBUG nova.virt.libvirt.driver [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:15:25 compute-0 nova_compute[259850]: 2025-10-11 04:15:25.419 2 INFO nova.virt.libvirt.driver [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Creating image(s)
Oct 11 04:15:25 compute-0 nova_compute[259850]: 2025-10-11 04:15:25.419 2 DEBUG nova.virt.libvirt.driver [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 11 04:15:25 compute-0 nova_compute[259850]: 2025-10-11 04:15:25.420 2 DEBUG nova.virt.libvirt.driver [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Ensure instance console log exists: /var/lib/nova/instances/ab2c9a76-86d0-4cca-92b5-ae402fda2905/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:15:25 compute-0 nova_compute[259850]: 2025-10-11 04:15:25.420 2 DEBUG oslo_concurrency.lockutils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:15:25 compute-0 nova_compute[259850]: 2025-10-11 04:15:25.420 2 DEBUG oslo_concurrency.lockutils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:15:25 compute-0 nova_compute[259850]: 2025-10-11 04:15:25.420 2 DEBUG oslo_concurrency.lockutils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:25.430 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[4a5bbb6f-4a3e-4ad8-8196-0e7e8a083851]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:25.435 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[eece932e-3755-4a88-98d2-22587939781c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:25 compute-0 NetworkManager[44920]: <info>  [1760156125.4369] manager: (tapb6cd64a2-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/99)
Oct 11 04:15:25 compute-0 systemd-udevd[292362]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:15:25 compute-0 sudo[292361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 04:15:25 compute-0 sudo[292361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:25.465 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[1263e2dd-2f31-46a2-ab86-baf5e30277cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:25.469 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[e377febf-6b5b-432e-bc17-e234364c555c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:25 compute-0 NetworkManager[44920]: <info>  [1760156125.4953] device (tapb6cd64a2-a0): carrier: link connected
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:25.501 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[09f68f33-ebcd-4441-b1a9-ce216b2744c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:25.522 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[3d6884e1-d6c1-4e64-ab2a-98b5df4ec1f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6cd64a2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:9f:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451053, 'reachable_time': 43256, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292414, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:25.537 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[b51bc346-ebce-4b06-b0ab-b7b65280d311]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:9f02'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451053, 'tstamp': 451053}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292415, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:25.554 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[645cabb1-f290-4591-93e0-1826b0aa795e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6cd64a2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:9f:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451053, 'reachable_time': 43256, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292416, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:25.587 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[85971710-f66e-43d4-a95b-1c587b8b8dda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:25 compute-0 nova_compute[259850]: 2025-10-11 04:15:25.634 2 DEBUG nova.compute.manager [req-bd4d720c-d557-4542-b46e-0be60b529bc7 req-17e3fb5c-7784-4cf1-a15b-99b96a618387 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Received event network-vif-plugged-7a1af6b7-a442-4ea8-beca-2843ffb42e3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:15:25 compute-0 nova_compute[259850]: 2025-10-11 04:15:25.635 2 DEBUG oslo_concurrency.lockutils [req-bd4d720c-d557-4542-b46e-0be60b529bc7 req-17e3fb5c-7784-4cf1-a15b-99b96a618387 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "f4568c68-41ba-4de0-a607-76bf5907f37c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:15:25 compute-0 nova_compute[259850]: 2025-10-11 04:15:25.635 2 DEBUG oslo_concurrency.lockutils [req-bd4d720c-d557-4542-b46e-0be60b529bc7 req-17e3fb5c-7784-4cf1-a15b-99b96a618387 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "f4568c68-41ba-4de0-a607-76bf5907f37c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:15:25 compute-0 nova_compute[259850]: 2025-10-11 04:15:25.636 2 DEBUG oslo_concurrency.lockutils [req-bd4d720c-d557-4542-b46e-0be60b529bc7 req-17e3fb5c-7784-4cf1-a15b-99b96a618387 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "f4568c68-41ba-4de0-a607-76bf5907f37c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:15:25 compute-0 nova_compute[259850]: 2025-10-11 04:15:25.636 2 DEBUG nova.compute.manager [req-bd4d720c-d557-4542-b46e-0be60b529bc7 req-17e3fb5c-7784-4cf1-a15b-99b96a618387 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Processing event network-vif-plugged-7a1af6b7-a442-4ea8-beca-2843ffb42e3c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:25.668 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[9698a855-0124-49c1-b0fe-4f403d87c9b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:25.670 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6cd64a2-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:25.671 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:25.671 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6cd64a2-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:15:25 compute-0 NetworkManager[44920]: <info>  [1760156125.7156] manager: (tapb6cd64a2-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Oct 11 04:15:25 compute-0 kernel: tapb6cd64a2-a0: entered promiscuous mode
Oct 11 04:15:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1547: 305 pgs: 305 active+clean; 248 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Oct 11 04:15:25 compute-0 nova_compute[259850]: 2025-10-11 04:15:25.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:25.725 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb6cd64a2-a0, col_values=(('external_ids', {'iface-id': 'c2cbaf15-a50c-40b8-9f65-12b11618e7fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:15:25 compute-0 nova_compute[259850]: 2025-10-11 04:15:25.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:25 compute-0 ovn_controller[152025]: 2025-10-11T04:15:25Z|00191|binding|INFO|Releasing lport c2cbaf15-a50c-40b8-9f65-12b11618e7fc from this chassis (sb_readonly=0)
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:25.741 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b6cd64a2-af0b-4f57-b84c-cbc9cde5251d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b6cd64a2-af0b-4f57-b84c-cbc9cde5251d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:25.742 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[1697df2b-de9b-4124-a9dd-7d0303db0c7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:25.743 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: global
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]:     log         /dev/log local0 debug
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]:     log-tag     haproxy-metadata-proxy-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]:     user        root
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]:     group       root
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]:     maxconn     1024
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]:     pidfile     /var/lib/neutron/external/pids/b6cd64a2-af0b-4f57-b84c-cbc9cde5251d.pid.haproxy
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]:     daemon
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: defaults
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]:     log global
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]:     mode http
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]:     option httplog
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]:     option dontlognull
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]:     option http-server-close
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]:     option forwardfor
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]:     retries                 3
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]:     timeout http-request    30s
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]:     timeout connect         30s
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]:     timeout client          32s
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]:     timeout server          32s
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]:     timeout http-keep-alive 30s
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: listen listener
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]:     bind 169.254.169.254:80
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]:     http-request add-header X-OVN-Network-ID b6cd64a2-af0b-4f57-b84c-cbc9cde5251d
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 04:15:25 compute-0 nova_compute[259850]: 2025-10-11 04:15:25.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:25 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:25.744 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'env', 'PROCESS_TAG=haproxy-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b6cd64a2-af0b-4f57-b84c-cbc9cde5251d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 04:15:25 compute-0 podman[292463]: 2025-10-11 04:15:25.871396333 +0000 UTC m=+0.060919792 container create 892873957b3fe5d43b645e1a38876f31c0ab3dbab15d2b0341be43fbe476584f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:15:25 compute-0 systemd[1]: Started libpod-conmon-892873957b3fe5d43b645e1a38876f31c0ab3dbab15d2b0341be43fbe476584f.scope.
Oct 11 04:15:25 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:15:25 compute-0 podman[292463]: 2025-10-11 04:15:25.852011002 +0000 UTC m=+0.041534441 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:15:25 compute-0 podman[292463]: 2025-10-11 04:15:25.960967268 +0000 UTC m=+0.150490757 container init 892873957b3fe5d43b645e1a38876f31c0ab3dbab15d2b0341be43fbe476584f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:15:25 compute-0 podman[292463]: 2025-10-11 04:15:25.975129871 +0000 UTC m=+0.164653300 container start 892873957b3fe5d43b645e1a38876f31c0ab3dbab15d2b0341be43fbe476584f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_aryabhata, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:15:25 compute-0 podman[292463]: 2025-10-11 04:15:25.978696662 +0000 UTC m=+0.168220121 container attach 892873957b3fe5d43b645e1a38876f31c0ab3dbab15d2b0341be43fbe476584f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Oct 11 04:15:25 compute-0 modest_aryabhata[292495]: 167 167
Oct 11 04:15:25 compute-0 systemd[1]: libpod-892873957b3fe5d43b645e1a38876f31c0ab3dbab15d2b0341be43fbe476584f.scope: Deactivated successfully.
Oct 11 04:15:25 compute-0 podman[292463]: 2025-10-11 04:15:25.983075486 +0000 UTC m=+0.172598915 container died 892873957b3fe5d43b645e1a38876f31c0ab3dbab15d2b0341be43fbe476584f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:15:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-430986f4806c43e28838df003a7242e10adeebd38d48d7228481f61e25d04ff7-merged.mount: Deactivated successfully.
Oct 11 04:15:26 compute-0 podman[292463]: 2025-10-11 04:15:26.020717626 +0000 UTC m=+0.210241055 container remove 892873957b3fe5d43b645e1a38876f31c0ab3dbab15d2b0341be43fbe476584f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_aryabhata, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:15:26 compute-0 systemd[1]: libpod-conmon-892873957b3fe5d43b645e1a38876f31c0ab3dbab15d2b0341be43fbe476584f.scope: Deactivated successfully.
Oct 11 04:15:26 compute-0 podman[292477]: 2025-10-11 04:15:26.046993333 +0000 UTC m=+0.130610773 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.055 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.059 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.094 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.095 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.095 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.095 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.157 2 DEBUG nova.network.neutron [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Successfully updated port: a1f30276-c6ab-493e-9be5-8e3baf249a38 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.178 2 DEBUG oslo_concurrency.lockutils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "refresh_cache-ab2c9a76-86d0-4cca-92b5-ae402fda2905" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.178 2 DEBUG oslo_concurrency.lockutils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquired lock "refresh_cache-ab2c9a76-86d0-4cca-92b5-ae402fda2905" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.179 2 DEBUG nova.network.neutron [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:15:26 compute-0 podman[292584]: 2025-10-11 04:15:26.193090974 +0000 UTC m=+0.057470284 container create cb732540a8bb9d3b6cdc2a1c2c1e7e379d48b942fe2e5f10b4ac14c661bd9924 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 11 04:15:26 compute-0 podman[292598]: 2025-10-11 04:15:26.232961307 +0000 UTC m=+0.056426014 container create 5f0663be066390f29ab60769eec34c03031934832ca3dbb3341b0371b94a6238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_banzai, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 04:15:26 compute-0 systemd[1]: Started libpod-conmon-cb732540a8bb9d3b6cdc2a1c2c1e7e379d48b942fe2e5f10b4ac14c661bd9924.scope.
Oct 11 04:15:26 compute-0 podman[292584]: 2025-10-11 04:15:26.162646629 +0000 UTC m=+0.027026029 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 04:15:26 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4ed434db46fd7c321be10b5d05b995c67259606389f83d2a63bd458183cd84b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:15:26 compute-0 systemd[1]: Started libpod-conmon-5f0663be066390f29ab60769eec34c03031934832ca3dbb3341b0371b94a6238.scope.
Oct 11 04:15:26 compute-0 podman[292584]: 2025-10-11 04:15:26.284240914 +0000 UTC m=+0.148620214 container init cb732540a8bb9d3b6cdc2a1c2c1e7e379d48b942fe2e5f10b4ac14c661bd9924 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:15:26 compute-0 podman[292584]: 2025-10-11 04:15:26.289473413 +0000 UTC m=+0.153852723 container start cb732540a8bb9d3b6cdc2a1c2c1e7e379d48b942fe2e5f10b4ac14c661bd9924 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 11 04:15:26 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:15:26 compute-0 podman[292598]: 2025-10-11 04:15:26.207542985 +0000 UTC m=+0.031007682 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd700af0a9b3dc3612a8ded7e1e13415063b7fd4ee4529ebb7b70a0dbb4d51e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd700af0a9b3dc3612a8ded7e1e13415063b7fd4ee4529ebb7b70a0dbb4d51e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd700af0a9b3dc3612a8ded7e1e13415063b7fd4ee4529ebb7b70a0dbb4d51e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd700af0a9b3dc3612a8ded7e1e13415063b7fd4ee4529ebb7b70a0dbb4d51e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:15:26 compute-0 podman[292598]: 2025-10-11 04:15:26.322183962 +0000 UTC m=+0.145648689 container init 5f0663be066390f29ab60769eec34c03031934832ca3dbb3341b0371b94a6238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_banzai, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 04:15:26 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[292616]: [NOTICE]   (292628) : New worker (292630) forked
Oct 11 04:15:26 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[292616]: [NOTICE]   (292628) : Loading success.
Oct 11 04:15:26 compute-0 podman[292598]: 2025-10-11 04:15:26.328855392 +0000 UTC m=+0.152320069 container start 5f0663be066390f29ab60769eec34c03031934832ca3dbb3341b0371b94a6238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:15:26 compute-0 podman[292598]: 2025-10-11 04:15:26.332050953 +0000 UTC m=+0.155515680 container attach 5f0663be066390f29ab60769eec34c03031934832ca3dbb3341b0371b94a6238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_banzai, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.344 2 DEBUG nova.network.neutron [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.513 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156126.5129595, f4568c68-41ba-4de0-a607-76bf5907f37c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.514 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] VM Started (Lifecycle Event)
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.519 2 DEBUG nova.compute.manager [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.537 2 DEBUG nova.virt.libvirt.driver [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.540 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.544 2 INFO nova.virt.libvirt.driver [-] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Instance spawned successfully.
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.545 2 DEBUG nova.virt.libvirt.driver [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.548 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.573 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.574 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156126.51436, f4568c68-41ba-4de0-a607-76bf5907f37c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.574 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] VM Paused (Lifecycle Event)
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.583 2 DEBUG nova.virt.libvirt.driver [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.583 2 DEBUG nova.virt.libvirt.driver [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.584 2 DEBUG nova.virt.libvirt.driver [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.585 2 DEBUG nova.virt.libvirt.driver [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.585 2 DEBUG nova.virt.libvirt.driver [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.586 2 DEBUG nova.virt.libvirt.driver [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.594 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.598 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156126.5221705, f4568c68-41ba-4de0-a607-76bf5907f37c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.599 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] VM Resumed (Lifecycle Event)
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.618 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.627 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.651 2 INFO nova.compute.manager [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Took 4.80 seconds to spawn the instance on the hypervisor.
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.652 2 DEBUG nova.compute.manager [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.653 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.714 2 INFO nova.compute.manager [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Took 7.14 seconds to build instance.
Oct 11 04:15:26 compute-0 nova_compute[259850]: 2025-10-11 04:15:26.731 2 DEBUG oslo_concurrency.lockutils [None req-6542b70e-9085-48c3-a013-46621b3f3e3c 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "f4568c68-41ba-4de0-a607-76bf5907f37c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.216s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.086 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.087 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.088 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.088 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.088 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:15:27 compute-0 ceph-mon[74273]: pgmap v1547: 305 pgs: 305 active+clean; 248 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Oct 11 04:15:27 compute-0 distracted_banzai[292624]: {
Oct 11 04:15:27 compute-0 distracted_banzai[292624]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 04:15:27 compute-0 distracted_banzai[292624]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:15:27 compute-0 distracted_banzai[292624]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:15:27 compute-0 distracted_banzai[292624]:         "osd_id": 1,
Oct 11 04:15:27 compute-0 distracted_banzai[292624]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:15:27 compute-0 distracted_banzai[292624]:         "type": "bluestore"
Oct 11 04:15:27 compute-0 distracted_banzai[292624]:     },
Oct 11 04:15:27 compute-0 distracted_banzai[292624]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 04:15:27 compute-0 distracted_banzai[292624]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:15:27 compute-0 distracted_banzai[292624]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:15:27 compute-0 distracted_banzai[292624]:         "osd_id": 2,
Oct 11 04:15:27 compute-0 distracted_banzai[292624]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:15:27 compute-0 distracted_banzai[292624]:         "type": "bluestore"
Oct 11 04:15:27 compute-0 distracted_banzai[292624]:     },
Oct 11 04:15:27 compute-0 distracted_banzai[292624]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 04:15:27 compute-0 distracted_banzai[292624]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:15:27 compute-0 distracted_banzai[292624]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:15:27 compute-0 distracted_banzai[292624]:         "osd_id": 0,
Oct 11 04:15:27 compute-0 distracted_banzai[292624]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:15:27 compute-0 distracted_banzai[292624]:         "type": "bluestore"
Oct 11 04:15:27 compute-0 distracted_banzai[292624]:     }
Oct 11 04:15:27 compute-0 distracted_banzai[292624]: }
Oct 11 04:15:27 compute-0 systemd[1]: libpod-5f0663be066390f29ab60769eec34c03031934832ca3dbb3341b0371b94a6238.scope: Deactivated successfully.
Oct 11 04:15:27 compute-0 systemd[1]: libpod-5f0663be066390f29ab60769eec34c03031934832ca3dbb3341b0371b94a6238.scope: Consumed 1.111s CPU time.
Oct 11 04:15:27 compute-0 podman[292598]: 2025-10-11 04:15:27.474561908 +0000 UTC m=+1.298026595 container died 5f0663be066390f29ab60769eec34c03031934832ca3dbb3341b0371b94a6238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:15:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-bdd700af0a9b3dc3612a8ded7e1e13415063b7fd4ee4529ebb7b70a0dbb4d51e-merged.mount: Deactivated successfully.
Oct 11 04:15:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:15:27 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1414669568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:15:27 compute-0 podman[292598]: 2025-10-11 04:15:27.537559718 +0000 UTC m=+1.361024395 container remove 5f0663be066390f29ab60769eec34c03031934832ca3dbb3341b0371b94a6238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_banzai, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.557 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:15:27 compute-0 systemd[1]: libpod-conmon-5f0663be066390f29ab60769eec34c03031934832ca3dbb3341b0371b94a6238.scope: Deactivated successfully.
Oct 11 04:15:27 compute-0 sudo[292361]: pam_unix(sudo:session): session closed for user root
Oct 11 04:15:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:15:27 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:15:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:15:27 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:15:27 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 45ef4a67-8ad4-4e7e-bfe4-873c4dda2d22 does not exist
Oct 11 04:15:27 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 413c3e3c-e642-49e0-83ee-c03ece3ed5db does not exist
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.621 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.621 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:15:27 compute-0 sudo[292703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:15:27 compute-0 sudo[292703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:15:27 compute-0 sudo[292703]: pam_unix(sudo:session): session closed for user root
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.690 2 DEBUG nova.network.neutron [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Updating instance_info_cache with network_info: [{"id": "a1f30276-c6ab-493e-9be5-8e3baf249a38", "address": "fa:16:3e:f7:2a:9b", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1f30276-c6", "ovs_interfaceid": "a1f30276-c6ab-493e-9be5-8e3baf249a38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.710 2 DEBUG oslo_concurrency.lockutils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Releasing lock "refresh_cache-ab2c9a76-86d0-4cca-92b5-ae402fda2905" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.710 2 DEBUG nova.compute.manager [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Instance network_info: |[{"id": "a1f30276-c6ab-493e-9be5-8e3baf249a38", "address": "fa:16:3e:f7:2a:9b", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1f30276-c6", "ovs_interfaceid": "a1f30276-c6ab-493e-9be5-8e3baf249a38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.713 2 DEBUG nova.virt.libvirt.driver [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Start _get_guest_xml network_info=[{"id": "a1f30276-c6ab-493e-9be5-8e3baf249a38", "address": "fa:16:3e:f7:2a:9b", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1f30276-c6", "ovs_interfaceid": "a1f30276-c6ab-493e-9be5-8e3baf249a38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-c358a4fb-dbe4-4873-963a-9b4d3369e2f4', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'c358a4fb-dbe4-4873-963a-9b4d3369e2f4', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'ab2c9a76-86d0-4cca-92b5-ae402fda2905', 'attached_at': '', 'detached_at': '', 'volume_id': 'c358a4fb-dbe4-4873-963a-9b4d3369e2f4', 'serial': 'c358a4fb-dbe4-4873-963a-9b4d3369e2f4'}, 'boot_index': 0, 'guest_format': None, 'attachment_id': 'e12510a9-6ec0-4cad-a671-279953259e45', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:15:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1548: 305 pgs: 305 active+clean; 248 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.716 2 WARNING nova.virt.libvirt.driver [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.721 2 DEBUG nova.virt.libvirt.host [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.721 2 DEBUG nova.virt.libvirt.host [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.725 2 DEBUG nova.virt.libvirt.host [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.725 2 DEBUG nova.virt.libvirt.host [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.725 2 DEBUG nova.virt.libvirt.driver [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.726 2 DEBUG nova.virt.hardware [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.726 2 DEBUG nova.virt.hardware [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.726 2 DEBUG nova.virt.hardware [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.726 2 DEBUG nova.virt.hardware [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.727 2 DEBUG nova.virt.hardware [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.727 2 DEBUG nova.virt.hardware [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.727 2 DEBUG nova.virt.hardware [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.727 2 DEBUG nova.virt.hardware [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.727 2 DEBUG nova.virt.hardware [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.728 2 DEBUG nova.virt.hardware [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.728 2 DEBUG nova.virt.hardware [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:15:27 compute-0 sudo[292728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:15:27 compute-0 sudo[292728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:15:27 compute-0 sudo[292728]: pam_unix(sudo:session): session closed for user root
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.754 2 DEBUG nova.storage.rbd_utils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] rbd image ab2c9a76-86d0-4cca-92b5-ae402fda2905_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.758 2 DEBUG oslo_concurrency.processutils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.788 2 DEBUG nova.compute.manager [req-67ac4d33-1bc8-4183-937b-6952d2c473b0 req-c681ffad-41dc-435a-b56a-537641b049df f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Received event network-vif-plugged-7a1af6b7-a442-4ea8-beca-2843ffb42e3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.789 2 DEBUG oslo_concurrency.lockutils [req-67ac4d33-1bc8-4183-937b-6952d2c473b0 req-c681ffad-41dc-435a-b56a-537641b049df f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "f4568c68-41ba-4de0-a607-76bf5907f37c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.789 2 DEBUG oslo_concurrency.lockutils [req-67ac4d33-1bc8-4183-937b-6952d2c473b0 req-c681ffad-41dc-435a-b56a-537641b049df f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "f4568c68-41ba-4de0-a607-76bf5907f37c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.790 2 DEBUG oslo_concurrency.lockutils [req-67ac4d33-1bc8-4183-937b-6952d2c473b0 req-c681ffad-41dc-435a-b56a-537641b049df f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "f4568c68-41ba-4de0-a607-76bf5907f37c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.790 2 DEBUG nova.compute.manager [req-67ac4d33-1bc8-4183-937b-6952d2c473b0 req-c681ffad-41dc-435a-b56a-537641b049df f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] No waiting events found dispatching network-vif-plugged-7a1af6b7-a442-4ea8-beca-2843ffb42e3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.790 2 WARNING nova.compute.manager [req-67ac4d33-1bc8-4183-937b-6952d2c473b0 req-c681ffad-41dc-435a-b56a-537641b049df f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Received unexpected event network-vif-plugged-7a1af6b7-a442-4ea8-beca-2843ffb42e3c for instance with vm_state active and task_state None.
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.791 2 DEBUG nova.compute.manager [req-67ac4d33-1bc8-4183-937b-6952d2c473b0 req-c681ffad-41dc-435a-b56a-537641b049df f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Received event network-changed-a1f30276-c6ab-493e-9be5-8e3baf249a38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.791 2 DEBUG nova.compute.manager [req-67ac4d33-1bc8-4183-937b-6952d2c473b0 req-c681ffad-41dc-435a-b56a-537641b049df f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Refreshing instance network info cache due to event network-changed-a1f30276-c6ab-493e-9be5-8e3baf249a38. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.792 2 DEBUG oslo_concurrency.lockutils [req-67ac4d33-1bc8-4183-937b-6952d2c473b0 req-c681ffad-41dc-435a-b56a-537641b049df f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-ab2c9a76-86d0-4cca-92b5-ae402fda2905" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.792 2 DEBUG oslo_concurrency.lockutils [req-67ac4d33-1bc8-4183-937b-6952d2c473b0 req-c681ffad-41dc-435a-b56a-537641b049df f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-ab2c9a76-86d0-4cca-92b5-ae402fda2905" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.792 2 DEBUG nova.network.neutron [req-67ac4d33-1bc8-4183-937b-6952d2c473b0 req-c681ffad-41dc-435a-b56a-537641b049df f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Refreshing network info cache for port a1f30276-c6ab-493e-9be5-8e3baf249a38 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.928 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.929 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4264MB free_disk=59.988277435302734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.929 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:15:27 compute-0 nova_compute[259850]: 2025-10-11 04:15:27.929 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.002 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Instance f4568c68-41ba-4de0-a607-76bf5907f37c actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.002 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Instance ab2c9a76-86d0-4cca-92b5-ae402fda2905 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.002 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.002 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.053 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:28 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1414669568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:15:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:15:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:15:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:15:28 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2433230986' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.205 2 DEBUG oslo_concurrency.processutils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.341 2 DEBUG os_brick.encryptors [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Using volume encryption metadata '{'encryption_key_id': 'dec7c0d5-cc82-40d8-b785-22125cdec965', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-c358a4fb-dbe4-4873-963a-9b4d3369e2f4', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'c358a4fb-dbe4-4873-963a-9b4d3369e2f4', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'ab2c9a76-86d0-4cca-92b5-ae402fda2905', 'attached_at': '', 'detached_at': '', 'volume_id': 'c358a4fb-dbe4-4873-963a-9b4d3369e2f4', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.344 2 DEBUG barbicanclient.client [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.358 2 DEBUG barbicanclient.v1.secrets [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/dec7c0d5-cc82-40d8-b785-22125cdec965 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.358 2 INFO barbicanclient.base [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/dec7c0d5-cc82-40d8-b785-22125cdec965
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.397 2 DEBUG barbicanclient.client [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.406 2 INFO barbicanclient.base [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/dec7c0d5-cc82-40d8-b785-22125cdec965
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.444 2 DEBUG barbicanclient.client [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.446 2 INFO barbicanclient.base [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/dec7c0d5-cc82-40d8-b785-22125cdec965
Oct 11 04:15:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.477 2 DEBUG barbicanclient.client [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.477 2 INFO barbicanclient.base [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/dec7c0d5-cc82-40d8-b785-22125cdec965
Oct 11 04:15:28 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3267991770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.498 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.506 2 DEBUG barbicanclient.client [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.507 2 INFO barbicanclient.base [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/dec7c0d5-cc82-40d8-b785-22125cdec965
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.515 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.536 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.546 2 DEBUG barbicanclient.client [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.546 2 INFO barbicanclient.base [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/dec7c0d5-cc82-40d8-b785-22125cdec965
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.574 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.576 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.584 2 DEBUG barbicanclient.client [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.585 2 INFO barbicanclient.base [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/dec7c0d5-cc82-40d8-b785-22125cdec965
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.618 2 DEBUG barbicanclient.client [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.619 2 INFO barbicanclient.base [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/dec7c0d5-cc82-40d8-b785-22125cdec965
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.649 2 DEBUG barbicanclient.client [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.650 2 INFO barbicanclient.base [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/dec7c0d5-cc82-40d8-b785-22125cdec965
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.675 2 DEBUG barbicanclient.client [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.676 2 INFO barbicanclient.base [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/dec7c0d5-cc82-40d8-b785-22125cdec965
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.705 2 DEBUG barbicanclient.client [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.706 2 INFO barbicanclient.base [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/dec7c0d5-cc82-40d8-b785-22125cdec965
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.731 2 DEBUG barbicanclient.client [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.731 2 INFO barbicanclient.base [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/dec7c0d5-cc82-40d8-b785-22125cdec965
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.754 2 DEBUG barbicanclient.client [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.754 2 INFO barbicanclient.base [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/dec7c0d5-cc82-40d8-b785-22125cdec965
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.781 2 DEBUG barbicanclient.client [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.781 2 INFO barbicanclient.base [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/dec7c0d5-cc82-40d8-b785-22125cdec965
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.800 2 DEBUG barbicanclient.client [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.800 2 INFO barbicanclient.base [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/dec7c0d5-cc82-40d8-b785-22125cdec965
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.820 2 DEBUG barbicanclient.client [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.822 2 DEBUG nova.virt.libvirt.host [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct 11 04:15:28 compute-0 nova_compute[259850]:   <usage type="volume">
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <volume>c358a4fb-dbe4-4873-963a-9b4d3369e2f4</volume>
Oct 11 04:15:28 compute-0 nova_compute[259850]:   </usage>
Oct 11 04:15:28 compute-0 nova_compute[259850]: </secret>
Oct 11 04:15:28 compute-0 nova_compute[259850]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.855 2 DEBUG nova.virt.libvirt.vif [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:15:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1105377736',display_name='tempest-TransferEncryptedVolumeTest-server-1105377736',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1105377736',id=19,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEm9PgkiGfSXOx0o4WQq8AZOWxjh5dTcSz2vccU0Qwona7kINKHr8yu5DCKNDP+0OzTB5mKLuoYtalc5W0loL0xt3InkbNaE80zGvKzG26ntAx/WTjaE+AjoYDpLrsq4bA==',key_name='tempest-TransferEncryptedVolumeTest-726747697',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfcc78a613a4442d88231798d10634c9',ramdisk_id='',reservation_id='r-2uzv56jf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1941581237',owner_user_name='tempest-TransferEncryptedVolumeTest-1941581237-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:15:24Z,user_data=None,user_id='77d11e860ca1460cab1c20bca4d4c0ea',uuid=ab2c9a76-86d0-4cca-92b5-ae402fda2905,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a1f30276-c6ab-493e-9be5-8e3baf249a38", "address": "fa:16:3e:f7:2a:9b", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1f30276-c6", "ovs_interfaceid": "a1f30276-c6ab-493e-9be5-8e3baf249a38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.856 2 DEBUG nova.network.os_vif_util [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Converting VIF {"id": "a1f30276-c6ab-493e-9be5-8e3baf249a38", "address": "fa:16:3e:f7:2a:9b", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1f30276-c6", "ovs_interfaceid": "a1f30276-c6ab-493e-9be5-8e3baf249a38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.857 2 DEBUG nova.network.os_vif_util [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:2a:9b,bridge_name='br-int',has_traffic_filtering=True,id=a1f30276-c6ab-493e-9be5-8e3baf249a38,network=Network(1c86b315-3a4b-4db0-8b3c-39658c19ef9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1f30276-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.859 2 DEBUG nova.objects.instance [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid ab2c9a76-86d0-4cca-92b5-ae402fda2905 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.873 2 DEBUG nova.virt.libvirt.driver [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:15:28 compute-0 nova_compute[259850]:   <uuid>ab2c9a76-86d0-4cca-92b5-ae402fda2905</uuid>
Oct 11 04:15:28 compute-0 nova_compute[259850]:   <name>instance-00000013</name>
Oct 11 04:15:28 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:15:28 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:15:28 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <nova:name>tempest-TransferEncryptedVolumeTest-server-1105377736</nova:name>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:15:27</nova:creationTime>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:15:28 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:15:28 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:15:28 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:15:28 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:15:28 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:15:28 compute-0 nova_compute[259850]:         <nova:user uuid="77d11e860ca1460cab1c20bca4d4c0ea">tempest-TransferEncryptedVolumeTest-1941581237-project-member</nova:user>
Oct 11 04:15:28 compute-0 nova_compute[259850]:         <nova:project uuid="bfcc78a613a4442d88231798d10634c9">tempest-TransferEncryptedVolumeTest-1941581237</nova:project>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <nova:ports>
Oct 11 04:15:28 compute-0 nova_compute[259850]:         <nova:port uuid="a1f30276-c6ab-493e-9be5-8e3baf249a38">
Oct 11 04:15:28 compute-0 nova_compute[259850]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:         </nova:port>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       </nova:ports>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:15:28 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:15:28 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <system>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <entry name="serial">ab2c9a76-86d0-4cca-92b5-ae402fda2905</entry>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <entry name="uuid">ab2c9a76-86d0-4cca-92b5-ae402fda2905</entry>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     </system>
Oct 11 04:15:28 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:15:28 compute-0 nova_compute[259850]:   <os>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:   </os>
Oct 11 04:15:28 compute-0 nova_compute[259850]:   <features>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:   </features>
Oct 11 04:15:28 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:15:28 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:15:28 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/ab2c9a76-86d0-4cca-92b5-ae402fda2905_disk.config">
Oct 11 04:15:28 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       </source>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:15:28 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <source protocol="rbd" name="volumes/volume-c358a4fb-dbe4-4873-963a-9b4d3369e2f4">
Oct 11 04:15:28 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       </source>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:15:28 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <serial>c358a4fb-dbe4-4873-963a-9b4d3369e2f4</serial>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <encryption format="luks">
Oct 11 04:15:28 compute-0 nova_compute[259850]:         <secret type="passphrase" uuid="5b15e1ca-8c30-4feb-93b4-9357b1b1e00b"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       </encryption>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <interface type="ethernet">
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <mac address="fa:16:3e:f7:2a:9b"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <mtu size="1442"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <target dev="tapa1f30276-c6"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     </interface>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/ab2c9a76-86d0-4cca-92b5-ae402fda2905/console.log" append="off"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <video>
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     </video>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:15:28 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:15:28 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:15:28 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:15:28 compute-0 nova_compute[259850]: </domain>
Oct 11 04:15:28 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.875 2 DEBUG nova.compute.manager [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Preparing to wait for external event network-vif-plugged-a1f30276-c6ab-493e-9be5-8e3baf249a38 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.875 2 DEBUG oslo_concurrency.lockutils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "ab2c9a76-86d0-4cca-92b5-ae402fda2905-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.875 2 DEBUG oslo_concurrency.lockutils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "ab2c9a76-86d0-4cca-92b5-ae402fda2905-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.876 2 DEBUG oslo_concurrency.lockutils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "ab2c9a76-86d0-4cca-92b5-ae402fda2905-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.877 2 DEBUG nova.virt.libvirt.vif [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:15:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1105377736',display_name='tempest-TransferEncryptedVolumeTest-server-1105377736',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1105377736',id=19,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEm9PgkiGfSXOx0o4WQq8AZOWxjh5dTcSz2vccU0Qwona7kINKHr8yu5DCKNDP+0OzTB5mKLuoYtalc5W0loL0xt3InkbNaE80zGvKzG26ntAx/WTjaE+AjoYDpLrsq4bA==',key_name='tempest-TransferEncryptedVolumeTest-726747697',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfcc78a613a4442d88231798d10634c9',ramdisk_id='',reservation_id='r-2uzv56jf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1941581237',owner_user_name='tempest-TransferEncryptedVolumeTest-1941581237-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:15:24Z,user_data=None,user_id='77d11e860ca1460cab1c20bca4d4c0ea',uuid=ab2c9a76-86d0-4cca-92b5-ae402fda2905,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a1f30276-c6ab-493e-9be5-8e3baf249a38", "address": "fa:16:3e:f7:2a:9b", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1f30276-c6", "ovs_interfaceid": "a1f30276-c6ab-493e-9be5-8e3baf249a38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.877 2 DEBUG nova.network.os_vif_util [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Converting VIF {"id": "a1f30276-c6ab-493e-9be5-8e3baf249a38", "address": "fa:16:3e:f7:2a:9b", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1f30276-c6", "ovs_interfaceid": "a1f30276-c6ab-493e-9be5-8e3baf249a38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.878 2 DEBUG nova.network.os_vif_util [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:2a:9b,bridge_name='br-int',has_traffic_filtering=True,id=a1f30276-c6ab-493e-9be5-8e3baf249a38,network=Network(1c86b315-3a4b-4db0-8b3c-39658c19ef9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1f30276-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.879 2 DEBUG os_vif [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:2a:9b,bridge_name='br-int',has_traffic_filtering=True,id=a1f30276-c6ab-493e-9be5-8e3baf249a38,network=Network(1c86b315-3a4b-4db0-8b3c-39658c19ef9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1f30276-c6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.881 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.881 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.886 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1f30276-c6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.886 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa1f30276-c6, col_values=(('external_ids', {'iface-id': 'a1f30276-c6ab-493e-9be5-8e3baf249a38', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f7:2a:9b', 'vm-uuid': 'ab2c9a76-86d0-4cca-92b5-ae402fda2905'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:28 compute-0 NetworkManager[44920]: <info>  [1760156128.8896] manager: (tapa1f30276-c6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/101)
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.899 2 INFO os_vif [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:2a:9b,bridge_name='br-int',has_traffic_filtering=True,id=a1f30276-c6ab-493e-9be5-8e3baf249a38,network=Network(1c86b315-3a4b-4db0-8b3c-39658c19ef9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1f30276-c6')
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.954 2 DEBUG nova.virt.libvirt.driver [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.955 2 DEBUG nova.virt.libvirt.driver [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.955 2 DEBUG nova.virt.libvirt.driver [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] No VIF found with MAC fa:16:3e:f7:2a:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.956 2 INFO nova.virt.libvirt.driver [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Using config drive
Oct 11 04:15:28 compute-0 nova_compute[259850]: 2025-10-11 04:15:28.981 2 DEBUG nova.storage.rbd_utils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] rbd image ab2c9a76-86d0-4cca-92b5-ae402fda2905_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:15:29 compute-0 ceph-mon[74273]: pgmap v1548: 305 pgs: 305 active+clean; 248 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Oct 11 04:15:29 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2433230986' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:15:29 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3267991770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:15:29 compute-0 podman[292835]: 2025-10-11 04:15:29.360964241 +0000 UTC m=+0.067303774 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 04:15:29 compute-0 NetworkManager[44920]: <info>  [1760156129.3762] manager: (patch-br-int-to-provnet-86cd831a-6a58-4ba8-a51c-57fa1a3acacc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Oct 11 04:15:29 compute-0 nova_compute[259850]: 2025-10-11 04:15:29.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:29 compute-0 NetworkManager[44920]: <info>  [1760156129.3769] manager: (patch-provnet-86cd831a-6a58-4ba8-a51c-57fa1a3acacc-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Oct 11 04:15:29 compute-0 nova_compute[259850]: 2025-10-11 04:15:29.579 2 INFO nova.virt.libvirt.driver [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Creating config drive at /var/lib/nova/instances/ab2c9a76-86d0-4cca-92b5-ae402fda2905/disk.config
Oct 11 04:15:29 compute-0 nova_compute[259850]: 2025-10-11 04:15:29.588 2 DEBUG oslo_concurrency.processutils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ab2c9a76-86d0-4cca-92b5-ae402fda2905/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa9c11syi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:15:29 compute-0 ovn_controller[152025]: 2025-10-11T04:15:29Z|00192|binding|INFO|Releasing lport c2cbaf15-a50c-40b8-9f65-12b11618e7fc from this chassis (sb_readonly=0)
Oct 11 04:15:29 compute-0 nova_compute[259850]: 2025-10-11 04:15:29.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:29 compute-0 nova_compute[259850]: 2025-10-11 04:15:29.643 2 DEBUG nova.network.neutron [req-67ac4d33-1bc8-4183-937b-6952d2c473b0 req-c681ffad-41dc-435a-b56a-537641b049df f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Updated VIF entry in instance network info cache for port a1f30276-c6ab-493e-9be5-8e3baf249a38. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:15:29 compute-0 nova_compute[259850]: 2025-10-11 04:15:29.646 2 DEBUG nova.network.neutron [req-67ac4d33-1bc8-4183-937b-6952d2c473b0 req-c681ffad-41dc-435a-b56a-537641b049df f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Updating instance_info_cache with network_info: [{"id": "a1f30276-c6ab-493e-9be5-8e3baf249a38", "address": "fa:16:3e:f7:2a:9b", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1f30276-c6", "ovs_interfaceid": "a1f30276-c6ab-493e-9be5-8e3baf249a38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:15:29 compute-0 nova_compute[259850]: 2025-10-11 04:15:29.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:29 compute-0 nova_compute[259850]: 2025-10-11 04:15:29.667 2 DEBUG oslo_concurrency.lockutils [req-67ac4d33-1bc8-4183-937b-6952d2c473b0 req-c681ffad-41dc-435a-b56a-537641b049df f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-ab2c9a76-86d0-4cca-92b5-ae402fda2905" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:15:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 305 active+clean; 248 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 9.4 MiB/s wr, 114 op/s
Oct 11 04:15:29 compute-0 nova_compute[259850]: 2025-10-11 04:15:29.742 2 DEBUG oslo_concurrency.processutils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ab2c9a76-86d0-4cca-92b5-ae402fda2905/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa9c11syi" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:15:29 compute-0 nova_compute[259850]: 2025-10-11 04:15:29.770 2 DEBUG nova.storage.rbd_utils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] rbd image ab2c9a76-86d0-4cca-92b5-ae402fda2905_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:15:29 compute-0 nova_compute[259850]: 2025-10-11 04:15:29.774 2 DEBUG oslo_concurrency.processutils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ab2c9a76-86d0-4cca-92b5-ae402fda2905/disk.config ab2c9a76-86d0-4cca-92b5-ae402fda2905_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:15:29 compute-0 nova_compute[259850]: 2025-10-11 04:15:29.829 2 DEBUG nova.compute.manager [req-0095e0ed-de3e-4d3c-ba4f-967bc44c76a4 req-5de6fd69-cbd1-47a2-81d8-acab91b0b967 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Received event network-changed-7a1af6b7-a442-4ea8-beca-2843ffb42e3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:15:29 compute-0 nova_compute[259850]: 2025-10-11 04:15:29.830 2 DEBUG nova.compute.manager [req-0095e0ed-de3e-4d3c-ba4f-967bc44c76a4 req-5de6fd69-cbd1-47a2-81d8-acab91b0b967 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Refreshing instance network info cache due to event network-changed-7a1af6b7-a442-4ea8-beca-2843ffb42e3c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:15:29 compute-0 nova_compute[259850]: 2025-10-11 04:15:29.831 2 DEBUG oslo_concurrency.lockutils [req-0095e0ed-de3e-4d3c-ba4f-967bc44c76a4 req-5de6fd69-cbd1-47a2-81d8-acab91b0b967 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-f4568c68-41ba-4de0-a607-76bf5907f37c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:15:29 compute-0 nova_compute[259850]: 2025-10-11 04:15:29.831 2 DEBUG oslo_concurrency.lockutils [req-0095e0ed-de3e-4d3c-ba4f-967bc44c76a4 req-5de6fd69-cbd1-47a2-81d8-acab91b0b967 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-f4568c68-41ba-4de0-a607-76bf5907f37c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:15:29 compute-0 nova_compute[259850]: 2025-10-11 04:15:29.831 2 DEBUG nova.network.neutron [req-0095e0ed-de3e-4d3c-ba4f-967bc44c76a4 req-5de6fd69-cbd1-47a2-81d8-acab91b0b967 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Refreshing network info cache for port 7a1af6b7-a442-4ea8-beca-2843ffb42e3c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:15:29 compute-0 nova_compute[259850]: 2025-10-11 04:15:29.963 2 DEBUG oslo_concurrency.processutils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ab2c9a76-86d0-4cca-92b5-ae402fda2905/disk.config ab2c9a76-86d0-4cca-92b5-ae402fda2905_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:15:29 compute-0 nova_compute[259850]: 2025-10-11 04:15:29.964 2 INFO nova.virt.libvirt.driver [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Deleting local config drive /var/lib/nova/instances/ab2c9a76-86d0-4cca-92b5-ae402fda2905/disk.config because it was imported into RBD.
Oct 11 04:15:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:15:30 compute-0 kernel: tapa1f30276-c6: entered promiscuous mode
Oct 11 04:15:30 compute-0 NetworkManager[44920]: <info>  [1760156130.0407] manager: (tapa1f30276-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/104)
Oct 11 04:15:30 compute-0 ovn_controller[152025]: 2025-10-11T04:15:30Z|00193|binding|INFO|Claiming lport a1f30276-c6ab-493e-9be5-8e3baf249a38 for this chassis.
Oct 11 04:15:30 compute-0 ovn_controller[152025]: 2025-10-11T04:15:30Z|00194|binding|INFO|a1f30276-c6ab-493e-9be5-8e3baf249a38: Claiming fa:16:3e:f7:2a:9b 10.100.0.6
Oct 11 04:15:30 compute-0 nova_compute[259850]: 2025-10-11 04:15:30.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:30.053 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:2a:9b 10.100.0.6'], port_security=['fa:16:3e:f7:2a:9b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'ab2c9a76-86d0-4cca-92b5-ae402fda2905', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfcc78a613a4442d88231798d10634c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3c69b653-6cff-45f0-9360-306b50c7cbb5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=756f4bd0-4cbc-4611-9397-52eb34ec09ab, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=a1f30276-c6ab-493e-9be5-8e3baf249a38) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:30.056 161902 INFO neutron.agent.ovn.metadata.agent [-] Port a1f30276-c6ab-493e-9be5-8e3baf249a38 in datapath 1c86b315-3a4b-4db0-8b3c-39658c19ef9c bound to our chassis
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:30.060 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1c86b315-3a4b-4db0-8b3c-39658c19ef9c
Oct 11 04:15:30 compute-0 systemd-udevd[292907]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:30.076 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[a9a4b724-d976-40fc-8740-a72b80b484c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:30.079 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1c86b315-31 in ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 04:15:30 compute-0 ovn_controller[152025]: 2025-10-11T04:15:30Z|00195|binding|INFO|Setting lport a1f30276-c6ab-493e-9be5-8e3baf249a38 ovn-installed in OVS
Oct 11 04:15:30 compute-0 ovn_controller[152025]: 2025-10-11T04:15:30Z|00196|binding|INFO|Setting lport a1f30276-c6ab-493e-9be5-8e3baf249a38 up in Southbound
Oct 11 04:15:30 compute-0 nova_compute[259850]: 2025-10-11 04:15:30.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:30.081 267637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1c86b315-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:30.081 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[772f0b70-88f0-4b9d-bb95-3758f66e9b18]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:30.090 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[a8689784-b102-447d-8b5c-023af92fd436]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:30 compute-0 NetworkManager[44920]: <info>  [1760156130.1020] device (tapa1f30276-c6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:15:30 compute-0 NetworkManager[44920]: <info>  [1760156130.1029] device (tapa1f30276-c6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:15:30 compute-0 systemd-machined[214869]: New machine qemu-19-instance-00000013.
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:30.117 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[dcd539a4-8a33-4076-af4e-619704a5dfb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:30 compute-0 systemd[1]: Started Virtual Machine qemu-19-instance-00000013.
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:30.145 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[62654a5a-9e03-4f83-b370-35ffb19864a2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:30.167 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[3fcdb574-7d32-4026-9b23-7e102d0ef89e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:30 compute-0 NetworkManager[44920]: <info>  [1760156130.1729] manager: (tap1c86b315-30): new Veth device (/org/freedesktop/NetworkManager/Devices/105)
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:30.170 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[fc0edc50-59f0-46be-a2a6-1a15f147f9b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:30.208 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[6b5d9005-8ccf-4342-85fe-58a446ba7227]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:30.211 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[860bb796-621c-4869-a0eb-635396f0a91d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:30 compute-0 NetworkManager[44920]: <info>  [1760156130.2288] device (tap1c86b315-30): carrier: link connected
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:30.232 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[349f6bbf-e0a9-4e88-b4e5-017dd64c520a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:30.246 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[6f6c4566-1087-413e-b3cf-9f057fca5623]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1c86b315-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:1b:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451526, 'reachable_time': 43601, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292941, 'error': None, 'target': 'ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:30.258 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[82427991-c4bf-421a-a32a-2250f239ec68]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb2:1bd4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451526, 'tstamp': 451526}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292942, 'error': None, 'target': 'ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:30.270 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[e0cc9ca1-a322-4040-bd9a-6f48507dc9c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1c86b315-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:1b:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451526, 'reachable_time': 43601, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292943, 'error': None, 'target': 'ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:30.297 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[a04405d7-d0ea-45a1-b10a-7bef604ec314]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:30.342 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[6ebc4e4f-0522-4fa7-b005-e215229870b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:30.344 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c86b315-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:30.344 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:30.345 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c86b315-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:15:30 compute-0 NetworkManager[44920]: <info>  [1760156130.3991] manager: (tap1c86b315-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/106)
Oct 11 04:15:30 compute-0 nova_compute[259850]: 2025-10-11 04:15:30.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:30 compute-0 kernel: tap1c86b315-30: entered promiscuous mode
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:30.407 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1c86b315-30, col_values=(('external_ids', {'iface-id': '075f096d-d25a-4cca-804c-0df80c22a72a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:15:30 compute-0 ovn_controller[152025]: 2025-10-11T04:15:30Z|00197|binding|INFO|Releasing lport 075f096d-d25a-4cca-804c-0df80c22a72a from this chassis (sb_readonly=0)
Oct 11 04:15:30 compute-0 nova_compute[259850]: 2025-10-11 04:15:30.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:30 compute-0 nova_compute[259850]: 2025-10-11 04:15:30.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:30.439 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1c86b315-3a4b-4db0-8b3c-39658c19ef9c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1c86b315-3a4b-4db0-8b3c-39658c19ef9c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:30.440 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[363fae78-9c5d-4236-9eaf-13e5bc84ac32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:30.440 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: global
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]:     log         /dev/log local0 debug
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]:     log-tag     haproxy-metadata-proxy-1c86b315-3a4b-4db0-8b3c-39658c19ef9c
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]:     user        root
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]:     group       root
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]:     maxconn     1024
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]:     pidfile     /var/lib/neutron/external/pids/1c86b315-3a4b-4db0-8b3c-39658c19ef9c.pid.haproxy
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]:     daemon
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: defaults
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]:     log global
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]:     mode http
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]:     option httplog
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]:     option dontlognull
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]:     option http-server-close
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]:     option forwardfor
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]:     retries                 3
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]:     timeout http-request    30s
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]:     timeout connect         30s
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]:     timeout client          32s
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]:     timeout server          32s
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]:     timeout http-keep-alive 30s
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: listen listener
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]:     bind 169.254.169.254:80
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]:     http-request add-header X-OVN-Network-ID 1c86b315-3a4b-4db0-8b3c-39658c19ef9c
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 04:15:30 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:30.441 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'env', 'PROCESS_TAG=haproxy-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1c86b315-3a4b-4db0-8b3c-39658c19ef9c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 04:15:30 compute-0 nova_compute[259850]: 2025-10-11 04:15:30.538 2 DEBUG nova.compute.manager [req-45f6c0cd-a86c-408a-9425-e3781bda2930 req-2e9e81d9-36db-40ef-8dfb-8c03f2bd387f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Received event network-vif-plugged-a1f30276-c6ab-493e-9be5-8e3baf249a38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:15:30 compute-0 nova_compute[259850]: 2025-10-11 04:15:30.539 2 DEBUG oslo_concurrency.lockutils [req-45f6c0cd-a86c-408a-9425-e3781bda2930 req-2e9e81d9-36db-40ef-8dfb-8c03f2bd387f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "ab2c9a76-86d0-4cca-92b5-ae402fda2905-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:15:30 compute-0 nova_compute[259850]: 2025-10-11 04:15:30.539 2 DEBUG oslo_concurrency.lockutils [req-45f6c0cd-a86c-408a-9425-e3781bda2930 req-2e9e81d9-36db-40ef-8dfb-8c03f2bd387f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "ab2c9a76-86d0-4cca-92b5-ae402fda2905-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:15:30 compute-0 nova_compute[259850]: 2025-10-11 04:15:30.539 2 DEBUG oslo_concurrency.lockutils [req-45f6c0cd-a86c-408a-9425-e3781bda2930 req-2e9e81d9-36db-40ef-8dfb-8c03f2bd387f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "ab2c9a76-86d0-4cca-92b5-ae402fda2905-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:15:30 compute-0 nova_compute[259850]: 2025-10-11 04:15:30.540 2 DEBUG nova.compute.manager [req-45f6c0cd-a86c-408a-9425-e3781bda2930 req-2e9e81d9-36db-40ef-8dfb-8c03f2bd387f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Processing event network-vif-plugged-a1f30276-c6ab-493e-9be5-8e3baf249a38 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:15:30 compute-0 podman[293011]: 2025-10-11 04:15:30.833747791 +0000 UTC m=+0.044938088 container create 5a43cdc49776b49890262baad696a76f610a6c55434b5eecd1e5ef3ed48b7e3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 11 04:15:30 compute-0 systemd[1]: Started libpod-conmon-5a43cdc49776b49890262baad696a76f610a6c55434b5eecd1e5ef3ed48b7e3a.scope.
Oct 11 04:15:30 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:15:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f482aa7569a95a72a2f1d17f8ac483d59c3214c3961d8884c336a437c631ef/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:15:30 compute-0 podman[293011]: 2025-10-11 04:15:30.812025694 +0000 UTC m=+0.023216001 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 04:15:30 compute-0 podman[293011]: 2025-10-11 04:15:30.92449128 +0000 UTC m=+0.135681647 container init 5a43cdc49776b49890262baad696a76f610a6c55434b5eecd1e5ef3ed48b7e3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 11 04:15:30 compute-0 podman[293011]: 2025-10-11 04:15:30.929385989 +0000 UTC m=+0.140576296 container start 5a43cdc49776b49890262baad696a76f610a6c55434b5eecd1e5ef3ed48b7e3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:15:30 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[293026]: [NOTICE]   (293030) : New worker (293032) forked
Oct 11 04:15:30 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[293026]: [NOTICE]   (293030) : Loading success.
Oct 11 04:15:31 compute-0 ceph-mon[74273]: pgmap v1549: 305 pgs: 305 active+clean; 248 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 9.4 MiB/s wr, 114 op/s
Oct 11 04:15:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:15:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:15:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:15:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:15:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.8615818519242038e-06 of space, bias 1.0, pg target 0.0008584745555772611 quantized to 32 (current 32)
Oct 11 04:15:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:15:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0025202269323435565 of space, bias 1.0, pg target 0.756068079703067 quantized to 32 (current 32)
Oct 11 04:15:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:15:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:15:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:15:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct 11 04:15:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:15:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:15:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:15:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:15:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:15:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:15:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:15:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:15:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:15:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:15:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:15:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:15:31 compute-0 nova_compute[259850]: 2025-10-11 04:15:31.572 2 DEBUG nova.network.neutron [req-0095e0ed-de3e-4d3c-ba4f-967bc44c76a4 req-5de6fd69-cbd1-47a2-81d8-acab91b0b967 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Updated VIF entry in instance network info cache for port 7a1af6b7-a442-4ea8-beca-2843ffb42e3c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:15:31 compute-0 nova_compute[259850]: 2025-10-11 04:15:31.573 2 DEBUG nova.network.neutron [req-0095e0ed-de3e-4d3c-ba4f-967bc44c76a4 req-5de6fd69-cbd1-47a2-81d8-acab91b0b967 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Updating instance_info_cache with network_info: [{"id": "7a1af6b7-a442-4ea8-beca-2843ffb42e3c", "address": "fa:16:3e:e8:e3:04", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a1af6b7-a4", "ovs_interfaceid": "7a1af6b7-a442-4ea8-beca-2843ffb42e3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:15:31 compute-0 nova_compute[259850]: 2025-10-11 04:15:31.595 2 DEBUG oslo_concurrency.lockutils [req-0095e0ed-de3e-4d3c-ba4f-967bc44c76a4 req-5de6fd69-cbd1-47a2-81d8-acab91b0b967 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-f4568c68-41ba-4de0-a607-76bf5907f37c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:15:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1550: 305 pgs: 305 active+clean; 248 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 9.4 MiB/s wr, 114 op/s
Oct 11 04:15:32 compute-0 nova_compute[259850]: 2025-10-11 04:15:32.578 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:15:32 compute-0 nova_compute[259850]: 2025-10-11 04:15:32.579 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:15:32 compute-0 nova_compute[259850]: 2025-10-11 04:15:32.627 2 DEBUG nova.compute.manager [req-f8b80de2-528a-4283-9435-94749103b638 req-d82fc7dc-6e84-4c45-bf9e-ad2a1f73c267 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Received event network-vif-plugged-a1f30276-c6ab-493e-9be5-8e3baf249a38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:15:32 compute-0 nova_compute[259850]: 2025-10-11 04:15:32.628 2 DEBUG oslo_concurrency.lockutils [req-f8b80de2-528a-4283-9435-94749103b638 req-d82fc7dc-6e84-4c45-bf9e-ad2a1f73c267 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "ab2c9a76-86d0-4cca-92b5-ae402fda2905-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:15:32 compute-0 nova_compute[259850]: 2025-10-11 04:15:32.629 2 DEBUG oslo_concurrency.lockutils [req-f8b80de2-528a-4283-9435-94749103b638 req-d82fc7dc-6e84-4c45-bf9e-ad2a1f73c267 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "ab2c9a76-86d0-4cca-92b5-ae402fda2905-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:15:32 compute-0 nova_compute[259850]: 2025-10-11 04:15:32.630 2 DEBUG oslo_concurrency.lockutils [req-f8b80de2-528a-4283-9435-94749103b638 req-d82fc7dc-6e84-4c45-bf9e-ad2a1f73c267 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "ab2c9a76-86d0-4cca-92b5-ae402fda2905-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:15:32 compute-0 nova_compute[259850]: 2025-10-11 04:15:32.630 2 DEBUG nova.compute.manager [req-f8b80de2-528a-4283-9435-94749103b638 req-d82fc7dc-6e84-4c45-bf9e-ad2a1f73c267 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] No waiting events found dispatching network-vif-plugged-a1f30276-c6ab-493e-9be5-8e3baf249a38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:15:32 compute-0 nova_compute[259850]: 2025-10-11 04:15:32.631 2 WARNING nova.compute.manager [req-f8b80de2-528a-4283-9435-94749103b638 req-d82fc7dc-6e84-4c45-bf9e-ad2a1f73c267 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Received unexpected event network-vif-plugged-a1f30276-c6ab-493e-9be5-8e3baf249a38 for instance with vm_state building and task_state spawning.
Oct 11 04:15:33 compute-0 nova_compute[259850]: 2025-10-11 04:15:33.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:33 compute-0 ceph-mon[74273]: pgmap v1550: 305 pgs: 305 active+clean; 248 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 9.4 MiB/s wr, 114 op/s
Oct 11 04:15:33 compute-0 nova_compute[259850]: 2025-10-11 04:15:33.579 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156133.5793157, ab2c9a76-86d0-4cca-92b5-ae402fda2905 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:15:33 compute-0 nova_compute[259850]: 2025-10-11 04:15:33.581 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] VM Started (Lifecycle Event)
Oct 11 04:15:33 compute-0 nova_compute[259850]: 2025-10-11 04:15:33.584 2 DEBUG nova.compute.manager [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:15:33 compute-0 nova_compute[259850]: 2025-10-11 04:15:33.594 2 DEBUG nova.virt.libvirt.driver [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:15:33 compute-0 nova_compute[259850]: 2025-10-11 04:15:33.599 2 INFO nova.virt.libvirt.driver [-] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Instance spawned successfully.
Oct 11 04:15:33 compute-0 nova_compute[259850]: 2025-10-11 04:15:33.599 2 DEBUG nova.virt.libvirt.driver [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:15:33 compute-0 nova_compute[259850]: 2025-10-11 04:15:33.627 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:15:33 compute-0 nova_compute[259850]: 2025-10-11 04:15:33.638 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:15:33 compute-0 nova_compute[259850]: 2025-10-11 04:15:33.642 2 DEBUG nova.virt.libvirt.driver [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:15:33 compute-0 nova_compute[259850]: 2025-10-11 04:15:33.643 2 DEBUG nova.virt.libvirt.driver [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:15:33 compute-0 nova_compute[259850]: 2025-10-11 04:15:33.648 2 DEBUG nova.virt.libvirt.driver [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:15:33 compute-0 nova_compute[259850]: 2025-10-11 04:15:33.648 2 DEBUG nova.virt.libvirt.driver [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:15:33 compute-0 nova_compute[259850]: 2025-10-11 04:15:33.649 2 DEBUG nova.virt.libvirt.driver [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:15:33 compute-0 nova_compute[259850]: 2025-10-11 04:15:33.650 2 DEBUG nova.virt.libvirt.driver [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:15:33 compute-0 nova_compute[259850]: 2025-10-11 04:15:33.691 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:15:33 compute-0 nova_compute[259850]: 2025-10-11 04:15:33.691 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156133.5794818, ab2c9a76-86d0-4cca-92b5-ae402fda2905 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:15:33 compute-0 nova_compute[259850]: 2025-10-11 04:15:33.692 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] VM Paused (Lifecycle Event)
Oct 11 04:15:33 compute-0 nova_compute[259850]: 2025-10-11 04:15:33.714 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:15:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 248 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 9.4 MiB/s wr, 128 op/s
Oct 11 04:15:33 compute-0 nova_compute[259850]: 2025-10-11 04:15:33.728 2 INFO nova.compute.manager [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Took 8.31 seconds to spawn the instance on the hypervisor.
Oct 11 04:15:33 compute-0 nova_compute[259850]: 2025-10-11 04:15:33.728 2 DEBUG nova.compute.manager [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:15:33 compute-0 nova_compute[259850]: 2025-10-11 04:15:33.734 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156133.5924551, ab2c9a76-86d0-4cca-92b5-ae402fda2905 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:15:33 compute-0 nova_compute[259850]: 2025-10-11 04:15:33.734 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] VM Resumed (Lifecycle Event)
Oct 11 04:15:33 compute-0 nova_compute[259850]: 2025-10-11 04:15:33.764 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:15:33 compute-0 nova_compute[259850]: 2025-10-11 04:15:33.769 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:15:33 compute-0 nova_compute[259850]: 2025-10-11 04:15:33.800 2 INFO nova.compute.manager [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Took 10.59 seconds to build instance.
Oct 11 04:15:33 compute-0 nova_compute[259850]: 2025-10-11 04:15:33.818 2 DEBUG oslo_concurrency.lockutils [None req-f94e59bf-b900-400f-a67b-3e84677dd23e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "ab2c9a76-86d0-4cca-92b5-ae402fda2905" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:15:33 compute-0 nova_compute[259850]: 2025-10-11 04:15:33.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:34 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:15:34 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6959 writes, 31K keys, 6959 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 6959 writes, 6959 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2085 writes, 9980 keys, 2085 commit groups, 1.0 writes per commit group, ingest: 12.25 MB, 0.02 MB/s
                                           Interval WAL: 2085 writes, 2085 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    120.6      0.31              0.14        17    0.018       0      0       0.0       0.0
                                             L6      1/0   10.07 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    165.1    135.6      0.91              0.52        16    0.057     78K   9530       0.0       0.0
                                            Sum      1/0   10.07 MB   0.0      0.1     0.0      0.1       0.2      0.0       0.0   4.3    123.6    131.9      1.22              0.66        33    0.037     78K   9530       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   4.6    133.2    140.2      0.43              0.24        10    0.043     30K   3750       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    165.1    135.6      0.91              0.52        16    0.057     78K   9530       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    122.1      0.30              0.14        16    0.019       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.2      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.036, interval 0.013
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.16 GB write, 0.07 MB/s write, 0.15 GB read, 0.06 MB/s read, 1.2 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.09 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558495a5d1f0#2 capacity: 304.00 MB usage: 17.40 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000128 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1167,16.75 MB,5.51049%) FilterBlock(34,230.17 KB,0.0739399%) IndexBlock(34,435.23 KB,0.139814%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 11 04:15:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:15:35 compute-0 ceph-mon[74273]: pgmap v1551: 305 pgs: 305 active+clean; 248 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 9.4 MiB/s wr, 128 op/s
Oct 11 04:15:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 305 active+clean; 248 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 83 op/s
Oct 11 04:15:37 compute-0 ceph-mon[74273]: pgmap v1552: 305 pgs: 305 active+clean; 248 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 83 op/s
Oct 11 04:15:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1553: 305 pgs: 305 active+clean; 248 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 83 op/s
Oct 11 04:15:38 compute-0 nova_compute[259850]: 2025-10-11 04:15:38.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:38 compute-0 nova_compute[259850]: 2025-10-11 04:15:38.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:39 compute-0 ceph-mon[74273]: pgmap v1553: 305 pgs: 305 active+clean; 248 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 83 op/s
Oct 11 04:15:39 compute-0 ovn_controller[152025]: 2025-10-11T04:15:39Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e8:e3:04 10.100.0.6
Oct 11 04:15:39 compute-0 ovn_controller[152025]: 2025-10-11T04:15:39Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e8:e3:04 10.100.0.6
Oct 11 04:15:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 270 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 192 op/s
Oct 11 04:15:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:15:40 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:40.372 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:61:6f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '92:f1:b6:e4:f1:16'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:15:40 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:40.372 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:15:40 compute-0 nova_compute[259850]: 2025-10-11 04:15:40.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:41 compute-0 ceph-mon[74273]: pgmap v1554: 305 pgs: 305 active+clean; 270 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 192 op/s
Oct 11 04:15:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1555: 305 pgs: 305 active+clean; 270 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.0 MiB/s wr, 122 op/s
Oct 11 04:15:42 compute-0 nova_compute[259850]: 2025-10-11 04:15:42.465 2 DEBUG nova.compute.manager [req-b7e9a0bf-6f08-406d-a61d-d4c27e794bc3 req-0a5b16fc-71c8-4305-9603-01837e5a96b5 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Received event network-changed-a1f30276-c6ab-493e-9be5-8e3baf249a38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:15:42 compute-0 nova_compute[259850]: 2025-10-11 04:15:42.465 2 DEBUG nova.compute.manager [req-b7e9a0bf-6f08-406d-a61d-d4c27e794bc3 req-0a5b16fc-71c8-4305-9603-01837e5a96b5 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Refreshing instance network info cache due to event network-changed-a1f30276-c6ab-493e-9be5-8e3baf249a38. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:15:42 compute-0 nova_compute[259850]: 2025-10-11 04:15:42.465 2 DEBUG oslo_concurrency.lockutils [req-b7e9a0bf-6f08-406d-a61d-d4c27e794bc3 req-0a5b16fc-71c8-4305-9603-01837e5a96b5 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-ab2c9a76-86d0-4cca-92b5-ae402fda2905" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:15:42 compute-0 nova_compute[259850]: 2025-10-11 04:15:42.466 2 DEBUG oslo_concurrency.lockutils [req-b7e9a0bf-6f08-406d-a61d-d4c27e794bc3 req-0a5b16fc-71c8-4305-9603-01837e5a96b5 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-ab2c9a76-86d0-4cca-92b5-ae402fda2905" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:15:42 compute-0 nova_compute[259850]: 2025-10-11 04:15:42.466 2 DEBUG nova.network.neutron [req-b7e9a0bf-6f08-406d-a61d-d4c27e794bc3 req-0a5b16fc-71c8-4305-9603-01837e5a96b5 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Refreshing network info cache for port a1f30276-c6ab-493e-9be5-8e3baf249a38 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:15:43 compute-0 nova_compute[259850]: 2025-10-11 04:15:43.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:43 compute-0 ceph-mon[74273]: pgmap v1555: 305 pgs: 305 active+clean; 270 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.0 MiB/s wr, 122 op/s
Oct 11 04:15:43 compute-0 podman[293048]: 2025-10-11 04:15:43.394548133 +0000 UTC m=+0.082587398 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:15:43 compute-0 podman[293047]: 2025-10-11 04:15:43.40921892 +0000 UTC m=+0.099330904 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 11 04:15:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 305 active+clean; 281 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 143 op/s
Oct 11 04:15:43 compute-0 nova_compute[259850]: 2025-10-11 04:15:43.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:44 compute-0 nova_compute[259850]: 2025-10-11 04:15:44.238 2 DEBUG nova.network.neutron [req-b7e9a0bf-6f08-406d-a61d-d4c27e794bc3 req-0a5b16fc-71c8-4305-9603-01837e5a96b5 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Updated VIF entry in instance network info cache for port a1f30276-c6ab-493e-9be5-8e3baf249a38. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:15:44 compute-0 nova_compute[259850]: 2025-10-11 04:15:44.239 2 DEBUG nova.network.neutron [req-b7e9a0bf-6f08-406d-a61d-d4c27e794bc3 req-0a5b16fc-71c8-4305-9603-01837e5a96b5 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Updating instance_info_cache with network_info: [{"id": "a1f30276-c6ab-493e-9be5-8e3baf249a38", "address": "fa:16:3e:f7:2a:9b", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1f30276-c6", "ovs_interfaceid": "a1f30276-c6ab-493e-9be5-8e3baf249a38", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:15:44 compute-0 nova_compute[259850]: 2025-10-11 04:15:44.262 2 DEBUG oslo_concurrency.lockutils [req-b7e9a0bf-6f08-406d-a61d-d4c27e794bc3 req-0a5b16fc-71c8-4305-9603-01837e5a96b5 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-ab2c9a76-86d0-4cca-92b5-ae402fda2905" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:15:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:15:45 compute-0 ceph-mon[74273]: pgmap v1556: 305 pgs: 305 active+clean; 281 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 143 op/s
Oct 11 04:15:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 305 active+clean; 281 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Oct 11 04:15:46 compute-0 ovn_controller[152025]: 2025-10-11T04:15:46Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f7:2a:9b 10.100.0.6
Oct 11 04:15:46 compute-0 ovn_controller[152025]: 2025-10-11T04:15:46Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f7:2a:9b 10.100.0.6
Oct 11 04:15:47 compute-0 ceph-mon[74273]: pgmap v1557: 305 pgs: 305 active+clean; 281 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Oct 11 04:15:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 305 active+clean; 281 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Oct 11 04:15:48 compute-0 nova_compute[259850]: 2025-10-11 04:15:48.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e384 do_prune osdmap full prune enabled
Oct 11 04:15:48 compute-0 ceph-mon[74273]: pgmap v1558: 305 pgs: 305 active+clean; 281 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Oct 11 04:15:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e385 e385: 3 total, 3 up, 3 in
Oct 11 04:15:48 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e385: 3 total, 3 up, 3 in
Oct 11 04:15:48 compute-0 nova_compute[259850]: 2025-10-11 04:15:48.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:49 compute-0 ceph-mon[74273]: osdmap e385: 3 total, 3 up, 3 in
Oct 11 04:15:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1560: 305 pgs: 305 active+clean; 350 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 768 KiB/s rd, 7.1 MiB/s wr, 122 op/s
Oct 11 04:15:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:15:50 compute-0 ceph-mon[74273]: pgmap v1560: 305 pgs: 305 active+clean; 350 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 768 KiB/s rd, 7.1 MiB/s wr, 122 op/s
Oct 11 04:15:50 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:50.375 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8a473e03-2208-47ae-afcd-05ad744a5969, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:15:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:15:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3027277068' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:15:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:15:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3027277068' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:15:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:15:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:15:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:15:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:15:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:15:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:15:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3027277068' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:15:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3027277068' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:15:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 305 active+clean; 350 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 768 KiB/s rd, 7.1 MiB/s wr, 122 op/s
Oct 11 04:15:52 compute-0 ceph-mon[74273]: pgmap v1561: 305 pgs: 305 active+clean; 350 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 768 KiB/s rd, 7.1 MiB/s wr, 122 op/s
Oct 11 04:15:53 compute-0 nova_compute[259850]: 2025-10-11 04:15:53.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:53 compute-0 nova_compute[259850]: 2025-10-11 04:15:53.651 2 DEBUG oslo_concurrency.lockutils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "b19922f4-8c6a-4465-8051-c33652138fd9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:15:53 compute-0 nova_compute[259850]: 2025-10-11 04:15:53.651 2 DEBUG oslo_concurrency.lockutils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "b19922f4-8c6a-4465-8051-c33652138fd9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:15:53 compute-0 nova_compute[259850]: 2025-10-11 04:15:53.673 2 DEBUG nova.compute.manager [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:15:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1562: 305 pgs: 305 active+clean; 350 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 665 KiB/s rd, 7.0 MiB/s wr, 113 op/s
Oct 11 04:15:53 compute-0 nova_compute[259850]: 2025-10-11 04:15:53.756 2 DEBUG oslo_concurrency.lockutils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:15:53 compute-0 nova_compute[259850]: 2025-10-11 04:15:53.757 2 DEBUG oslo_concurrency.lockutils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:15:53 compute-0 nova_compute[259850]: 2025-10-11 04:15:53.765 2 DEBUG nova.virt.hardware [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:15:53 compute-0 nova_compute[259850]: 2025-10-11 04:15:53.766 2 INFO nova.compute.claims [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:15:53 compute-0 nova_compute[259850]: 2025-10-11 04:15:53.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:53 compute-0 nova_compute[259850]: 2025-10-11 04:15:53.900 2 DEBUG oslo_concurrency.processutils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:15:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:15:54 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2294053288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:15:54 compute-0 nova_compute[259850]: 2025-10-11 04:15:54.395 2 DEBUG oslo_concurrency.processutils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:15:54 compute-0 nova_compute[259850]: 2025-10-11 04:15:54.405 2 DEBUG nova.compute.provider_tree [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:15:54 compute-0 nova_compute[259850]: 2025-10-11 04:15:54.424 2 DEBUG nova.scheduler.client.report [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:15:54 compute-0 nova_compute[259850]: 2025-10-11 04:15:54.444 2 DEBUG oslo_concurrency.lockutils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:15:54 compute-0 nova_compute[259850]: 2025-10-11 04:15:54.446 2 DEBUG nova.compute.manager [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:15:54 compute-0 nova_compute[259850]: 2025-10-11 04:15:54.512 2 INFO nova.virt.libvirt.driver [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:15:54 compute-0 nova_compute[259850]: 2025-10-11 04:15:54.517 2 DEBUG nova.compute.manager [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:15:54 compute-0 nova_compute[259850]: 2025-10-11 04:15:54.517 2 DEBUG nova.network.neutron [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:15:54 compute-0 nova_compute[259850]: 2025-10-11 04:15:54.542 2 DEBUG nova.compute.manager [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:15:54 compute-0 nova_compute[259850]: 2025-10-11 04:15:54.599 2 INFO nova.virt.block_device [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Booting with volume snapshot 65e711a8-ad46-42fa-a0f8-097512143fd7 at /dev/vda
Oct 11 04:15:54 compute-0 ceph-mon[74273]: pgmap v1562: 305 pgs: 305 active+clean; 350 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 665 KiB/s rd, 7.0 MiB/s wr, 113 op/s
Oct 11 04:15:54 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2294053288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:15:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:15:55 compute-0 nova_compute[259850]: 2025-10-11 04:15:55.107 2 DEBUG nova.policy [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2a330a845d62440c871f80eda2546881', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '09ba33ef4bd447699d74946c58839b2d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:15:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 350 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 665 KiB/s rd, 7.0 MiB/s wr, 113 op/s
Oct 11 04:15:56 compute-0 podman[293109]: 2025-10-11 04:15:56.441623535 +0000 UTC m=+0.142502350 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:15:56 compute-0 nova_compute[259850]: 2025-10-11 04:15:56.508 2 DEBUG nova.network.neutron [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Successfully created port: a0bc9537-bbc3-4bb6-9d95-a11aeb47b514 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:15:56 compute-0 ceph-mon[74273]: pgmap v1563: 305 pgs: 305 active+clean; 350 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 665 KiB/s rd, 7.0 MiB/s wr, 113 op/s
Oct 11 04:15:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1564: 305 pgs: 305 active+clean; 350 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 665 KiB/s rd, 7.0 MiB/s wr, 113 op/s
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.188 2 DEBUG nova.network.neutron [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Successfully updated port: a0bc9537-bbc3-4bb6-9d95-a11aeb47b514 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.216 2 DEBUG oslo_concurrency.lockutils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "refresh_cache-b19922f4-8c6a-4465-8051-c33652138fd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.216 2 DEBUG oslo_concurrency.lockutils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquired lock "refresh_cache-b19922f4-8c6a-4465-8051-c33652138fd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.217 2 DEBUG nova.network.neutron [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.290 2 DEBUG nova.compute.manager [req-70e3696f-bc6b-4caa-b377-11692159d8ca req-35b0e0a1-3e13-480b-b216-033078fab715 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Received event network-changed-a0bc9537-bbc3-4bb6-9d95-a11aeb47b514 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.290 2 DEBUG nova.compute.manager [req-70e3696f-bc6b-4caa-b377-11692159d8ca req-35b0e0a1-3e13-480b-b216-033078fab715 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Refreshing instance network info cache due to event network-changed-a0bc9537-bbc3-4bb6-9d95-a11aeb47b514. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.291 2 DEBUG oslo_concurrency.lockutils [req-70e3696f-bc6b-4caa-b377-11692159d8ca req-35b0e0a1-3e13-480b-b216-033078fab715 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-b19922f4-8c6a-4465-8051-c33652138fd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.374 2 DEBUG nova.network.neutron [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.569 2 DEBUG oslo_concurrency.lockutils [None req-6124908f-e531-4910-8f28-18871a2b87ff 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "ab2c9a76-86d0-4cca-92b5-ae402fda2905" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.570 2 DEBUG oslo_concurrency.lockutils [None req-6124908f-e531-4910-8f28-18871a2b87ff 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "ab2c9a76-86d0-4cca-92b5-ae402fda2905" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.571 2 DEBUG oslo_concurrency.lockutils [None req-6124908f-e531-4910-8f28-18871a2b87ff 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "ab2c9a76-86d0-4cca-92b5-ae402fda2905-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.572 2 DEBUG oslo_concurrency.lockutils [None req-6124908f-e531-4910-8f28-18871a2b87ff 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "ab2c9a76-86d0-4cca-92b5-ae402fda2905-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.572 2 DEBUG oslo_concurrency.lockutils [None req-6124908f-e531-4910-8f28-18871a2b87ff 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "ab2c9a76-86d0-4cca-92b5-ae402fda2905-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.574 2 INFO nova.compute.manager [None req-6124908f-e531-4910-8f28-18871a2b87ff 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Terminating instance
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.577 2 DEBUG nova.compute.manager [None req-6124908f-e531-4910-8f28-18871a2b87ff 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:15:58 compute-0 kernel: tapa1f30276-c6 (unregistering): left promiscuous mode
Oct 11 04:15:58 compute-0 NetworkManager[44920]: <info>  [1760156158.6297] device (tapa1f30276-c6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:15:58 compute-0 ovn_controller[152025]: 2025-10-11T04:15:58Z|00198|binding|INFO|Releasing lport a1f30276-c6ab-493e-9be5-8e3baf249a38 from this chassis (sb_readonly=0)
Oct 11 04:15:58 compute-0 ovn_controller[152025]: 2025-10-11T04:15:58Z|00199|binding|INFO|Setting lport a1f30276-c6ab-493e-9be5-8e3baf249a38 down in Southbound
Oct 11 04:15:58 compute-0 ovn_controller[152025]: 2025-10-11T04:15:58Z|00200|binding|INFO|Removing iface tapa1f30276-c6 ovn-installed in OVS
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:58.649 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:2a:9b 10.100.0.6'], port_security=['fa:16:3e:f7:2a:9b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'ab2c9a76-86d0-4cca-92b5-ae402fda2905', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfcc78a613a4442d88231798d10634c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3c69b653-6cff-45f0-9360-306b50c7cbb5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.230'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=756f4bd0-4cbc-4611-9397-52eb34ec09ab, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=a1f30276-c6ab-493e-9be5-8e3baf249a38) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:15:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:58.652 161902 INFO neutron.agent.ovn.metadata.agent [-] Port a1f30276-c6ab-493e-9be5-8e3baf249a38 in datapath 1c86b315-3a4b-4db0-8b3c-39658c19ef9c unbound from our chassis
Oct 11 04:15:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:58.655 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1c86b315-3a4b-4db0-8b3c-39658c19ef9c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:15:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:58.656 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[25e27665-b55a-46ee-abb2-0372428f577e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:58.657 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c namespace which is not needed anymore
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:58 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Deactivated successfully.
Oct 11 04:15:58 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Consumed 16.308s CPU time.
Oct 11 04:15:58 compute-0 systemd-machined[214869]: Machine qemu-19-instance-00000013 terminated.
Oct 11 04:15:58 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[293026]: [NOTICE]   (293030) : haproxy version is 2.8.14-c23fe91
Oct 11 04:15:58 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[293026]: [NOTICE]   (293030) : path to executable is /usr/sbin/haproxy
Oct 11 04:15:58 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[293026]: [WARNING]  (293030) : Exiting Master process...
Oct 11 04:15:58 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[293026]: [WARNING]  (293030) : Exiting Master process...
Oct 11 04:15:58 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[293026]: [ALERT]    (293030) : Current worker (293032) exited with code 143 (Terminated)
Oct 11 04:15:58 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[293026]: [WARNING]  (293030) : All workers exited. Exiting... (0)
Oct 11 04:15:58 compute-0 ceph-mon[74273]: pgmap v1564: 305 pgs: 305 active+clean; 350 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 665 KiB/s rd, 7.0 MiB/s wr, 113 op/s
Oct 11 04:15:58 compute-0 systemd[1]: libpod-5a43cdc49776b49890262baad696a76f610a6c55434b5eecd1e5ef3ed48b7e3a.scope: Deactivated successfully.
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.817 2 INFO nova.virt.libvirt.driver [-] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Instance destroyed successfully.
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.818 2 DEBUG nova.objects.instance [None req-6124908f-e531-4910-8f28-18871a2b87ff 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lazy-loading 'resources' on Instance uuid ab2c9a76-86d0-4cca-92b5-ae402fda2905 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:15:58 compute-0 podman[293159]: 2025-10-11 04:15:58.823196399 +0000 UTC m=+0.060841569 container died 5a43cdc49776b49890262baad696a76f610a6c55434b5eecd1e5ef3ed48b7e3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.844 2 DEBUG nova.virt.libvirt.vif [None req-6124908f-e531-4910-8f28-18871a2b87ff 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:15:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1105377736',display_name='tempest-TransferEncryptedVolumeTest-server-1105377736',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1105377736',id=19,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEm9PgkiGfSXOx0o4WQq8AZOWxjh5dTcSz2vccU0Qwona7kINKHr8yu5DCKNDP+0OzTB5mKLuoYtalc5W0loL0xt3InkbNaE80zGvKzG26ntAx/WTjaE+AjoYDpLrsq4bA==',key_name='tempest-TransferEncryptedVolumeTest-726747697',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:15:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bfcc78a613a4442d88231798d10634c9',ramdisk_id='',reservation_id='r-2uzv56jf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1941581237',owner_user_name='tempest-TransferEncryptedVolumeTest-1941581237-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:15:33Z,user_data=None,user_id='77d11e860ca1460cab1c20bca4d4c0ea',uuid=ab2c9a76-86d0-4cca-92b5-ae402fda2905,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a1f30276-c6ab-493e-9be5-8e3baf249a38", "address": "fa:16:3e:f7:2a:9b", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1f30276-c6", "ovs_interfaceid": "a1f30276-c6ab-493e-9be5-8e3baf249a38", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.845 2 DEBUG nova.network.os_vif_util [None req-6124908f-e531-4910-8f28-18871a2b87ff 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Converting VIF {"id": "a1f30276-c6ab-493e-9be5-8e3baf249a38", "address": "fa:16:3e:f7:2a:9b", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1f30276-c6", "ovs_interfaceid": "a1f30276-c6ab-493e-9be5-8e3baf249a38", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.845 2 DEBUG nova.network.os_vif_util [None req-6124908f-e531-4910-8f28-18871a2b87ff 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f7:2a:9b,bridge_name='br-int',has_traffic_filtering=True,id=a1f30276-c6ab-493e-9be5-8e3baf249a38,network=Network(1c86b315-3a4b-4db0-8b3c-39658c19ef9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1f30276-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.846 2 DEBUG os_vif [None req-6124908f-e531-4910-8f28-18871a2b87ff 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:2a:9b,bridge_name='br-int',has_traffic_filtering=True,id=a1f30276-c6ab-493e-9be5-8e3baf249a38,network=Network(1c86b315-3a4b-4db0-8b3c-39658c19ef9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1f30276-c6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.854 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1f30276-c6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.861 2 INFO os_vif [None req-6124908f-e531-4910-8f28-18871a2b87ff 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:2a:9b,bridge_name='br-int',has_traffic_filtering=True,id=a1f30276-c6ab-493e-9be5-8e3baf249a38,network=Network(1c86b315-3a4b-4db0-8b3c-39658c19ef9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1f30276-c6')
Oct 11 04:15:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5a43cdc49776b49890262baad696a76f610a6c55434b5eecd1e5ef3ed48b7e3a-userdata-shm.mount: Deactivated successfully.
Oct 11 04:15:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9f482aa7569a95a72a2f1d17f8ac483d59c3214c3961d8884c336a437c631ef-merged.mount: Deactivated successfully.
Oct 11 04:15:58 compute-0 podman[293159]: 2025-10-11 04:15:58.886488588 +0000 UTC m=+0.124133618 container cleanup 5a43cdc49776b49890262baad696a76f610a6c55434b5eecd1e5ef3ed48b7e3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.898 2 DEBUG os_brick.utils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:15:58 compute-0 systemd[1]: libpod-conmon-5a43cdc49776b49890262baad696a76f610a6c55434b5eecd1e5ef3ed48b7e3a.scope: Deactivated successfully.
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.902 2 DEBUG nova.compute.manager [req-83d02c0a-ea28-453c-94a7-656badc174d0 req-a4a64cb5-914f-4854-a991-dd62dc9e6ed2 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Received event network-vif-unplugged-a1f30276-c6ab-493e-9be5-8e3baf249a38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.903 2 DEBUG oslo_concurrency.lockutils [req-83d02c0a-ea28-453c-94a7-656badc174d0 req-a4a64cb5-914f-4854-a991-dd62dc9e6ed2 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "ab2c9a76-86d0-4cca-92b5-ae402fda2905-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.903 2 DEBUG oslo_concurrency.lockutils [req-83d02c0a-ea28-453c-94a7-656badc174d0 req-a4a64cb5-914f-4854-a991-dd62dc9e6ed2 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "ab2c9a76-86d0-4cca-92b5-ae402fda2905-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.904 2 DEBUG oslo_concurrency.lockutils [req-83d02c0a-ea28-453c-94a7-656badc174d0 req-a4a64cb5-914f-4854-a991-dd62dc9e6ed2 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "ab2c9a76-86d0-4cca-92b5-ae402fda2905-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.904 2 DEBUG nova.compute.manager [req-83d02c0a-ea28-453c-94a7-656badc174d0 req-a4a64cb5-914f-4854-a991-dd62dc9e6ed2 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] No waiting events found dispatching network-vif-unplugged-a1f30276-c6ab-493e-9be5-8e3baf249a38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.904 2 DEBUG nova.compute.manager [req-83d02c0a-ea28-453c-94a7-656badc174d0 req-a4a64cb5-914f-4854-a991-dd62dc9e6ed2 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Received event network-vif-unplugged-a1f30276-c6ab-493e-9be5-8e3baf249a38 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.900 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.917 675 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.918 675 DEBUG oslo.privsep.daemon [-] privsep: reply[13145fe0-6eae-4058-a1e5-6da73e8b3374]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.919 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.934 675 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.934 675 DEBUG oslo.privsep.daemon [-] privsep: reply[7ec63754-5a9f-4f0a-917c-7ccac2e48ac3]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e727c2bd432c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.936 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.954 675 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.954 675 DEBUG oslo.privsep.daemon [-] privsep: reply[a7b5c593-4a06-401d-a3af-7972a8c00a7e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.957 675 DEBUG oslo.privsep.daemon [-] privsep: reply[ec8651db-e6e3-4724-9851-e6c55f05c5b4]: (4, 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.957 2 DEBUG oslo_concurrency.processutils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:15:58 compute-0 podman[293216]: 2025-10-11 04:15:58.975319902 +0000 UTC m=+0.055754745 container remove 5a43cdc49776b49890262baad696a76f610a6c55434b5eecd1e5ef3ed48b7e3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 04:15:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:58.983 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[3708d720-8ca9-4dec-b8a4-f516f44a162a]: (4, ('Sat Oct 11 04:15:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c (5a43cdc49776b49890262baad696a76f610a6c55434b5eecd1e5ef3ed48b7e3a)\n5a43cdc49776b49890262baad696a76f610a6c55434b5eecd1e5ef3ed48b7e3a\nSat Oct 11 04:15:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c (5a43cdc49776b49890262baad696a76f610a6c55434b5eecd1e5ef3ed48b7e3a)\n5a43cdc49776b49890262baad696a76f610a6c55434b5eecd1e5ef3ed48b7e3a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:58.985 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[e0593241-44e6-4083-9b0f-356bd87d1046]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:58.986 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c86b315-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:58 compute-0 kernel: tap1c86b315-30: left promiscuous mode
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.990 2 DEBUG oslo_concurrency.processutils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.992 2 DEBUG os_brick.initiator.connectors.lightos [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.992 2 DEBUG os_brick.initiator.connectors.lightos [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.992 2 DEBUG os_brick.initiator.connectors.lightos [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.993 2 DEBUG os_brick.utils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] <== get_connector_properties: return (94ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e727c2bd432c', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:15:58 compute-0 nova_compute[259850]: 2025-10-11 04:15:58.993 2 DEBUG nova.virt.block_device [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Updating existing volume attachment record: eb53ed32-c893-43c5-8c00-09110d3acb62 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:15:59 compute-0 nova_compute[259850]: 2025-10-11 04:15:59.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:15:59 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:59.020 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[29a66a93-17cd-474b-b898-83340979b00e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:59 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:59.044 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[4f91f4c4-c068-4849-89b3-9568b7698b43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:59 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:59.046 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[258f699e-c51d-48ef-8b6a-00f5859207c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:59 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:59.066 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb7ab37-dcaf-494d-aed1-f80123f404cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451519, 'reachable_time': 28312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293241, 'error': None, 'target': 'ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:59 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:59.069 162015 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 04:15:59 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:15:59.070 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[3d295f83-c30d-44c7-80c8-62e14bdb384d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:15:59 compute-0 systemd[1]: run-netns-ovnmeta\x2d1c86b315\x2d3a4b\x2d4db0\x2d8b3c\x2d39658c19ef9c.mount: Deactivated successfully.
Oct 11 04:15:59 compute-0 nova_compute[259850]: 2025-10-11 04:15:59.083 2 INFO nova.virt.libvirt.driver [None req-6124908f-e531-4910-8f28-18871a2b87ff 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Deleting instance files /var/lib/nova/instances/ab2c9a76-86d0-4cca-92b5-ae402fda2905_del
Oct 11 04:15:59 compute-0 nova_compute[259850]: 2025-10-11 04:15:59.085 2 INFO nova.virt.libvirt.driver [None req-6124908f-e531-4910-8f28-18871a2b87ff 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Deletion of /var/lib/nova/instances/ab2c9a76-86d0-4cca-92b5-ae402fda2905_del complete
Oct 11 04:15:59 compute-0 nova_compute[259850]: 2025-10-11 04:15:59.141 2 INFO nova.compute.manager [None req-6124908f-e531-4910-8f28-18871a2b87ff 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Took 0.56 seconds to destroy the instance on the hypervisor.
Oct 11 04:15:59 compute-0 nova_compute[259850]: 2025-10-11 04:15:59.142 2 DEBUG oslo.service.loopingcall [None req-6124908f-e531-4910-8f28-18871a2b87ff 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:15:59 compute-0 nova_compute[259850]: 2025-10-11 04:15:59.143 2 DEBUG nova.compute.manager [-] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:15:59 compute-0 nova_compute[259850]: 2025-10-11 04:15:59.143 2 DEBUG nova.network.neutron [-] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:15:59 compute-0 nova_compute[259850]: 2025-10-11 04:15:59.178 2 DEBUG nova.network.neutron [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Updating instance_info_cache with network_info: [{"id": "a0bc9537-bbc3-4bb6-9d95-a11aeb47b514", "address": "fa:16:3e:aa:79:a8", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0bc9537-bb", "ovs_interfaceid": "a0bc9537-bbc3-4bb6-9d95-a11aeb47b514", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:15:59 compute-0 nova_compute[259850]: 2025-10-11 04:15:59.203 2 DEBUG oslo_concurrency.lockutils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Releasing lock "refresh_cache-b19922f4-8c6a-4465-8051-c33652138fd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:15:59 compute-0 nova_compute[259850]: 2025-10-11 04:15:59.204 2 DEBUG nova.compute.manager [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Instance network_info: |[{"id": "a0bc9537-bbc3-4bb6-9d95-a11aeb47b514", "address": "fa:16:3e:aa:79:a8", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0bc9537-bb", "ovs_interfaceid": "a0bc9537-bbc3-4bb6-9d95-a11aeb47b514", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:15:59 compute-0 nova_compute[259850]: 2025-10-11 04:15:59.205 2 DEBUG oslo_concurrency.lockutils [req-70e3696f-bc6b-4caa-b377-11692159d8ca req-35b0e0a1-3e13-480b-b216-033078fab715 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-b19922f4-8c6a-4465-8051-c33652138fd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:15:59 compute-0 nova_compute[259850]: 2025-10-11 04:15:59.205 2 DEBUG nova.network.neutron [req-70e3696f-bc6b-4caa-b377-11692159d8ca req-35b0e0a1-3e13-480b-b216-033078fab715 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Refreshing network info cache for port a0bc9537-bbc3-4bb6-9d95-a11aeb47b514 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:15:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:15:59 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1353915526' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:15:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1565: 305 pgs: 305 active+clean; 350 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 10 KiB/s wr, 25 op/s
Oct 11 04:15:59 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1353915526' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:15:59 compute-0 nova_compute[259850]: 2025-10-11 04:15:59.953 2 DEBUG nova.network.neutron [-] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:15:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:15:59 compute-0 nova_compute[259850]: 2025-10-11 04:15:59.998 2 DEBUG nova.compute.manager [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.000 2 DEBUG nova.virt.libvirt.driver [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.001 2 INFO nova.virt.libvirt.driver [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Creating image(s)
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.002 2 DEBUG nova.virt.libvirt.driver [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.003 2 DEBUG nova.virt.libvirt.driver [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Ensure instance console log exists: /var/lib/nova/instances/b19922f4-8c6a-4465-8051-c33652138fd9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.003 2 DEBUG oslo_concurrency.lockutils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.004 2 DEBUG oslo_concurrency.lockutils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.004 2 DEBUG oslo_concurrency.lockutils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.009 2 DEBUG nova.virt.libvirt.driver [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Start _get_guest_xml network_info=[{"id": "a0bc9537-bbc3-4bb6-9d95-a11aeb47b514", "address": "fa:16:3e:aa:79:a8", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0bc9537-bb", "ovs_interfaceid": "a0bc9537-bbc3-4bb6-9d95-a11aeb47b514", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2025-10-11T04:15:47Z,direct_url=<?>,disk_format='qcow2',id=4414e0e0-7d08-46a8-a7d9-7794d12c96fc,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-367854597',owner='09ba33ef4bd447699d74946c58839b2d',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2025-10-11T04:15:48Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': True, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-2d69b9c1-92be-4e87-b166-e1c5b2e5f688', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '2d69b9c1-92be-4e87-b166-e1c5b2e5f688', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'b19922f4-8c6a-4465-8051-c33652138fd9', 'attached_at': '', 'detached_at': '', 'volume_id': '2d69b9c1-92be-4e87-b166-e1c5b2e5f688', 'serial': '2d69b9c1-92be-4e87-b166-e1c5b2e5f688'}, 'boot_index': 0, 'guest_format': None, 'attachment_id': 'eb53ed32-c893-43c5-8c00-09110d3acb62', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.010 2 INFO nova.compute.manager [-] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Took 0.87 seconds to deallocate network for instance.
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.018 2 WARNING nova.virt.libvirt.driver [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.028 2 DEBUG nova.virt.libvirt.host [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.029 2 DEBUG nova.virt.libvirt.host [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.034 2 DEBUG nova.virt.libvirt.host [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.035 2 DEBUG nova.virt.libvirt.host [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.036 2 DEBUG nova.virt.libvirt.driver [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.036 2 DEBUG nova.virt.hardware [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2025-10-11T04:15:47Z,direct_url=<?>,disk_format='qcow2',id=4414e0e0-7d08-46a8-a7d9-7794d12c96fc,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-367854597',owner='09ba33ef4bd447699d74946c58839b2d',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2025-10-11T04:15:48Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.037 2 DEBUG nova.virt.hardware [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.038 2 DEBUG nova.virt.hardware [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.038 2 DEBUG nova.virt.hardware [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.039 2 DEBUG nova.virt.hardware [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.039 2 DEBUG nova.virt.hardware [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.040 2 DEBUG nova.virt.hardware [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.040 2 DEBUG nova.virt.hardware [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.041 2 DEBUG nova.virt.hardware [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.041 2 DEBUG nova.virt.hardware [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.042 2 DEBUG nova.virt.hardware [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.078 2 DEBUG nova.storage.rbd_utils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] rbd image b19922f4-8c6a-4465-8051-c33652138fd9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.085 2 DEBUG oslo_concurrency.processutils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.167 2 INFO nova.compute.manager [None req-6124908f-e531-4910-8f28-18871a2b87ff 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Took 0.16 seconds to detach 1 volumes for instance.
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.235 2 DEBUG oslo_concurrency.lockutils [None req-6124908f-e531-4910-8f28-18871a2b87ff 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.236 2 DEBUG oslo_concurrency.lockutils [None req-6124908f-e531-4910-8f28-18871a2b87ff 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.330 2 DEBUG oslo_concurrency.processutils [None req-6124908f-e531-4910-8f28-18871a2b87ff 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:16:00 compute-0 podman[293281]: 2025-10-11 04:16:00.347583046 +0000 UTC m=+0.062480347 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, 
config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.391 2 DEBUG nova.compute.manager [req-33837e0a-0481-4535-aae4-2ed01a2b3a42 req-11b84b1a-4b14-4536-bd02-876a119aff4b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Received event network-vif-deleted-a1f30276-c6ab-493e-9be5-8e3baf249a38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:16:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:16:00 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2813380590' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.535 2 DEBUG oslo_concurrency.processutils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.562 2 DEBUG nova.virt.libvirt.vif [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:15:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-1999044293',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-1999044293',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-1999044293',id=20,image_ref='4414e0e0-7d08-46a8-a7d9-7794d12c96fc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDUxThBUJhvO/gkOYwxW/lPS4n8OMhZe6TOX5ElcKOSryPpXQOKfBpX1K1WckyrkPSMC42WqitbH/2Ksdi9ua2+VFCgI81hDR6lqh2OHDc0/2HOB79NiKWtPVPn3ngNTCQ==',key_name='tempest-keypair-1544766429',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09ba33ef4bd447699d74946c58839b2d',ramdisk_id='',reservation_id='r-zs29qwxo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-771726270',image_owner_user_name='tempest-TestVolumeBootPattern-771726270-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-771726270',owner_user_name='tempest-TestVolumeBootPattern-771726270-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:15:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2a330a845d62440c871f80eda2546881',uuid=b19922f4-8c6a-4465-8051-c33652138fd9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') 
vif={"id": "a0bc9537-bbc3-4bb6-9d95-a11aeb47b514", "address": "fa:16:3e:aa:79:a8", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0bc9537-bb", "ovs_interfaceid": "a0bc9537-bbc3-4bb6-9d95-a11aeb47b514", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.563 2 DEBUG nova.network.os_vif_util [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converting VIF {"id": "a0bc9537-bbc3-4bb6-9d95-a11aeb47b514", "address": "fa:16:3e:aa:79:a8", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0bc9537-bb", "ovs_interfaceid": "a0bc9537-bbc3-4bb6-9d95-a11aeb47b514", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.564 2 DEBUG nova.network.os_vif_util [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:79:a8,bridge_name='br-int',has_traffic_filtering=True,id=a0bc9537-bbc3-4bb6-9d95-a11aeb47b514,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0bc9537-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.566 2 DEBUG nova.objects.instance [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lazy-loading 'pci_devices' on Instance uuid b19922f4-8c6a-4465-8051-c33652138fd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.583 2 DEBUG nova.virt.libvirt.driver [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:16:00 compute-0 nova_compute[259850]:   <uuid>b19922f4-8c6a-4465-8051-c33652138fd9</uuid>
Oct 11 04:16:00 compute-0 nova_compute[259850]:   <name>instance-00000014</name>
Oct 11 04:16:00 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:16:00 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:16:00 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <nova:name>tempest-TestVolumeBootPattern-image-snapshot-server-1999044293</nova:name>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:16:00</nova:creationTime>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:16:00 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:16:00 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:16:00 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:16:00 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:16:00 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:16:00 compute-0 nova_compute[259850]:         <nova:user uuid="2a330a845d62440c871f80eda2546881">tempest-TestVolumeBootPattern-771726270-project-member</nova:user>
Oct 11 04:16:00 compute-0 nova_compute[259850]:         <nova:project uuid="09ba33ef4bd447699d74946c58839b2d">tempest-TestVolumeBootPattern-771726270</nova:project>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <nova:root type="image" uuid="4414e0e0-7d08-46a8-a7d9-7794d12c96fc"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <nova:ports>
Oct 11 04:16:00 compute-0 nova_compute[259850]:         <nova:port uuid="a0bc9537-bbc3-4bb6-9d95-a11aeb47b514">
Oct 11 04:16:00 compute-0 nova_compute[259850]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:         </nova:port>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       </nova:ports>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:16:00 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:16:00 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <system>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <entry name="serial">b19922f4-8c6a-4465-8051-c33652138fd9</entry>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <entry name="uuid">b19922f4-8c6a-4465-8051-c33652138fd9</entry>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     </system>
Oct 11 04:16:00 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:16:00 compute-0 nova_compute[259850]:   <os>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:   </os>
Oct 11 04:16:00 compute-0 nova_compute[259850]:   <features>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:   </features>
Oct 11 04:16:00 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:16:00 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:16:00 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/b19922f4-8c6a-4465-8051-c33652138fd9_disk.config">
Oct 11 04:16:00 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       </source>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:16:00 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <source protocol="rbd" name="volumes/volume-2d69b9c1-92be-4e87-b166-e1c5b2e5f688">
Oct 11 04:16:00 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       </source>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:16:00 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <serial>2d69b9c1-92be-4e87-b166-e1c5b2e5f688</serial>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <interface type="ethernet">
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <mac address="fa:16:3e:aa:79:a8"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <mtu size="1442"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <target dev="tapa0bc9537-bb"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     </interface>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/b19922f4-8c6a-4465-8051-c33652138fd9/console.log" append="off"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <video>
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     </video>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <input type="keyboard" bus="usb"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:16:00 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:16:00 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:16:00 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:16:00 compute-0 nova_compute[259850]: </domain>
Oct 11 04:16:00 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.591 2 DEBUG nova.compute.manager [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Preparing to wait for external event network-vif-plugged-a0bc9537-bbc3-4bb6-9d95-a11aeb47b514 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.591 2 DEBUG oslo_concurrency.lockutils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "b19922f4-8c6a-4465-8051-c33652138fd9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.592 2 DEBUG oslo_concurrency.lockutils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "b19922f4-8c6a-4465-8051-c33652138fd9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.592 2 DEBUG oslo_concurrency.lockutils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "b19922f4-8c6a-4465-8051-c33652138fd9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.593 2 DEBUG nova.virt.libvirt.vif [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:15:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-1999044293',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-1999044293',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-1999044293',id=20,image_ref='4414e0e0-7d08-46a8-a7d9-7794d12c96fc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDUxThBUJhvO/gkOYwxW/lPS4n8OMhZe6TOX5ElcKOSryPpXQOKfBpX1K1WckyrkPSMC42WqitbH/2Ksdi9ua2+VFCgI81hDR6lqh2OHDc0/2HOB79NiKWtPVPn3ngNTCQ==',key_name='tempest-keypair-1544766429',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09ba33ef4bd447699d74946c58839b2d',ramdisk_id='',reservation_id='r-zs29qwxo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-771726270',image_owner_user_name='tempest-TestVolumeBootPattern-771726270-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-771726270',owner_user_name='tempest-TestVolumeBootPattern-771726270-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:15:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2a330a845d62440c871f80eda2546881',uuid=b19922f4-8c6a-4465-8051-c33652138fd9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0bc9537-bbc3-4bb6-9d95-a11aeb47b514", "address": "fa:16:3e:aa:79:a8", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0bc9537-bb", "ovs_interfaceid": "a0bc9537-bbc3-4bb6-9d95-a11aeb47b514", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.593 2 DEBUG nova.network.os_vif_util [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converting VIF {"id": "a0bc9537-bbc3-4bb6-9d95-a11aeb47b514", "address": "fa:16:3e:aa:79:a8", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0bc9537-bb", "ovs_interfaceid": "a0bc9537-bbc3-4bb6-9d95-a11aeb47b514", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.594 2 DEBUG nova.network.os_vif_util [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:79:a8,bridge_name='br-int',has_traffic_filtering=True,id=a0bc9537-bbc3-4bb6-9d95-a11aeb47b514,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0bc9537-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.594 2 DEBUG os_vif [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:79:a8,bridge_name='br-int',has_traffic_filtering=True,id=a0bc9537-bbc3-4bb6-9d95-a11aeb47b514,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0bc9537-bb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.596 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.596 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.599 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0bc9537-bb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.599 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa0bc9537-bb, col_values=(('external_ids', {'iface-id': 'a0bc9537-bbc3-4bb6-9d95-a11aeb47b514', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:aa:79:a8', 'vm-uuid': 'b19922f4-8c6a-4465-8051-c33652138fd9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:00 compute-0 NetworkManager[44920]: <info>  [1760156160.6025] manager: (tapa0bc9537-bb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.608 2 INFO os_vif [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:79:a8,bridge_name='br-int',has_traffic_filtering=True,id=a0bc9537-bbc3-4bb6-9d95-a11aeb47b514,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0bc9537-bb')
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.629 2 DEBUG nova.network.neutron [req-70e3696f-bc6b-4caa-b377-11692159d8ca req-35b0e0a1-3e13-480b-b216-033078fab715 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Updated VIF entry in instance network info cache for port a0bc9537-bbc3-4bb6-9d95-a11aeb47b514. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.629 2 DEBUG nova.network.neutron [req-70e3696f-bc6b-4caa-b377-11692159d8ca req-35b0e0a1-3e13-480b-b216-033078fab715 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Updating instance_info_cache with network_info: [{"id": "a0bc9537-bbc3-4bb6-9d95-a11aeb47b514", "address": "fa:16:3e:aa:79:a8", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0bc9537-bb", "ovs_interfaceid": "a0bc9537-bbc3-4bb6-9d95-a11aeb47b514", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.656 2 DEBUG oslo_concurrency.lockutils [req-70e3696f-bc6b-4caa-b377-11692159d8ca req-35b0e0a1-3e13-480b-b216-033078fab715 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-b19922f4-8c6a-4465-8051-c33652138fd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.681 2 DEBUG nova.virt.libvirt.driver [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.681 2 DEBUG nova.virt.libvirt.driver [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.682 2 DEBUG nova.virt.libvirt.driver [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] No VIF found with MAC fa:16:3e:aa:79:a8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.682 2 INFO nova.virt.libvirt.driver [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Using config drive
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.708 2 DEBUG nova.storage.rbd_utils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] rbd image b19922f4-8c6a-4465-8051-c33652138fd9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:16:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:16:00 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4022328984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.807 2 DEBUG oslo_concurrency.processutils [None req-6124908f-e531-4910-8f28-18871a2b87ff 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.814 2 DEBUG nova.compute.provider_tree [None req-6124908f-e531-4910-8f28-18871a2b87ff 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:16:00 compute-0 ceph-mon[74273]: pgmap v1565: 305 pgs: 305 active+clean; 350 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 10 KiB/s wr, 25 op/s
Oct 11 04:16:00 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2813380590' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:16:00 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4022328984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.831 2 DEBUG nova.scheduler.client.report [None req-6124908f-e531-4910-8f28-18871a2b87ff 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.860 2 DEBUG oslo_concurrency.lockutils [None req-6124908f-e531-4910-8f28-18871a2b87ff 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.891 2 INFO nova.scheduler.client.report [None req-6124908f-e531-4910-8f28-18871a2b87ff 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Deleted allocations for instance ab2c9a76-86d0-4cca-92b5-ae402fda2905
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.975 2 DEBUG oslo_concurrency.lockutils [None req-6124908f-e531-4910-8f28-18871a2b87ff 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "ab2c9a76-86d0-4cca-92b5-ae402fda2905" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.405s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.979 2 DEBUG nova.compute.manager [req-a4b75ff0-2d45-4828-9a79-440c55336fce req-76d3b01b-c7a4-4955-a065-2d3d1e8e2d2a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Received event network-vif-plugged-a1f30276-c6ab-493e-9be5-8e3baf249a38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.979 2 DEBUG oslo_concurrency.lockutils [req-a4b75ff0-2d45-4828-9a79-440c55336fce req-76d3b01b-c7a4-4955-a065-2d3d1e8e2d2a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "ab2c9a76-86d0-4cca-92b5-ae402fda2905-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.980 2 DEBUG oslo_concurrency.lockutils [req-a4b75ff0-2d45-4828-9a79-440c55336fce req-76d3b01b-c7a4-4955-a065-2d3d1e8e2d2a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "ab2c9a76-86d0-4cca-92b5-ae402fda2905-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.980 2 DEBUG oslo_concurrency.lockutils [req-a4b75ff0-2d45-4828-9a79-440c55336fce req-76d3b01b-c7a4-4955-a065-2d3d1e8e2d2a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "ab2c9a76-86d0-4cca-92b5-ae402fda2905-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.980 2 DEBUG nova.compute.manager [req-a4b75ff0-2d45-4828-9a79-440c55336fce req-76d3b01b-c7a4-4955-a065-2d3d1e8e2d2a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] No waiting events found dispatching network-vif-plugged-a1f30276-c6ab-493e-9be5-8e3baf249a38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:16:00 compute-0 nova_compute[259850]: 2025-10-11 04:16:00.981 2 WARNING nova.compute.manager [req-a4b75ff0-2d45-4828-9a79-440c55336fce req-76d3b01b-c7a4-4955-a065-2d3d1e8e2d2a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Received unexpected event network-vif-plugged-a1f30276-c6ab-493e-9be5-8e3baf249a38 for instance with vm_state deleted and task_state None.
Oct 11 04:16:01 compute-0 nova_compute[259850]: 2025-10-11 04:16:01.092 2 INFO nova.virt.libvirt.driver [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Creating config drive at /var/lib/nova/instances/b19922f4-8c6a-4465-8051-c33652138fd9/disk.config
Oct 11 04:16:01 compute-0 nova_compute[259850]: 2025-10-11 04:16:01.104 2 DEBUG oslo_concurrency.processutils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b19922f4-8c6a-4465-8051-c33652138fd9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv6ml7tgf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:16:01 compute-0 nova_compute[259850]: 2025-10-11 04:16:01.255 2 DEBUG oslo_concurrency.processutils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b19922f4-8c6a-4465-8051-c33652138fd9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv6ml7tgf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:16:01 compute-0 nova_compute[259850]: 2025-10-11 04:16:01.276 2 DEBUG nova.storage.rbd_utils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] rbd image b19922f4-8c6a-4465-8051-c33652138fd9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:16:01 compute-0 nova_compute[259850]: 2025-10-11 04:16:01.279 2 DEBUG oslo_concurrency.processutils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b19922f4-8c6a-4465-8051-c33652138fd9/disk.config b19922f4-8c6a-4465-8051-c33652138fd9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:16:01 compute-0 nova_compute[259850]: 2025-10-11 04:16:01.438 2 DEBUG oslo_concurrency.processutils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b19922f4-8c6a-4465-8051-c33652138fd9/disk.config b19922f4-8c6a-4465-8051-c33652138fd9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:16:01 compute-0 nova_compute[259850]: 2025-10-11 04:16:01.439 2 INFO nova.virt.libvirt.driver [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Deleting local config drive /var/lib/nova/instances/b19922f4-8c6a-4465-8051-c33652138fd9/disk.config because it was imported into RBD.
Oct 11 04:16:01 compute-0 kernel: tapa0bc9537-bb: entered promiscuous mode
Oct 11 04:16:01 compute-0 ovn_controller[152025]: 2025-10-11T04:16:01Z|00201|binding|INFO|Claiming lport a0bc9537-bbc3-4bb6-9d95-a11aeb47b514 for this chassis.
Oct 11 04:16:01 compute-0 ovn_controller[152025]: 2025-10-11T04:16:01Z|00202|binding|INFO|a0bc9537-bbc3-4bb6-9d95-a11aeb47b514: Claiming fa:16:3e:aa:79:a8 10.100.0.3
Oct 11 04:16:01 compute-0 NetworkManager[44920]: <info>  [1760156161.4823] manager: (tapa0bc9537-bb): new Tun device (/org/freedesktop/NetworkManager/Devices/108)
Oct 11 04:16:01 compute-0 systemd-udevd[293143]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:16:01 compute-0 nova_compute[259850]: 2025-10-11 04:16:01.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:01 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:01.489 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:79:a8 10.100.0.3'], port_security=['fa:16:3e:aa:79:a8 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b19922f4-8c6a-4465-8051-c33652138fd9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09ba33ef4bd447699d74946c58839b2d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ffb3c2f8-c470-4ea8-b009-8568480a2510', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27b77226-c1f8-485e-969b-bae9a3bf7ceb, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=a0bc9537-bbc3-4bb6-9d95-a11aeb47b514) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:16:01 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:01.490 161902 INFO neutron.agent.ovn.metadata.agent [-] Port a0bc9537-bbc3-4bb6-9d95-a11aeb47b514 in datapath b6cd64a2-af0b-4f57-b84c-cbc9cde5251d bound to our chassis
Oct 11 04:16:01 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:01.492 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b6cd64a2-af0b-4f57-b84c-cbc9cde5251d
Oct 11 04:16:01 compute-0 ovn_controller[152025]: 2025-10-11T04:16:01Z|00203|binding|INFO|Setting lport a0bc9537-bbc3-4bb6-9d95-a11aeb47b514 ovn-installed in OVS
Oct 11 04:16:01 compute-0 ovn_controller[152025]: 2025-10-11T04:16:01Z|00204|binding|INFO|Setting lport a0bc9537-bbc3-4bb6-9d95-a11aeb47b514 up in Southbound
Oct 11 04:16:01 compute-0 NetworkManager[44920]: <info>  [1760156161.5058] device (tapa0bc9537-bb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:16:01 compute-0 nova_compute[259850]: 2025-10-11 04:16:01.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:01 compute-0 NetworkManager[44920]: <info>  [1760156161.5082] device (tapa0bc9537-bb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:16:01 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:01.509 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee95d8e-7847-45c1-9a81-91005a629fdd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:01 compute-0 systemd-machined[214869]: New machine qemu-20-instance-00000014.
Oct 11 04:16:01 compute-0 systemd[1]: Started Virtual Machine qemu-20-instance-00000014.
Oct 11 04:16:01 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:01.548 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[cddb4819-5258-458b-b20c-01ab969df756]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:01 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:01.552 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0b03d9-5f58-481c-b731-906ef99fa4e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:01 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:01.591 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[7d8d1159-c174-4585-a0cc-921f3ba31b91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:01 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:01.617 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[05f0e13f-0e0e-45cc-951b-86a37cabc83a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6cd64a2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:9f:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451053, 'reachable_time': 43256, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293410, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:01 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:01.641 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[83abcfdd-c2d3-432f-857d-4160c716862d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb6cd64a2-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451065, 'tstamp': 451065}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293411, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb6cd64a2-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451069, 'tstamp': 451069}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293411, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:01 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:01.643 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6cd64a2-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:16:01 compute-0 nova_compute[259850]: 2025-10-11 04:16:01.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:01 compute-0 nova_compute[259850]: 2025-10-11 04:16:01.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:01 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:01.649 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6cd64a2-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:16:01 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:01.649 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:16:01 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:01.650 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb6cd64a2-a0, col_values=(('external_ids', {'iface-id': 'c2cbaf15-a50c-40b8-9f65-12b11618e7fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:16:01 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:01.651 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:16:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1566: 305 pgs: 305 active+clean; 350 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 9.6 KiB/s wr, 24 op/s
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.304 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156162.3039525, b19922f4-8c6a-4465-8051-c33652138fd9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.305 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] VM Started (Lifecycle Event)
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.335 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.339 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156162.304412, b19922f4-8c6a-4465-8051-c33652138fd9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.339 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] VM Paused (Lifecycle Event)
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.357 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.360 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.380 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.463 2 DEBUG nova.compute.manager [req-ef6b97a8-9769-4bbe-9dcb-4239aa23b830 req-ae4ef9cf-10e6-4366-b687-8650bdabdb22 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Received event network-vif-plugged-a0bc9537-bbc3-4bb6-9d95-a11aeb47b514 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.463 2 DEBUG oslo_concurrency.lockutils [req-ef6b97a8-9769-4bbe-9dcb-4239aa23b830 req-ae4ef9cf-10e6-4366-b687-8650bdabdb22 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "b19922f4-8c6a-4465-8051-c33652138fd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.464 2 DEBUG oslo_concurrency.lockutils [req-ef6b97a8-9769-4bbe-9dcb-4239aa23b830 req-ae4ef9cf-10e6-4366-b687-8650bdabdb22 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "b19922f4-8c6a-4465-8051-c33652138fd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.464 2 DEBUG oslo_concurrency.lockutils [req-ef6b97a8-9769-4bbe-9dcb-4239aa23b830 req-ae4ef9cf-10e6-4366-b687-8650bdabdb22 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "b19922f4-8c6a-4465-8051-c33652138fd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.465 2 DEBUG nova.compute.manager [req-ef6b97a8-9769-4bbe-9dcb-4239aa23b830 req-ae4ef9cf-10e6-4366-b687-8650bdabdb22 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Processing event network-vif-plugged-a0bc9537-bbc3-4bb6-9d95-a11aeb47b514 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.465 2 DEBUG nova.compute.manager [req-ef6b97a8-9769-4bbe-9dcb-4239aa23b830 req-ae4ef9cf-10e6-4366-b687-8650bdabdb22 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Received event network-vif-plugged-a0bc9537-bbc3-4bb6-9d95-a11aeb47b514 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.466 2 DEBUG oslo_concurrency.lockutils [req-ef6b97a8-9769-4bbe-9dcb-4239aa23b830 req-ae4ef9cf-10e6-4366-b687-8650bdabdb22 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "b19922f4-8c6a-4465-8051-c33652138fd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.466 2 DEBUG oslo_concurrency.lockutils [req-ef6b97a8-9769-4bbe-9dcb-4239aa23b830 req-ae4ef9cf-10e6-4366-b687-8650bdabdb22 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "b19922f4-8c6a-4465-8051-c33652138fd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.467 2 DEBUG oslo_concurrency.lockutils [req-ef6b97a8-9769-4bbe-9dcb-4239aa23b830 req-ae4ef9cf-10e6-4366-b687-8650bdabdb22 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "b19922f4-8c6a-4465-8051-c33652138fd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.467 2 DEBUG nova.compute.manager [req-ef6b97a8-9769-4bbe-9dcb-4239aa23b830 req-ae4ef9cf-10e6-4366-b687-8650bdabdb22 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] No waiting events found dispatching network-vif-plugged-a0bc9537-bbc3-4bb6-9d95-a11aeb47b514 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.468 2 WARNING nova.compute.manager [req-ef6b97a8-9769-4bbe-9dcb-4239aa23b830 req-ae4ef9cf-10e6-4366-b687-8650bdabdb22 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Received unexpected event network-vif-plugged-a0bc9537-bbc3-4bb6-9d95-a11aeb47b514 for instance with vm_state building and task_state spawning.
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.469 2 DEBUG nova.compute.manager [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.474 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156162.4744015, b19922f4-8c6a-4465-8051-c33652138fd9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.475 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] VM Resumed (Lifecycle Event)
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.476 2 DEBUG nova.virt.libvirt.driver [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.480 2 INFO nova.virt.libvirt.driver [-] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Instance spawned successfully.
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.480 2 INFO nova.compute.manager [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Took 2.48 seconds to spawn the instance on the hypervisor.
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.481 2 DEBUG nova.compute.manager [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.506 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.510 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.540 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.561 2 INFO nova.compute.manager [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Took 8.84 seconds to build instance.
Oct 11 04:16:02 compute-0 nova_compute[259850]: 2025-10-11 04:16:02.582 2 DEBUG oslo_concurrency.lockutils [None req-f2e3f189-feab-4a4d-a1f7-4cb071161c61 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "b19922f4-8c6a-4465-8051-c33652138fd9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.930s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:02 compute-0 ceph-mon[74273]: pgmap v1566: 305 pgs: 305 active+clean; 350 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 9.6 KiB/s wr, 24 op/s
Oct 11 04:16:03 compute-0 nova_compute[259850]: 2025-10-11 04:16:03.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 305 active+clean; 350 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 37 KiB/s wr, 52 op/s
Oct 11 04:16:04 compute-0 ceph-mon[74273]: pgmap v1567: 305 pgs: 305 active+clean; 350 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 37 KiB/s wr, 52 op/s
Oct 11 04:16:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:16:05 compute-0 nova_compute[259850]: 2025-10-11 04:16:05.055 2 DEBUG nova.compute.manager [req-1bd55c4c-0514-4c0e-bda5-132bb64ae4e7 req-4dc3c265-f90f-416f-85c5-23358e0dac05 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Received event network-changed-a0bc9537-bbc3-4bb6-9d95-a11aeb47b514 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:16:05 compute-0 nova_compute[259850]: 2025-10-11 04:16:05.055 2 DEBUG nova.compute.manager [req-1bd55c4c-0514-4c0e-bda5-132bb64ae4e7 req-4dc3c265-f90f-416f-85c5-23358e0dac05 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Refreshing instance network info cache due to event network-changed-a0bc9537-bbc3-4bb6-9d95-a11aeb47b514. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:16:05 compute-0 nova_compute[259850]: 2025-10-11 04:16:05.056 2 DEBUG oslo_concurrency.lockutils [req-1bd55c4c-0514-4c0e-bda5-132bb64ae4e7 req-4dc3c265-f90f-416f-85c5-23358e0dac05 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-b19922f4-8c6a-4465-8051-c33652138fd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:16:05 compute-0 nova_compute[259850]: 2025-10-11 04:16:05.056 2 DEBUG oslo_concurrency.lockutils [req-1bd55c4c-0514-4c0e-bda5-132bb64ae4e7 req-4dc3c265-f90f-416f-85c5-23358e0dac05 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-b19922f4-8c6a-4465-8051-c33652138fd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:16:05 compute-0 nova_compute[259850]: 2025-10-11 04:16:05.057 2 DEBUG nova.network.neutron [req-1bd55c4c-0514-4c0e-bda5-132bb64ae4e7 req-4dc3c265-f90f-416f-85c5-23358e0dac05 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Refreshing network info cache for port a0bc9537-bbc3-4bb6-9d95-a11aeb47b514 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:16:05 compute-0 nova_compute[259850]: 2025-10-11 04:16:05.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1568: 305 pgs: 305 active+clean; 350 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 30 KiB/s wr, 39 op/s
Oct 11 04:16:06 compute-0 nova_compute[259850]: 2025-10-11 04:16:06.471 2 DEBUG nova.network.neutron [req-1bd55c4c-0514-4c0e-bda5-132bb64ae4e7 req-4dc3c265-f90f-416f-85c5-23358e0dac05 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Updated VIF entry in instance network info cache for port a0bc9537-bbc3-4bb6-9d95-a11aeb47b514. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:16:06 compute-0 nova_compute[259850]: 2025-10-11 04:16:06.473 2 DEBUG nova.network.neutron [req-1bd55c4c-0514-4c0e-bda5-132bb64ae4e7 req-4dc3c265-f90f-416f-85c5-23358e0dac05 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Updating instance_info_cache with network_info: [{"id": "a0bc9537-bbc3-4bb6-9d95-a11aeb47b514", "address": "fa:16:3e:aa:79:a8", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0bc9537-bb", "ovs_interfaceid": "a0bc9537-bbc3-4bb6-9d95-a11aeb47b514", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:16:06 compute-0 nova_compute[259850]: 2025-10-11 04:16:06.491 2 DEBUG oslo_concurrency.lockutils [req-1bd55c4c-0514-4c0e-bda5-132bb64ae4e7 req-4dc3c265-f90f-416f-85c5-23358e0dac05 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-b19922f4-8c6a-4465-8051-c33652138fd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:16:06 compute-0 ceph-mon[74273]: pgmap v1568: 305 pgs: 305 active+clean; 350 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 30 KiB/s wr, 39 op/s
Oct 11 04:16:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1569: 305 pgs: 305 active+clean; 350 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 30 KiB/s wr, 39 op/s
Oct 11 04:16:08 compute-0 nova_compute[259850]: 2025-10-11 04:16:08.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:08 compute-0 ceph-mon[74273]: pgmap v1569: 305 pgs: 305 active+clean; 350 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 30 KiB/s wr, 39 op/s
Oct 11 04:16:09 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct 11 04:16:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 305 active+clean; 350 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 103 op/s
Oct 11 04:16:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:16:10 compute-0 nova_compute[259850]: 2025-10-11 04:16:10.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:10 compute-0 ceph-mon[74273]: pgmap v1570: 305 pgs: 305 active+clean; 350 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 103 op/s
Oct 11 04:16:11 compute-0 nova_compute[259850]: 2025-10-11 04:16:11.659 2 DEBUG oslo_concurrency.lockutils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "001be5b3-e842-4242-a6ad-2ccbfa7b39c2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:11 compute-0 nova_compute[259850]: 2025-10-11 04:16:11.661 2 DEBUG oslo_concurrency.lockutils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "001be5b3-e842-4242-a6ad-2ccbfa7b39c2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:11 compute-0 nova_compute[259850]: 2025-10-11 04:16:11.682 2 DEBUG nova.compute.manager [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:16:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1571: 305 pgs: 305 active+clean; 350 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 91 op/s
Oct 11 04:16:11 compute-0 nova_compute[259850]: 2025-10-11 04:16:11.756 2 DEBUG oslo_concurrency.lockutils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:11 compute-0 nova_compute[259850]: 2025-10-11 04:16:11.756 2 DEBUG oslo_concurrency.lockutils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:11 compute-0 nova_compute[259850]: 2025-10-11 04:16:11.763 2 DEBUG nova.virt.hardware [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:16:11 compute-0 nova_compute[259850]: 2025-10-11 04:16:11.764 2 INFO nova.compute.claims [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:16:11 compute-0 nova_compute[259850]: 2025-10-11 04:16:11.923 2 DEBUG oslo_concurrency.processutils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:16:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:16:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1819570564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:16:12 compute-0 nova_compute[259850]: 2025-10-11 04:16:12.399 2 DEBUG oslo_concurrency.processutils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:16:12 compute-0 nova_compute[259850]: 2025-10-11 04:16:12.408 2 DEBUG nova.compute.provider_tree [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:16:12 compute-0 nova_compute[259850]: 2025-10-11 04:16:12.430 2 DEBUG nova.scheduler.client.report [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:16:12 compute-0 nova_compute[259850]: 2025-10-11 04:16:12.471 2 DEBUG oslo_concurrency.lockutils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:12 compute-0 nova_compute[259850]: 2025-10-11 04:16:12.472 2 DEBUG nova.compute.manager [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:16:12 compute-0 nova_compute[259850]: 2025-10-11 04:16:12.529 2 DEBUG nova.compute.manager [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:16:12 compute-0 nova_compute[259850]: 2025-10-11 04:16:12.530 2 DEBUG nova.network.neutron [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:16:12 compute-0 nova_compute[259850]: 2025-10-11 04:16:12.553 2 INFO nova.virt.libvirt.driver [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:16:12 compute-0 nova_compute[259850]: 2025-10-11 04:16:12.572 2 DEBUG nova.compute.manager [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:16:12 compute-0 nova_compute[259850]: 2025-10-11 04:16:12.633 2 INFO nova.virt.block_device [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Booting with volume c358a4fb-dbe4-4873-963a-9b4d3369e2f4 at /dev/vda
Oct 11 04:16:12 compute-0 nova_compute[259850]: 2025-10-11 04:16:12.790 2 DEBUG os_brick.utils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:16:12 compute-0 nova_compute[259850]: 2025-10-11 04:16:12.791 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:16:12 compute-0 nova_compute[259850]: 2025-10-11 04:16:12.808 675 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:16:12 compute-0 nova_compute[259850]: 2025-10-11 04:16:12.809 675 DEBUG oslo.privsep.daemon [-] privsep: reply[0a3490c6-a38b-460a-9ef4-b5ca57e973cf]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:12 compute-0 nova_compute[259850]: 2025-10-11 04:16:12.811 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:16:12 compute-0 nova_compute[259850]: 2025-10-11 04:16:12.825 675 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:16:12 compute-0 nova_compute[259850]: 2025-10-11 04:16:12.826 675 DEBUG oslo.privsep.daemon [-] privsep: reply[a5435fdb-da8c-49bc-bb7f-f77ade95cf4e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e727c2bd432c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:12 compute-0 nova_compute[259850]: 2025-10-11 04:16:12.828 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:16:12 compute-0 nova_compute[259850]: 2025-10-11 04:16:12.845 675 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:16:12 compute-0 nova_compute[259850]: 2025-10-11 04:16:12.846 675 DEBUG oslo.privsep.daemon [-] privsep: reply[66d4bf74-977f-4607-9f42-e7aec29d1560]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:12 compute-0 nova_compute[259850]: 2025-10-11 04:16:12.847 675 DEBUG oslo.privsep.daemon [-] privsep: reply[d5262941-622f-4b01-bdac-cbb5d54f8eb8]: (4, 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:12 compute-0 nova_compute[259850]: 2025-10-11 04:16:12.848 2 DEBUG oslo_concurrency.processutils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:16:12 compute-0 ceph-mon[74273]: pgmap v1571: 305 pgs: 305 active+clean; 350 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 91 op/s
Oct 11 04:16:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1819570564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:16:12 compute-0 nova_compute[259850]: 2025-10-11 04:16:12.888 2 DEBUG oslo_concurrency.processutils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CMD "nvme version" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:16:12 compute-0 nova_compute[259850]: 2025-10-11 04:16:12.892 2 DEBUG os_brick.initiator.connectors.lightos [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:16:12 compute-0 nova_compute[259850]: 2025-10-11 04:16:12.893 2 DEBUG os_brick.initiator.connectors.lightos [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:16:12 compute-0 nova_compute[259850]: 2025-10-11 04:16:12.893 2 DEBUG os_brick.initiator.connectors.lightos [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:16:12 compute-0 nova_compute[259850]: 2025-10-11 04:16:12.894 2 DEBUG os_brick.utils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] <== get_connector_properties: return (103ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e727c2bd432c', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:16:12 compute-0 nova_compute[259850]: 2025-10-11 04:16:12.895 2 DEBUG nova.virt.block_device [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Updating existing volume attachment record: bcaa98e1-06f3-4b80-ae9e-5ce0ddeb748d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:16:12 compute-0 nova_compute[259850]: 2025-10-11 04:16:12.904 2 DEBUG nova.policy [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '77d11e860ca1460cab1c20bca4d4c0ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bfcc78a613a4442d88231798d10634c9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:16:13 compute-0 nova_compute[259850]: 2025-10-11 04:16:13.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:16:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/252750620' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:16:13 compute-0 nova_compute[259850]: 2025-10-11 04:16:13.680 2 DEBUG nova.network.neutron [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Successfully created port: b0fef7a6-460f-49a1-8586-9008e9d3f648 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:16:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1572: 305 pgs: 305 active+clean; 350 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 95 op/s
Oct 11 04:16:13 compute-0 nova_compute[259850]: 2025-10-11 04:16:13.813 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760156158.8122575, ab2c9a76-86d0-4cca-92b5-ae402fda2905 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:16:13 compute-0 nova_compute[259850]: 2025-10-11 04:16:13.815 2 INFO nova.compute.manager [-] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] VM Stopped (Lifecycle Event)
Oct 11 04:16:13 compute-0 nova_compute[259850]: 2025-10-11 04:16:13.837 2 DEBUG nova.compute.manager [None req-4af5dfcf-f93e-4cfa-89a7-ca4c68faa807 - - - - - -] [instance: ab2c9a76-86d0-4cca-92b5-ae402fda2905] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:16:13 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/252750620' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:16:13 compute-0 nova_compute[259850]: 2025-10-11 04:16:13.921 2 DEBUG nova.compute.manager [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:16:13 compute-0 nova_compute[259850]: 2025-10-11 04:16:13.924 2 DEBUG nova.virt.libvirt.driver [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:16:13 compute-0 nova_compute[259850]: 2025-10-11 04:16:13.924 2 INFO nova.virt.libvirt.driver [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Creating image(s)
Oct 11 04:16:13 compute-0 nova_compute[259850]: 2025-10-11 04:16:13.925 2 DEBUG nova.virt.libvirt.driver [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 11 04:16:13 compute-0 nova_compute[259850]: 2025-10-11 04:16:13.926 2 DEBUG nova.virt.libvirt.driver [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Ensure instance console log exists: /var/lib/nova/instances/001be5b3-e842-4242-a6ad-2ccbfa7b39c2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:16:13 compute-0 nova_compute[259850]: 2025-10-11 04:16:13.926 2 DEBUG oslo_concurrency.lockutils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:13 compute-0 nova_compute[259850]: 2025-10-11 04:16:13.927 2 DEBUG oslo_concurrency.lockutils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:13 compute-0 nova_compute[259850]: 2025-10-11 04:16:13.927 2 DEBUG oslo_concurrency.lockutils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:14 compute-0 podman[293486]: 2025-10-11 04:16:14.410871632 +0000 UTC m=+0.101281019 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 04:16:14 compute-0 podman[293485]: 2025-10-11 04:16:14.420741482 +0000 UTC m=+0.111329044 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:16:14 compute-0 nova_compute[259850]: 2025-10-11 04:16:14.472 2 DEBUG nova.network.neutron [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Successfully updated port: b0fef7a6-460f-49a1-8586-9008e9d3f648 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:16:14 compute-0 nova_compute[259850]: 2025-10-11 04:16:14.496 2 DEBUG oslo_concurrency.lockutils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "refresh_cache-001be5b3-e842-4242-a6ad-2ccbfa7b39c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:16:14 compute-0 nova_compute[259850]: 2025-10-11 04:16:14.496 2 DEBUG oslo_concurrency.lockutils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquired lock "refresh_cache-001be5b3-e842-4242-a6ad-2ccbfa7b39c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:16:14 compute-0 nova_compute[259850]: 2025-10-11 04:16:14.496 2 DEBUG nova.network.neutron [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:16:14 compute-0 nova_compute[259850]: 2025-10-11 04:16:14.593 2 DEBUG nova.compute.manager [req-808374ea-aeb1-4b95-82c8-09270414f94b req-3850e7c6-c4d8-4bce-8cd5-0e8ba3f64cd1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Received event network-changed-b0fef7a6-460f-49a1-8586-9008e9d3f648 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:16:14 compute-0 nova_compute[259850]: 2025-10-11 04:16:14.594 2 DEBUG nova.compute.manager [req-808374ea-aeb1-4b95-82c8-09270414f94b req-3850e7c6-c4d8-4bce-8cd5-0e8ba3f64cd1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Refreshing instance network info cache due to event network-changed-b0fef7a6-460f-49a1-8586-9008e9d3f648. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:16:14 compute-0 nova_compute[259850]: 2025-10-11 04:16:14.595 2 DEBUG oslo_concurrency.lockutils [req-808374ea-aeb1-4b95-82c8-09270414f94b req-3850e7c6-c4d8-4bce-8cd5-0e8ba3f64cd1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-001be5b3-e842-4242-a6ad-2ccbfa7b39c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:16:14 compute-0 nova_compute[259850]: 2025-10-11 04:16:14.810 2 DEBUG nova.network.neutron [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:16:14 compute-0 ceph-mon[74273]: pgmap v1572: 305 pgs: 305 active+clean; 350 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 95 op/s
Oct 11 04:16:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:16:15 compute-0 ovn_controller[152025]: 2025-10-11T04:16:15Z|00034|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.6 does not match offer 10.100.0.3
Oct 11 04:16:15 compute-0 ovn_controller[152025]: 2025-10-11T04:16:15Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:aa:79:a8 10.100.0.3
Oct 11 04:16:15 compute-0 nova_compute[259850]: 2025-10-11 04:16:15.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:15 compute-0 nova_compute[259850]: 2025-10-11 04:16:15.629 2 DEBUG nova.network.neutron [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Updating instance_info_cache with network_info: [{"id": "b0fef7a6-460f-49a1-8586-9008e9d3f648", "address": "fa:16:3e:05:f8:44", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0fef7a6-46", "ovs_interfaceid": "b0fef7a6-460f-49a1-8586-9008e9d3f648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:16:15 compute-0 nova_compute[259850]: 2025-10-11 04:16:15.653 2 DEBUG oslo_concurrency.lockutils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Releasing lock "refresh_cache-001be5b3-e842-4242-a6ad-2ccbfa7b39c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:16:15 compute-0 nova_compute[259850]: 2025-10-11 04:16:15.653 2 DEBUG nova.compute.manager [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Instance network_info: |[{"id": "b0fef7a6-460f-49a1-8586-9008e9d3f648", "address": "fa:16:3e:05:f8:44", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0fef7a6-46", "ovs_interfaceid": "b0fef7a6-460f-49a1-8586-9008e9d3f648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:16:15 compute-0 nova_compute[259850]: 2025-10-11 04:16:15.654 2 DEBUG oslo_concurrency.lockutils [req-808374ea-aeb1-4b95-82c8-09270414f94b req-3850e7c6-c4d8-4bce-8cd5-0e8ba3f64cd1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-001be5b3-e842-4242-a6ad-2ccbfa7b39c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:16:15 compute-0 nova_compute[259850]: 2025-10-11 04:16:15.654 2 DEBUG nova.network.neutron [req-808374ea-aeb1-4b95-82c8-09270414f94b req-3850e7c6-c4d8-4bce-8cd5-0e8ba3f64cd1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Refreshing network info cache for port b0fef7a6-460f-49a1-8586-9008e9d3f648 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:16:15 compute-0 nova_compute[259850]: 2025-10-11 04:16:15.660 2 DEBUG nova.virt.libvirt.driver [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Start _get_guest_xml network_info=[{"id": "b0fef7a6-460f-49a1-8586-9008e9d3f648", "address": "fa:16:3e:05:f8:44", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0fef7a6-46", "ovs_interfaceid": "b0fef7a6-460f-49a1-8586-9008e9d3f648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-c358a4fb-dbe4-4873-963a-9b4d3369e2f4', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'c358a4fb-dbe4-4873-963a-9b4d3369e2f4', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '001be5b3-e842-4242-a6ad-2ccbfa7b39c2', 'attached_at': '', 'detached_at': '', 'volume_id': 'c358a4fb-dbe4-4873-963a-9b4d3369e2f4', 'serial': 'c358a4fb-dbe4-4873-963a-9b4d3369e2f4'}, 'boot_index': 0, 'guest_format': None, 'attachment_id': 'bcaa98e1-06f3-4b80-ae9e-5ce0ddeb748d', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:16:15 compute-0 nova_compute[259850]: 2025-10-11 04:16:15.667 2 WARNING nova.virt.libvirt.driver [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:16:15 compute-0 nova_compute[259850]: 2025-10-11 04:16:15.678 2 DEBUG nova.virt.libvirt.host [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:16:15 compute-0 nova_compute[259850]: 2025-10-11 04:16:15.678 2 DEBUG nova.virt.libvirt.host [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:16:15 compute-0 nova_compute[259850]: 2025-10-11 04:16:15.683 2 DEBUG nova.virt.libvirt.host [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:16:15 compute-0 nova_compute[259850]: 2025-10-11 04:16:15.683 2 DEBUG nova.virt.libvirt.host [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:16:15 compute-0 nova_compute[259850]: 2025-10-11 04:16:15.684 2 DEBUG nova.virt.libvirt.driver [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:16:15 compute-0 nova_compute[259850]: 2025-10-11 04:16:15.684 2 DEBUG nova.virt.hardware [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:16:15 compute-0 nova_compute[259850]: 2025-10-11 04:16:15.685 2 DEBUG nova.virt.hardware [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:16:15 compute-0 nova_compute[259850]: 2025-10-11 04:16:15.685 2 DEBUG nova.virt.hardware [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:16:15 compute-0 nova_compute[259850]: 2025-10-11 04:16:15.685 2 DEBUG nova.virt.hardware [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:16:15 compute-0 nova_compute[259850]: 2025-10-11 04:16:15.686 2 DEBUG nova.virt.hardware [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:16:15 compute-0 nova_compute[259850]: 2025-10-11 04:16:15.686 2 DEBUG nova.virt.hardware [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:16:15 compute-0 nova_compute[259850]: 2025-10-11 04:16:15.686 2 DEBUG nova.virt.hardware [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:16:15 compute-0 nova_compute[259850]: 2025-10-11 04:16:15.687 2 DEBUG nova.virt.hardware [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:16:15 compute-0 nova_compute[259850]: 2025-10-11 04:16:15.687 2 DEBUG nova.virt.hardware [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:16:15 compute-0 nova_compute[259850]: 2025-10-11 04:16:15.687 2 DEBUG nova.virt.hardware [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:16:15 compute-0 nova_compute[259850]: 2025-10-11 04:16:15.688 2 DEBUG nova.virt.hardware [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:16:15 compute-0 nova_compute[259850]: 2025-10-11 04:16:15.729 2 DEBUG nova.storage.rbd_utils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] rbd image 001be5b3-e842-4242-a6ad-2ccbfa7b39c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:16:15 compute-0 nova_compute[259850]: 2025-10-11 04:16:15.736 2 DEBUG oslo_concurrency.processutils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:16:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1573: 305 pgs: 305 active+clean; 350 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.9 KiB/s wr, 67 op/s
Oct 11 04:16:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:16:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3424144019' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.224 2 DEBUG oslo_concurrency.processutils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.352 2 DEBUG os_brick.encryptors [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Using volume encryption metadata '{'encryption_key_id': '06533f56-da68-4f4b-87b6-26f9846b74ef', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-c358a4fb-dbe4-4873-963a-9b4d3369e2f4', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'c358a4fb-dbe4-4873-963a-9b4d3369e2f4', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '001be5b3-e842-4242-a6ad-2ccbfa7b39c2', 'attached_at': '', 'detached_at': '', 'volume_id': 'c358a4fb-dbe4-4873-963a-9b4d3369e2f4', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.356 2 DEBUG barbicanclient.client [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.379 2 DEBUG barbicanclient.v1.secrets [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/06533f56-da68-4f4b-87b6-26f9846b74ef get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.380 2 INFO barbicanclient.base [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/06533f56-da68-4f4b-87b6-26f9846b74ef
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.406 2 DEBUG barbicanclient.client [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.407 2 INFO barbicanclient.base [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/06533f56-da68-4f4b-87b6-26f9846b74ef
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.434 2 DEBUG barbicanclient.client [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.435 2 INFO barbicanclient.base [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/06533f56-da68-4f4b-87b6-26f9846b74ef
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.467 2 DEBUG barbicanclient.client [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.467 2 INFO barbicanclient.base [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/06533f56-da68-4f4b-87b6-26f9846b74ef
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.502 2 DEBUG barbicanclient.client [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.503 2 INFO barbicanclient.base [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/06533f56-da68-4f4b-87b6-26f9846b74ef
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.527 2 DEBUG barbicanclient.client [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.527 2 INFO barbicanclient.base [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/06533f56-da68-4f4b-87b6-26f9846b74ef
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.563 2 DEBUG barbicanclient.client [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.564 2 INFO barbicanclient.base [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/06533f56-da68-4f4b-87b6-26f9846b74ef
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.591 2 DEBUG barbicanclient.client [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.592 2 INFO barbicanclient.base [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/06533f56-da68-4f4b-87b6-26f9846b74ef
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.617 2 DEBUG barbicanclient.client [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.617 2 INFO barbicanclient.base [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/06533f56-da68-4f4b-87b6-26f9846b74ef
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.641 2 DEBUG barbicanclient.client [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.642 2 INFO barbicanclient.base [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/06533f56-da68-4f4b-87b6-26f9846b74ef
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.675 2 DEBUG barbicanclient.client [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.676 2 INFO barbicanclient.base [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/06533f56-da68-4f4b-87b6-26f9846b74ef
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.698 2 DEBUG barbicanclient.client [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.699 2 INFO barbicanclient.base [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/06533f56-da68-4f4b-87b6-26f9846b74ef
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.809 2 DEBUG barbicanclient.client [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.810 2 INFO barbicanclient.base [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/06533f56-da68-4f4b-87b6-26f9846b74ef
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.833 2 DEBUG barbicanclient.client [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.834 2 INFO barbicanclient.base [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/06533f56-da68-4f4b-87b6-26f9846b74ef
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.854 2 DEBUG barbicanclient.client [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.854 2 INFO barbicanclient.base [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/06533f56-da68-4f4b-87b6-26f9846b74ef
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.881 2 DEBUG barbicanclient.client [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.882 2 DEBUG nova.virt.libvirt.host [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct 11 04:16:16 compute-0 nova_compute[259850]:   <usage type="volume">
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <volume>c358a4fb-dbe4-4873-963a-9b4d3369e2f4</volume>
Oct 11 04:16:16 compute-0 nova_compute[259850]:   </usage>
Oct 11 04:16:16 compute-0 nova_compute[259850]: </secret>
Oct 11 04:16:16 compute-0 nova_compute[259850]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Oct 11 04:16:16 compute-0 ceph-mon[74273]: pgmap v1573: 305 pgs: 305 active+clean; 350 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.9 KiB/s wr, 67 op/s
Oct 11 04:16:16 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3424144019' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.917 2 DEBUG nova.virt.libvirt.vif [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:16:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1610846760',display_name='tempest-TransferEncryptedVolumeTest-server-1610846760',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1610846760',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEm9PgkiGfSXOx0o4WQq8AZOWxjh5dTcSz2vccU0Qwona7kINKHr8yu5DCKNDP+0OzTB5mKLuoYtalc5W0loL0xt3InkbNaE80zGvKzG26ntAx/WTjaE+AjoYDpLrsq4bA==',key_name='tempest-TransferEncryptedVolumeTest-726747697',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfcc78a613a4442d88231798d10634c9',ramdisk_id='',reservation_id='r-t3w6fxsi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1941581237',owner_user_name='tempest-TransferEncryptedVolumeTest-1941581237-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:16:12Z,user_data=None,user_id='77d11e860ca1460cab1c20bca4d4c0ea',uuid=001be5b3-e842-4242-a6ad-2ccbfa7b39c2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0fef7a6-460f-49a1-8586-9008e9d3f648", "address": "fa:16:3e:05:f8:44", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0fef7a6-46", "ovs_interfaceid": "b0fef7a6-460f-49a1-8586-9008e9d3f648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.917 2 DEBUG nova.network.os_vif_util [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Converting VIF {"id": "b0fef7a6-460f-49a1-8586-9008e9d3f648", "address": "fa:16:3e:05:f8:44", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0fef7a6-46", "ovs_interfaceid": "b0fef7a6-460f-49a1-8586-9008e9d3f648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.919 2 DEBUG nova.network.os_vif_util [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:f8:44,bridge_name='br-int',has_traffic_filtering=True,id=b0fef7a6-460f-49a1-8586-9008e9d3f648,network=Network(1c86b315-3a4b-4db0-8b3c-39658c19ef9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0fef7a6-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.921 2 DEBUG nova.objects.instance [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 001be5b3-e842-4242-a6ad-2ccbfa7b39c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.938 2 DEBUG nova.virt.libvirt.driver [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:16:16 compute-0 nova_compute[259850]:   <uuid>001be5b3-e842-4242-a6ad-2ccbfa7b39c2</uuid>
Oct 11 04:16:16 compute-0 nova_compute[259850]:   <name>instance-00000015</name>
Oct 11 04:16:16 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:16:16 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:16:16 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <nova:name>tempest-TransferEncryptedVolumeTest-server-1610846760</nova:name>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:16:15</nova:creationTime>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:16:16 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:16:16 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:16:16 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:16:16 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:16:16 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:16:16 compute-0 nova_compute[259850]:         <nova:user uuid="77d11e860ca1460cab1c20bca4d4c0ea">tempest-TransferEncryptedVolumeTest-1941581237-project-member</nova:user>
Oct 11 04:16:16 compute-0 nova_compute[259850]:         <nova:project uuid="bfcc78a613a4442d88231798d10634c9">tempest-TransferEncryptedVolumeTest-1941581237</nova:project>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <nova:ports>
Oct 11 04:16:16 compute-0 nova_compute[259850]:         <nova:port uuid="b0fef7a6-460f-49a1-8586-9008e9d3f648">
Oct 11 04:16:16 compute-0 nova_compute[259850]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:         </nova:port>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       </nova:ports>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:16:16 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:16:16 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <system>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <entry name="serial">001be5b3-e842-4242-a6ad-2ccbfa7b39c2</entry>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <entry name="uuid">001be5b3-e842-4242-a6ad-2ccbfa7b39c2</entry>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     </system>
Oct 11 04:16:16 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:16:16 compute-0 nova_compute[259850]:   <os>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:   </os>
Oct 11 04:16:16 compute-0 nova_compute[259850]:   <features>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:   </features>
Oct 11 04:16:16 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:16:16 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:16:16 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/001be5b3-e842-4242-a6ad-2ccbfa7b39c2_disk.config">
Oct 11 04:16:16 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       </source>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:16:16 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <source protocol="rbd" name="volumes/volume-c358a4fb-dbe4-4873-963a-9b4d3369e2f4">
Oct 11 04:16:16 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       </source>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:16:16 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <serial>c358a4fb-dbe4-4873-963a-9b4d3369e2f4</serial>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <encryption format="luks">
Oct 11 04:16:16 compute-0 nova_compute[259850]:         <secret type="passphrase" uuid="bf92c018-3900-46ae-b84b-35ff6d88c904"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       </encryption>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <interface type="ethernet">
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <mac address="fa:16:3e:05:f8:44"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <mtu size="1442"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <target dev="tapb0fef7a6-46"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     </interface>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/001be5b3-e842-4242-a6ad-2ccbfa7b39c2/console.log" append="off"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <video>
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     </video>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:16:16 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:16:16 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:16:16 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:16:16 compute-0 nova_compute[259850]: </domain>
Oct 11 04:16:16 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.939 2 DEBUG nova.compute.manager [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Preparing to wait for external event network-vif-plugged-b0fef7a6-460f-49a1-8586-9008e9d3f648 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.940 2 DEBUG oslo_concurrency.lockutils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "001be5b3-e842-4242-a6ad-2ccbfa7b39c2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.940 2 DEBUG oslo_concurrency.lockutils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "001be5b3-e842-4242-a6ad-2ccbfa7b39c2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.941 2 DEBUG oslo_concurrency.lockutils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "001be5b3-e842-4242-a6ad-2ccbfa7b39c2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.942 2 DEBUG nova.virt.libvirt.vif [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:16:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1610846760',display_name='tempest-TransferEncryptedVolumeTest-server-1610846760',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1610846760',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEm9PgkiGfSXOx0o4WQq8AZOWxjh5dTcSz2vccU0Qwona7kINKHr8yu5DCKNDP+0OzTB5mKLuoYtalc5W0loL0xt3InkbNaE80zGvKzG26ntAx/WTjaE+AjoYDpLrsq4bA==',key_name='tempest-TransferEncryptedVolumeTest-726747697',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfcc78a613a4442d88231798d10634c9',ramdisk_id='',reservation_id='r-t3w6fxsi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1941581237',owner_user_name='tempest-TransferEncryptedVolumeTest-1941581237-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:16:12Z,user_data=None,user_id='77d11e860ca1460cab1c20bca4d4c0ea',uuid=001be5b3-e842-4242-a6ad-2ccbfa7b39c2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0fef7a6-460f-49a1-8586-9008e9d3f648", "address": "fa:16:3e:05:f8:44", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0fef7a6-46", "ovs_interfaceid": "b0fef7a6-460f-49a1-8586-9008e9d3f648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.942 2 DEBUG nova.network.os_vif_util [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Converting VIF {"id": "b0fef7a6-460f-49a1-8586-9008e9d3f648", "address": "fa:16:3e:05:f8:44", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0fef7a6-46", "ovs_interfaceid": "b0fef7a6-460f-49a1-8586-9008e9d3f648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.943 2 DEBUG nova.network.os_vif_util [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:f8:44,bridge_name='br-int',has_traffic_filtering=True,id=b0fef7a6-460f-49a1-8586-9008e9d3f648,network=Network(1c86b315-3a4b-4db0-8b3c-39658c19ef9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0fef7a6-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.944 2 DEBUG os_vif [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:f8:44,bridge_name='br-int',has_traffic_filtering=True,id=b0fef7a6-460f-49a1-8586-9008e9d3f648,network=Network(1c86b315-3a4b-4db0-8b3c-39658c19ef9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0fef7a6-46') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.946 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.947 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.952 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb0fef7a6-46, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:16:16 compute-0 nova_compute[259850]: 2025-10-11 04:16:16.953 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb0fef7a6-46, col_values=(('external_ids', {'iface-id': 'b0fef7a6-460f-49a1-8586-9008e9d3f648', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:05:f8:44', 'vm-uuid': '001be5b3-e842-4242-a6ad-2ccbfa7b39c2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:16:17 compute-0 NetworkManager[44920]: <info>  [1760156177.0035] manager: (tapb0fef7a6-46): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Oct 11 04:16:17 compute-0 nova_compute[259850]: 2025-10-11 04:16:17.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:17 compute-0 nova_compute[259850]: 2025-10-11 04:16:17.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:16:17 compute-0 nova_compute[259850]: 2025-10-11 04:16:17.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:17 compute-0 nova_compute[259850]: 2025-10-11 04:16:17.012 2 INFO os_vif [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:f8:44,bridge_name='br-int',has_traffic_filtering=True,id=b0fef7a6-460f-49a1-8586-9008e9d3f648,network=Network(1c86b315-3a4b-4db0-8b3c-39658c19ef9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0fef7a6-46')
Oct 11 04:16:17 compute-0 nova_compute[259850]: 2025-10-11 04:16:17.057 2 DEBUG nova.virt.libvirt.driver [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:16:17 compute-0 nova_compute[259850]: 2025-10-11 04:16:17.058 2 DEBUG nova.virt.libvirt.driver [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:16:17 compute-0 nova_compute[259850]: 2025-10-11 04:16:17.058 2 DEBUG nova.virt.libvirt.driver [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] No VIF found with MAC fa:16:3e:05:f8:44, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:16:17 compute-0 nova_compute[259850]: 2025-10-11 04:16:17.059 2 INFO nova.virt.libvirt.driver [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Using config drive
Oct 11 04:16:17 compute-0 nova_compute[259850]: 2025-10-11 04:16:17.080 2 DEBUG nova.storage.rbd_utils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] rbd image 001be5b3-e842-4242-a6ad-2ccbfa7b39c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:16:17 compute-0 nova_compute[259850]: 2025-10-11 04:16:17.472 2 INFO nova.virt.libvirt.driver [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Creating config drive at /var/lib/nova/instances/001be5b3-e842-4242-a6ad-2ccbfa7b39c2/disk.config
Oct 11 04:16:17 compute-0 nova_compute[259850]: 2025-10-11 04:16:17.477 2 DEBUG oslo_concurrency.processutils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/001be5b3-e842-4242-a6ad-2ccbfa7b39c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgvlb2d_r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:16:17 compute-0 nova_compute[259850]: 2025-10-11 04:16:17.602 2 DEBUG oslo_concurrency.processutils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/001be5b3-e842-4242-a6ad-2ccbfa7b39c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgvlb2d_r" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:16:17 compute-0 nova_compute[259850]: 2025-10-11 04:16:17.628 2 DEBUG nova.storage.rbd_utils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] rbd image 001be5b3-e842-4242-a6ad-2ccbfa7b39c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:16:17 compute-0 nova_compute[259850]: 2025-10-11 04:16:17.632 2 DEBUG oslo_concurrency.processutils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/001be5b3-e842-4242-a6ad-2ccbfa7b39c2/disk.config 001be5b3-e842-4242-a6ad-2ccbfa7b39c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:16:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1574: 305 pgs: 305 active+clean; 350 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.9 KiB/s wr, 67 op/s
Oct 11 04:16:17 compute-0 nova_compute[259850]: 2025-10-11 04:16:17.784 2 DEBUG oslo_concurrency.processutils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/001be5b3-e842-4242-a6ad-2ccbfa7b39c2/disk.config 001be5b3-e842-4242-a6ad-2ccbfa7b39c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:16:17 compute-0 nova_compute[259850]: 2025-10-11 04:16:17.785 2 INFO nova.virt.libvirt.driver [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Deleting local config drive /var/lib/nova/instances/001be5b3-e842-4242-a6ad-2ccbfa7b39c2/disk.config because it was imported into RBD.
Oct 11 04:16:17 compute-0 kernel: tapb0fef7a6-46: entered promiscuous mode
Oct 11 04:16:17 compute-0 NetworkManager[44920]: <info>  [1760156177.8548] manager: (tapb0fef7a6-46): new Tun device (/org/freedesktop/NetworkManager/Devices/110)
Oct 11 04:16:17 compute-0 nova_compute[259850]: 2025-10-11 04:16:17.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:17 compute-0 ovn_controller[152025]: 2025-10-11T04:16:17Z|00205|binding|INFO|Claiming lport b0fef7a6-460f-49a1-8586-9008e9d3f648 for this chassis.
Oct 11 04:16:17 compute-0 ovn_controller[152025]: 2025-10-11T04:16:17Z|00206|binding|INFO|b0fef7a6-460f-49a1-8586-9008e9d3f648: Claiming fa:16:3e:05:f8:44 10.100.0.14
Oct 11 04:16:17 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:17.870 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:f8:44 10.100.0.14'], port_security=['fa:16:3e:05:f8:44 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '001be5b3-e842-4242-a6ad-2ccbfa7b39c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfcc78a613a4442d88231798d10634c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3c69b653-6cff-45f0-9360-306b50c7cbb5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=756f4bd0-4cbc-4611-9397-52eb34ec09ab, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=b0fef7a6-460f-49a1-8586-9008e9d3f648) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:16:17 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:17.872 161902 INFO neutron.agent.ovn.metadata.agent [-] Port b0fef7a6-460f-49a1-8586-9008e9d3f648 in datapath 1c86b315-3a4b-4db0-8b3c-39658c19ef9c bound to our chassis
Oct 11 04:16:17 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:17.874 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1c86b315-3a4b-4db0-8b3c-39658c19ef9c
Oct 11 04:16:17 compute-0 ovn_controller[152025]: 2025-10-11T04:16:17Z|00207|binding|INFO|Setting lport b0fef7a6-460f-49a1-8586-9008e9d3f648 ovn-installed in OVS
Oct 11 04:16:17 compute-0 ovn_controller[152025]: 2025-10-11T04:16:17Z|00208|binding|INFO|Setting lport b0fef7a6-460f-49a1-8586-9008e9d3f648 up in Southbound
Oct 11 04:16:17 compute-0 nova_compute[259850]: 2025-10-11 04:16:17.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:17 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:17.889 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[3b18d75e-545b-4860-9d8b-f09013c37640]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:17 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:17.890 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1c86b315-31 in ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 04:16:17 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:17.893 267637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1c86b315-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 04:16:17 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:17.893 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[f6ef8967-abbf-40cd-8537-47c2da9e1b56]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:17 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:17.894 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[9e418feb-2e67-4973-8c9a-6f921fc31de9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:17 compute-0 systemd-udevd[293641]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:16:17 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:17.909 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[9c97d41a-5f47-44a7-b3a3-c763fdef8d5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:17 compute-0 systemd-machined[214869]: New machine qemu-21-instance-00000015.
Oct 11 04:16:17 compute-0 systemd[1]: Started Virtual Machine qemu-21-instance-00000015.
Oct 11 04:16:17 compute-0 NetworkManager[44920]: <info>  [1760156177.9253] device (tapb0fef7a6-46): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:16:17 compute-0 NetworkManager[44920]: <info>  [1760156177.9267] device (tapb0fef7a6-46): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:16:17 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:17.931 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[cc00114b-2af0-4a6b-af04-2b56a088c29a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:17 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:17.973 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[808b3b8a-6bf8-4f34-a479-4710f43962ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:17 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:17.979 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[34490725-9d1a-4eaa-881c-32f6b0554d75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:17 compute-0 systemd-udevd[293644]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:16:17 compute-0 NetworkManager[44920]: <info>  [1760156177.9817] manager: (tap1c86b315-30): new Veth device (/org/freedesktop/NetworkManager/Devices/111)
Oct 11 04:16:17 compute-0 nova_compute[259850]: 2025-10-11 04:16:17.997 2 DEBUG nova.network.neutron [req-808374ea-aeb1-4b95-82c8-09270414f94b req-3850e7c6-c4d8-4bce-8cd5-0e8ba3f64cd1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Updated VIF entry in instance network info cache for port b0fef7a6-460f-49a1-8586-9008e9d3f648. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:16:17 compute-0 nova_compute[259850]: 2025-10-11 04:16:17.998 2 DEBUG nova.network.neutron [req-808374ea-aeb1-4b95-82c8-09270414f94b req-3850e7c6-c4d8-4bce-8cd5-0e8ba3f64cd1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Updating instance_info_cache with network_info: [{"id": "b0fef7a6-460f-49a1-8586-9008e9d3f648", "address": "fa:16:3e:05:f8:44", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0fef7a6-46", "ovs_interfaceid": "b0fef7a6-460f-49a1-8586-9008e9d3f648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:18.016 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[c357ef65-92fc-483f-9785-7fb0e8f6b795]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:18.021 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[3fcc9e16-2926-4dd9-8e0b-ecb789013874]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:18 compute-0 nova_compute[259850]: 2025-10-11 04:16:18.026 2 DEBUG oslo_concurrency.lockutils [req-808374ea-aeb1-4b95-82c8-09270414f94b req-3850e7c6-c4d8-4bce-8cd5-0e8ba3f64cd1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-001be5b3-e842-4242-a6ad-2ccbfa7b39c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:16:18 compute-0 NetworkManager[44920]: <info>  [1760156178.0492] device (tap1c86b315-30): carrier: link connected
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:18.060 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[783d4277-81d2-4627-8fd5-2d01b67f6e3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:18.089 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[9d848f26-037d-4adb-a9c6-97ce253b4221]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1c86b315-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:1b:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456308, 'reachable_time': 42115, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293672, 'error': None, 'target': 'ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:18.109 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[b03a694a-6e8a-4ef2-9b73-8655551ce2a6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb2:1bd4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 456308, 'tstamp': 456308}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293673, 'error': None, 'target': 'ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:18 compute-0 nova_compute[259850]: 2025-10-11 04:16:18.135 2 DEBUG nova.compute.manager [req-e028b5d9-2643-45dc-97f2-6f32f42362a5 req-592451bd-8012-4cea-8a5c-f4481cd3888a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Received event network-vif-plugged-b0fef7a6-460f-49a1-8586-9008e9d3f648 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:16:18 compute-0 nova_compute[259850]: 2025-10-11 04:16:18.135 2 DEBUG oslo_concurrency.lockutils [req-e028b5d9-2643-45dc-97f2-6f32f42362a5 req-592451bd-8012-4cea-8a5c-f4481cd3888a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "001be5b3-e842-4242-a6ad-2ccbfa7b39c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:18 compute-0 nova_compute[259850]: 2025-10-11 04:16:18.136 2 DEBUG oslo_concurrency.lockutils [req-e028b5d9-2643-45dc-97f2-6f32f42362a5 req-592451bd-8012-4cea-8a5c-f4481cd3888a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "001be5b3-e842-4242-a6ad-2ccbfa7b39c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:18 compute-0 nova_compute[259850]: 2025-10-11 04:16:18.136 2 DEBUG oslo_concurrency.lockutils [req-e028b5d9-2643-45dc-97f2-6f32f42362a5 req-592451bd-8012-4cea-8a5c-f4481cd3888a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "001be5b3-e842-4242-a6ad-2ccbfa7b39c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:18 compute-0 nova_compute[259850]: 2025-10-11 04:16:18.137 2 DEBUG nova.compute.manager [req-e028b5d9-2643-45dc-97f2-6f32f42362a5 req-592451bd-8012-4cea-8a5c-f4481cd3888a f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Processing event network-vif-plugged-b0fef7a6-460f-49a1-8586-9008e9d3f648 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:18.137 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[ea980f40-d4af-4fe8-b99d-a6678d2715d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1c86b315-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:1b:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456308, 'reachable_time': 42115, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293674, 'error': None, 'target': 'ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:18.178 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[72c8412b-de42-4c7f-910f-0fc94a56f49f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:18 compute-0 ovn_controller[152025]: 2025-10-11T04:16:18Z|00036|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.6 does not match offer 10.100.0.3
Oct 11 04:16:18 compute-0 ovn_controller[152025]: 2025-10-11T04:16:18Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:aa:79:a8 10.100.0.3
Oct 11 04:16:18 compute-0 nova_compute[259850]: 2025-10-11 04:16:18.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:18.278 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[4e2820e9-6f00-42bd-828c-51bd2c7634dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:18.279 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c86b315-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:18.280 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:18.280 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c86b315-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:16:18 compute-0 kernel: tap1c86b315-30: entered promiscuous mode
Oct 11 04:16:18 compute-0 nova_compute[259850]: 2025-10-11 04:16:18.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:18 compute-0 NetworkManager[44920]: <info>  [1760156178.2827] manager: (tap1c86b315-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Oct 11 04:16:18 compute-0 nova_compute[259850]: 2025-10-11 04:16:18.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:18.285 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1c86b315-30, col_values=(('external_ids', {'iface-id': '075f096d-d25a-4cca-804c-0df80c22a72a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:16:18 compute-0 ovn_controller[152025]: 2025-10-11T04:16:18Z|00209|binding|INFO|Releasing lport 075f096d-d25a-4cca-804c-0df80c22a72a from this chassis (sb_readonly=0)
Oct 11 04:16:18 compute-0 nova_compute[259850]: 2025-10-11 04:16:18.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:18 compute-0 nova_compute[259850]: 2025-10-11 04:16:18.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:18.319 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1c86b315-3a4b-4db0-8b3c-39658c19ef9c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1c86b315-3a4b-4db0-8b3c-39658c19ef9c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:18.320 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[a300ead3-0f59-4005-8508-c648c6048235]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:18.321 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]: global
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]:     log         /dev/log local0 debug
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]:     log-tag     haproxy-metadata-proxy-1c86b315-3a4b-4db0-8b3c-39658c19ef9c
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]:     user        root
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]:     group       root
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]:     maxconn     1024
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]:     pidfile     /var/lib/neutron/external/pids/1c86b315-3a4b-4db0-8b3c-39658c19ef9c.pid.haproxy
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]:     daemon
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]: defaults
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]:     log global
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]:     mode http
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]:     option httplog
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]:     option dontlognull
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]:     option http-server-close
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]:     option forwardfor
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]:     retries                 3
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]:     timeout http-request    30s
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]:     timeout connect         30s
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]:     timeout client          32s
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]:     timeout server          32s
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]:     timeout http-keep-alive 30s
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]: listen listener
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]:     bind 169.254.169.254:80
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]:     http-request add-header X-OVN-Network-ID 1c86b315-3a4b-4db0-8b3c-39658c19ef9c
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 04:16:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:18.321 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'env', 'PROCESS_TAG=haproxy-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1c86b315-3a4b-4db0-8b3c-39658c19ef9c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 04:16:18 compute-0 podman[293742]: 2025-10-11 04:16:18.784938805 +0000 UTC m=+0.064458503 container create 6f26fa6c48d37e37927488cb452029c109a1a59e9cc7cb6cc94426aad094b949 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:16:18 compute-0 systemd[1]: Started libpod-conmon-6f26fa6c48d37e37927488cb452029c109a1a59e9cc7cb6cc94426aad094b949.scope.
Oct 11 04:16:18 compute-0 podman[293742]: 2025-10-11 04:16:18.759958985 +0000 UTC m=+0.039478713 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 04:16:18 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:16:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8129ca4513e3b7e35f69efed3b0897e5e1b1ba766e7b29c8d843c26130076b0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:16:18 compute-0 podman[293742]: 2025-10-11 04:16:18.893737756 +0000 UTC m=+0.173257474 container init 6f26fa6c48d37e37927488cb452029c109a1a59e9cc7cb6cc94426aad094b949 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 11 04:16:18 compute-0 podman[293742]: 2025-10-11 04:16:18.907782885 +0000 UTC m=+0.187302593 container start 6f26fa6c48d37e37927488cb452029c109a1a59e9cc7cb6cc94426aad094b949 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 04:16:18 compute-0 ceph-mon[74273]: pgmap v1574: 305 pgs: 305 active+clean; 350 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.9 KiB/s wr, 67 op/s
Oct 11 04:16:18 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[293758]: [NOTICE]   (293762) : New worker (293764) forked
Oct 11 04:16:18 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[293758]: [NOTICE]   (293762) : Loading success.
Oct 11 04:16:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1575: 305 pgs: 305 active+clean; 364 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 508 KiB/s wr, 124 op/s
Oct 11 04:16:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:16:20 compute-0 ovn_controller[152025]: 2025-10-11T04:16:20Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:aa:79:a8 10.100.0.3
Oct 11 04:16:20 compute-0 ovn_controller[152025]: 2025-10-11T04:16:20Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:aa:79:a8 10.100.0.3
Oct 11 04:16:20 compute-0 nova_compute[259850]: 2025-10-11 04:16:20.248 2 DEBUG nova.compute.manager [req-11c3e30b-f955-46c1-82a0-bfdd68e4ab86 req-80ff3d77-d769-405b-8f7c-2d11d6542f8e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Received event network-vif-plugged-b0fef7a6-460f-49a1-8586-9008e9d3f648 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:16:20 compute-0 nova_compute[259850]: 2025-10-11 04:16:20.248 2 DEBUG oslo_concurrency.lockutils [req-11c3e30b-f955-46c1-82a0-bfdd68e4ab86 req-80ff3d77-d769-405b-8f7c-2d11d6542f8e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "001be5b3-e842-4242-a6ad-2ccbfa7b39c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:20 compute-0 nova_compute[259850]: 2025-10-11 04:16:20.249 2 DEBUG oslo_concurrency.lockutils [req-11c3e30b-f955-46c1-82a0-bfdd68e4ab86 req-80ff3d77-d769-405b-8f7c-2d11d6542f8e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "001be5b3-e842-4242-a6ad-2ccbfa7b39c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:20 compute-0 nova_compute[259850]: 2025-10-11 04:16:20.250 2 DEBUG oslo_concurrency.lockutils [req-11c3e30b-f955-46c1-82a0-bfdd68e4ab86 req-80ff3d77-d769-405b-8f7c-2d11d6542f8e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "001be5b3-e842-4242-a6ad-2ccbfa7b39c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:20 compute-0 nova_compute[259850]: 2025-10-11 04:16:20.250 2 DEBUG nova.compute.manager [req-11c3e30b-f955-46c1-82a0-bfdd68e4ab86 req-80ff3d77-d769-405b-8f7c-2d11d6542f8e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] No waiting events found dispatching network-vif-plugged-b0fef7a6-460f-49a1-8586-9008e9d3f648 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:16:20 compute-0 nova_compute[259850]: 2025-10-11 04:16:20.251 2 WARNING nova.compute.manager [req-11c3e30b-f955-46c1-82a0-bfdd68e4ab86 req-80ff3d77-d769-405b-8f7c-2d11d6542f8e f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Received unexpected event network-vif-plugged-b0fef7a6-460f-49a1-8586-9008e9d3f648 for instance with vm_state building and task_state spawning.
Oct 11 04:16:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:16:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:16:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:16:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:16:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:16:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:16:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:16:20
Oct 11 04:16:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:16:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:16:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['.mgr', 'images', 'default.rgw.control', 'volumes', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'backups']
Oct 11 04:16:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:16:20 compute-0 ceph-mon[74273]: pgmap v1575: 305 pgs: 305 active+clean; 364 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 508 KiB/s wr, 124 op/s
Oct 11 04:16:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:16:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:16:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:16:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:16:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:16:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:16:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:16:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:16:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:16:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:16:21 compute-0 nova_compute[259850]: 2025-10-11 04:16:21.134 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156181.1344597, 001be5b3-e842-4242-a6ad-2ccbfa7b39c2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:16:21 compute-0 nova_compute[259850]: 2025-10-11 04:16:21.135 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] VM Started (Lifecycle Event)
Oct 11 04:16:21 compute-0 nova_compute[259850]: 2025-10-11 04:16:21.137 2 DEBUG nova.compute.manager [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:16:21 compute-0 nova_compute[259850]: 2025-10-11 04:16:21.141 2 DEBUG nova.virt.libvirt.driver [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:16:21 compute-0 nova_compute[259850]: 2025-10-11 04:16:21.144 2 INFO nova.virt.libvirt.driver [-] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Instance spawned successfully.
Oct 11 04:16:21 compute-0 nova_compute[259850]: 2025-10-11 04:16:21.145 2 DEBUG nova.virt.libvirt.driver [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:16:21 compute-0 nova_compute[259850]: 2025-10-11 04:16:21.170 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:16:21 compute-0 nova_compute[259850]: 2025-10-11 04:16:21.176 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:16:21 compute-0 nova_compute[259850]: 2025-10-11 04:16:21.182 2 DEBUG nova.virt.libvirt.driver [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:16:21 compute-0 nova_compute[259850]: 2025-10-11 04:16:21.182 2 DEBUG nova.virt.libvirt.driver [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:16:21 compute-0 nova_compute[259850]: 2025-10-11 04:16:21.183 2 DEBUG nova.virt.libvirt.driver [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:16:21 compute-0 nova_compute[259850]: 2025-10-11 04:16:21.183 2 DEBUG nova.virt.libvirt.driver [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:16:21 compute-0 nova_compute[259850]: 2025-10-11 04:16:21.184 2 DEBUG nova.virt.libvirt.driver [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:16:21 compute-0 nova_compute[259850]: 2025-10-11 04:16:21.184 2 DEBUG nova.virt.libvirt.driver [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:16:21 compute-0 nova_compute[259850]: 2025-10-11 04:16:21.208 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:16:21 compute-0 nova_compute[259850]: 2025-10-11 04:16:21.209 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156181.1346707, 001be5b3-e842-4242-a6ad-2ccbfa7b39c2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:16:21 compute-0 nova_compute[259850]: 2025-10-11 04:16:21.209 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] VM Paused (Lifecycle Event)
Oct 11 04:16:21 compute-0 nova_compute[259850]: 2025-10-11 04:16:21.243 2 INFO nova.compute.manager [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Took 7.32 seconds to spawn the instance on the hypervisor.
Oct 11 04:16:21 compute-0 nova_compute[259850]: 2025-10-11 04:16:21.244 2 DEBUG nova.compute.manager [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:16:21 compute-0 nova_compute[259850]: 2025-10-11 04:16:21.245 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:16:21 compute-0 nova_compute[259850]: 2025-10-11 04:16:21.256 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156181.1402693, 001be5b3-e842-4242-a6ad-2ccbfa7b39c2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:16:21 compute-0 nova_compute[259850]: 2025-10-11 04:16:21.257 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] VM Resumed (Lifecycle Event)
Oct 11 04:16:21 compute-0 nova_compute[259850]: 2025-10-11 04:16:21.281 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:16:21 compute-0 nova_compute[259850]: 2025-10-11 04:16:21.287 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:16:21 compute-0 nova_compute[259850]: 2025-10-11 04:16:21.323 2 INFO nova.compute.manager [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Took 9.60 seconds to build instance.
Oct 11 04:16:21 compute-0 nova_compute[259850]: 2025-10-11 04:16:21.346 2 DEBUG oslo_concurrency.lockutils [None req-e930425f-cba7-458e-9d2c-d390af47d184 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "001be5b3-e842-4242-a6ad-2ccbfa7b39c2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1576: 305 pgs: 305 active+clean; 364 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 508 KiB/s wr, 61 op/s
Oct 11 04:16:22 compute-0 nova_compute[259850]: 2025-10-11 04:16:22.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:22 compute-0 ceph-mon[74273]: pgmap v1576: 305 pgs: 305 active+clean; 364 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 508 KiB/s wr, 61 op/s
Oct 11 04:16:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:22.966 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:22.967 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:22.969 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:23 compute-0 nova_compute[259850]: 2025-10-11 04:16:23.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1577: 305 pgs: 305 active+clean; 368 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 555 KiB/s wr, 132 op/s
Oct 11 04:16:24 compute-0 ceph-mon[74273]: pgmap v1577: 305 pgs: 305 active+clean; 368 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 555 KiB/s wr, 132 op/s
Oct 11 04:16:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:16:25 compute-0 nova_compute[259850]: 2025-10-11 04:16:25.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:16:25 compute-0 nova_compute[259850]: 2025-10-11 04:16:25.061 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:16:25 compute-0 nova_compute[259850]: 2025-10-11 04:16:25.693 2 DEBUG nova.compute.manager [req-a6a3fddf-bb01-4c25-adf2-195d324f2a72 req-f7e18799-eb37-4837-96ae-ca2196322d3f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Received event network-changed-b0fef7a6-460f-49a1-8586-9008e9d3f648 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:16:25 compute-0 nova_compute[259850]: 2025-10-11 04:16:25.694 2 DEBUG nova.compute.manager [req-a6a3fddf-bb01-4c25-adf2-195d324f2a72 req-f7e18799-eb37-4837-96ae-ca2196322d3f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Refreshing instance network info cache due to event network-changed-b0fef7a6-460f-49a1-8586-9008e9d3f648. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:16:25 compute-0 nova_compute[259850]: 2025-10-11 04:16:25.694 2 DEBUG oslo_concurrency.lockutils [req-a6a3fddf-bb01-4c25-adf2-195d324f2a72 req-f7e18799-eb37-4837-96ae-ca2196322d3f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-001be5b3-e842-4242-a6ad-2ccbfa7b39c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:16:25 compute-0 nova_compute[259850]: 2025-10-11 04:16:25.694 2 DEBUG oslo_concurrency.lockutils [req-a6a3fddf-bb01-4c25-adf2-195d324f2a72 req-f7e18799-eb37-4837-96ae-ca2196322d3f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-001be5b3-e842-4242-a6ad-2ccbfa7b39c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:16:25 compute-0 nova_compute[259850]: 2025-10-11 04:16:25.694 2 DEBUG nova.network.neutron [req-a6a3fddf-bb01-4c25-adf2-195d324f2a72 req-f7e18799-eb37-4837-96ae-ca2196322d3f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Refreshing network info cache for port b0fef7a6-460f-49a1-8586-9008e9d3f648 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:16:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1578: 305 pgs: 305 active+clean; 368 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 552 KiB/s wr, 129 op/s
Oct 11 04:16:26 compute-0 nova_compute[259850]: 2025-10-11 04:16:26.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:16:26 compute-0 nova_compute[259850]: 2025-10-11 04:16:26.061 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:16:26 compute-0 ceph-mon[74273]: pgmap v1578: 305 pgs: 305 active+clean; 368 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 552 KiB/s wr, 129 op/s
Oct 11 04:16:27 compute-0 nova_compute[259850]: 2025-10-11 04:16:27.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:27 compute-0 nova_compute[259850]: 2025-10-11 04:16:27.055 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:16:27 compute-0 podman[293779]: 2025-10-11 04:16:27.424183193 +0000 UTC m=+0.131168959 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 04:16:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 305 active+clean; 368 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 552 KiB/s wr, 129 op/s
Oct 11 04:16:27 compute-0 sudo[293806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:16:27 compute-0 sudo[293806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:16:27 compute-0 sudo[293806]: pam_unix(sudo:session): session closed for user root
Oct 11 04:16:27 compute-0 sudo[293831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:16:27 compute-0 sudo[293831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:16:27 compute-0 sudo[293831]: pam_unix(sudo:session): session closed for user root
Oct 11 04:16:28 compute-0 sudo[293856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:16:28 compute-0 sudo[293856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:16:28 compute-0 sudo[293856]: pam_unix(sudo:session): session closed for user root
Oct 11 04:16:28 compute-0 nova_compute[259850]: 2025-10-11 04:16:28.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:16:28 compute-0 nova_compute[259850]: 2025-10-11 04:16:28.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:16:28 compute-0 nova_compute[259850]: 2025-10-11 04:16:28.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:16:28 compute-0 sudo[293881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:16:28 compute-0 sudo[293881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:16:28 compute-0 nova_compute[259850]: 2025-10-11 04:16:28.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:28 compute-0 nova_compute[259850]: 2025-10-11 04:16:28.327 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "refresh_cache-f4568c68-41ba-4de0-a607-76bf5907f37c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:16:28 compute-0 nova_compute[259850]: 2025-10-11 04:16:28.328 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquired lock "refresh_cache-f4568c68-41ba-4de0-a607-76bf5907f37c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:16:28 compute-0 nova_compute[259850]: 2025-10-11 04:16:28.328 2 DEBUG nova.network.neutron [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 04:16:28 compute-0 nova_compute[259850]: 2025-10-11 04:16:28.328 2 DEBUG nova.objects.instance [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f4568c68-41ba-4de0-a607-76bf5907f37c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:16:28 compute-0 nova_compute[259850]: 2025-10-11 04:16:28.340 2 DEBUG nova.network.neutron [req-a6a3fddf-bb01-4c25-adf2-195d324f2a72 req-f7e18799-eb37-4837-96ae-ca2196322d3f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Updated VIF entry in instance network info cache for port b0fef7a6-460f-49a1-8586-9008e9d3f648. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:16:28 compute-0 nova_compute[259850]: 2025-10-11 04:16:28.341 2 DEBUG nova.network.neutron [req-a6a3fddf-bb01-4c25-adf2-195d324f2a72 req-f7e18799-eb37-4837-96ae-ca2196322d3f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Updating instance_info_cache with network_info: [{"id": "b0fef7a6-460f-49a1-8586-9008e9d3f648", "address": "fa:16:3e:05:f8:44", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0fef7a6-46", "ovs_interfaceid": "b0fef7a6-460f-49a1-8586-9008e9d3f648", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:16:28 compute-0 nova_compute[259850]: 2025-10-11 04:16:28.364 2 DEBUG oslo_concurrency.lockutils [req-a6a3fddf-bb01-4c25-adf2-195d324f2a72 req-f7e18799-eb37-4837-96ae-ca2196322d3f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-001be5b3-e842-4242-a6ad-2ccbfa7b39c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:16:28 compute-0 sudo[293881]: pam_unix(sudo:session): session closed for user root
Oct 11 04:16:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:16:28 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:16:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:16:28 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:16:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:16:28 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:16:28 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev e1fbf2dc-ba96-4569-b482-701d931a7176 does not exist
Oct 11 04:16:28 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev de0ce740-bd13-4582-a336-0b710923133d does not exist
Oct 11 04:16:28 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 0056be3f-200b-4d99-a929-6734fd2c630c does not exist
Oct 11 04:16:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:16:28 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:16:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:16:28 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:16:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:16:28 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:16:28 compute-0 sudo[293939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:16:28 compute-0 sudo[293939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:16:28 compute-0 sudo[293939]: pam_unix(sudo:session): session closed for user root
Oct 11 04:16:28 compute-0 sudo[293964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:16:28 compute-0 sudo[293964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:16:28 compute-0 sudo[293964]: pam_unix(sudo:session): session closed for user root
Oct 11 04:16:28 compute-0 sudo[293989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:16:28 compute-0 sudo[293989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:16:28 compute-0 sudo[293989]: pam_unix(sudo:session): session closed for user root
Oct 11 04:16:28 compute-0 sudo[294014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:16:28 compute-0 sudo[294014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:16:28 compute-0 ceph-mon[74273]: pgmap v1579: 305 pgs: 305 active+clean; 368 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 552 KiB/s wr, 129 op/s
Oct 11 04:16:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:16:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:16:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:16:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:16:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:16:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:16:29 compute-0 podman[294080]: 2025-10-11 04:16:29.294003835 +0000 UTC m=+0.043238420 container create 17a04fad3557b98a57953b71eb78106f8c2b87c87bd534396a12d598f1ffc9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_blackburn, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 04:16:29 compute-0 systemd[1]: Started libpod-conmon-17a04fad3557b98a57953b71eb78106f8c2b87c87bd534396a12d598f1ffc9ab.scope.
Oct 11 04:16:29 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:16:29 compute-0 podman[294080]: 2025-10-11 04:16:29.274341096 +0000 UTC m=+0.023575721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:16:29 compute-0 podman[294080]: 2025-10-11 04:16:29.380099762 +0000 UTC m=+0.129334387 container init 17a04fad3557b98a57953b71eb78106f8c2b87c87bd534396a12d598f1ffc9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_blackburn, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:16:29 compute-0 podman[294080]: 2025-10-11 04:16:29.388689516 +0000 UTC m=+0.137924091 container start 17a04fad3557b98a57953b71eb78106f8c2b87c87bd534396a12d598f1ffc9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_blackburn, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 04:16:29 compute-0 podman[294080]: 2025-10-11 04:16:29.391097494 +0000 UTC m=+0.140332089 container attach 17a04fad3557b98a57953b71eb78106f8c2b87c87bd534396a12d598f1ffc9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:16:29 compute-0 cranky_blackburn[294096]: 167 167
Oct 11 04:16:29 compute-0 systemd[1]: libpod-17a04fad3557b98a57953b71eb78106f8c2b87c87bd534396a12d598f1ffc9ab.scope: Deactivated successfully.
Oct 11 04:16:29 compute-0 podman[294080]: 2025-10-11 04:16:29.39341403 +0000 UTC m=+0.142648605 container died 17a04fad3557b98a57953b71eb78106f8c2b87c87bd534396a12d598f1ffc9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_blackburn, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:16:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-40a628a847677b06bbc48b93858dcca0cc3f60f2c9300531cb007082822d9d8a-merged.mount: Deactivated successfully.
Oct 11 04:16:29 compute-0 podman[294080]: 2025-10-11 04:16:29.425124141 +0000 UTC m=+0.174358726 container remove 17a04fad3557b98a57953b71eb78106f8c2b87c87bd534396a12d598f1ffc9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct 11 04:16:29 compute-0 systemd[1]: libpod-conmon-17a04fad3557b98a57953b71eb78106f8c2b87c87bd534396a12d598f1ffc9ab.scope: Deactivated successfully.
Oct 11 04:16:29 compute-0 nova_compute[259850]: 2025-10-11 04:16:29.531 2 DEBUG nova.network.neutron [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Updating instance_info_cache with network_info: [{"id": "7a1af6b7-a442-4ea8-beca-2843ffb42e3c", "address": "fa:16:3e:e8:e3:04", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a1af6b7-a4", "ovs_interfaceid": "7a1af6b7-a442-4ea8-beca-2843ffb42e3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:16:29 compute-0 nova_compute[259850]: 2025-10-11 04:16:29.549 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Releasing lock "refresh_cache-f4568c68-41ba-4de0-a607-76bf5907f37c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:16:29 compute-0 nova_compute[259850]: 2025-10-11 04:16:29.549 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 04:16:29 compute-0 nova_compute[259850]: 2025-10-11 04:16:29.549 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:16:29 compute-0 nova_compute[259850]: 2025-10-11 04:16:29.550 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:16:29 compute-0 nova_compute[259850]: 2025-10-11 04:16:29.571 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:29 compute-0 nova_compute[259850]: 2025-10-11 04:16:29.572 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:29 compute-0 nova_compute[259850]: 2025-10-11 04:16:29.573 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:29 compute-0 nova_compute[259850]: 2025-10-11 04:16:29.573 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:16:29 compute-0 nova_compute[259850]: 2025-10-11 04:16:29.573 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:16:29 compute-0 podman[294120]: 2025-10-11 04:16:29.585610011 +0000 UTC m=+0.039218775 container create 06bed8243e52999bac258d0803647c7139c3a31653bcb563e6fdf797ee20a6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_black, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Oct 11 04:16:29 compute-0 systemd[1]: Started libpod-conmon-06bed8243e52999bac258d0803647c7139c3a31653bcb563e6fdf797ee20a6ab.scope.
Oct 11 04:16:29 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:16:29 compute-0 podman[294120]: 2025-10-11 04:16:29.568552857 +0000 UTC m=+0.022161641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:16:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/517ff77f59a254ddfc23b66358be110fc305400714e97af72587528cc7b99a0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:16:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/517ff77f59a254ddfc23b66358be110fc305400714e97af72587528cc7b99a0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:16:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/517ff77f59a254ddfc23b66358be110fc305400714e97af72587528cc7b99a0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:16:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/517ff77f59a254ddfc23b66358be110fc305400714e97af72587528cc7b99a0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:16:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/517ff77f59a254ddfc23b66358be110fc305400714e97af72587528cc7b99a0d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:16:29 compute-0 podman[294120]: 2025-10-11 04:16:29.691416498 +0000 UTC m=+0.145025262 container init 06bed8243e52999bac258d0803647c7139c3a31653bcb563e6fdf797ee20a6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_black, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:16:29 compute-0 podman[294120]: 2025-10-11 04:16:29.699165158 +0000 UTC m=+0.152773922 container start 06bed8243e52999bac258d0803647c7139c3a31653bcb563e6fdf797ee20a6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_black, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:16:29 compute-0 podman[294120]: 2025-10-11 04:16:29.702980336 +0000 UTC m=+0.156589140 container attach 06bed8243e52999bac258d0803647c7139c3a31653bcb563e6fdf797ee20a6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_black, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 04:16:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1580: 305 pgs: 305 active+clean; 368 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 565 KiB/s wr, 131 op/s
Oct 11 04:16:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:16:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:16:29 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3711710481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:16:30 compute-0 nova_compute[259850]: 2025-10-11 04:16:30.013 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:16:30 compute-0 nova_compute[259850]: 2025-10-11 04:16:30.095 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:16:30 compute-0 nova_compute[259850]: 2025-10-11 04:16:30.096 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:16:30 compute-0 nova_compute[259850]: 2025-10-11 04:16:30.100 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:16:30 compute-0 nova_compute[259850]: 2025-10-11 04:16:30.100 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:16:30 compute-0 nova_compute[259850]: 2025-10-11 04:16:30.103 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:16:30 compute-0 nova_compute[259850]: 2025-10-11 04:16:30.103 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:16:30 compute-0 nova_compute[259850]: 2025-10-11 04:16:30.267 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:16:30 compute-0 nova_compute[259850]: 2025-10-11 04:16:30.268 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3746MB free_disk=59.987796783447266GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:16:30 compute-0 nova_compute[259850]: 2025-10-11 04:16:30.268 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:30 compute-0 nova_compute[259850]: 2025-10-11 04:16:30.268 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:30 compute-0 nova_compute[259850]: 2025-10-11 04:16:30.477 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Instance f4568c68-41ba-4de0-a607-76bf5907f37c actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 04:16:30 compute-0 nova_compute[259850]: 2025-10-11 04:16:30.478 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Instance b19922f4-8c6a-4465-8051-c33652138fd9 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 04:16:30 compute-0 nova_compute[259850]: 2025-10-11 04:16:30.478 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Instance 001be5b3-e842-4242-a6ad-2ccbfa7b39c2 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 04:16:30 compute-0 nova_compute[259850]: 2025-10-11 04:16:30.478 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:16:30 compute-0 nova_compute[259850]: 2025-10-11 04:16:30.478 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:16:30 compute-0 nova_compute[259850]: 2025-10-11 04:16:30.580 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:16:30 compute-0 priceless_black[294138]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:16:30 compute-0 priceless_black[294138]: --> relative data size: 1.0
Oct 11 04:16:30 compute-0 priceless_black[294138]: --> All data devices are unavailable
Oct 11 04:16:30 compute-0 systemd[1]: libpod-06bed8243e52999bac258d0803647c7139c3a31653bcb563e6fdf797ee20a6ab.scope: Deactivated successfully.
Oct 11 04:16:30 compute-0 systemd[1]: libpod-06bed8243e52999bac258d0803647c7139c3a31653bcb563e6fdf797ee20a6ab.scope: Consumed 1.002s CPU time.
Oct 11 04:16:30 compute-0 conmon[294138]: conmon 06bed8243e52999bac25 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-06bed8243e52999bac258d0803647c7139c3a31653bcb563e6fdf797ee20a6ab.scope/container/memory.events
Oct 11 04:16:30 compute-0 podman[294120]: 2025-10-11 04:16:30.829652571 +0000 UTC m=+1.283261375 container died 06bed8243e52999bac258d0803647c7139c3a31653bcb563e6fdf797ee20a6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_black, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 04:16:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-517ff77f59a254ddfc23b66358be110fc305400714e97af72587528cc7b99a0d-merged.mount: Deactivated successfully.
Oct 11 04:16:30 compute-0 podman[294120]: 2025-10-11 04:16:30.897943101 +0000 UTC m=+1.351551895 container remove 06bed8243e52999bac258d0803647c7139c3a31653bcb563e6fdf797ee20a6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_black, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 04:16:30 compute-0 systemd[1]: libpod-conmon-06bed8243e52999bac258d0803647c7139c3a31653bcb563e6fdf797ee20a6ab.scope: Deactivated successfully.
Oct 11 04:16:30 compute-0 sudo[294014]: pam_unix(sudo:session): session closed for user root
Oct 11 04:16:30 compute-0 ceph-mon[74273]: pgmap v1580: 305 pgs: 305 active+clean; 368 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 565 KiB/s wr, 131 op/s
Oct 11 04:16:30 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3711710481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:16:30 compute-0 podman[294209]: 2025-10-11 04:16:30.96687801 +0000 UTC m=+0.100552078 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 11 04:16:30 compute-0 sudo[294237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:16:31 compute-0 sudo[294237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:16:31 compute-0 sudo[294237]: pam_unix(sudo:session): session closed for user root
Oct 11 04:16:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:16:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2991912452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:16:31 compute-0 nova_compute[259850]: 2025-10-11 04:16:31.053 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:16:31 compute-0 sudo[294265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:16:31 compute-0 sudo[294265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:16:31 compute-0 sudo[294265]: pam_unix(sudo:session): session closed for user root
Oct 11 04:16:31 compute-0 nova_compute[259850]: 2025-10-11 04:16:31.061 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:16:31 compute-0 nova_compute[259850]: 2025-10-11 04:16:31.079 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:16:31 compute-0 nova_compute[259850]: 2025-10-11 04:16:31.103 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:16:31 compute-0 nova_compute[259850]: 2025-10-11 04:16:31.103 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:31 compute-0 nova_compute[259850]: 2025-10-11 04:16:31.104 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:16:31 compute-0 nova_compute[259850]: 2025-10-11 04:16:31.104 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 11 04:16:31 compute-0 nova_compute[259850]: 2025-10-11 04:16:31.116 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 11 04:16:31 compute-0 sudo[294292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:16:31 compute-0 sudo[294292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:16:31 compute-0 sudo[294292]: pam_unix(sudo:session): session closed for user root
Oct 11 04:16:31 compute-0 sudo[294317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 04:16:31 compute-0 sudo[294317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:16:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:16:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:16:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:16:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:16:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 8.076019893208309e-06 of space, bias 1.0, pg target 0.0024228059679624924 quantized to 32 (current 32)
Oct 11 04:16:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:16:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0037500076308849386 of space, bias 1.0, pg target 1.1250022892654816 quantized to 32 (current 32)
Oct 11 04:16:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:16:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.9013621638340822e-05 quantized to 32 (current 32)
Oct 11 04:16:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:16:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006663670272514163 of space, bias 1.0, pg target 0.19924374114817345 quantized to 32 (current 32)
Oct 11 04:16:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:16:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 16)
Oct 11 04:16:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:16:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:16:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:16:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 04:16:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:16:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 04:16:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:16:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:16:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:16:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 04:16:31 compute-0 podman[294383]: 2025-10-11 04:16:31.674625521 +0000 UTC m=+0.097034008 container create 54ef6ba47c6fb571f17ee084c0d0d4f5c1606d959df46098879d31099c0c63bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 04:16:31 compute-0 systemd[1]: Started libpod-conmon-54ef6ba47c6fb571f17ee084c0d0d4f5c1606d959df46098879d31099c0c63bd.scope.
Oct 11 04:16:31 compute-0 podman[294383]: 2025-10-11 04:16:31.650364542 +0000 UTC m=+0.072773119 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:16:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1581: 305 pgs: 305 active+clean; 368 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 59 KiB/s wr, 74 op/s
Oct 11 04:16:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:16:31 compute-0 podman[294383]: 2025-10-11 04:16:31.785833081 +0000 UTC m=+0.208241568 container init 54ef6ba47c6fb571f17ee084c0d0d4f5c1606d959df46098879d31099c0c63bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:16:31 compute-0 podman[294383]: 2025-10-11 04:16:31.792295795 +0000 UTC m=+0.214704272 container start 54ef6ba47c6fb571f17ee084c0d0d4f5c1606d959df46098879d31099c0c63bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 04:16:31 compute-0 podman[294383]: 2025-10-11 04:16:31.795445384 +0000 UTC m=+0.217853921 container attach 54ef6ba47c6fb571f17ee084c0d0d4f5c1606d959df46098879d31099c0c63bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mayer, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 04:16:31 compute-0 great_mayer[294399]: 167 167
Oct 11 04:16:31 compute-0 systemd[1]: libpod-54ef6ba47c6fb571f17ee084c0d0d4f5c1606d959df46098879d31099c0c63bd.scope: Deactivated successfully.
Oct 11 04:16:31 compute-0 podman[294383]: 2025-10-11 04:16:31.802636399 +0000 UTC m=+0.225044916 container died 54ef6ba47c6fb571f17ee084c0d0d4f5c1606d959df46098879d31099c0c63bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mayer, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 04:16:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-072aa75b6195511556f7c9c992e7acfa59ab8647727aac235d67c36c6cf4055a-merged.mount: Deactivated successfully.
Oct 11 04:16:31 compute-0 podman[294383]: 2025-10-11 04:16:31.846016421 +0000 UTC m=+0.268424908 container remove 54ef6ba47c6fb571f17ee084c0d0d4f5c1606d959df46098879d31099c0c63bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mayer, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:16:31 compute-0 systemd[1]: libpod-conmon-54ef6ba47c6fb571f17ee084c0d0d4f5c1606d959df46098879d31099c0c63bd.scope: Deactivated successfully.
Oct 11 04:16:31 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2991912452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:16:32 compute-0 nova_compute[259850]: 2025-10-11 04:16:32.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:32 compute-0 podman[294423]: 2025-10-11 04:16:32.076801849 +0000 UTC m=+0.051415552 container create 7ff5f94353b5f578f9e3d47de9d79a0f051484f654e98b83ced5b3d3bb7e0ad5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Oct 11 04:16:32 compute-0 systemd[1]: Started libpod-conmon-7ff5f94353b5f578f9e3d47de9d79a0f051484f654e98b83ced5b3d3bb7e0ad5.scope.
Oct 11 04:16:32 compute-0 podman[294423]: 2025-10-11 04:16:32.05534491 +0000 UTC m=+0.029958613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:16:32 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52b1c284dc1be621f4e00aeb9f5557587424d878523923299e00315005648008/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52b1c284dc1be621f4e00aeb9f5557587424d878523923299e00315005648008/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52b1c284dc1be621f4e00aeb9f5557587424d878523923299e00315005648008/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52b1c284dc1be621f4e00aeb9f5557587424d878523923299e00315005648008/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:16:32 compute-0 podman[294423]: 2025-10-11 04:16:32.185066586 +0000 UTC m=+0.159680349 container init 7ff5f94353b5f578f9e3d47de9d79a0f051484f654e98b83ced5b3d3bb7e0ad5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:16:32 compute-0 podman[294423]: 2025-10-11 04:16:32.196644515 +0000 UTC m=+0.171258228 container start 7ff5f94353b5f578f9e3d47de9d79a0f051484f654e98b83ced5b3d3bb7e0ad5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mirzakhani, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:16:32 compute-0 podman[294423]: 2025-10-11 04:16:32.200798383 +0000 UTC m=+0.175412146 container attach 7ff5f94353b5f578f9e3d47de9d79a0f051484f654e98b83ced5b3d3bb7e0ad5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]: {
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:     "0": [
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:         {
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "devices": [
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "/dev/loop3"
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             ],
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "lv_name": "ceph_lv0",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "lv_size": "21470642176",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "name": "ceph_lv0",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "tags": {
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.cluster_name": "ceph",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.crush_device_class": "",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.encrypted": "0",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.osd_id": "0",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.type": "block",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.vdo": "0"
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             },
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "type": "block",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "vg_name": "ceph_vg0"
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:         }
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:     ],
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:     "1": [
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:         {
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "devices": [
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "/dev/loop4"
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             ],
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "lv_name": "ceph_lv1",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "lv_size": "21470642176",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "name": "ceph_lv1",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "tags": {
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.cluster_name": "ceph",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.crush_device_class": "",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.encrypted": "0",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.osd_id": "1",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.type": "block",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.vdo": "0"
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             },
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "type": "block",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "vg_name": "ceph_vg1"
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:         }
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:     ],
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:     "2": [
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:         {
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "devices": [
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "/dev/loop5"
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             ],
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "lv_name": "ceph_lv2",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "lv_size": "21470642176",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "name": "ceph_lv2",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "tags": {
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.cluster_name": "ceph",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.crush_device_class": "",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.encrypted": "0",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.osd_id": "2",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.type": "block",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:                 "ceph.vdo": "0"
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             },
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "type": "block",
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:             "vg_name": "ceph_vg2"
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:         }
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]:     ]
Oct 11 04:16:32 compute-0 nice_mirzakhani[294439]: }
Oct 11 04:16:32 compute-0 systemd[1]: libpod-7ff5f94353b5f578f9e3d47de9d79a0f051484f654e98b83ced5b3d3bb7e0ad5.scope: Deactivated successfully.
Oct 11 04:16:32 compute-0 conmon[294439]: conmon 7ff5f94353b5f578f9e3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7ff5f94353b5f578f9e3d47de9d79a0f051484f654e98b83ced5b3d3bb7e0ad5.scope/container/memory.events
Oct 11 04:16:32 compute-0 podman[294423]: 2025-10-11 04:16:32.934570074 +0000 UTC m=+0.909183787 container died 7ff5f94353b5f578f9e3d47de9d79a0f051484f654e98b83ced5b3d3bb7e0ad5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mirzakhani, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 04:16:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-52b1c284dc1be621f4e00aeb9f5557587424d878523923299e00315005648008-merged.mount: Deactivated successfully.
Oct 11 04:16:32 compute-0 ceph-mon[74273]: pgmap v1581: 305 pgs: 305 active+clean; 368 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 59 KiB/s wr, 74 op/s
Oct 11 04:16:32 compute-0 podman[294423]: 2025-10-11 04:16:32.99744257 +0000 UTC m=+0.972056263 container remove 7ff5f94353b5f578f9e3d47de9d79a0f051484f654e98b83ced5b3d3bb7e0ad5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:16:33 compute-0 ovn_controller[152025]: 2025-10-11T04:16:33Z|00040|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.6 does not match offer 10.100.0.14
Oct 11 04:16:33 compute-0 ovn_controller[152025]: 2025-10-11T04:16:33Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:05:f8:44 10.100.0.14
Oct 11 04:16:33 compute-0 systemd[1]: libpod-conmon-7ff5f94353b5f578f9e3d47de9d79a0f051484f654e98b83ced5b3d3bb7e0ad5.scope: Deactivated successfully.
Oct 11 04:16:33 compute-0 sudo[294317]: pam_unix(sudo:session): session closed for user root
Oct 11 04:16:33 compute-0 nova_compute[259850]: 2025-10-11 04:16:33.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:16:33 compute-0 nova_compute[259850]: 2025-10-11 04:16:33.061 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:16:33 compute-0 nova_compute[259850]: 2025-10-11 04:16:33.061 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:16:33 compute-0 sudo[294459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:16:33 compute-0 sudo[294459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:16:33 compute-0 sudo[294459]: pam_unix(sudo:session): session closed for user root
Oct 11 04:16:33 compute-0 sudo[294484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:16:33 compute-0 sudo[294484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:16:33 compute-0 sudo[294484]: pam_unix(sudo:session): session closed for user root
Oct 11 04:16:33 compute-0 sudo[294509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:16:33 compute-0 sudo[294509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:16:33 compute-0 sudo[294509]: pam_unix(sudo:session): session closed for user root
Oct 11 04:16:33 compute-0 nova_compute[259850]: 2025-10-11 04:16:33.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:33 compute-0 sudo[294534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 04:16:33 compute-0 sudo[294534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:16:33 compute-0 podman[294599]: 2025-10-11 04:16:33.693971202 +0000 UTC m=+0.050380663 container create 57940ca7a364001dd2ba9a1cdadf65d5dec20c9b6d3ef7326b45a8a85b032ca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 04:16:33 compute-0 systemd[1]: Started libpod-conmon-57940ca7a364001dd2ba9a1cdadf65d5dec20c9b6d3ef7326b45a8a85b032ca7.scope.
Oct 11 04:16:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 305 active+clean; 368 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 68 KiB/s wr, 107 op/s
Oct 11 04:16:33 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:16:33 compute-0 podman[294599]: 2025-10-11 04:16:33.675794266 +0000 UTC m=+0.032203706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:16:33 compute-0 podman[294599]: 2025-10-11 04:16:33.787991793 +0000 UTC m=+0.144401253 container init 57940ca7a364001dd2ba9a1cdadf65d5dec20c9b6d3ef7326b45a8a85b032ca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hawking, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:16:33 compute-0 podman[294599]: 2025-10-11 04:16:33.799336476 +0000 UTC m=+0.155745936 container start 57940ca7a364001dd2ba9a1cdadf65d5dec20c9b6d3ef7326b45a8a85b032ca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 04:16:33 compute-0 podman[294599]: 2025-10-11 04:16:33.803391111 +0000 UTC m=+0.159800601 container attach 57940ca7a364001dd2ba9a1cdadf65d5dec20c9b6d3ef7326b45a8a85b032ca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:16:33 compute-0 magical_hawking[294615]: 167 167
Oct 11 04:16:33 compute-0 systemd[1]: libpod-57940ca7a364001dd2ba9a1cdadf65d5dec20c9b6d3ef7326b45a8a85b032ca7.scope: Deactivated successfully.
Oct 11 04:16:33 compute-0 conmon[294615]: conmon 57940ca7a364001dd2ba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-57940ca7a364001dd2ba9a1cdadf65d5dec20c9b6d3ef7326b45a8a85b032ca7.scope/container/memory.events
Oct 11 04:16:33 compute-0 podman[294599]: 2025-10-11 04:16:33.809394831 +0000 UTC m=+0.165804291 container died 57940ca7a364001dd2ba9a1cdadf65d5dec20c9b6d3ef7326b45a8a85b032ca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:16:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b37cdda7f55abae2c2586d17bcf689987f659606e34d598d890733fb0243e17-merged.mount: Deactivated successfully.
Oct 11 04:16:33 compute-0 podman[294599]: 2025-10-11 04:16:33.857804417 +0000 UTC m=+0.214213827 container remove 57940ca7a364001dd2ba9a1cdadf65d5dec20c9b6d3ef7326b45a8a85b032ca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hawking, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 04:16:33 compute-0 systemd[1]: libpod-conmon-57940ca7a364001dd2ba9a1cdadf65d5dec20c9b6d3ef7326b45a8a85b032ca7.scope: Deactivated successfully.
Oct 11 04:16:34 compute-0 podman[294639]: 2025-10-11 04:16:34.105525106 +0000 UTC m=+0.070192665 container create bb4d8c574a8808d5e9fcdb370fcce31778cfa344db0a502111b64f417d83934b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_easley, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:16:34 compute-0 systemd[1]: Started libpod-conmon-bb4d8c574a8808d5e9fcdb370fcce31778cfa344db0a502111b64f417d83934b.scope.
Oct 11 04:16:34 compute-0 podman[294639]: 2025-10-11 04:16:34.077175651 +0000 UTC m=+0.041843270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:16:34 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:16:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1834e71955fa27dea6b2fdea4740ca5fd7a1d6554f9e008b0f0eb97154ea45de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:16:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1834e71955fa27dea6b2fdea4740ca5fd7a1d6554f9e008b0f0eb97154ea45de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:16:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1834e71955fa27dea6b2fdea4740ca5fd7a1d6554f9e008b0f0eb97154ea45de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:16:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1834e71955fa27dea6b2fdea4740ca5fd7a1d6554f9e008b0f0eb97154ea45de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:16:34 compute-0 podman[294639]: 2025-10-11 04:16:34.225211017 +0000 UTC m=+0.189878576 container init bb4d8c574a8808d5e9fcdb370fcce31778cfa344db0a502111b64f417d83934b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_easley, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Oct 11 04:16:34 compute-0 podman[294639]: 2025-10-11 04:16:34.239119532 +0000 UTC m=+0.203787071 container start bb4d8c574a8808d5e9fcdb370fcce31778cfa344db0a502111b64f417d83934b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 04:16:34 compute-0 podman[294639]: 2025-10-11 04:16:34.242850388 +0000 UTC m=+0.207517967 container attach bb4d8c574a8808d5e9fcdb370fcce31778cfa344db0a502111b64f417d83934b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_easley, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:16:34 compute-0 ceph-mon[74273]: pgmap v1582: 305 pgs: 305 active+clean; 368 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 68 KiB/s wr, 107 op/s
Oct 11 04:16:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:16:35 compute-0 crazy_easley[294655]: {
Oct 11 04:16:35 compute-0 crazy_easley[294655]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 04:16:35 compute-0 crazy_easley[294655]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:16:35 compute-0 crazy_easley[294655]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:16:35 compute-0 crazy_easley[294655]:         "osd_id": 1,
Oct 11 04:16:35 compute-0 crazy_easley[294655]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:16:35 compute-0 crazy_easley[294655]:         "type": "bluestore"
Oct 11 04:16:35 compute-0 crazy_easley[294655]:     },
Oct 11 04:16:35 compute-0 crazy_easley[294655]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 04:16:35 compute-0 crazy_easley[294655]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:16:35 compute-0 crazy_easley[294655]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:16:35 compute-0 crazy_easley[294655]:         "osd_id": 2,
Oct 11 04:16:35 compute-0 crazy_easley[294655]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:16:35 compute-0 crazy_easley[294655]:         "type": "bluestore"
Oct 11 04:16:35 compute-0 crazy_easley[294655]:     },
Oct 11 04:16:35 compute-0 crazy_easley[294655]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 04:16:35 compute-0 crazy_easley[294655]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:16:35 compute-0 crazy_easley[294655]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:16:35 compute-0 crazy_easley[294655]:         "osd_id": 0,
Oct 11 04:16:35 compute-0 crazy_easley[294655]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:16:35 compute-0 crazy_easley[294655]:         "type": "bluestore"
Oct 11 04:16:35 compute-0 crazy_easley[294655]:     }
Oct 11 04:16:35 compute-0 crazy_easley[294655]: }
Oct 11 04:16:35 compute-0 systemd[1]: libpod-bb4d8c574a8808d5e9fcdb370fcce31778cfa344db0a502111b64f417d83934b.scope: Deactivated successfully.
Oct 11 04:16:35 compute-0 podman[294639]: 2025-10-11 04:16:35.383979554 +0000 UTC m=+1.348647113 container died bb4d8c574a8808d5e9fcdb370fcce31778cfa344db0a502111b64f417d83934b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_easley, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:16:35 compute-0 systemd[1]: libpod-bb4d8c574a8808d5e9fcdb370fcce31778cfa344db0a502111b64f417d83934b.scope: Consumed 1.146s CPU time.
Oct 11 04:16:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-1834e71955fa27dea6b2fdea4740ca5fd7a1d6554f9e008b0f0eb97154ea45de-merged.mount: Deactivated successfully.
Oct 11 04:16:35 compute-0 podman[294639]: 2025-10-11 04:16:35.452130191 +0000 UTC m=+1.416797760 container remove bb4d8c574a8808d5e9fcdb370fcce31778cfa344db0a502111b64f417d83934b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_easley, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:16:35 compute-0 systemd[1]: libpod-conmon-bb4d8c574a8808d5e9fcdb370fcce31778cfa344db0a502111b64f417d83934b.scope: Deactivated successfully.
Oct 11 04:16:35 compute-0 sudo[294534]: pam_unix(sudo:session): session closed for user root
Oct 11 04:16:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:16:35 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:16:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:16:35 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:16:35 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 0cdb23fb-aafe-4767-8fc5-840599684b04 does not exist
Oct 11 04:16:35 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 33cd982d-43b1-43dd-8f88-bd4445a17378 does not exist
Oct 11 04:16:35 compute-0 sudo[294700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:16:35 compute-0 sudo[294700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:16:35 compute-0 sudo[294700]: pam_unix(sudo:session): session closed for user root
Oct 11 04:16:35 compute-0 sudo[294725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:16:35 compute-0 sudo[294725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:16:35 compute-0 sudo[294725]: pam_unix(sudo:session): session closed for user root
Oct 11 04:16:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1583: 305 pgs: 305 active+clean; 368 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 405 KiB/s rd, 21 KiB/s wr, 36 op/s
Oct 11 04:16:36 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:16:36 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:16:36 compute-0 ceph-mon[74273]: pgmap v1583: 305 pgs: 305 active+clean; 368 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 405 KiB/s rd, 21 KiB/s wr, 36 op/s
Oct 11 04:16:37 compute-0 nova_compute[259850]: 2025-10-11 04:16:37.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1584: 305 pgs: 305 active+clean; 368 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 405 KiB/s rd, 21 KiB/s wr, 36 op/s
Oct 11 04:16:37 compute-0 ovn_controller[152025]: 2025-10-11T04:16:37Z|00042|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.6 does not match offer 10.100.0.14
Oct 11 04:16:37 compute-0 ovn_controller[152025]: 2025-10-11T04:16:37Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:05:f8:44 10.100.0.14
Oct 11 04:16:38 compute-0 ovn_controller[152025]: 2025-10-11T04:16:38Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:05:f8:44 10.100.0.14
Oct 11 04:16:38 compute-0 ovn_controller[152025]: 2025-10-11T04:16:38Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:05:f8:44 10.100.0.14
Oct 11 04:16:38 compute-0 nova_compute[259850]: 2025-10-11 04:16:38.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:38 compute-0 nova_compute[259850]: 2025-10-11 04:16:38.346 2 DEBUG oslo_concurrency.lockutils [None req-22227eb6-c9f2-4619-8668-734bb5c2021d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "b19922f4-8c6a-4465-8051-c33652138fd9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:38 compute-0 nova_compute[259850]: 2025-10-11 04:16:38.347 2 DEBUG oslo_concurrency.lockutils [None req-22227eb6-c9f2-4619-8668-734bb5c2021d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "b19922f4-8c6a-4465-8051-c33652138fd9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:38 compute-0 nova_compute[259850]: 2025-10-11 04:16:38.348 2 DEBUG oslo_concurrency.lockutils [None req-22227eb6-c9f2-4619-8668-734bb5c2021d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "b19922f4-8c6a-4465-8051-c33652138fd9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:38 compute-0 nova_compute[259850]: 2025-10-11 04:16:38.348 2 DEBUG oslo_concurrency.lockutils [None req-22227eb6-c9f2-4619-8668-734bb5c2021d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "b19922f4-8c6a-4465-8051-c33652138fd9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:38 compute-0 nova_compute[259850]: 2025-10-11 04:16:38.349 2 DEBUG oslo_concurrency.lockutils [None req-22227eb6-c9f2-4619-8668-734bb5c2021d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "b19922f4-8c6a-4465-8051-c33652138fd9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:38 compute-0 nova_compute[259850]: 2025-10-11 04:16:38.351 2 INFO nova.compute.manager [None req-22227eb6-c9f2-4619-8668-734bb5c2021d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Terminating instance
Oct 11 04:16:38 compute-0 nova_compute[259850]: 2025-10-11 04:16:38.353 2 DEBUG nova.compute.manager [None req-22227eb6-c9f2-4619-8668-734bb5c2021d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:16:38 compute-0 kernel: tapa0bc9537-bb (unregistering): left promiscuous mode
Oct 11 04:16:38 compute-0 NetworkManager[44920]: <info>  [1760156198.4183] device (tapa0bc9537-bb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:16:38 compute-0 nova_compute[259850]: 2025-10-11 04:16:38.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:38 compute-0 ovn_controller[152025]: 2025-10-11T04:16:38Z|00210|binding|INFO|Releasing lport a0bc9537-bbc3-4bb6-9d95-a11aeb47b514 from this chassis (sb_readonly=0)
Oct 11 04:16:38 compute-0 ovn_controller[152025]: 2025-10-11T04:16:38Z|00211|binding|INFO|Setting lport a0bc9537-bbc3-4bb6-9d95-a11aeb47b514 down in Southbound
Oct 11 04:16:38 compute-0 ovn_controller[152025]: 2025-10-11T04:16:38Z|00212|binding|INFO|Removing iface tapa0bc9537-bb ovn-installed in OVS
Oct 11 04:16:38 compute-0 nova_compute[259850]: 2025-10-11 04:16:38.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:38.447 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:79:a8 10.100.0.3'], port_security=['fa:16:3e:aa:79:a8 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b19922f4-8c6a-4465-8051-c33652138fd9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09ba33ef4bd447699d74946c58839b2d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ffb3c2f8-c470-4ea8-b009-8568480a2510', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27b77226-c1f8-485e-969b-bae9a3bf7ceb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=a0bc9537-bbc3-4bb6-9d95-a11aeb47b514) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:16:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:38.449 161902 INFO neutron.agent.ovn.metadata.agent [-] Port a0bc9537-bbc3-4bb6-9d95-a11aeb47b514 in datapath b6cd64a2-af0b-4f57-b84c-cbc9cde5251d unbound from our chassis
Oct 11 04:16:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:38.452 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b6cd64a2-af0b-4f57-b84c-cbc9cde5251d
Oct 11 04:16:38 compute-0 nova_compute[259850]: 2025-10-11 04:16:38.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:38.477 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[be7bf38c-5ead-4fdf-bce5-0665853f7149]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:38 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000014.scope: Deactivated successfully.
Oct 11 04:16:38 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000014.scope: Consumed 14.605s CPU time.
Oct 11 04:16:38 compute-0 systemd-machined[214869]: Machine qemu-20-instance-00000014 terminated.
Oct 11 04:16:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:38.528 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[49a3f70d-c100-482e-9485-df1f8e189746]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:38.533 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[f262a444-61de-41d1-825e-83adfa2bd383]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:38.580 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[2104fad9-442e-4797-af63-533389e92263]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:38 compute-0 nova_compute[259850]: 2025-10-11 04:16:38.602 2 INFO nova.virt.libvirt.driver [-] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Instance destroyed successfully.
Oct 11 04:16:38 compute-0 nova_compute[259850]: 2025-10-11 04:16:38.602 2 DEBUG nova.objects.instance [None req-22227eb6-c9f2-4619-8668-734bb5c2021d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lazy-loading 'resources' on Instance uuid b19922f4-8c6a-4465-8051-c33652138fd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:16:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:38.614 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[db47e30c-26b6-4374-93c5-b4c7b78574fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6cd64a2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:9f:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451053, 'reachable_time': 43256, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294767, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:38 compute-0 nova_compute[259850]: 2025-10-11 04:16:38.616 2 DEBUG nova.virt.libvirt.vif [None req-22227eb6-c9f2-4619-8668-734bb5c2021d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:15:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-1999044293',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-1999044293',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-1999044293',id=20,image_ref='4414e0e0-7d08-46a8-a7d9-7794d12c96fc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDUxThBUJhvO/gkOYwxW/lPS4n8OMhZe6TOX5ElcKOSryPpXQOKfBpX1K1WckyrkPSMC42WqitbH/2Ksdi9ua2+VFCgI81hDR6lqh2OHDc0/2HOB79NiKWtPVPn3ngNTCQ==',key_name='tempest-keypair-1544766429',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:16:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='09ba33ef4bd447699d74946c58839b2d',ramdisk_id='',reservation_id='r-zs29qwxo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-771726270',image_owner_user_name='tempest-TestVolumeBootPattern-771726270-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-771726270',owner_user_name='tempest-TestVolumeBootPattern-771726270-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:16:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2a330a845d62440c871f80eda2546881',uuid=b19922f4-8c6a-4465-8051-c33652138fd9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a0bc9537-bbc3-4bb6-9d95-a11aeb47b514", "address": "fa:16:3e:aa:79:a8", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0bc9537-bb", "ovs_interfaceid": "a0bc9537-bbc3-4bb6-9d95-a11aeb47b514", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:16:38 compute-0 nova_compute[259850]: 2025-10-11 04:16:38.617 2 DEBUG nova.network.os_vif_util [None req-22227eb6-c9f2-4619-8668-734bb5c2021d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converting VIF {"id": "a0bc9537-bbc3-4bb6-9d95-a11aeb47b514", "address": "fa:16:3e:aa:79:a8", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0bc9537-bb", "ovs_interfaceid": "a0bc9537-bbc3-4bb6-9d95-a11aeb47b514", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:16:38 compute-0 nova_compute[259850]: 2025-10-11 04:16:38.618 2 DEBUG nova.network.os_vif_util [None req-22227eb6-c9f2-4619-8668-734bb5c2021d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:aa:79:a8,bridge_name='br-int',has_traffic_filtering=True,id=a0bc9537-bbc3-4bb6-9d95-a11aeb47b514,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0bc9537-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:16:38 compute-0 nova_compute[259850]: 2025-10-11 04:16:38.619 2 DEBUG os_vif [None req-22227eb6-c9f2-4619-8668-734bb5c2021d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:79:a8,bridge_name='br-int',has_traffic_filtering=True,id=a0bc9537-bbc3-4bb6-9d95-a11aeb47b514,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0bc9537-bb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:16:38 compute-0 nova_compute[259850]: 2025-10-11 04:16:38.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:38 compute-0 nova_compute[259850]: 2025-10-11 04:16:38.623 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0bc9537-bb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:16:38 compute-0 nova_compute[259850]: 2025-10-11 04:16:38.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:38 compute-0 nova_compute[259850]: 2025-10-11 04:16:38.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:38 compute-0 nova_compute[259850]: 2025-10-11 04:16:38.687 2 INFO os_vif [None req-22227eb6-c9f2-4619-8668-734bb5c2021d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:79:a8,bridge_name='br-int',has_traffic_filtering=True,id=a0bc9537-bbc3-4bb6-9d95-a11aeb47b514,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0bc9537-bb')
Oct 11 04:16:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:38.687 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[d4731f4e-6c51-463e-bfeb-a373e3277040]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb6cd64a2-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451065, 'tstamp': 451065}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294774, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb6cd64a2-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451069, 'tstamp': 451069}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294774, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:38.690 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6cd64a2-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:16:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:38.696 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6cd64a2-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:16:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:38.696 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:16:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:38.696 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb6cd64a2-a0, col_values=(('external_ids', {'iface-id': 'c2cbaf15-a50c-40b8-9f65-12b11618e7fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:16:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:38.697 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:16:38 compute-0 nova_compute[259850]: 2025-10-11 04:16:38.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:38 compute-0 ceph-mon[74273]: pgmap v1584: 305 pgs: 305 active+clean; 368 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 405 KiB/s rd, 21 KiB/s wr, 36 op/s
Oct 11 04:16:38 compute-0 nova_compute[259850]: 2025-10-11 04:16:38.934 2 INFO nova.virt.libvirt.driver [None req-22227eb6-c9f2-4619-8668-734bb5c2021d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Deleting instance files /var/lib/nova/instances/b19922f4-8c6a-4465-8051-c33652138fd9_del
Oct 11 04:16:38 compute-0 nova_compute[259850]: 2025-10-11 04:16:38.935 2 INFO nova.virt.libvirt.driver [None req-22227eb6-c9f2-4619-8668-734bb5c2021d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Deletion of /var/lib/nova/instances/b19922f4-8c6a-4465-8051-c33652138fd9_del complete
Oct 11 04:16:38 compute-0 nova_compute[259850]: 2025-10-11 04:16:38.996 2 INFO nova.compute.manager [None req-22227eb6-c9f2-4619-8668-734bb5c2021d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Took 0.64 seconds to destroy the instance on the hypervisor.
Oct 11 04:16:38 compute-0 nova_compute[259850]: 2025-10-11 04:16:38.997 2 DEBUG oslo.service.loopingcall [None req-22227eb6-c9f2-4619-8668-734bb5c2021d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:16:38 compute-0 nova_compute[259850]: 2025-10-11 04:16:38.997 2 DEBUG nova.compute.manager [-] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:16:38 compute-0 nova_compute[259850]: 2025-10-11 04:16:38.998 2 DEBUG nova.network.neutron [-] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:16:39 compute-0 nova_compute[259850]: 2025-10-11 04:16:39.074 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:16:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 368 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 604 KiB/s rd, 35 KiB/s wr, 47 op/s
Oct 11 04:16:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:16:40 compute-0 nova_compute[259850]: 2025-10-11 04:16:40.545 2 DEBUG nova.compute.manager [req-5fc1b876-46b0-400d-9d70-1ae397404f77 req-4d86b756-5bbf-404f-b80e-a58f35fdd1f3 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Received event network-vif-unplugged-a0bc9537-bbc3-4bb6-9d95-a11aeb47b514 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:16:40 compute-0 nova_compute[259850]: 2025-10-11 04:16:40.545 2 DEBUG oslo_concurrency.lockutils [req-5fc1b876-46b0-400d-9d70-1ae397404f77 req-4d86b756-5bbf-404f-b80e-a58f35fdd1f3 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "b19922f4-8c6a-4465-8051-c33652138fd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:40 compute-0 nova_compute[259850]: 2025-10-11 04:16:40.546 2 DEBUG oslo_concurrency.lockutils [req-5fc1b876-46b0-400d-9d70-1ae397404f77 req-4d86b756-5bbf-404f-b80e-a58f35fdd1f3 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "b19922f4-8c6a-4465-8051-c33652138fd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:40 compute-0 nova_compute[259850]: 2025-10-11 04:16:40.546 2 DEBUG oslo_concurrency.lockutils [req-5fc1b876-46b0-400d-9d70-1ae397404f77 req-4d86b756-5bbf-404f-b80e-a58f35fdd1f3 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "b19922f4-8c6a-4465-8051-c33652138fd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:40 compute-0 nova_compute[259850]: 2025-10-11 04:16:40.546 2 DEBUG nova.compute.manager [req-5fc1b876-46b0-400d-9d70-1ae397404f77 req-4d86b756-5bbf-404f-b80e-a58f35fdd1f3 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] No waiting events found dispatching network-vif-unplugged-a0bc9537-bbc3-4bb6-9d95-a11aeb47b514 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:16:40 compute-0 nova_compute[259850]: 2025-10-11 04:16:40.547 2 DEBUG nova.compute.manager [req-5fc1b876-46b0-400d-9d70-1ae397404f77 req-4d86b756-5bbf-404f-b80e-a58f35fdd1f3 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Received event network-vif-unplugged-a0bc9537-bbc3-4bb6-9d95-a11aeb47b514 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 04:16:40 compute-0 nova_compute[259850]: 2025-10-11 04:16:40.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:40 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:40.808 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:61:6f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '92:f1:b6:e4:f1:16'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:16:40 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:40.809 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:16:40 compute-0 ceph-mon[74273]: pgmap v1585: 305 pgs: 305 active+clean; 368 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 604 KiB/s rd, 35 KiB/s wr, 47 op/s
Oct 11 04:16:40 compute-0 nova_compute[259850]: 2025-10-11 04:16:40.990 2 DEBUG nova.network.neutron [-] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:16:41 compute-0 nova_compute[259850]: 2025-10-11 04:16:41.014 2 INFO nova.compute.manager [-] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Took 2.02 seconds to deallocate network for instance.
Oct 11 04:16:41 compute-0 nova_compute[259850]: 2025-10-11 04:16:41.242 2 INFO nova.compute.manager [None req-22227eb6-c9f2-4619-8668-734bb5c2021d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Took 0.23 seconds to detach 1 volumes for instance.
Oct 11 04:16:41 compute-0 nova_compute[259850]: 2025-10-11 04:16:41.245 2 DEBUG nova.compute.manager [None req-22227eb6-c9f2-4619-8668-734bb5c2021d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Deleting volume: 2d69b9c1-92be-4e87-b166-e1c5b2e5f688 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Oct 11 04:16:41 compute-0 nova_compute[259850]: 2025-10-11 04:16:41.432 2 DEBUG oslo_concurrency.lockutils [None req-22227eb6-c9f2-4619-8668-734bb5c2021d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:41 compute-0 nova_compute[259850]: 2025-10-11 04:16:41.433 2 DEBUG oslo_concurrency.lockutils [None req-22227eb6-c9f2-4619-8668-734bb5c2021d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:41 compute-0 nova_compute[259850]: 2025-10-11 04:16:41.536 2 DEBUG oslo_concurrency.processutils [None req-22227eb6-c9f2-4619-8668-734bb5c2021d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:16:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1586: 305 pgs: 305 active+clean; 368 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 604 KiB/s rd, 22 KiB/s wr, 45 op/s
Oct 11 04:16:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:41.811 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8a473e03-2208-47ae-afcd-05ad744a5969, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:16:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:16:41 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/113558243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:16:41 compute-0 nova_compute[259850]: 2025-10-11 04:16:41.959 2 DEBUG oslo_concurrency.processutils [None req-22227eb6-c9f2-4619-8668-734bb5c2021d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:16:41 compute-0 nova_compute[259850]: 2025-10-11 04:16:41.969 2 DEBUG nova.compute.provider_tree [None req-22227eb6-c9f2-4619-8668-734bb5c2021d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:16:41 compute-0 nova_compute[259850]: 2025-10-11 04:16:41.986 2 DEBUG nova.scheduler.client.report [None req-22227eb6-c9f2-4619-8668-734bb5c2021d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:16:42 compute-0 nova_compute[259850]: 2025-10-11 04:16:42.004 2 DEBUG oslo_concurrency.lockutils [None req-22227eb6-c9f2-4619-8668-734bb5c2021d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:42 compute-0 nova_compute[259850]: 2025-10-11 04:16:42.038 2 INFO nova.scheduler.client.report [None req-22227eb6-c9f2-4619-8668-734bb5c2021d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Deleted allocations for instance b19922f4-8c6a-4465-8051-c33652138fd9
Oct 11 04:16:42 compute-0 nova_compute[259850]: 2025-10-11 04:16:42.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:16:42 compute-0 nova_compute[259850]: 2025-10-11 04:16:42.059 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 11 04:16:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:16:42 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2971946823' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:16:42 compute-0 nova_compute[259850]: 2025-10-11 04:16:42.112 2 DEBUG oslo_concurrency.lockutils [None req-22227eb6-c9f2-4619-8668-734bb5c2021d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "b19922f4-8c6a-4465-8051-c33652138fd9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:16:42 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2971946823' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:16:42 compute-0 nova_compute[259850]: 2025-10-11 04:16:42.627 2 DEBUG nova.compute.manager [req-1ecd53b6-3386-48b1-b3e8-de49ddd627be req-4a1b9dda-46b5-4d7d-b349-491a9eb0904b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Received event network-vif-plugged-a0bc9537-bbc3-4bb6-9d95-a11aeb47b514 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:16:42 compute-0 nova_compute[259850]: 2025-10-11 04:16:42.628 2 DEBUG oslo_concurrency.lockutils [req-1ecd53b6-3386-48b1-b3e8-de49ddd627be req-4a1b9dda-46b5-4d7d-b349-491a9eb0904b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "b19922f4-8c6a-4465-8051-c33652138fd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:42 compute-0 nova_compute[259850]: 2025-10-11 04:16:42.629 2 DEBUG oslo_concurrency.lockutils [req-1ecd53b6-3386-48b1-b3e8-de49ddd627be req-4a1b9dda-46b5-4d7d-b349-491a9eb0904b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "b19922f4-8c6a-4465-8051-c33652138fd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:42 compute-0 nova_compute[259850]: 2025-10-11 04:16:42.629 2 DEBUG oslo_concurrency.lockutils [req-1ecd53b6-3386-48b1-b3e8-de49ddd627be req-4a1b9dda-46b5-4d7d-b349-491a9eb0904b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "b19922f4-8c6a-4465-8051-c33652138fd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:42 compute-0 nova_compute[259850]: 2025-10-11 04:16:42.629 2 DEBUG nova.compute.manager [req-1ecd53b6-3386-48b1-b3e8-de49ddd627be req-4a1b9dda-46b5-4d7d-b349-491a9eb0904b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] No waiting events found dispatching network-vif-plugged-a0bc9537-bbc3-4bb6-9d95-a11aeb47b514 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:16:42 compute-0 nova_compute[259850]: 2025-10-11 04:16:42.630 2 WARNING nova.compute.manager [req-1ecd53b6-3386-48b1-b3e8-de49ddd627be req-4a1b9dda-46b5-4d7d-b349-491a9eb0904b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Received unexpected event network-vif-plugged-a0bc9537-bbc3-4bb6-9d95-a11aeb47b514 for instance with vm_state deleted and task_state None.
Oct 11 04:16:42 compute-0 nova_compute[259850]: 2025-10-11 04:16:42.630 2 DEBUG nova.compute.manager [req-1ecd53b6-3386-48b1-b3e8-de49ddd627be req-4a1b9dda-46b5-4d7d-b349-491a9eb0904b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Received event network-vif-deleted-a0bc9537-bbc3-4bb6-9d95-a11aeb47b514 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:16:42 compute-0 ceph-mon[74273]: pgmap v1586: 305 pgs: 305 active+clean; 368 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 604 KiB/s rd, 22 KiB/s wr, 45 op/s
Oct 11 04:16:42 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/113558243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:16:42 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2971946823' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:16:42 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2971946823' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:16:43 compute-0 nova_compute[259850]: 2025-10-11 04:16:43.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:43 compute-0 nova_compute[259850]: 2025-10-11 04:16:43.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1587: 305 pgs: 305 active+clean; 350 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 627 KiB/s rd, 23 KiB/s wr, 77 op/s
Oct 11 04:16:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e385 do_prune osdmap full prune enabled
Oct 11 04:16:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e386 e386: 3 total, 3 up, 3 in
Oct 11 04:16:43 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e386: 3 total, 3 up, 3 in
Oct 11 04:16:44 compute-0 nova_compute[259850]: 2025-10-11 04:16:44.791 2 DEBUG oslo_concurrency.lockutils [None req-4d562263-e95c-4823-a1ea-1a9aca7f6d58 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "f4568c68-41ba-4de0-a607-76bf5907f37c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:44 compute-0 nova_compute[259850]: 2025-10-11 04:16:44.791 2 DEBUG oslo_concurrency.lockutils [None req-4d562263-e95c-4823-a1ea-1a9aca7f6d58 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "f4568c68-41ba-4de0-a607-76bf5907f37c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:44 compute-0 nova_compute[259850]: 2025-10-11 04:16:44.792 2 DEBUG oslo_concurrency.lockutils [None req-4d562263-e95c-4823-a1ea-1a9aca7f6d58 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "f4568c68-41ba-4de0-a607-76bf5907f37c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:44 compute-0 nova_compute[259850]: 2025-10-11 04:16:44.792 2 DEBUG oslo_concurrency.lockutils [None req-4d562263-e95c-4823-a1ea-1a9aca7f6d58 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "f4568c68-41ba-4de0-a607-76bf5907f37c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:44 compute-0 nova_compute[259850]: 2025-10-11 04:16:44.793 2 DEBUG oslo_concurrency.lockutils [None req-4d562263-e95c-4823-a1ea-1a9aca7f6d58 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "f4568c68-41ba-4de0-a607-76bf5907f37c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:44 compute-0 nova_compute[259850]: 2025-10-11 04:16:44.795 2 INFO nova.compute.manager [None req-4d562263-e95c-4823-a1ea-1a9aca7f6d58 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Terminating instance
Oct 11 04:16:44 compute-0 nova_compute[259850]: 2025-10-11 04:16:44.797 2 DEBUG nova.compute.manager [None req-4d562263-e95c-4823-a1ea-1a9aca7f6d58 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:16:44 compute-0 ceph-mon[74273]: pgmap v1587: 305 pgs: 305 active+clean; 350 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 627 KiB/s rd, 23 KiB/s wr, 77 op/s
Oct 11 04:16:44 compute-0 ceph-mon[74273]: osdmap e386: 3 total, 3 up, 3 in
Oct 11 04:16:44 compute-0 kernel: tap7a1af6b7-a4 (unregistering): left promiscuous mode
Oct 11 04:16:44 compute-0 NetworkManager[44920]: <info>  [1760156204.8615] device (tap7a1af6b7-a4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:16:44 compute-0 ovn_controller[152025]: 2025-10-11T04:16:44Z|00213|binding|INFO|Releasing lport 7a1af6b7-a442-4ea8-beca-2843ffb42e3c from this chassis (sb_readonly=0)
Oct 11 04:16:44 compute-0 ovn_controller[152025]: 2025-10-11T04:16:44Z|00214|binding|INFO|Setting lport 7a1af6b7-a442-4ea8-beca-2843ffb42e3c down in Southbound
Oct 11 04:16:44 compute-0 nova_compute[259850]: 2025-10-11 04:16:44.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:44 compute-0 ovn_controller[152025]: 2025-10-11T04:16:44Z|00215|binding|INFO|Removing iface tap7a1af6b7-a4 ovn-installed in OVS
Oct 11 04:16:44 compute-0 nova_compute[259850]: 2025-10-11 04:16:44.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:44 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:44.881 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:e3:04 10.100.0.6'], port_security=['fa:16:3e:e8:e3:04 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f4568c68-41ba-4de0-a607-76bf5907f37c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09ba33ef4bd447699d74946c58839b2d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3ee4b1ef-419d-44da-a657-f91e5ccf3725', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27b77226-c1f8-485e-969b-bae9a3bf7ceb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=7a1af6b7-a442-4ea8-beca-2843ffb42e3c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:16:44 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:44.883 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 7a1af6b7-a442-4ea8-beca-2843ffb42e3c in datapath b6cd64a2-af0b-4f57-b84c-cbc9cde5251d unbound from our chassis
Oct 11 04:16:44 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:44.886 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:16:44 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:44.892 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[d7477df6-25c6-4a6f-ae8f-3b9fa59d0ab1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:44 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:44.893 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d namespace which is not needed anymore
Oct 11 04:16:44 compute-0 nova_compute[259850]: 2025-10-11 04:16:44.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:44 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Deactivated successfully.
Oct 11 04:16:44 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Consumed 15.982s CPU time.
Oct 11 04:16:44 compute-0 systemd-machined[214869]: Machine qemu-18-instance-00000012 terminated.
Oct 11 04:16:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:16:45 compute-0 podman[294819]: 2025-10-11 04:16:45.015225521 +0000 UTC m=+0.096150773 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 11 04:16:45 compute-0 podman[294818]: 2025-10-11 04:16:45.015248162 +0000 UTC m=+0.096888435 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 11 04:16:45 compute-0 kernel: tap7a1af6b7-a4: entered promiscuous mode
Oct 11 04:16:45 compute-0 systemd-udevd[294843]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:16:45 compute-0 NetworkManager[44920]: <info>  [1760156205.0233] manager: (tap7a1af6b7-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/113)
Oct 11 04:16:45 compute-0 kernel: tap7a1af6b7-a4 (unregistering): left promiscuous mode
Oct 11 04:16:45 compute-0 nova_compute[259850]: 2025-10-11 04:16:45.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:45 compute-0 ovn_controller[152025]: 2025-10-11T04:16:45Z|00216|binding|INFO|Claiming lport 7a1af6b7-a442-4ea8-beca-2843ffb42e3c for this chassis.
Oct 11 04:16:45 compute-0 ovn_controller[152025]: 2025-10-11T04:16:45Z|00217|binding|INFO|7a1af6b7-a442-4ea8-beca-2843ffb42e3c: Claiming fa:16:3e:e8:e3:04 10.100.0.6
Oct 11 04:16:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:45.035 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:e3:04 10.100.0.6'], port_security=['fa:16:3e:e8:e3:04 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f4568c68-41ba-4de0-a607-76bf5907f37c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09ba33ef4bd447699d74946c58839b2d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3ee4b1ef-419d-44da-a657-f91e5ccf3725', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27b77226-c1f8-485e-969b-bae9a3bf7ceb, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=7a1af6b7-a442-4ea8-beca-2843ffb42e3c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:16:45 compute-0 nova_compute[259850]: 2025-10-11 04:16:45.050 2 INFO nova.virt.libvirt.driver [-] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Instance destroyed successfully.
Oct 11 04:16:45 compute-0 nova_compute[259850]: 2025-10-11 04:16:45.051 2 DEBUG nova.objects.instance [None req-4d562263-e95c-4823-a1ea-1a9aca7f6d58 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lazy-loading 'resources' on Instance uuid f4568c68-41ba-4de0-a607-76bf5907f37c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:16:45 compute-0 nova_compute[259850]: 2025-10-11 04:16:45.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:45 compute-0 ovn_controller[152025]: 2025-10-11T04:16:45Z|00218|binding|INFO|Releasing lport 7a1af6b7-a442-4ea8-beca-2843ffb42e3c from this chassis (sb_readonly=0)
Oct 11 04:16:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:45.060 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:e3:04 10.100.0.6'], port_security=['fa:16:3e:e8:e3:04 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f4568c68-41ba-4de0-a607-76bf5907f37c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09ba33ef4bd447699d74946c58839b2d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3ee4b1ef-419d-44da-a657-f91e5ccf3725', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27b77226-c1f8-485e-969b-bae9a3bf7ceb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=7a1af6b7-a442-4ea8-beca-2843ffb42e3c) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:16:45 compute-0 nova_compute[259850]: 2025-10-11 04:16:45.069 2 DEBUG nova.virt.libvirt.vif [None req-4d562263-e95c-4823-a1ea-1a9aca7f6d58 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:15:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-735375716',display_name='tempest-TestVolumeBootPattern-volume-backed-server-735375716',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-735375716',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJlNFjANMenAUMjm3c+Yt/pV1YDteEbOrKj8pDNXp+AZ2bzyNSZQdsoCOqS2FJ+bZXXJhyzIuhHoqTJa3/aEXpu3IGJyP1VFFF028Wsjb+CD09ZVWGqe9jlbmQCXenrv1g==',key_name='tempest-keypair-338328634',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:15:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='09ba33ef4bd447699d74946c58839b2d',ramdisk_id='',reservation_id='r-ggvrzzwl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-771726270',owner_user_name='tempest-TestVolumeBootPattern-771726270-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:15:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2a330a845d62440c871f80eda2546881',uuid=f4568c68-41ba-4de0-a607-76bf5907f37c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7a1af6b7-a442-4ea8-beca-2843ffb42e3c", "address": "fa:16:3e:e8:e3:04", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, 
"meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a1af6b7-a4", "ovs_interfaceid": "7a1af6b7-a442-4ea8-beca-2843ffb42e3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:16:45 compute-0 nova_compute[259850]: 2025-10-11 04:16:45.069 2 DEBUG nova.network.os_vif_util [None req-4d562263-e95c-4823-a1ea-1a9aca7f6d58 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converting VIF {"id": "7a1af6b7-a442-4ea8-beca-2843ffb42e3c", "address": "fa:16:3e:e8:e3:04", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a1af6b7-a4", "ovs_interfaceid": "7a1af6b7-a442-4ea8-beca-2843ffb42e3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:16:45 compute-0 nova_compute[259850]: 2025-10-11 04:16:45.070 2 DEBUG nova.network.os_vif_util [None req-4d562263-e95c-4823-a1ea-1a9aca7f6d58 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e8:e3:04,bridge_name='br-int',has_traffic_filtering=True,id=7a1af6b7-a442-4ea8-beca-2843ffb42e3c,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a1af6b7-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:16:45 compute-0 nova_compute[259850]: 2025-10-11 04:16:45.071 2 DEBUG os_vif [None req-4d562263-e95c-4823-a1ea-1a9aca7f6d58 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:e3:04,bridge_name='br-int',has_traffic_filtering=True,id=7a1af6b7-a442-4ea8-beca-2843ffb42e3c,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a1af6b7-a4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:16:45 compute-0 nova_compute[259850]: 2025-10-11 04:16:45.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:45 compute-0 nova_compute[259850]: 2025-10-11 04:16:45.072 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a1af6b7-a4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:16:45 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[292616]: [NOTICE]   (292628) : haproxy version is 2.8.14-c23fe91
Oct 11 04:16:45 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[292616]: [NOTICE]   (292628) : path to executable is /usr/sbin/haproxy
Oct 11 04:16:45 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[292616]: [WARNING]  (292628) : Exiting Master process...
Oct 11 04:16:45 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[292616]: [WARNING]  (292628) : Exiting Master process...
Oct 11 04:16:45 compute-0 nova_compute[259850]: 2025-10-11 04:16:45.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:45 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[292616]: [ALERT]    (292628) : Current worker (292630) exited with code 143 (Terminated)
Oct 11 04:16:45 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[292616]: [WARNING]  (292628) : All workers exited. Exiting... (0)
Oct 11 04:16:45 compute-0 nova_compute[259850]: 2025-10-11 04:16:45.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:16:45 compute-0 systemd[1]: libpod-cb732540a8bb9d3b6cdc2a1c2c1e7e379d48b942fe2e5f10b4ac14c661bd9924.scope: Deactivated successfully.
Oct 11 04:16:45 compute-0 nova_compute[259850]: 2025-10-11 04:16:45.079 2 INFO os_vif [None req-4d562263-e95c-4823-a1ea-1a9aca7f6d58 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:e3:04,bridge_name='br-int',has_traffic_filtering=True,id=7a1af6b7-a442-4ea8-beca-2843ffb42e3c,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a1af6b7-a4')
Oct 11 04:16:45 compute-0 podman[294879]: 2025-10-11 04:16:45.086493266 +0000 UTC m=+0.068548749 container died cb732540a8bb9d3b6cdc2a1c2c1e7e379d48b942fe2e5f10b4ac14c661bd9924 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0)
Oct 11 04:16:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cb732540a8bb9d3b6cdc2a1c2c1e7e379d48b942fe2e5f10b4ac14c661bd9924-userdata-shm.mount: Deactivated successfully.
Oct 11 04:16:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4ed434db46fd7c321be10b5d05b995c67259606389f83d2a63bd458183cd84b-merged.mount: Deactivated successfully.
Oct 11 04:16:45 compute-0 podman[294879]: 2025-10-11 04:16:45.117531458 +0000 UTC m=+0.099586931 container cleanup cb732540a8bb9d3b6cdc2a1c2c1e7e379d48b942fe2e5f10b4ac14c661bd9924 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 11 04:16:45 compute-0 systemd[1]: libpod-conmon-cb732540a8bb9d3b6cdc2a1c2c1e7e379d48b942fe2e5f10b4ac14c661bd9924.scope: Deactivated successfully.
Oct 11 04:16:45 compute-0 podman[294931]: 2025-10-11 04:16:45.19361217 +0000 UTC m=+0.049204049 container remove cb732540a8bb9d3b6cdc2a1c2c1e7e379d48b942fe2e5f10b4ac14c661bd9924 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 04:16:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:45.201 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[debddcb4-cbf7-4c02-a4ce-ceb23c8316e1]: (4, ('Sat Oct 11 04:16:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d (cb732540a8bb9d3b6cdc2a1c2c1e7e379d48b942fe2e5f10b4ac14c661bd9924)\ncb732540a8bb9d3b6cdc2a1c2c1e7e379d48b942fe2e5f10b4ac14c661bd9924\nSat Oct 11 04:16:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d (cb732540a8bb9d3b6cdc2a1c2c1e7e379d48b942fe2e5f10b4ac14c661bd9924)\ncb732540a8bb9d3b6cdc2a1c2c1e7e379d48b942fe2e5f10b4ac14c661bd9924\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:45.203 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[aea36c1d-0e6b-4138-8351-99faaf7717cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:45.205 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6cd64a2-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:16:45 compute-0 kernel: tapb6cd64a2-a0: left promiscuous mode
Oct 11 04:16:45 compute-0 nova_compute[259850]: 2025-10-11 04:16:45.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:45.212 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[a0936af4-fa8b-43f6-8719-154ddb2d20ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:45 compute-0 nova_compute[259850]: 2025-10-11 04:16:45.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:45.250 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[d5d80c40-1b4a-472a-838a-a6f5342a36c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:45.252 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[0ad91d22-773b-4a12-9fda-2d1d8c1e75a8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:45 compute-0 nova_compute[259850]: 2025-10-11 04:16:45.255 2 INFO nova.virt.libvirt.driver [None req-4d562263-e95c-4823-a1ea-1a9aca7f6d58 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Deleting instance files /var/lib/nova/instances/f4568c68-41ba-4de0-a607-76bf5907f37c_del
Oct 11 04:16:45 compute-0 nova_compute[259850]: 2025-10-11 04:16:45.256 2 INFO nova.virt.libvirt.driver [None req-4d562263-e95c-4823-a1ea-1a9aca7f6d58 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Deletion of /var/lib/nova/instances/f4568c68-41ba-4de0-a607-76bf5907f37c_del complete
Oct 11 04:16:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:45.274 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[d35a0011-fe49-4bc5-8624-81fec820a60d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451046, 'reachable_time': 35804, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294947, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:45 compute-0 systemd[1]: run-netns-ovnmeta\x2db6cd64a2\x2daf0b\x2d4f57\x2db84c\x2dcbc9cde5251d.mount: Deactivated successfully.
Oct 11 04:16:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:45.277 162015 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 04:16:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:45.278 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[82400309-9ba1-4b8c-9183-a37b47ca1417]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:45.280 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 7a1af6b7-a442-4ea8-beca-2843ffb42e3c in datapath b6cd64a2-af0b-4f57-b84c-cbc9cde5251d unbound from our chassis
Oct 11 04:16:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:45.283 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:16:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:45.284 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[87b827fc-340a-47f4-a17d-357fb5d4ceac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:45.284 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 7a1af6b7-a442-4ea8-beca-2843ffb42e3c in datapath b6cd64a2-af0b-4f57-b84c-cbc9cde5251d unbound from our chassis
Oct 11 04:16:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:45.287 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:16:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:45.287 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[5d4fc52c-6ce3-428d-b65b-9d6fe6f0bb53]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:45 compute-0 nova_compute[259850]: 2025-10-11 04:16:45.299 2 DEBUG nova.compute.manager [req-0e2efc45-8b06-4131-9ac1-8e07cee70174 req-d6391464-e13d-4cf1-8a26-28e035f7763b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Received event network-vif-unplugged-7a1af6b7-a442-4ea8-beca-2843ffb42e3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:16:45 compute-0 nova_compute[259850]: 2025-10-11 04:16:45.299 2 DEBUG oslo_concurrency.lockutils [req-0e2efc45-8b06-4131-9ac1-8e07cee70174 req-d6391464-e13d-4cf1-8a26-28e035f7763b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "f4568c68-41ba-4de0-a607-76bf5907f37c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:45 compute-0 nova_compute[259850]: 2025-10-11 04:16:45.300 2 DEBUG oslo_concurrency.lockutils [req-0e2efc45-8b06-4131-9ac1-8e07cee70174 req-d6391464-e13d-4cf1-8a26-28e035f7763b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "f4568c68-41ba-4de0-a607-76bf5907f37c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:45 compute-0 nova_compute[259850]: 2025-10-11 04:16:45.300 2 DEBUG oslo_concurrency.lockutils [req-0e2efc45-8b06-4131-9ac1-8e07cee70174 req-d6391464-e13d-4cf1-8a26-28e035f7763b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "f4568c68-41ba-4de0-a607-76bf5907f37c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:45 compute-0 nova_compute[259850]: 2025-10-11 04:16:45.301 2 DEBUG nova.compute.manager [req-0e2efc45-8b06-4131-9ac1-8e07cee70174 req-d6391464-e13d-4cf1-8a26-28e035f7763b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] No waiting events found dispatching network-vif-unplugged-7a1af6b7-a442-4ea8-beca-2843ffb42e3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:16:45 compute-0 nova_compute[259850]: 2025-10-11 04:16:45.301 2 DEBUG nova.compute.manager [req-0e2efc45-8b06-4131-9ac1-8e07cee70174 req-d6391464-e13d-4cf1-8a26-28e035f7763b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Received event network-vif-unplugged-7a1af6b7-a442-4ea8-beca-2843ffb42e3c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 04:16:45 compute-0 nova_compute[259850]: 2025-10-11 04:16:45.319 2 INFO nova.compute.manager [None req-4d562263-e95c-4823-a1ea-1a9aca7f6d58 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Took 0.52 seconds to destroy the instance on the hypervisor.
Oct 11 04:16:45 compute-0 nova_compute[259850]: 2025-10-11 04:16:45.320 2 DEBUG oslo.service.loopingcall [None req-4d562263-e95c-4823-a1ea-1a9aca7f6d58 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:16:45 compute-0 nova_compute[259850]: 2025-10-11 04:16:45.321 2 DEBUG nova.compute.manager [-] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:16:45 compute-0 nova_compute[259850]: 2025-10-11 04:16:45.322 2 DEBUG nova.network.neutron [-] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:16:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1589: 305 pgs: 305 active+clean; 350 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 268 KiB/s rd, 18 KiB/s wr, 52 op/s
Oct 11 04:16:46 compute-0 nova_compute[259850]: 2025-10-11 04:16:46.078 2 DEBUG nova.network.neutron [-] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:16:46 compute-0 nova_compute[259850]: 2025-10-11 04:16:46.102 2 INFO nova.compute.manager [-] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Took 0.78 seconds to deallocate network for instance.
Oct 11 04:16:46 compute-0 nova_compute[259850]: 2025-10-11 04:16:46.279 2 INFO nova.compute.manager [None req-4d562263-e95c-4823-a1ea-1a9aca7f6d58 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Took 0.18 seconds to detach 1 volumes for instance.
Oct 11 04:16:46 compute-0 nova_compute[259850]: 2025-10-11 04:16:46.281 2 DEBUG nova.compute.manager [None req-4d562263-e95c-4823-a1ea-1a9aca7f6d58 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Deleting volume: d0a276fd-ac37-4f51-aa93-2a88fc08b739 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Oct 11 04:16:46 compute-0 nova_compute[259850]: 2025-10-11 04:16:46.479 2 DEBUG oslo_concurrency.lockutils [None req-4d562263-e95c-4823-a1ea-1a9aca7f6d58 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:46 compute-0 nova_compute[259850]: 2025-10-11 04:16:46.480 2 DEBUG oslo_concurrency.lockutils [None req-4d562263-e95c-4823-a1ea-1a9aca7f6d58 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:46 compute-0 nova_compute[259850]: 2025-10-11 04:16:46.599 2 DEBUG oslo_concurrency.processutils [None req-4d562263-e95c-4823-a1ea-1a9aca7f6d58 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:16:46 compute-0 ceph-mon[74273]: pgmap v1589: 305 pgs: 305 active+clean; 350 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 268 KiB/s rd, 18 KiB/s wr, 52 op/s
Oct 11 04:16:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:16:47 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/37506244' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:16:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:16:47 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/711596831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:16:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:16:47 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/37506244' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:16:47 compute-0 nova_compute[259850]: 2025-10-11 04:16:47.063 2 DEBUG oslo_concurrency.processutils [None req-4d562263-e95c-4823-a1ea-1a9aca7f6d58 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:16:47 compute-0 nova_compute[259850]: 2025-10-11 04:16:47.072 2 DEBUG nova.compute.provider_tree [None req-4d562263-e95c-4823-a1ea-1a9aca7f6d58 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:16:47 compute-0 nova_compute[259850]: 2025-10-11 04:16:47.102 2 DEBUG nova.scheduler.client.report [None req-4d562263-e95c-4823-a1ea-1a9aca7f6d58 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:16:47 compute-0 nova_compute[259850]: 2025-10-11 04:16:47.125 2 DEBUG oslo_concurrency.lockutils [None req-4d562263-e95c-4823-a1ea-1a9aca7f6d58 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:47 compute-0 nova_compute[259850]: 2025-10-11 04:16:47.145 2 INFO nova.scheduler.client.report [None req-4d562263-e95c-4823-a1ea-1a9aca7f6d58 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Deleted allocations for instance f4568c68-41ba-4de0-a607-76bf5907f37c
Oct 11 04:16:47 compute-0 nova_compute[259850]: 2025-10-11 04:16:47.218 2 DEBUG oslo_concurrency.lockutils [None req-4d562263-e95c-4823-a1ea-1a9aca7f6d58 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "f4568c68-41ba-4de0-a607-76bf5907f37c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:47 compute-0 nova_compute[259850]: 2025-10-11 04:16:47.376 2 DEBUG nova.compute.manager [req-8179ad10-7f55-432b-930e-6e3c3fbf0a4f req-45cd1b7a-7508-4428-a65d-97b2b73b3328 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Received event network-vif-plugged-7a1af6b7-a442-4ea8-beca-2843ffb42e3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:16:47 compute-0 nova_compute[259850]: 2025-10-11 04:16:47.377 2 DEBUG oslo_concurrency.lockutils [req-8179ad10-7f55-432b-930e-6e3c3fbf0a4f req-45cd1b7a-7508-4428-a65d-97b2b73b3328 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "f4568c68-41ba-4de0-a607-76bf5907f37c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:47 compute-0 nova_compute[259850]: 2025-10-11 04:16:47.377 2 DEBUG oslo_concurrency.lockutils [req-8179ad10-7f55-432b-930e-6e3c3fbf0a4f req-45cd1b7a-7508-4428-a65d-97b2b73b3328 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "f4568c68-41ba-4de0-a607-76bf5907f37c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:47 compute-0 nova_compute[259850]: 2025-10-11 04:16:47.378 2 DEBUG oslo_concurrency.lockutils [req-8179ad10-7f55-432b-930e-6e3c3fbf0a4f req-45cd1b7a-7508-4428-a65d-97b2b73b3328 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "f4568c68-41ba-4de0-a607-76bf5907f37c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:47 compute-0 nova_compute[259850]: 2025-10-11 04:16:47.378 2 DEBUG nova.compute.manager [req-8179ad10-7f55-432b-930e-6e3c3fbf0a4f req-45cd1b7a-7508-4428-a65d-97b2b73b3328 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] No waiting events found dispatching network-vif-plugged-7a1af6b7-a442-4ea8-beca-2843ffb42e3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:16:47 compute-0 nova_compute[259850]: 2025-10-11 04:16:47.378 2 WARNING nova.compute.manager [req-8179ad10-7f55-432b-930e-6e3c3fbf0a4f req-45cd1b7a-7508-4428-a65d-97b2b73b3328 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Received unexpected event network-vif-plugged-7a1af6b7-a442-4ea8-beca-2843ffb42e3c for instance with vm_state deleted and task_state None.
Oct 11 04:16:47 compute-0 nova_compute[259850]: 2025-10-11 04:16:47.379 2 DEBUG nova.compute.manager [req-8179ad10-7f55-432b-930e-6e3c3fbf0a4f req-45cd1b7a-7508-4428-a65d-97b2b73b3328 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Received event network-vif-deleted-7a1af6b7-a442-4ea8-beca-2843ffb42e3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:16:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1590: 305 pgs: 305 active+clean; 350 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 268 KiB/s rd, 18 KiB/s wr, 52 op/s
Oct 11 04:16:47 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/37506244' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:16:47 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/711596831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:16:47 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/37506244' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:16:48 compute-0 nova_compute[259850]: 2025-10-11 04:16:48.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:48 compute-0 nova_compute[259850]: 2025-10-11 04:16:48.421 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:16:48 compute-0 nova_compute[259850]: 2025-10-11 04:16:48.440 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Triggering sync for uuid 001be5b3-e842-4242-a6ad-2ccbfa7b39c2 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 04:16:48 compute-0 nova_compute[259850]: 2025-10-11 04:16:48.442 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "001be5b3-e842-4242-a6ad-2ccbfa7b39c2" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:48 compute-0 nova_compute[259850]: 2025-10-11 04:16:48.443 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "001be5b3-e842-4242-a6ad-2ccbfa7b39c2" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:48 compute-0 nova_compute[259850]: 2025-10-11 04:16:48.501 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "001be5b3-e842-4242-a6ad-2ccbfa7b39c2" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:48 compute-0 ceph-mon[74273]: pgmap v1590: 305 pgs: 305 active+clean; 350 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 268 KiB/s rd, 18 KiB/s wr, 52 op/s
Oct 11 04:16:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 305 active+clean; 270 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 19 KiB/s wr, 90 op/s
Oct 11 04:16:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e386 do_prune osdmap full prune enabled
Oct 11 04:16:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e387 e387: 3 total, 3 up, 3 in
Oct 11 04:16:49 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e387: 3 total, 3 up, 3 in
Oct 11 04:16:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:16:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e387 do_prune osdmap full prune enabled
Oct 11 04:16:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e388 e388: 3 total, 3 up, 3 in
Oct 11 04:16:50 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e388: 3 total, 3 up, 3 in
Oct 11 04:16:50 compute-0 nova_compute[259850]: 2025-10-11 04:16:50.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:16:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3357806514' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:16:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:16:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3357806514' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:16:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:16:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:16:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:16:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:16:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:16:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:16:50 compute-0 ceph-mon[74273]: pgmap v1591: 305 pgs: 305 active+clean; 270 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 19 KiB/s wr, 90 op/s
Oct 11 04:16:50 compute-0 ceph-mon[74273]: osdmap e387: 3 total, 3 up, 3 in
Oct 11 04:16:50 compute-0 ceph-mon[74273]: osdmap e388: 3 total, 3 up, 3 in
Oct 11 04:16:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3357806514' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:16:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3357806514' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:16:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1594: 305 pgs: 305 active+clean; 270 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 22 KiB/s wr, 66 op/s
Oct 11 04:16:52 compute-0 ceph-mon[74273]: pgmap v1594: 305 pgs: 305 active+clean; 270 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 22 KiB/s wr, 66 op/s
Oct 11 04:16:53 compute-0 nova_compute[259850]: 2025-10-11 04:16:53.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:53 compute-0 nova_compute[259850]: 2025-10-11 04:16:53.598 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760156198.5963194, b19922f4-8c6a-4465-8051-c33652138fd9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:16:53 compute-0 nova_compute[259850]: 2025-10-11 04:16:53.599 2 INFO nova.compute.manager [-] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] VM Stopped (Lifecycle Event)
Oct 11 04:16:53 compute-0 nova_compute[259850]: 2025-10-11 04:16:53.641 2 DEBUG nova.compute.manager [None req-431d5df5-a076-4583-96e8-f33ccd94d7bc - - - - - -] [instance: b19922f4-8c6a-4465-8051-c33652138fd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:16:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 305 active+clean; 270 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 25 KiB/s wr, 94 op/s
Oct 11 04:16:54 compute-0 ceph-mon[74273]: pgmap v1595: 305 pgs: 305 active+clean; 270 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 25 KiB/s wr, 94 op/s
Oct 11 04:16:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:16:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e388 do_prune osdmap full prune enabled
Oct 11 04:16:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e389 e389: 3 total, 3 up, 3 in
Oct 11 04:16:55 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e389: 3 total, 3 up, 3 in
Oct 11 04:16:55 compute-0 nova_compute[259850]: 2025-10-11 04:16:55.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 270 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 3.5 KiB/s wr, 38 op/s
Oct 11 04:16:56 compute-0 ceph-mon[74273]: osdmap e389: 3 total, 3 up, 3 in
Oct 11 04:16:57 compute-0 ceph-mon[74273]: pgmap v1597: 305 pgs: 305 active+clean; 270 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 3.5 KiB/s wr, 38 op/s
Oct 11 04:16:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1598: 305 pgs: 305 active+clean; 270 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 2.7 KiB/s wr, 29 op/s
Oct 11 04:16:57 compute-0 nova_compute[259850]: 2025-10-11 04:16:57.982 2 DEBUG oslo_concurrency.lockutils [None req-36f9e8b7-ac06-4ff3-86c3-6716d03c660a 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "001be5b3-e842-4242-a6ad-2ccbfa7b39c2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:57 compute-0 nova_compute[259850]: 2025-10-11 04:16:57.982 2 DEBUG oslo_concurrency.lockutils [None req-36f9e8b7-ac06-4ff3-86c3-6716d03c660a 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "001be5b3-e842-4242-a6ad-2ccbfa7b39c2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:57 compute-0 nova_compute[259850]: 2025-10-11 04:16:57.983 2 DEBUG oslo_concurrency.lockutils [None req-36f9e8b7-ac06-4ff3-86c3-6716d03c660a 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "001be5b3-e842-4242-a6ad-2ccbfa7b39c2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:57 compute-0 nova_compute[259850]: 2025-10-11 04:16:57.983 2 DEBUG oslo_concurrency.lockutils [None req-36f9e8b7-ac06-4ff3-86c3-6716d03c660a 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "001be5b3-e842-4242-a6ad-2ccbfa7b39c2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:57 compute-0 nova_compute[259850]: 2025-10-11 04:16:57.983 2 DEBUG oslo_concurrency.lockutils [None req-36f9e8b7-ac06-4ff3-86c3-6716d03c660a 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "001be5b3-e842-4242-a6ad-2ccbfa7b39c2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:57 compute-0 nova_compute[259850]: 2025-10-11 04:16:57.984 2 INFO nova.compute.manager [None req-36f9e8b7-ac06-4ff3-86c3-6716d03c660a 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Terminating instance
Oct 11 04:16:57 compute-0 nova_compute[259850]: 2025-10-11 04:16:57.985 2 DEBUG nova.compute.manager [None req-36f9e8b7-ac06-4ff3-86c3-6716d03c660a 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:16:58 compute-0 kernel: tapb0fef7a6-46 (unregistering): left promiscuous mode
Oct 11 04:16:58 compute-0 NetworkManager[44920]: <info>  [1760156218.0517] device (tapb0fef7a6-46): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:16:58 compute-0 ovn_controller[152025]: 2025-10-11T04:16:58Z|00219|binding|INFO|Releasing lport b0fef7a6-460f-49a1-8586-9008e9d3f648 from this chassis (sb_readonly=0)
Oct 11 04:16:58 compute-0 ovn_controller[152025]: 2025-10-11T04:16:58Z|00220|binding|INFO|Setting lport b0fef7a6-460f-49a1-8586-9008e9d3f648 down in Southbound
Oct 11 04:16:58 compute-0 ovn_controller[152025]: 2025-10-11T04:16:58Z|00221|binding|INFO|Removing iface tapb0fef7a6-46 ovn-installed in OVS
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:58.078 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:f8:44 10.100.0.14'], port_security=['fa:16:3e:05:f8:44 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '001be5b3-e842-4242-a6ad-2ccbfa7b39c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfcc78a613a4442d88231798d10634c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3c69b653-6cff-45f0-9360-306b50c7cbb5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.237'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=756f4bd0-4cbc-4611-9397-52eb34ec09ab, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=b0fef7a6-460f-49a1-8586-9008e9d3f648) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:16:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:58.080 161902 INFO neutron.agent.ovn.metadata.agent [-] Port b0fef7a6-460f-49a1-8586-9008e9d3f648 in datapath 1c86b315-3a4b-4db0-8b3c-39658c19ef9c unbound from our chassis
Oct 11 04:16:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:58.082 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1c86b315-3a4b-4db0-8b3c-39658c19ef9c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:16:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:58.083 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[5ef98a2e-07bb-4411-b8e1-96b21c870c01]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:58.084 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c namespace which is not needed anymore
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:58 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Deactivated successfully.
Oct 11 04:16:58 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Consumed 16.685s CPU time.
Oct 11 04:16:58 compute-0 systemd-machined[214869]: Machine qemu-21-instance-00000015 terminated.
Oct 11 04:16:58 compute-0 podman[294970]: 2025-10-11 04:16:58.215601455 +0000 UTC m=+0.119593927 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.228 2 INFO nova.virt.libvirt.driver [-] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Instance destroyed successfully.
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.229 2 DEBUG nova.objects.instance [None req-36f9e8b7-ac06-4ff3-86c3-6716d03c660a 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lazy-loading 'resources' on Instance uuid 001be5b3-e842-4242-a6ad-2ccbfa7b39c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:16:58 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[293758]: [NOTICE]   (293762) : haproxy version is 2.8.14-c23fe91
Oct 11 04:16:58 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[293758]: [NOTICE]   (293762) : path to executable is /usr/sbin/haproxy
Oct 11 04:16:58 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[293758]: [WARNING]  (293762) : Exiting Master process...
Oct 11 04:16:58 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[293758]: [WARNING]  (293762) : Exiting Master process...
Oct 11 04:16:58 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[293758]: [ALERT]    (293762) : Current worker (293764) exited with code 143 (Terminated)
Oct 11 04:16:58 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[293758]: [WARNING]  (293762) : All workers exited. Exiting... (0)
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.249 2 DEBUG nova.virt.libvirt.vif [None req-36f9e8b7-ac06-4ff3-86c3-6716d03c660a 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:16:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1610846760',display_name='tempest-TransferEncryptedVolumeTest-server-1610846760',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1610846760',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEm9PgkiGfSXOx0o4WQq8AZOWxjh5dTcSz2vccU0Qwona7kINKHr8yu5DCKNDP+0OzTB5mKLuoYtalc5W0loL0xt3InkbNaE80zGvKzG26ntAx/WTjaE+AjoYDpLrsq4bA==',key_name='tempest-TransferEncryptedVolumeTest-726747697',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:16:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bfcc78a613a4442d88231798d10634c9',ramdisk_id='',reservation_id='r-t3w6fxsi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1941581237',owner_user_name='tempest-TransferEncryptedVolumeTest-1941581237-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:16:21Z,user_data=None,user_id='77d11e860ca1460cab1c20bca4d4c0ea',uuid=001be5b3-e842-4242-a6ad-2ccbfa7b39c2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b0fef7a6-460f-49a1-8586-9008e9d3f648", "address": "fa:16:3e:05:f8:44", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0fef7a6-46", "ovs_interfaceid": "b0fef7a6-460f-49a1-8586-9008e9d3f648", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.250 2 DEBUG nova.network.os_vif_util [None req-36f9e8b7-ac06-4ff3-86c3-6716d03c660a 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Converting VIF {"id": "b0fef7a6-460f-49a1-8586-9008e9d3f648", "address": "fa:16:3e:05:f8:44", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0fef7a6-46", "ovs_interfaceid": "b0fef7a6-460f-49a1-8586-9008e9d3f648", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:16:58 compute-0 systemd[1]: libpod-6f26fa6c48d37e37927488cb452029c109a1a59e9cc7cb6cc94426aad094b949.scope: Deactivated successfully.
Oct 11 04:16:58 compute-0 podman[295020]: 2025-10-11 04:16:58.258000295 +0000 UTC m=+0.055192384 container died 6f26fa6c48d37e37927488cb452029c109a1a59e9cc7cb6cc94426aad094b949 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.257 2 DEBUG nova.network.os_vif_util [None req-36f9e8b7-ac06-4ff3-86c3-6716d03c660a 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:05:f8:44,bridge_name='br-int',has_traffic_filtering=True,id=b0fef7a6-460f-49a1-8586-9008e9d3f648,network=Network(1c86b315-3a4b-4db0-8b3c-39658c19ef9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0fef7a6-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.259 2 DEBUG os_vif [None req-36f9e8b7-ac06-4ff3-86c3-6716d03c660a 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:f8:44,bridge_name='br-int',has_traffic_filtering=True,id=b0fef7a6-460f-49a1-8586-9008e9d3f648,network=Network(1c86b315-3a4b-4db0-8b3c-39658c19ef9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0fef7a6-46') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.261 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0fef7a6-46, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.269 2 INFO os_vif [None req-36f9e8b7-ac06-4ff3-86c3-6716d03c660a 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:f8:44,bridge_name='br-int',has_traffic_filtering=True,id=b0fef7a6-460f-49a1-8586-9008e9d3f648,network=Network(1c86b315-3a4b-4db0-8b3c-39658c19ef9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0fef7a6-46')
Oct 11 04:16:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6f26fa6c48d37e37927488cb452029c109a1a59e9cc7cb6cc94426aad094b949-userdata-shm.mount: Deactivated successfully.
Oct 11 04:16:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8129ca4513e3b7e35f69efed3b0897e5e1b1ba766e7b29c8d843c26130076b0-merged.mount: Deactivated successfully.
Oct 11 04:16:58 compute-0 podman[295020]: 2025-10-11 04:16:58.298112071 +0000 UTC m=+0.095304160 container cleanup 6f26fa6c48d37e37927488cb452029c109a1a59e9cc7cb6cc94426aad094b949 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:58 compute-0 systemd[1]: libpod-conmon-6f26fa6c48d37e37927488cb452029c109a1a59e9cc7cb6cc94426aad094b949.scope: Deactivated successfully.
Oct 11 04:16:58 compute-0 podman[295074]: 2025-10-11 04:16:58.377676283 +0000 UTC m=+0.050447759 container remove 6f26fa6c48d37e37927488cb452029c109a1a59e9cc7cb6cc94426aad094b949 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:16:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:58.388 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[f22aea33-d93b-4fe4-93f2-a36c1db3f42c]: (4, ('Sat Oct 11 04:16:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c (6f26fa6c48d37e37927488cb452029c109a1a59e9cc7cb6cc94426aad094b949)\n6f26fa6c48d37e37927488cb452029c109a1a59e9cc7cb6cc94426aad094b949\nSat Oct 11 04:16:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c (6f26fa6c48d37e37927488cb452029c109a1a59e9cc7cb6cc94426aad094b949)\n6f26fa6c48d37e37927488cb452029c109a1a59e9cc7cb6cc94426aad094b949\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:58.390 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[c397b4cd-067a-455f-907d-e6a168c40bb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:58.391 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c86b315-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:58 compute-0 kernel: tap1c86b315-30: left promiscuous mode
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:16:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:58.415 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[3006664f-f67b-41b4-8f7e-725c2c9dfab9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:58.440 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[32d9186c-82e5-4c68-af41-4b368fa1e0bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:58.443 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[21e42d86-1e94-4f2c-9c96-255a22ddfb56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:58.460 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[fa7ec52b-e240-4128-afce-672446992cdc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456300, 'reachable_time': 18436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295092, 'error': None, 'target': 'ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:58.463 162015 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 04:16:58 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:16:58.463 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[59f44e8c-3ed2-4766-825e-6b07d7066d08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:16:58 compute-0 systemd[1]: run-netns-ovnmeta\x2d1c86b315\x2d3a4b\x2d4db0\x2d8b3c\x2d39658c19ef9c.mount: Deactivated successfully.
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.474 2 INFO nova.virt.libvirt.driver [None req-36f9e8b7-ac06-4ff3-86c3-6716d03c660a 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Deleting instance files /var/lib/nova/instances/001be5b3-e842-4242-a6ad-2ccbfa7b39c2_del
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.474 2 INFO nova.virt.libvirt.driver [None req-36f9e8b7-ac06-4ff3-86c3-6716d03c660a 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Deletion of /var/lib/nova/instances/001be5b3-e842-4242-a6ad-2ccbfa7b39c2_del complete
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.541 2 INFO nova.compute.manager [None req-36f9e8b7-ac06-4ff3-86c3-6716d03c660a 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Took 0.56 seconds to destroy the instance on the hypervisor.
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.542 2 DEBUG oslo.service.loopingcall [None req-36f9e8b7-ac06-4ff3-86c3-6716d03c660a 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.542 2 DEBUG nova.compute.manager [-] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.542 2 DEBUG nova.network.neutron [-] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.646 2 DEBUG nova.compute.manager [req-aef89368-197e-4ce2-8a23-ee7f1d266486 req-dcd731c1-14e7-4084-a90d-014e077cd620 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Received event network-vif-unplugged-b0fef7a6-460f-49a1-8586-9008e9d3f648 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.647 2 DEBUG oslo_concurrency.lockutils [req-aef89368-197e-4ce2-8a23-ee7f1d266486 req-dcd731c1-14e7-4084-a90d-014e077cd620 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "001be5b3-e842-4242-a6ad-2ccbfa7b39c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.648 2 DEBUG oslo_concurrency.lockutils [req-aef89368-197e-4ce2-8a23-ee7f1d266486 req-dcd731c1-14e7-4084-a90d-014e077cd620 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "001be5b3-e842-4242-a6ad-2ccbfa7b39c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.648 2 DEBUG oslo_concurrency.lockutils [req-aef89368-197e-4ce2-8a23-ee7f1d266486 req-dcd731c1-14e7-4084-a90d-014e077cd620 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "001be5b3-e842-4242-a6ad-2ccbfa7b39c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.648 2 DEBUG nova.compute.manager [req-aef89368-197e-4ce2-8a23-ee7f1d266486 req-dcd731c1-14e7-4084-a90d-014e077cd620 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] No waiting events found dispatching network-vif-unplugged-b0fef7a6-460f-49a1-8586-9008e9d3f648 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:16:58 compute-0 nova_compute[259850]: 2025-10-11 04:16:58.649 2 DEBUG nova.compute.manager [req-aef89368-197e-4ce2-8a23-ee7f1d266486 req-dcd731c1-14e7-4084-a90d-014e077cd620 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Received event network-vif-unplugged-b0fef7a6-460f-49a1-8586-9008e9d3f648 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 04:16:59 compute-0 ceph-mon[74273]: pgmap v1598: 305 pgs: 305 active+clean; 270 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 2.7 KiB/s wr, 29 op/s
Oct 11 04:16:59 compute-0 nova_compute[259850]: 2025-10-11 04:16:59.510 2 DEBUG nova.network.neutron [-] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:16:59 compute-0 nova_compute[259850]: 2025-10-11 04:16:59.534 2 INFO nova.compute.manager [-] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Took 0.99 seconds to deallocate network for instance.
Oct 11 04:16:59 compute-0 nova_compute[259850]: 2025-10-11 04:16:59.742 2 INFO nova.compute.manager [None req-36f9e8b7-ac06-4ff3-86c3-6716d03c660a 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Took 0.21 seconds to detach 1 volumes for instance.
Oct 11 04:16:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 305 active+clean; 270 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.3 KiB/s wr, 39 op/s
Oct 11 04:16:59 compute-0 nova_compute[259850]: 2025-10-11 04:16:59.800 2 DEBUG oslo_concurrency.lockutils [None req-36f9e8b7-ac06-4ff3-86c3-6716d03c660a 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:16:59 compute-0 nova_compute[259850]: 2025-10-11 04:16:59.801 2 DEBUG oslo_concurrency.lockutils [None req-36f9e8b7-ac06-4ff3-86c3-6716d03c660a 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:16:59 compute-0 nova_compute[259850]: 2025-10-11 04:16:59.882 2 DEBUG oslo_concurrency.processutils [None req-36f9e8b7-ac06-4ff3-86c3-6716d03c660a 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:17:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:17:00 compute-0 nova_compute[259850]: 2025-10-11 04:17:00.050 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760156205.0489812, f4568c68-41ba-4de0-a607-76bf5907f37c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:17:00 compute-0 nova_compute[259850]: 2025-10-11 04:17:00.051 2 INFO nova.compute.manager [-] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] VM Stopped (Lifecycle Event)
Oct 11 04:17:00 compute-0 nova_compute[259850]: 2025-10-11 04:17:00.074 2 DEBUG nova.compute.manager [None req-ad83338e-f487-4d09-bf70-c4fcd39e3b7d - - - - - -] [instance: f4568c68-41ba-4de0-a607-76bf5907f37c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:17:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:17:00 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2773857583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:17:00 compute-0 nova_compute[259850]: 2025-10-11 04:17:00.339 2 DEBUG oslo_concurrency.processutils [None req-36f9e8b7-ac06-4ff3-86c3-6716d03c660a 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:17:00 compute-0 nova_compute[259850]: 2025-10-11 04:17:00.348 2 DEBUG nova.compute.provider_tree [None req-36f9e8b7-ac06-4ff3-86c3-6716d03c660a 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:17:00 compute-0 nova_compute[259850]: 2025-10-11 04:17:00.376 2 DEBUG nova.scheduler.client.report [None req-36f9e8b7-ac06-4ff3-86c3-6716d03c660a 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:17:00 compute-0 nova_compute[259850]: 2025-10-11 04:17:00.422 2 DEBUG oslo_concurrency.lockutils [None req-36f9e8b7-ac06-4ff3-86c3-6716d03c660a 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:17:00 compute-0 nova_compute[259850]: 2025-10-11 04:17:00.462 2 INFO nova.scheduler.client.report [None req-36f9e8b7-ac06-4ff3-86c3-6716d03c660a 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Deleted allocations for instance 001be5b3-e842-4242-a6ad-2ccbfa7b39c2
Oct 11 04:17:00 compute-0 nova_compute[259850]: 2025-10-11 04:17:00.555 2 DEBUG oslo_concurrency.lockutils [None req-36f9e8b7-ac06-4ff3-86c3-6716d03c660a 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "001be5b3-e842-4242-a6ad-2ccbfa7b39c2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:17:00 compute-0 nova_compute[259850]: 2025-10-11 04:17:00.791 2 DEBUG nova.compute.manager [req-2964038d-b8eb-4a33-862e-64fd7388aa10 req-ced295c8-d866-4123-96cb-f60997bc423d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Received event network-vif-plugged-b0fef7a6-460f-49a1-8586-9008e9d3f648 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:17:00 compute-0 nova_compute[259850]: 2025-10-11 04:17:00.792 2 DEBUG oslo_concurrency.lockutils [req-2964038d-b8eb-4a33-862e-64fd7388aa10 req-ced295c8-d866-4123-96cb-f60997bc423d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "001be5b3-e842-4242-a6ad-2ccbfa7b39c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:17:00 compute-0 nova_compute[259850]: 2025-10-11 04:17:00.793 2 DEBUG oslo_concurrency.lockutils [req-2964038d-b8eb-4a33-862e-64fd7388aa10 req-ced295c8-d866-4123-96cb-f60997bc423d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "001be5b3-e842-4242-a6ad-2ccbfa7b39c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:17:00 compute-0 nova_compute[259850]: 2025-10-11 04:17:00.793 2 DEBUG oslo_concurrency.lockutils [req-2964038d-b8eb-4a33-862e-64fd7388aa10 req-ced295c8-d866-4123-96cb-f60997bc423d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "001be5b3-e842-4242-a6ad-2ccbfa7b39c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:17:00 compute-0 nova_compute[259850]: 2025-10-11 04:17:00.793 2 DEBUG nova.compute.manager [req-2964038d-b8eb-4a33-862e-64fd7388aa10 req-ced295c8-d866-4123-96cb-f60997bc423d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] No waiting events found dispatching network-vif-plugged-b0fef7a6-460f-49a1-8586-9008e9d3f648 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:17:00 compute-0 nova_compute[259850]: 2025-10-11 04:17:00.794 2 WARNING nova.compute.manager [req-2964038d-b8eb-4a33-862e-64fd7388aa10 req-ced295c8-d866-4123-96cb-f60997bc423d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Received unexpected event network-vif-plugged-b0fef7a6-460f-49a1-8586-9008e9d3f648 for instance with vm_state deleted and task_state None.
Oct 11 04:17:00 compute-0 nova_compute[259850]: 2025-10-11 04:17:00.794 2 DEBUG nova.compute.manager [req-2964038d-b8eb-4a33-862e-64fd7388aa10 req-ced295c8-d866-4123-96cb-f60997bc423d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Received event network-vif-deleted-b0fef7a6-460f-49a1-8586-9008e9d3f648 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:17:01 compute-0 ceph-mon[74273]: pgmap v1599: 305 pgs: 305 active+clean; 270 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.3 KiB/s wr, 39 op/s
Oct 11 04:17:01 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2773857583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:17:01 compute-0 podman[295116]: 2025-10-11 04:17:01.381040985 +0000 UTC m=+0.078223076 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 11 04:17:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 305 active+clean; 270 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 KiB/s wr, 38 op/s
Oct 11 04:17:03 compute-0 ceph-mon[74273]: pgmap v1600: 305 pgs: 305 active+clean; 270 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 KiB/s wr, 38 op/s
Oct 11 04:17:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:17:03 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3729401009' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:17:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:17:03 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3729401009' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:17:03 compute-0 nova_compute[259850]: 2025-10-11 04:17:03.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:03 compute-0 nova_compute[259850]: 2025-10-11 04:17:03.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 305 active+clean; 194 MiB data, 567 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 89 op/s
Oct 11 04:17:04 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3729401009' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:17:04 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3729401009' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:17:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:17:05 compute-0 ceph-mon[74273]: pgmap v1601: 305 pgs: 305 active+clean; 194 MiB data, 567 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 89 op/s
Oct 11 04:17:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 305 active+clean; 194 MiB data, 567 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 83 op/s
Oct 11 04:17:06 compute-0 nova_compute[259850]: 2025-10-11 04:17:06.228 2 DEBUG oslo_concurrency.lockutils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "170beb52-e998-40b5-8315-a0d138f2cbf6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:17:06 compute-0 nova_compute[259850]: 2025-10-11 04:17:06.228 2 DEBUG oslo_concurrency.lockutils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "170beb52-e998-40b5-8315-a0d138f2cbf6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:17:06 compute-0 nova_compute[259850]: 2025-10-11 04:17:06.252 2 DEBUG nova.compute.manager [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:17:06 compute-0 nova_compute[259850]: 2025-10-11 04:17:06.342 2 DEBUG oslo_concurrency.lockutils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:17:06 compute-0 nova_compute[259850]: 2025-10-11 04:17:06.343 2 DEBUG oslo_concurrency.lockutils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:17:06 compute-0 nova_compute[259850]: 2025-10-11 04:17:06.355 2 DEBUG nova.virt.hardware [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:17:06 compute-0 nova_compute[259850]: 2025-10-11 04:17:06.355 2 INFO nova.compute.claims [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:17:06 compute-0 nova_compute[259850]: 2025-10-11 04:17:06.465 2 DEBUG oslo_concurrency.processutils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:17:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:17:06 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2811726483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:17:06 compute-0 nova_compute[259850]: 2025-10-11 04:17:06.942 2 DEBUG oslo_concurrency.processutils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:17:06 compute-0 nova_compute[259850]: 2025-10-11 04:17:06.950 2 DEBUG nova.compute.provider_tree [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:17:06 compute-0 nova_compute[259850]: 2025-10-11 04:17:06.971 2 DEBUG nova.scheduler.client.report [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:17:07 compute-0 nova_compute[259850]: 2025-10-11 04:17:07.003 2 DEBUG oslo_concurrency.lockutils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:17:07 compute-0 nova_compute[259850]: 2025-10-11 04:17:07.004 2 DEBUG nova.compute.manager [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:17:07 compute-0 ceph-mon[74273]: pgmap v1602: 305 pgs: 305 active+clean; 194 MiB data, 567 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 83 op/s
Oct 11 04:17:07 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2811726483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:17:07 compute-0 nova_compute[259850]: 2025-10-11 04:17:07.067 2 DEBUG nova.compute.manager [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:17:07 compute-0 nova_compute[259850]: 2025-10-11 04:17:07.068 2 DEBUG nova.network.neutron [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:17:07 compute-0 nova_compute[259850]: 2025-10-11 04:17:07.096 2 INFO nova.virt.libvirt.driver [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:17:07 compute-0 nova_compute[259850]: 2025-10-11 04:17:07.128 2 DEBUG nova.compute.manager [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:17:07 compute-0 nova_compute[259850]: 2025-10-11 04:17:07.191 2 INFO nova.virt.block_device [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Booting with volume b1e9d80d-01ad-4211-b429-299f6fd98f5c at /dev/vda
Oct 11 04:17:07 compute-0 nova_compute[259850]: 2025-10-11 04:17:07.341 2 DEBUG os_brick.utils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:17:07 compute-0 nova_compute[259850]: 2025-10-11 04:17:07.343 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:17:07 compute-0 nova_compute[259850]: 2025-10-11 04:17:07.361 675 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:17:07 compute-0 nova_compute[259850]: 2025-10-11 04:17:07.361 675 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f71a5a-5dc6-405e-bd10-f440cb6f54ff]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:07 compute-0 nova_compute[259850]: 2025-10-11 04:17:07.363 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:17:07 compute-0 nova_compute[259850]: 2025-10-11 04:17:07.375 675 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:17:07 compute-0 nova_compute[259850]: 2025-10-11 04:17:07.376 675 DEBUG oslo.privsep.daemon [-] privsep: reply[335d87a7-8b87-464d-998f-98800680797e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e727c2bd432c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:07 compute-0 nova_compute[259850]: 2025-10-11 04:17:07.378 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:17:07 compute-0 nova_compute[259850]: 2025-10-11 04:17:07.393 675 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:17:07 compute-0 nova_compute[259850]: 2025-10-11 04:17:07.393 675 DEBUG oslo.privsep.daemon [-] privsep: reply[e11db1ec-ae9e-452a-b1e7-81370a69ba48]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:07 compute-0 nova_compute[259850]: 2025-10-11 04:17:07.395 675 DEBUG oslo.privsep.daemon [-] privsep: reply[86640905-064a-4955-8e6b-8f90f78ebf66]: (4, 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:07 compute-0 nova_compute[259850]: 2025-10-11 04:17:07.395 2 DEBUG oslo_concurrency.processutils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:17:07 compute-0 nova_compute[259850]: 2025-10-11 04:17:07.432 2 DEBUG nova.policy [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2a330a845d62440c871f80eda2546881', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '09ba33ef4bd447699d74946c58839b2d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:17:07 compute-0 nova_compute[259850]: 2025-10-11 04:17:07.439 2 DEBUG oslo_concurrency.processutils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "nvme version" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:17:07 compute-0 nova_compute[259850]: 2025-10-11 04:17:07.442 2 DEBUG os_brick.initiator.connectors.lightos [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:17:07 compute-0 nova_compute[259850]: 2025-10-11 04:17:07.443 2 DEBUG os_brick.initiator.connectors.lightos [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:17:07 compute-0 nova_compute[259850]: 2025-10-11 04:17:07.443 2 DEBUG os_brick.initiator.connectors.lightos [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:17:07 compute-0 nova_compute[259850]: 2025-10-11 04:17:07.444 2 DEBUG os_brick.utils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] <== get_connector_properties: return (101ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e727c2bd432c', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:17:07 compute-0 nova_compute[259850]: 2025-10-11 04:17:07.445 2 DEBUG nova.virt.block_device [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Updating existing volume attachment record: c18e29ea-9924-4dda-a890-954a9aa0c1a8 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:17:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1603: 305 pgs: 305 active+clean; 194 MiB data, 567 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Oct 11 04:17:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:17:08 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/801562776' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:17:08 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/801562776' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:17:08 compute-0 nova_compute[259850]: 2025-10-11 04:17:08.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:08 compute-0 nova_compute[259850]: 2025-10-11 04:17:08.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:08 compute-0 nova_compute[259850]: 2025-10-11 04:17:08.474 2 DEBUG nova.compute.manager [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:17:08 compute-0 nova_compute[259850]: 2025-10-11 04:17:08.477 2 DEBUG nova.virt.libvirt.driver [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:17:08 compute-0 nova_compute[259850]: 2025-10-11 04:17:08.478 2 INFO nova.virt.libvirt.driver [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Creating image(s)
Oct 11 04:17:08 compute-0 nova_compute[259850]: 2025-10-11 04:17:08.479 2 DEBUG nova.virt.libvirt.driver [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 11 04:17:08 compute-0 nova_compute[259850]: 2025-10-11 04:17:08.479 2 DEBUG nova.virt.libvirt.driver [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Ensure instance console log exists: /var/lib/nova/instances/170beb52-e998-40b5-8315-a0d138f2cbf6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:17:08 compute-0 nova_compute[259850]: 2025-10-11 04:17:08.480 2 DEBUG oslo_concurrency.lockutils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:17:08 compute-0 nova_compute[259850]: 2025-10-11 04:17:08.481 2 DEBUG oslo_concurrency.lockutils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:17:08 compute-0 nova_compute[259850]: 2025-10-11 04:17:08.481 2 DEBUG oslo_concurrency.lockutils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:17:08 compute-0 nova_compute[259850]: 2025-10-11 04:17:08.562 2 DEBUG nova.network.neutron [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Successfully created port: 4c25174b-7eef-47cc-9c7d-618a905c5e5e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:17:09 compute-0 ceph-mon[74273]: pgmap v1603: 305 pgs: 305 active+clean; 194 MiB data, 567 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Oct 11 04:17:09 compute-0 nova_compute[259850]: 2025-10-11 04:17:09.184 2 DEBUG nova.network.neutron [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Successfully updated port: 4c25174b-7eef-47cc-9c7d-618a905c5e5e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:17:09 compute-0 nova_compute[259850]: 2025-10-11 04:17:09.201 2 DEBUG oslo_concurrency.lockutils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "refresh_cache-170beb52-e998-40b5-8315-a0d138f2cbf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:17:09 compute-0 nova_compute[259850]: 2025-10-11 04:17:09.202 2 DEBUG oslo_concurrency.lockutils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquired lock "refresh_cache-170beb52-e998-40b5-8315-a0d138f2cbf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:17:09 compute-0 nova_compute[259850]: 2025-10-11 04:17:09.202 2 DEBUG nova.network.neutron [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:17:09 compute-0 nova_compute[259850]: 2025-10-11 04:17:09.284 2 DEBUG nova.compute.manager [req-255ed9c0-7272-43c1-ba83-d4b8de5ea5d6 req-f90d72eb-c858-407b-a485-b6915ae76259 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Received event network-changed-4c25174b-7eef-47cc-9c7d-618a905c5e5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:17:09 compute-0 nova_compute[259850]: 2025-10-11 04:17:09.284 2 DEBUG nova.compute.manager [req-255ed9c0-7272-43c1-ba83-d4b8de5ea5d6 req-f90d72eb-c858-407b-a485-b6915ae76259 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Refreshing instance network info cache due to event network-changed-4c25174b-7eef-47cc-9c7d-618a905c5e5e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:17:09 compute-0 nova_compute[259850]: 2025-10-11 04:17:09.285 2 DEBUG oslo_concurrency.lockutils [req-255ed9c0-7272-43c1-ba83-d4b8de5ea5d6 req-f90d72eb-c858-407b-a485-b6915ae76259 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-170beb52-e998-40b5-8315-a0d138f2cbf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:17:09 compute-0 nova_compute[259850]: 2025-10-11 04:17:09.386 2 DEBUG nova.network.neutron [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:17:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 134 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Oct 11 04:17:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.135 2 DEBUG nova.network.neutron [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Updating instance_info_cache with network_info: [{"id": "4c25174b-7eef-47cc-9c7d-618a905c5e5e", "address": "fa:16:3e:47:b9:1f", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c25174b-7e", "ovs_interfaceid": "4c25174b-7eef-47cc-9c7d-618a905c5e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.159 2 DEBUG oslo_concurrency.lockutils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Releasing lock "refresh_cache-170beb52-e998-40b5-8315-a0d138f2cbf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.159 2 DEBUG nova.compute.manager [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Instance network_info: |[{"id": "4c25174b-7eef-47cc-9c7d-618a905c5e5e", "address": "fa:16:3e:47:b9:1f", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c25174b-7e", "ovs_interfaceid": "4c25174b-7eef-47cc-9c7d-618a905c5e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.161 2 DEBUG oslo_concurrency.lockutils [req-255ed9c0-7272-43c1-ba83-d4b8de5ea5d6 req-f90d72eb-c858-407b-a485-b6915ae76259 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-170beb52-e998-40b5-8315-a0d138f2cbf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.161 2 DEBUG nova.network.neutron [req-255ed9c0-7272-43c1-ba83-d4b8de5ea5d6 req-f90d72eb-c858-407b-a485-b6915ae76259 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Refreshing network info cache for port 4c25174b-7eef-47cc-9c7d-618a905c5e5e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.167 2 DEBUG nova.virt.libvirt.driver [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Start _get_guest_xml network_info=[{"id": "4c25174b-7eef-47cc-9c7d-618a905c5e5e", "address": "fa:16:3e:47:b9:1f", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c25174b-7e", "ovs_interfaceid": "4c25174b-7eef-47cc-9c7d-618a905c5e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 
'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-b1e9d80d-01ad-4211-b429-299f6fd98f5c', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'b1e9d80d-01ad-4211-b429-299f6fd98f5c', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '170beb52-e998-40b5-8315-a0d138f2cbf6', 'attached_at': '', 'detached_at': '', 'volume_id': 'b1e9d80d-01ad-4211-b429-299f6fd98f5c', 'serial': 'b1e9d80d-01ad-4211-b429-299f6fd98f5c'}, 'boot_index': 0, 'guest_format': None, 'attachment_id': 'c18e29ea-9924-4dda-a890-954a9aa0c1a8', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.175 2 WARNING nova.virt.libvirt.driver [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.183 2 DEBUG nova.virt.libvirt.host [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.184 2 DEBUG nova.virt.libvirt.host [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.195 2 DEBUG nova.virt.libvirt.host [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.196 2 DEBUG nova.virt.libvirt.host [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.197 2 DEBUG nova.virt.libvirt.driver [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.198 2 DEBUG nova.virt.hardware [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.198 2 DEBUG nova.virt.hardware [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.199 2 DEBUG nova.virt.hardware [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.199 2 DEBUG nova.virt.hardware [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.200 2 DEBUG nova.virt.hardware [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.200 2 DEBUG nova.virt.hardware [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.201 2 DEBUG nova.virt.hardware [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.201 2 DEBUG nova.virt.hardware [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.202 2 DEBUG nova.virt.hardware [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.202 2 DEBUG nova.virt.hardware [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.202 2 DEBUG nova.virt.hardware [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.239 2 DEBUG nova.storage.rbd_utils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] rbd image 170beb52-e998-40b5-8315-a0d138f2cbf6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.245 2 DEBUG oslo_concurrency.processutils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:17:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:17:10 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3158560666' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.762 2 DEBUG oslo_concurrency.processutils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.803 2 DEBUG nova.virt.libvirt.vif [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:17:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-820503556',display_name='tempest-TestVolumeBootPattern-server-820503556',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-820503556',id=22,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPDNAGL8Dkg4WTlPf45cAzyjNlMaZ9CdFtcbPahhttGWfFDtL3wJAU2pqWIpDJ427A+TFzstq4HW+M8hdPFbiZnk9MFQHh3rRb7amRkcTpIWOFEgpDmf92zhQgzfL3p2ZA==',key_name='tempest-TestVolumeBootPattern-2018721323',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09ba33ef4bd447699d74946c58839b2d',ramdisk_id='',reservation_id='r-s193x3ob',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-771726270',owner_user_name='tempest-TestVolumeBootPattern-771726270-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:17:07Z,user_data=None,user_id='2a330a845d62440c871f80eda2546881',uuid=170beb52-e998-40b5-8315-a0d138f2cbf6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4c25174b-7eef-47cc-9c7d-618a905c5e5e", "address": "fa:16:3e:47:b9:1f", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c25174b-7e", "ovs_interfaceid": "4c25174b-7eef-47cc-9c7d-618a905c5e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.804 2 DEBUG nova.network.os_vif_util [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converting VIF {"id": "4c25174b-7eef-47cc-9c7d-618a905c5e5e", "address": "fa:16:3e:47:b9:1f", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c25174b-7e", "ovs_interfaceid": "4c25174b-7eef-47cc-9c7d-618a905c5e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.806 2 DEBUG nova.network.os_vif_util [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:b9:1f,bridge_name='br-int',has_traffic_filtering=True,id=4c25174b-7eef-47cc-9c7d-618a905c5e5e,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c25174b-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.808 2 DEBUG nova.objects.instance [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lazy-loading 'pci_devices' on Instance uuid 170beb52-e998-40b5-8315-a0d138f2cbf6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.826 2 DEBUG nova.virt.libvirt.driver [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:17:10 compute-0 nova_compute[259850]:   <uuid>170beb52-e998-40b5-8315-a0d138f2cbf6</uuid>
Oct 11 04:17:10 compute-0 nova_compute[259850]:   <name>instance-00000016</name>
Oct 11 04:17:10 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:17:10 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:17:10 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <nova:name>tempest-TestVolumeBootPattern-server-820503556</nova:name>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:17:10</nova:creationTime>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:17:10 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:17:10 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:17:10 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:17:10 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:17:10 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:17:10 compute-0 nova_compute[259850]:         <nova:user uuid="2a330a845d62440c871f80eda2546881">tempest-TestVolumeBootPattern-771726270-project-member</nova:user>
Oct 11 04:17:10 compute-0 nova_compute[259850]:         <nova:project uuid="09ba33ef4bd447699d74946c58839b2d">tempest-TestVolumeBootPattern-771726270</nova:project>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <nova:ports>
Oct 11 04:17:10 compute-0 nova_compute[259850]:         <nova:port uuid="4c25174b-7eef-47cc-9c7d-618a905c5e5e">
Oct 11 04:17:10 compute-0 nova_compute[259850]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:         </nova:port>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       </nova:ports>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:17:10 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:17:10 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <system>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <entry name="serial">170beb52-e998-40b5-8315-a0d138f2cbf6</entry>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <entry name="uuid">170beb52-e998-40b5-8315-a0d138f2cbf6</entry>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     </system>
Oct 11 04:17:10 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:17:10 compute-0 nova_compute[259850]:   <os>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:   </os>
Oct 11 04:17:10 compute-0 nova_compute[259850]:   <features>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:   </features>
Oct 11 04:17:10 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:17:10 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:17:10 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/170beb52-e998-40b5-8315-a0d138f2cbf6_disk.config">
Oct 11 04:17:10 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       </source>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:17:10 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <source protocol="rbd" name="volumes/volume-b1e9d80d-01ad-4211-b429-299f6fd98f5c">
Oct 11 04:17:10 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       </source>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:17:10 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <serial>b1e9d80d-01ad-4211-b429-299f6fd98f5c</serial>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <interface type="ethernet">
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <mac address="fa:16:3e:47:b9:1f"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <mtu size="1442"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <target dev="tap4c25174b-7e"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     </interface>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/170beb52-e998-40b5-8315-a0d138f2cbf6/console.log" append="off"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <video>
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     </video>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:17:10 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:17:10 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:17:10 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:17:10 compute-0 nova_compute[259850]: </domain>
Oct 11 04:17:10 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.829 2 DEBUG nova.compute.manager [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Preparing to wait for external event network-vif-plugged-4c25174b-7eef-47cc-9c7d-618a905c5e5e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.829 2 DEBUG oslo_concurrency.lockutils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "170beb52-e998-40b5-8315-a0d138f2cbf6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.830 2 DEBUG oslo_concurrency.lockutils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "170beb52-e998-40b5-8315-a0d138f2cbf6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.831 2 DEBUG oslo_concurrency.lockutils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "170beb52-e998-40b5-8315-a0d138f2cbf6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.832 2 DEBUG nova.virt.libvirt.vif [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:17:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-820503556',display_name='tempest-TestVolumeBootPattern-server-820503556',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-820503556',id=22,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPDNAGL8Dkg4WTlPf45cAzyjNlMaZ9CdFtcbPahhttGWfFDtL3wJAU2pqWIpDJ427A+TFzstq4HW+M8hdPFbiZnk9MFQHh3rRb7amRkcTpIWOFEgpDmf92zhQgzfL3p2ZA==',key_name='tempest-TestVolumeBootPattern-2018721323',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09ba33ef4bd447699d74946c58839b2d',ramdisk_id='',reservation_id='r-s193x3ob',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-771726270',owner_user_name='tempest-TestVolumeBootPattern-771726270-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:17:07Z,user_data=None,user_id='2a330a845d62440c871f80eda2546881',uuid=170beb52-e998-40b5-8315-a0d138f2cbf6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4c25174b-7eef-47cc-9c7d-618a905c5e5e", "address": "fa:16:3e:47:b9:1f", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c25174b-7e", "ovs_interfaceid": "4c25174b-7eef-47cc-9c7d-618a905c5e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.832 2 DEBUG nova.network.os_vif_util [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converting VIF {"id": "4c25174b-7eef-47cc-9c7d-618a905c5e5e", "address": "fa:16:3e:47:b9:1f", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c25174b-7e", "ovs_interfaceid": "4c25174b-7eef-47cc-9c7d-618a905c5e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.833 2 DEBUG nova.network.os_vif_util [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:b9:1f,bridge_name='br-int',has_traffic_filtering=True,id=4c25174b-7eef-47cc-9c7d-618a905c5e5e,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c25174b-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.834 2 DEBUG os_vif [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:b9:1f,bridge_name='br-int',has_traffic_filtering=True,id=4c25174b-7eef-47cc-9c7d-618a905c5e5e,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c25174b-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.836 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.837 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.842 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c25174b-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.844 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4c25174b-7e, col_values=(('external_ids', {'iface-id': '4c25174b-7eef-47cc-9c7d-618a905c5e5e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:b9:1f', 'vm-uuid': '170beb52-e998-40b5-8315-a0d138f2cbf6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:17:10 compute-0 NetworkManager[44920]: <info>  [1760156230.8929] manager: (tap4c25174b-7e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.899 2 INFO os_vif [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:b9:1f,bridge_name='br-int',has_traffic_filtering=True,id=4c25174b-7eef-47cc-9c7d-618a905c5e5e,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c25174b-7e')
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.982 2 DEBUG nova.virt.libvirt.driver [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.983 2 DEBUG nova.virt.libvirt.driver [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.983 2 DEBUG nova.virt.libvirt.driver [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] No VIF found with MAC fa:16:3e:47:b9:1f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:17:10 compute-0 nova_compute[259850]: 2025-10-11 04:17:10.984 2 INFO nova.virt.libvirt.driver [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Using config drive
Oct 11 04:17:11 compute-0 nova_compute[259850]: 2025-10-11 04:17:11.021 2 DEBUG nova.storage.rbd_utils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] rbd image 170beb52-e998-40b5-8315-a0d138f2cbf6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:17:11 compute-0 ceph-mon[74273]: pgmap v1604: 305 pgs: 305 active+clean; 134 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Oct 11 04:17:11 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3158560666' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:17:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 305 active+clean; 134 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 1.8 MiB/s wr, 66 op/s
Oct 11 04:17:11 compute-0 nova_compute[259850]: 2025-10-11 04:17:11.783 2 INFO nova.virt.libvirt.driver [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Creating config drive at /var/lib/nova/instances/170beb52-e998-40b5-8315-a0d138f2cbf6/disk.config
Oct 11 04:17:11 compute-0 nova_compute[259850]: 2025-10-11 04:17:11.792 2 DEBUG oslo_concurrency.processutils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/170beb52-e998-40b5-8315-a0d138f2cbf6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz_mospag execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:17:11 compute-0 nova_compute[259850]: 2025-10-11 04:17:11.940 2 DEBUG oslo_concurrency.processutils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/170beb52-e998-40b5-8315-a0d138f2cbf6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz_mospag" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:17:11 compute-0 nova_compute[259850]: 2025-10-11 04:17:11.973 2 DEBUG nova.storage.rbd_utils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] rbd image 170beb52-e998-40b5-8315-a0d138f2cbf6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:17:11 compute-0 nova_compute[259850]: 2025-10-11 04:17:11.977 2 DEBUG oslo_concurrency.processutils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/170beb52-e998-40b5-8315-a0d138f2cbf6/disk.config 170beb52-e998-40b5-8315-a0d138f2cbf6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:17:12 compute-0 nova_compute[259850]: 2025-10-11 04:17:12.168 2 DEBUG oslo_concurrency.processutils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/170beb52-e998-40b5-8315-a0d138f2cbf6/disk.config 170beb52-e998-40b5-8315-a0d138f2cbf6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:17:12 compute-0 nova_compute[259850]: 2025-10-11 04:17:12.168 2 INFO nova.virt.libvirt.driver [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Deleting local config drive /var/lib/nova/instances/170beb52-e998-40b5-8315-a0d138f2cbf6/disk.config because it was imported into RBD.
Oct 11 04:17:12 compute-0 kernel: tap4c25174b-7e: entered promiscuous mode
Oct 11 04:17:12 compute-0 NetworkManager[44920]: <info>  [1760156232.2237] manager: (tap4c25174b-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/115)
Oct 11 04:17:12 compute-0 ovn_controller[152025]: 2025-10-11T04:17:12Z|00222|binding|INFO|Claiming lport 4c25174b-7eef-47cc-9c7d-618a905c5e5e for this chassis.
Oct 11 04:17:12 compute-0 ovn_controller[152025]: 2025-10-11T04:17:12Z|00223|binding|INFO|4c25174b-7eef-47cc-9c7d-618a905c5e5e: Claiming fa:16:3e:47:b9:1f 10.100.0.5
Oct 11 04:17:12 compute-0 nova_compute[259850]: 2025-10-11 04:17:12.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:12.229 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:b9:1f 10.100.0.5'], port_security=['fa:16:3e:47:b9:1f 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '170beb52-e998-40b5-8315-a0d138f2cbf6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09ba33ef4bd447699d74946c58839b2d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '802c56f7-efb1-44ec-9107-b20b0a13ea5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27b77226-c1f8-485e-969b-bae9a3bf7ceb, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=4c25174b-7eef-47cc-9c7d-618a905c5e5e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:12.231 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 4c25174b-7eef-47cc-9c7d-618a905c5e5e in datapath b6cd64a2-af0b-4f57-b84c-cbc9cde5251d bound to our chassis
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:12.232 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b6cd64a2-af0b-4f57-b84c-cbc9cde5251d
Oct 11 04:17:12 compute-0 ovn_controller[152025]: 2025-10-11T04:17:12Z|00224|binding|INFO|Setting lport 4c25174b-7eef-47cc-9c7d-618a905c5e5e ovn-installed in OVS
Oct 11 04:17:12 compute-0 ovn_controller[152025]: 2025-10-11T04:17:12Z|00225|binding|INFO|Setting lport 4c25174b-7eef-47cc-9c7d-618a905c5e5e up in Southbound
Oct 11 04:17:12 compute-0 nova_compute[259850]: 2025-10-11 04:17:12.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:12 compute-0 nova_compute[259850]: 2025-10-11 04:17:12.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:12.247 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[822813f7-241b-479e-857b-6817dc5e21d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:12.248 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb6cd64a2-a1 in ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:12.251 267637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb6cd64a2-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:12.251 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[438ec3ce-e7ff-4e43-8399-a7fc44f830ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:12.252 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[f507d97f-bc7f-4870-b26d-175bf376c96c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:12 compute-0 systemd-udevd[295279]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:17:12 compute-0 systemd-machined[214869]: New machine qemu-22-instance-00000016.
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:12.263 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[988b2c34-43e2-4223-bdf6-6da9e0242b24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:12 compute-0 NetworkManager[44920]: <info>  [1760156232.2713] device (tap4c25174b-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:17:12 compute-0 NetworkManager[44920]: <info>  [1760156232.2723] device (tap4c25174b-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:12.277 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[84fa73c5-e0bc-4388-8307-d0583ff61d3f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:12 compute-0 systemd[1]: Started Virtual Machine qemu-22-instance-00000016.
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:12.305 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[4943f0ba-a6ad-4671-9125-9cc4ee4e770d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:12.309 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[8b4031cd-e86a-4bb3-ad5f-f5012f8b0c85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:12 compute-0 NetworkManager[44920]: <info>  [1760156232.3109] manager: (tapb6cd64a2-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/116)
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:12.347 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[4b79744f-8877-490b-b298-8350ed3a6bef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:12.350 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[ee18926e-9c57-4c6d-a829-0a0c7f0f25f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:12 compute-0 NetworkManager[44920]: <info>  [1760156232.3742] device (tapb6cd64a2-a0): carrier: link connected
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:12.377 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba2b98a-2dff-4967-8cf3-9f507f620a73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:12.393 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[1e9ac426-946f-4415-8945-2eb3c78254ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6cd64a2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:9f:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461741, 'reachable_time': 21647, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295311, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:12.414 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[a799f4f3-e86e-4aeb-858e-781072d28755]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:9f02'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461741, 'tstamp': 461741}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295312, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:12.437 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[6ae8c1d9-c601-452c-aa91-d3d0911efb6f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6cd64a2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:9f:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461741, 'reachable_time': 21647, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295313, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:12.471 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[f0226349-3399-4ec1-92e1-e17677739ff0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:12 compute-0 nova_compute[259850]: 2025-10-11 04:17:12.493 2 DEBUG nova.compute.manager [req-fb6f220a-1ebf-4643-af62-0827e903ce02 req-3c49145e-1e21-4e59-8b76-a410a7555a66 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Received event network-vif-plugged-4c25174b-7eef-47cc-9c7d-618a905c5e5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:17:12 compute-0 nova_compute[259850]: 2025-10-11 04:17:12.494 2 DEBUG oslo_concurrency.lockutils [req-fb6f220a-1ebf-4643-af62-0827e903ce02 req-3c49145e-1e21-4e59-8b76-a410a7555a66 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "170beb52-e998-40b5-8315-a0d138f2cbf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:17:12 compute-0 nova_compute[259850]: 2025-10-11 04:17:12.494 2 DEBUG oslo_concurrency.lockutils [req-fb6f220a-1ebf-4643-af62-0827e903ce02 req-3c49145e-1e21-4e59-8b76-a410a7555a66 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "170beb52-e998-40b5-8315-a0d138f2cbf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:17:12 compute-0 nova_compute[259850]: 2025-10-11 04:17:12.494 2 DEBUG oslo_concurrency.lockutils [req-fb6f220a-1ebf-4643-af62-0827e903ce02 req-3c49145e-1e21-4e59-8b76-a410a7555a66 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "170beb52-e998-40b5-8315-a0d138f2cbf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:17:12 compute-0 nova_compute[259850]: 2025-10-11 04:17:12.494 2 DEBUG nova.compute.manager [req-fb6f220a-1ebf-4643-af62-0827e903ce02 req-3c49145e-1e21-4e59-8b76-a410a7555a66 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Processing event network-vif-plugged-4c25174b-7eef-47cc-9c7d-618a905c5e5e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:12.526 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[7836ffb7-bb75-43a8-ae0a-8a8c39c9954e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:12.527 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6cd64a2-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:12.527 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:12.527 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6cd64a2-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:17:12 compute-0 nova_compute[259850]: 2025-10-11 04:17:12.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:12 compute-0 NetworkManager[44920]: <info>  [1760156232.5295] manager: (tapb6cd64a2-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/117)
Oct 11 04:17:12 compute-0 kernel: tapb6cd64a2-a0: entered promiscuous mode
Oct 11 04:17:12 compute-0 nova_compute[259850]: 2025-10-11 04:17:12.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:12.532 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb6cd64a2-a0, col_values=(('external_ids', {'iface-id': 'c2cbaf15-a50c-40b8-9f65-12b11618e7fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:17:12 compute-0 nova_compute[259850]: 2025-10-11 04:17:12.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:12 compute-0 ovn_controller[152025]: 2025-10-11T04:17:12Z|00226|binding|INFO|Releasing lport c2cbaf15-a50c-40b8-9f65-12b11618e7fc from this chassis (sb_readonly=0)
Oct 11 04:17:12 compute-0 nova_compute[259850]: 2025-10-11 04:17:12.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:12.535 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b6cd64a2-af0b-4f57-b84c-cbc9cde5251d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b6cd64a2-af0b-4f57-b84c-cbc9cde5251d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:12.536 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c2f2d8-8f0d-49ac-8201-9de0f2f098f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:12.537 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: global
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]:     log         /dev/log local0 debug
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]:     log-tag     haproxy-metadata-proxy-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]:     user        root
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]:     group       root
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]:     maxconn     1024
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]:     pidfile     /var/lib/neutron/external/pids/b6cd64a2-af0b-4f57-b84c-cbc9cde5251d.pid.haproxy
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]:     daemon
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: defaults
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]:     log global
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]:     mode http
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]:     option httplog
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]:     option dontlognull
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]:     option http-server-close
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]:     option forwardfor
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]:     retries                 3
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]:     timeout http-request    30s
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]:     timeout connect         30s
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]:     timeout client          32s
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]:     timeout server          32s
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]:     timeout http-keep-alive 30s
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: listen listener
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]:     bind 169.254.169.254:80
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]:     http-request add-header X-OVN-Network-ID b6cd64a2-af0b-4f57-b84c-cbc9cde5251d
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 04:17:12 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:12.538 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'env', 'PROCESS_TAG=haproxy-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b6cd64a2-af0b-4f57-b84c-cbc9cde5251d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 04:17:12 compute-0 nova_compute[259850]: 2025-10-11 04:17:12.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:12 compute-0 nova_compute[259850]: 2025-10-11 04:17:12.901 2 DEBUG nova.network.neutron [req-255ed9c0-7272-43c1-ba83-d4b8de5ea5d6 req-f90d72eb-c858-407b-a485-b6915ae76259 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Updated VIF entry in instance network info cache for port 4c25174b-7eef-47cc-9c7d-618a905c5e5e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:17:12 compute-0 nova_compute[259850]: 2025-10-11 04:17:12.902 2 DEBUG nova.network.neutron [req-255ed9c0-7272-43c1-ba83-d4b8de5ea5d6 req-f90d72eb-c858-407b-a485-b6915ae76259 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Updating instance_info_cache with network_info: [{"id": "4c25174b-7eef-47cc-9c7d-618a905c5e5e", "address": "fa:16:3e:47:b9:1f", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c25174b-7e", "ovs_interfaceid": "4c25174b-7eef-47cc-9c7d-618a905c5e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:17:12 compute-0 nova_compute[259850]: 2025-10-11 04:17:12.942 2 DEBUG oslo_concurrency.lockutils [req-255ed9c0-7272-43c1-ba83-d4b8de5ea5d6 req-f90d72eb-c858-407b-a485-b6915ae76259 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-170beb52-e998-40b5-8315-a0d138f2cbf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:17:12 compute-0 podman[295387]: 2025-10-11 04:17:12.97694485 +0000 UTC m=+0.066818723 container create ee29e926f7bc1ee0604b4caaf9325d82c14839ce500a9f37088f823d4eca38c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:17:13 compute-0 systemd[1]: Started libpod-conmon-ee29e926f7bc1ee0604b4caaf9325d82c14839ce500a9f37088f823d4eca38c5.scope.
Oct 11 04:17:13 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:17:13 compute-0 podman[295387]: 2025-10-11 04:17:12.944260994 +0000 UTC m=+0.034134897 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 04:17:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5663c6b975d2fa8055cc88485a8ea82cb1a0db63bd1d7fb75ba733a3d14075c0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:17:13 compute-0 podman[295387]: 2025-10-11 04:17:13.066011271 +0000 UTC m=+0.155885154 container init ee29e926f7bc1ee0604b4caaf9325d82c14839ce500a9f37088f823d4eca38c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 11 04:17:13 compute-0 podman[295387]: 2025-10-11 04:17:13.070944231 +0000 UTC m=+0.160818094 container start ee29e926f7bc1ee0604b4caaf9325d82c14839ce500a9f37088f823d4eca38c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 11 04:17:13 compute-0 ceph-mon[74273]: pgmap v1605: 305 pgs: 305 active+clean; 134 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 1.8 MiB/s wr, 66 op/s
Oct 11 04:17:13 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[295402]: [NOTICE]   (295406) : New worker (295408) forked
Oct 11 04:17:13 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[295402]: [NOTICE]   (295406) : Loading success.
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.199 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156233.198868, 170beb52-e998-40b5-8315-a0d138f2cbf6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.200 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] VM Started (Lifecycle Event)
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.202 2 DEBUG nova.compute.manager [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.207 2 DEBUG nova.virt.libvirt.driver [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.210 2 INFO nova.virt.libvirt.driver [-] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Instance spawned successfully.
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.211 2 DEBUG nova.virt.libvirt.driver [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.219 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.224 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760156218.2239377, 001be5b3-e842-4242-a6ad-2ccbfa7b39c2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.225 2 INFO nova.compute.manager [-] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] VM Stopped (Lifecycle Event)
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.226 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.237 2 DEBUG nova.virt.libvirt.driver [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.238 2 DEBUG nova.virt.libvirt.driver [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.239 2 DEBUG nova.virt.libvirt.driver [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.239 2 DEBUG nova.virt.libvirt.driver [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.240 2 DEBUG nova.virt.libvirt.driver [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.240 2 DEBUG nova.virt.libvirt.driver [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.267 2 DEBUG nova.compute.manager [None req-c517cdf8-f3f4-4c2e-af64-47ed26f43a24 - - - - - -] [instance: 001be5b3-e842-4242-a6ad-2ccbfa7b39c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.269 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.269 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156233.199257, 170beb52-e998-40b5-8315-a0d138f2cbf6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.270 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] VM Paused (Lifecycle Event)
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.305 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.309 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156233.206334, 170beb52-e998-40b5-8315-a0d138f2cbf6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.309 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] VM Resumed (Lifecycle Event)
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.313 2 INFO nova.compute.manager [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Took 4.84 seconds to spawn the instance on the hypervisor.
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.314 2 DEBUG nova.compute.manager [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.351 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.355 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.393 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.408 2 INFO nova.compute.manager [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Took 7.10 seconds to build instance.
Oct 11 04:17:13 compute-0 nova_compute[259850]: 2025-10-11 04:17:13.426 2 DEBUG oslo_concurrency.lockutils [None req-02339ba4-a927-4180-be31-b844e39b3179 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "170beb52-e998-40b5-8315-a0d138f2cbf6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:17:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 305 active+clean; 134 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 83 op/s
Oct 11 04:17:14 compute-0 nova_compute[259850]: 2025-10-11 04:17:14.608 2 DEBUG nova.compute.manager [req-63af2eaa-5de9-4f9d-b089-0861a02627a4 req-c5055b84-af9f-4956-bba4-f786097a8f2b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Received event network-vif-plugged-4c25174b-7eef-47cc-9c7d-618a905c5e5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:17:14 compute-0 nova_compute[259850]: 2025-10-11 04:17:14.609 2 DEBUG oslo_concurrency.lockutils [req-63af2eaa-5de9-4f9d-b089-0861a02627a4 req-c5055b84-af9f-4956-bba4-f786097a8f2b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "170beb52-e998-40b5-8315-a0d138f2cbf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:17:14 compute-0 nova_compute[259850]: 2025-10-11 04:17:14.610 2 DEBUG oslo_concurrency.lockutils [req-63af2eaa-5de9-4f9d-b089-0861a02627a4 req-c5055b84-af9f-4956-bba4-f786097a8f2b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "170beb52-e998-40b5-8315-a0d138f2cbf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:17:14 compute-0 nova_compute[259850]: 2025-10-11 04:17:14.610 2 DEBUG oslo_concurrency.lockutils [req-63af2eaa-5de9-4f9d-b089-0861a02627a4 req-c5055b84-af9f-4956-bba4-f786097a8f2b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "170beb52-e998-40b5-8315-a0d138f2cbf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:17:14 compute-0 nova_compute[259850]: 2025-10-11 04:17:14.611 2 DEBUG nova.compute.manager [req-63af2eaa-5de9-4f9d-b089-0861a02627a4 req-c5055b84-af9f-4956-bba4-f786097a8f2b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] No waiting events found dispatching network-vif-plugged-4c25174b-7eef-47cc-9c7d-618a905c5e5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:17:14 compute-0 nova_compute[259850]: 2025-10-11 04:17:14.611 2 WARNING nova.compute.manager [req-63af2eaa-5de9-4f9d-b089-0861a02627a4 req-c5055b84-af9f-4956-bba4-f786097a8f2b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Received unexpected event network-vif-plugged-4c25174b-7eef-47cc-9c7d-618a905c5e5e for instance with vm_state active and task_state None.
Oct 11 04:17:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:17:15 compute-0 ceph-mon[74273]: pgmap v1606: 305 pgs: 305 active+clean; 134 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 83 op/s
Oct 11 04:17:15 compute-0 podman[295417]: 2025-10-11 04:17:15.381677873 +0000 UTC m=+0.086943773 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Oct 11 04:17:15 compute-0 podman[295418]: 2025-10-11 04:17:15.398789777 +0000 UTC m=+0.092332565 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2)
Oct 11 04:17:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 134 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 13 KiB/s wr, 20 op/s
Oct 11 04:17:15 compute-0 nova_compute[259850]: 2025-10-11 04:17:15.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:16 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:17:16 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 26K writes, 101K keys, 26K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s
                                           Cumulative WAL: 26K writes, 9708 syncs, 2.69 writes per sync, written: 0.07 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 14K writes, 54K keys, 14K commit groups, 1.0 writes per commit group, ingest: 39.04 MB, 0.07 MB/s
                                           Interval WAL: 14K writes, 6082 syncs, 2.37 writes per sync, written: 0.04 GB, 0.07 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 04:17:17 compute-0 ceph-mon[74273]: pgmap v1607: 305 pgs: 305 active+clean; 134 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 13 KiB/s wr, 20 op/s
Oct 11 04:17:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 305 active+clean; 134 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 13 KiB/s wr, 24 op/s
Oct 11 04:17:18 compute-0 nova_compute[259850]: 2025-10-11 04:17:18.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:18 compute-0 nova_compute[259850]: 2025-10-11 04:17:18.432 2 DEBUG nova.compute.manager [req-1eff914e-acc6-43f3-9d3d-1e236420b845 req-45925f7e-d0f7-4dac-a514-46bdbae19d01 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Received event network-changed-4c25174b-7eef-47cc-9c7d-618a905c5e5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:17:18 compute-0 nova_compute[259850]: 2025-10-11 04:17:18.432 2 DEBUG nova.compute.manager [req-1eff914e-acc6-43f3-9d3d-1e236420b845 req-45925f7e-d0f7-4dac-a514-46bdbae19d01 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Refreshing instance network info cache due to event network-changed-4c25174b-7eef-47cc-9c7d-618a905c5e5e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:17:18 compute-0 nova_compute[259850]: 2025-10-11 04:17:18.433 2 DEBUG oslo_concurrency.lockutils [req-1eff914e-acc6-43f3-9d3d-1e236420b845 req-45925f7e-d0f7-4dac-a514-46bdbae19d01 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-170beb52-e998-40b5-8315-a0d138f2cbf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:17:18 compute-0 nova_compute[259850]: 2025-10-11 04:17:18.433 2 DEBUG oslo_concurrency.lockutils [req-1eff914e-acc6-43f3-9d3d-1e236420b845 req-45925f7e-d0f7-4dac-a514-46bdbae19d01 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-170beb52-e998-40b5-8315-a0d138f2cbf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:17:18 compute-0 nova_compute[259850]: 2025-10-11 04:17:18.433 2 DEBUG nova.network.neutron [req-1eff914e-acc6-43f3-9d3d-1e236420b845 req-45925f7e-d0f7-4dac-a514-46bdbae19d01 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Refreshing network info cache for port 4c25174b-7eef-47cc-9c7d-618a905c5e5e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:17:19 compute-0 ceph-mon[74273]: pgmap v1608: 305 pgs: 305 active+clean; 134 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 13 KiB/s wr, 24 op/s
Oct 11 04:17:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 305 active+clean; 134 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 35 KiB/s wr, 89 op/s
Oct 11 04:17:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:17:20 compute-0 nova_compute[259850]: 2025-10-11 04:17:20.431 2 DEBUG nova.network.neutron [req-1eff914e-acc6-43f3-9d3d-1e236420b845 req-45925f7e-d0f7-4dac-a514-46bdbae19d01 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Updated VIF entry in instance network info cache for port 4c25174b-7eef-47cc-9c7d-618a905c5e5e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:17:20 compute-0 nova_compute[259850]: 2025-10-11 04:17:20.432 2 DEBUG nova.network.neutron [req-1eff914e-acc6-43f3-9d3d-1e236420b845 req-45925f7e-d0f7-4dac-a514-46bdbae19d01 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Updating instance_info_cache with network_info: [{"id": "4c25174b-7eef-47cc-9c7d-618a905c5e5e", "address": "fa:16:3e:47:b9:1f", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c25174b-7e", "ovs_interfaceid": "4c25174b-7eef-47cc-9c7d-618a905c5e5e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:17:20 compute-0 nova_compute[259850]: 2025-10-11 04:17:20.453 2 DEBUG oslo_concurrency.lockutils [req-1eff914e-acc6-43f3-9d3d-1e236420b845 req-45925f7e-d0f7-4dac-a514-46bdbae19d01 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-170beb52-e998-40b5-8315-a0d138f2cbf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:17:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:17:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:17:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:17:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:17:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:17:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:17:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:17:20
Oct 11 04:17:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:17:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:17:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['images', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'backups', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'vms', '.rgw.root', 'cephfs.cephfs.data']
Oct 11 04:17:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:17:20 compute-0 nova_compute[259850]: 2025-10-11 04:17:20.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:17:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:17:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:17:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:17:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:17:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:17:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:17:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:17:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:17:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:17:21 compute-0 ceph-mon[74273]: pgmap v1609: 305 pgs: 305 active+clean; 134 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 35 KiB/s wr, 89 op/s
Oct 11 04:17:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 134 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 34 KiB/s wr, 85 op/s
Oct 11 04:17:21 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:17:21 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 24K writes, 98K keys, 24K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 24K writes, 8978 syncs, 2.77 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 12K writes, 46K keys, 12K commit groups, 1.0 writes per commit group, ingest: 27.82 MB, 0.05 MB/s
                                           Interval WAL: 12K writes, 5303 syncs, 2.33 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 04:17:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:22.967 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:17:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:22.968 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:17:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:22.969 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:17:23 compute-0 ceph-mon[74273]: pgmap v1610: 305 pgs: 305 active+clean; 134 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 34 KiB/s wr, 85 op/s
Oct 11 04:17:23 compute-0 nova_compute[259850]: 2025-10-11 04:17:23.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 134 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 34 KiB/s wr, 86 op/s
Oct 11 04:17:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:17:25 compute-0 nova_compute[259850]: 2025-10-11 04:17:25.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:17:25 compute-0 nova_compute[259850]: 2025-10-11 04:17:25.059 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:17:25 compute-0 ceph-mon[74273]: pgmap v1611: 305 pgs: 305 active+clean; 134 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 34 KiB/s wr, 86 op/s
Oct 11 04:17:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1612: 305 pgs: 305 active+clean; 134 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 69 op/s
Oct 11 04:17:25 compute-0 ovn_controller[152025]: 2025-10-11T04:17:25Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:47:b9:1f 10.100.0.5
Oct 11 04:17:25 compute-0 ovn_controller[152025]: 2025-10-11T04:17:25Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:47:b9:1f 10.100.0.5
Oct 11 04:17:25 compute-0 nova_compute[259850]: 2025-10-11 04:17:25.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:27 compute-0 ceph-mon[74273]: pgmap v1612: 305 pgs: 305 active+clean; 134 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 69 op/s
Oct 11 04:17:27 compute-0 ceph-osd[89722]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:17:27 compute-0 ceph-osd[89722]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 20K writes, 85K keys, 20K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 20K writes, 7367 syncs, 2.85 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 26.41 MB, 0.04 MB/s
                                           Interval WAL: 10K writes, 4263 syncs, 2.38 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 04:17:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 305 active+clean; 138 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 57 KiB/s wr, 70 op/s
Oct 11 04:17:28 compute-0 ceph-mgr[74563]: [devicehealth INFO root] Check health
Oct 11 04:17:28 compute-0 nova_compute[259850]: 2025-10-11 04:17:28.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:17:28 compute-0 nova_compute[259850]: 2025-10-11 04:17:28.061 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:17:28 compute-0 nova_compute[259850]: 2025-10-11 04:17:28.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:28 compute-0 podman[295454]: 2025-10-11 04:17:28.495499901 +0000 UTC m=+0.189384103 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:17:29 compute-0 nova_compute[259850]: 2025-10-11 04:17:29.055 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:17:29 compute-0 nova_compute[259850]: 2025-10-11 04:17:29.058 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:17:29 compute-0 nova_compute[259850]: 2025-10-11 04:17:29.059 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:17:29 compute-0 nova_compute[259850]: 2025-10-11 04:17:29.102 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:17:29 compute-0 nova_compute[259850]: 2025-10-11 04:17:29.103 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:17:29 compute-0 nova_compute[259850]: 2025-10-11 04:17:29.137 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:17:29 compute-0 nova_compute[259850]: 2025-10-11 04:17:29.138 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:17:29 compute-0 nova_compute[259850]: 2025-10-11 04:17:29.139 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:17:29 compute-0 nova_compute[259850]: 2025-10-11 04:17:29.139 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:17:29 compute-0 nova_compute[259850]: 2025-10-11 04:17:29.140 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:17:29 compute-0 ceph-mon[74273]: pgmap v1613: 305 pgs: 305 active+clean; 138 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 57 KiB/s wr, 70 op/s
Oct 11 04:17:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:17:29 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/313347499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:17:29 compute-0 nova_compute[259850]: 2025-10-11 04:17:29.649 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:17:29 compute-0 nova_compute[259850]: 2025-10-11 04:17:29.726 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:17:29 compute-0 nova_compute[259850]: 2025-10-11 04:17:29.727 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:17:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 305 active+clean; 167 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 128 op/s
Oct 11 04:17:29 compute-0 nova_compute[259850]: 2025-10-11 04:17:29.941 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:17:29 compute-0 nova_compute[259850]: 2025-10-11 04:17:29.942 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4186MB free_disk=59.98813247680664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:17:29 compute-0 nova_compute[259850]: 2025-10-11 04:17:29.942 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:17:29 compute-0 nova_compute[259850]: 2025-10-11 04:17:29.943 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:17:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:17:30 compute-0 nova_compute[259850]: 2025-10-11 04:17:30.019 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Instance 170beb52-e998-40b5-8315-a0d138f2cbf6 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 04:17:30 compute-0 nova_compute[259850]: 2025-10-11 04:17:30.019 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:17:30 compute-0 nova_compute[259850]: 2025-10-11 04:17:30.020 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:17:30 compute-0 nova_compute[259850]: 2025-10-11 04:17:30.056 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:17:30 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/313347499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:17:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:17:30 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4151692798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:17:30 compute-0 nova_compute[259850]: 2025-10-11 04:17:30.590 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:17:30 compute-0 nova_compute[259850]: 2025-10-11 04:17:30.600 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:17:30 compute-0 nova_compute[259850]: 2025-10-11 04:17:30.621 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:17:30 compute-0 nova_compute[259850]: 2025-10-11 04:17:30.646 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:17:30 compute-0 nova_compute[259850]: 2025-10-11 04:17:30.647 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:17:30 compute-0 nova_compute[259850]: 2025-10-11 04:17:30.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:31 compute-0 ceph-mon[74273]: pgmap v1614: 305 pgs: 305 active+clean; 167 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 128 op/s
Oct 11 04:17:31 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4151692798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:17:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:17:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:17:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:17:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:17:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.480037605000977e-06 of space, bias 1.0, pg target 0.0007440112815002931 quantized to 32 (current 32)
Oct 11 04:17:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:17:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.001105206501920948 of space, bias 1.0, pg target 0.3315619505762844 quantized to 32 (current 32)
Oct 11 04:17:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:17:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:17:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:17:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct 11 04:17:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:17:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:17:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:17:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:17:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:17:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:17:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:17:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:17:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:17:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:17:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:17:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:17:31 compute-0 nova_compute[259850]: 2025-10-11 04:17:31.604 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:17:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 305 active+clean; 167 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 04:17:32 compute-0 nova_compute[259850]: 2025-10-11 04:17:32.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:17:32 compute-0 podman[295525]: 2025-10-11 04:17:32.384788055 +0000 UTC m=+0.086494899 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, 
container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 11 04:17:33 compute-0 ceph-mon[74273]: pgmap v1615: 305 pgs: 305 active+clean; 167 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 04:17:33 compute-0 nova_compute[259850]: 2025-10-11 04:17:33.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 281 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 11 MiB/s wr, 108 op/s
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.351 2 DEBUG oslo_concurrency.lockutils [None req-02c5b278-a6d4-4776-8278-78077b1ab3e7 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "170beb52-e998-40b5-8315-a0d138f2cbf6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.352 2 DEBUG oslo_concurrency.lockutils [None req-02c5b278-a6d4-4776-8278-78077b1ab3e7 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "170beb52-e998-40b5-8315-a0d138f2cbf6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.352 2 DEBUG oslo_concurrency.lockutils [None req-02c5b278-a6d4-4776-8278-78077b1ab3e7 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "170beb52-e998-40b5-8315-a0d138f2cbf6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.353 2 DEBUG oslo_concurrency.lockutils [None req-02c5b278-a6d4-4776-8278-78077b1ab3e7 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "170beb52-e998-40b5-8315-a0d138f2cbf6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.353 2 DEBUG oslo_concurrency.lockutils [None req-02c5b278-a6d4-4776-8278-78077b1ab3e7 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "170beb52-e998-40b5-8315-a0d138f2cbf6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.356 2 INFO nova.compute.manager [None req-02c5b278-a6d4-4776-8278-78077b1ab3e7 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Terminating instance
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.358 2 DEBUG nova.compute.manager [None req-02c5b278-a6d4-4776-8278-78077b1ab3e7 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:17:34 compute-0 kernel: tap4c25174b-7e (unregistering): left promiscuous mode
Oct 11 04:17:34 compute-0 NetworkManager[44920]: <info>  [1760156254.4177] device (tap4c25174b-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:34 compute-0 ovn_controller[152025]: 2025-10-11T04:17:34Z|00227|binding|INFO|Releasing lport 4c25174b-7eef-47cc-9c7d-618a905c5e5e from this chassis (sb_readonly=0)
Oct 11 04:17:34 compute-0 ovn_controller[152025]: 2025-10-11T04:17:34Z|00228|binding|INFO|Setting lport 4c25174b-7eef-47cc-9c7d-618a905c5e5e down in Southbound
Oct 11 04:17:34 compute-0 ovn_controller[152025]: 2025-10-11T04:17:34Z|00229|binding|INFO|Removing iface tap4c25174b-7e ovn-installed in OVS
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:34 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:34.481 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:b9:1f 10.100.0.5'], port_security=['fa:16:3e:47:b9:1f 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '170beb52-e998-40b5-8315-a0d138f2cbf6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09ba33ef4bd447699d74946c58839b2d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '802c56f7-efb1-44ec-9107-b20b0a13ea5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.176'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27b77226-c1f8-485e-969b-bae9a3bf7ceb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=4c25174b-7eef-47cc-9c7d-618a905c5e5e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:17:34 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:34.482 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 4c25174b-7eef-47cc-9c7d-618a905c5e5e in datapath b6cd64a2-af0b-4f57-b84c-cbc9cde5251d unbound from our chassis
Oct 11 04:17:34 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:34.483 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:17:34 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:34.484 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[d08b7e65-b305-4c83-9dca-57d706a39de8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:34 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:34.485 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d namespace which is not needed anymore
Oct 11 04:17:34 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000016.scope: Deactivated successfully.
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:34 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000016.scope: Consumed 12.757s CPU time.
Oct 11 04:17:34 compute-0 systemd-machined[214869]: Machine qemu-22-instance-00000016 terminated.
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.597 2 INFO nova.virt.libvirt.driver [-] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Instance destroyed successfully.
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.599 2 DEBUG nova.objects.instance [None req-02c5b278-a6d4-4776-8278-78077b1ab3e7 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lazy-loading 'resources' on Instance uuid 170beb52-e998-40b5-8315-a0d138f2cbf6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.614 2 DEBUG nova.virt.libvirt.vif [None req-02c5b278-a6d4-4776-8278-78077b1ab3e7 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:17:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-820503556',display_name='tempest-TestVolumeBootPattern-server-820503556',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-820503556',id=22,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPDNAGL8Dkg4WTlPf45cAzyjNlMaZ9CdFtcbPahhttGWfFDtL3wJAU2pqWIpDJ427A+TFzstq4HW+M8hdPFbiZnk9MFQHh3rRb7amRkcTpIWOFEgpDmf92zhQgzfL3p2ZA==',key_name='tempest-TestVolumeBootPattern-2018721323',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:17:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='09ba33ef4bd447699d74946c58839b2d',ramdisk_id='',reservation_id='r-s193x3ob',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-771726270',owner_user_name='tempest-TestVolumeBootPattern-771726270-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:17:13Z,user_data=None,user_id='2a330a845d62440c871f80eda2546881',uuid=170beb52-e998-40b5-8315-a0d138f2cbf6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4c25174b-7eef-47cc-9c7d-618a905c5e5e", "address": "fa:16:3e:47:b9:1f", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c25174b-7e", "ovs_interfaceid": "4c25174b-7eef-47cc-9c7d-618a905c5e5e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.614 2 DEBUG nova.network.os_vif_util [None req-02c5b278-a6d4-4776-8278-78077b1ab3e7 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converting VIF {"id": "4c25174b-7eef-47cc-9c7d-618a905c5e5e", "address": "fa:16:3e:47:b9:1f", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c25174b-7e", "ovs_interfaceid": "4c25174b-7eef-47cc-9c7d-618a905c5e5e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.615 2 DEBUG nova.network.os_vif_util [None req-02c5b278-a6d4-4776-8278-78077b1ab3e7 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:47:b9:1f,bridge_name='br-int',has_traffic_filtering=True,id=4c25174b-7eef-47cc-9c7d-618a905c5e5e,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c25174b-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.615 2 DEBUG os_vif [None req-02c5b278-a6d4-4776-8278-78077b1ab3e7 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:b9:1f,bridge_name='br-int',has_traffic_filtering=True,id=4c25174b-7eef-47cc-9c7d-618a905c5e5e,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c25174b-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.617 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c25174b-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.623 2 INFO os_vif [None req-02c5b278-a6d4-4776-8278-78077b1ab3e7 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:b9:1f,bridge_name='br-int',has_traffic_filtering=True,id=4c25174b-7eef-47cc-9c7d-618a905c5e5e,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c25174b-7e')
Oct 11 04:17:34 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[295402]: [NOTICE]   (295406) : haproxy version is 2.8.14-c23fe91
Oct 11 04:17:34 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[295402]: [NOTICE]   (295406) : path to executable is /usr/sbin/haproxy
Oct 11 04:17:34 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[295402]: [WARNING]  (295406) : Exiting Master process...
Oct 11 04:17:34 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[295402]: [ALERT]    (295406) : Current worker (295408) exited with code 143 (Terminated)
Oct 11 04:17:34 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[295402]: [WARNING]  (295406) : All workers exited. Exiting... (0)
Oct 11 04:17:34 compute-0 systemd[1]: libpod-ee29e926f7bc1ee0604b4caaf9325d82c14839ce500a9f37088f823d4eca38c5.scope: Deactivated successfully.
Oct 11 04:17:34 compute-0 podman[295571]: 2025-10-11 04:17:34.643448203 +0000 UTC m=+0.050826930 container died ee29e926f7bc1ee0604b4caaf9325d82c14839ce500a9f37088f823d4eca38c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2)
Oct 11 04:17:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-5663c6b975d2fa8055cc88485a8ea82cb1a0db63bd1d7fb75ba733a3d14075c0-merged.mount: Deactivated successfully.
Oct 11 04:17:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ee29e926f7bc1ee0604b4caaf9325d82c14839ce500a9f37088f823d4eca38c5-userdata-shm.mount: Deactivated successfully.
Oct 11 04:17:34 compute-0 podman[295571]: 2025-10-11 04:17:34.682934941 +0000 UTC m=+0.090313678 container cleanup ee29e926f7bc1ee0604b4caaf9325d82c14839ce500a9f37088f823d4eca38c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.696 2 DEBUG nova.compute.manager [req-e8f081b5-7371-4546-9966-3e3928265a40 req-b77e7831-bfee-4d0e-9777-9b62887a00a6 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Received event network-vif-unplugged-4c25174b-7eef-47cc-9c7d-618a905c5e5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.696 2 DEBUG oslo_concurrency.lockutils [req-e8f081b5-7371-4546-9966-3e3928265a40 req-b77e7831-bfee-4d0e-9777-9b62887a00a6 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "170beb52-e998-40b5-8315-a0d138f2cbf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.697 2 DEBUG oslo_concurrency.lockutils [req-e8f081b5-7371-4546-9966-3e3928265a40 req-b77e7831-bfee-4d0e-9777-9b62887a00a6 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "170beb52-e998-40b5-8315-a0d138f2cbf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.697 2 DEBUG oslo_concurrency.lockutils [req-e8f081b5-7371-4546-9966-3e3928265a40 req-b77e7831-bfee-4d0e-9777-9b62887a00a6 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "170beb52-e998-40b5-8315-a0d138f2cbf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.698 2 DEBUG nova.compute.manager [req-e8f081b5-7371-4546-9966-3e3928265a40 req-b77e7831-bfee-4d0e-9777-9b62887a00a6 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] No waiting events found dispatching network-vif-unplugged-4c25174b-7eef-47cc-9c7d-618a905c5e5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:17:34 compute-0 systemd[1]: libpod-conmon-ee29e926f7bc1ee0604b4caaf9325d82c14839ce500a9f37088f823d4eca38c5.scope: Deactivated successfully.
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.698 2 DEBUG nova.compute.manager [req-e8f081b5-7371-4546-9966-3e3928265a40 req-b77e7831-bfee-4d0e-9777-9b62887a00a6 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Received event network-vif-unplugged-4c25174b-7eef-47cc-9c7d-618a905c5e5e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 04:17:34 compute-0 podman[295624]: 2025-10-11 04:17:34.759220251 +0000 UTC m=+0.051475959 container remove ee29e926f7bc1ee0604b4caaf9325d82c14839ce500a9f37088f823d4eca38c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:17:34 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:34.766 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[a95b3734-b2b2-4d36-8f29-54056c57104a]: (4, ('Sat Oct 11 04:17:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d (ee29e926f7bc1ee0604b4caaf9325d82c14839ce500a9f37088f823d4eca38c5)\nee29e926f7bc1ee0604b4caaf9325d82c14839ce500a9f37088f823d4eca38c5\nSat Oct 11 04:17:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d (ee29e926f7bc1ee0604b4caaf9325d82c14839ce500a9f37088f823d4eca38c5)\nee29e926f7bc1ee0604b4caaf9325d82c14839ce500a9f37088f823d4eca38c5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:34 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:34.768 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[e9e5e23f-c58f-462c-ac1d-7cefab5b9f32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:34 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:34.769 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6cd64a2-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:34 compute-0 kernel: tapb6cd64a2-a0: left promiscuous mode
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:34 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:34.777 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[a2a1ea29-e645-4b5e-b74c-c7e50bad4631]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:34 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:34.811 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[05c15745-f0cb-4048-9191-b6d12a2e21d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:34 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:34.812 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[50de7978-0296-4fbf-ac71-7d556b95c7f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:34 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:34.832 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[2513ffe8-d01b-4574-8632-a28754a3a013]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461733, 'reachable_time': 21223, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295639, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:34 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:34.834 162015 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 04:17:34 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:34.834 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[3f1b7dd8-8141-451f-8013-ea76135322ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:34 compute-0 systemd[1]: run-netns-ovnmeta\x2db6cd64a2\x2daf0b\x2d4f57\x2db84c\x2dcbc9cde5251d.mount: Deactivated successfully.
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.841 2 INFO nova.virt.libvirt.driver [None req-02c5b278-a6d4-4776-8278-78077b1ab3e7 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Deleting instance files /var/lib/nova/instances/170beb52-e998-40b5-8315-a0d138f2cbf6_del
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.842 2 INFO nova.virt.libvirt.driver [None req-02c5b278-a6d4-4776-8278-78077b1ab3e7 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Deletion of /var/lib/nova/instances/170beb52-e998-40b5-8315-a0d138f2cbf6_del complete
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.854 2 DEBUG oslo_concurrency.lockutils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "a3d4ef44-fada-41fc-9a12-641bff0536a4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.855 2 DEBUG oslo_concurrency.lockutils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "a3d4ef44-fada-41fc-9a12-641bff0536a4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.883 2 DEBUG nova.compute.manager [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.905 2 INFO nova.compute.manager [None req-02c5b278-a6d4-4776-8278-78077b1ab3e7 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Took 0.55 seconds to destroy the instance on the hypervisor.
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.905 2 DEBUG oslo.service.loopingcall [None req-02c5b278-a6d4-4776-8278-78077b1ab3e7 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.906 2 DEBUG nova.compute.manager [-] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.906 2 DEBUG nova.network.neutron [-] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.959 2 DEBUG oslo_concurrency.lockutils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.959 2 DEBUG oslo_concurrency.lockutils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.967 2 DEBUG nova.virt.hardware [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:17:34 compute-0 nova_compute[259850]: 2025-10-11 04:17:34.968 2 INFO nova.compute.claims [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:17:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:17:35 compute-0 nova_compute[259850]: 2025-10-11 04:17:35.080 2 DEBUG oslo_concurrency.processutils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:17:35 compute-0 ceph-mon[74273]: pgmap v1616: 305 pgs: 305 active+clean; 281 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 11 MiB/s wr, 108 op/s
Oct 11 04:17:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:17:35 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3665525448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:17:35 compute-0 nova_compute[259850]: 2025-10-11 04:17:35.548 2 DEBUG oslo_concurrency.processutils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:17:35 compute-0 nova_compute[259850]: 2025-10-11 04:17:35.554 2 DEBUG nova.compute.provider_tree [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:17:35 compute-0 nova_compute[259850]: 2025-10-11 04:17:35.577 2 DEBUG nova.scheduler.client.report [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:17:35 compute-0 nova_compute[259850]: 2025-10-11 04:17:35.613 2 DEBUG oslo_concurrency.lockutils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:17:35 compute-0 nova_compute[259850]: 2025-10-11 04:17:35.614 2 DEBUG nova.compute.manager [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:17:35 compute-0 nova_compute[259850]: 2025-10-11 04:17:35.689 2 DEBUG nova.compute.manager [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:17:35 compute-0 nova_compute[259850]: 2025-10-11 04:17:35.690 2 DEBUG nova.network.neutron [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:17:35 compute-0 nova_compute[259850]: 2025-10-11 04:17:35.718 2 INFO nova.virt.libvirt.driver [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:17:35 compute-0 nova_compute[259850]: 2025-10-11 04:17:35.738 2 DEBUG nova.compute.manager [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:17:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1617: 305 pgs: 305 active+clean; 281 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 350 KiB/s rd, 11 MiB/s wr, 107 op/s
Oct 11 04:17:35 compute-0 nova_compute[259850]: 2025-10-11 04:17:35.787 2 INFO nova.virt.block_device [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Booting with volume b02ce934-9de7-422d-b3ba-5ade72993920 at /dev/vda
Oct 11 04:17:35 compute-0 sudo[295663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:17:35 compute-0 sudo[295663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:17:35 compute-0 sudo[295663]: pam_unix(sudo:session): session closed for user root
Oct 11 04:17:35 compute-0 sudo[295688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:17:35 compute-0 sudo[295688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:17:35 compute-0 sudo[295688]: pam_unix(sudo:session): session closed for user root
Oct 11 04:17:35 compute-0 nova_compute[259850]: 2025-10-11 04:17:35.921 2 DEBUG nova.network.neutron [-] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:17:35 compute-0 nova_compute[259850]: 2025-10-11 04:17:35.949 2 INFO nova.compute.manager [-] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Took 1.04 seconds to deallocate network for instance.
Oct 11 04:17:35 compute-0 sudo[295713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:17:35 compute-0 sudo[295713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:17:35 compute-0 sudo[295713]: pam_unix(sudo:session): session closed for user root
Oct 11 04:17:35 compute-0 nova_compute[259850]: 2025-10-11 04:17:35.997 2 DEBUG nova.policy [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '77d11e860ca1460cab1c20bca4d4c0ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bfcc78a613a4442d88231798d10634c9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:17:36 compute-0 sudo[295738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:17:36 compute-0 sudo[295738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.058 2 DEBUG os_brick.utils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.059 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.073 675 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.074 675 DEBUG oslo.privsep.daemon [-] privsep: reply[a04fd02e-9eea-43dd-9d1b-a7a2e6adc5a8]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.075 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.087 675 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.087 675 DEBUG oslo.privsep.daemon [-] privsep: reply[c9f3928a-6c11-407c-b76d-3f60ad043607]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e727c2bd432c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.090 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.104 675 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.104 675 DEBUG oslo.privsep.daemon [-] privsep: reply[50a1f789-1814-4e32-97de-222b4ec075ad]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.106 675 DEBUG oslo.privsep.daemon [-] privsep: reply[c9f7345d-4751-487d-a3aa-2e08836b8820]: (4, 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.106 2 DEBUG oslo_concurrency.processutils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.143 2 DEBUG oslo_concurrency.processutils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CMD "nvme version" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.149 2 INFO nova.compute.manager [None req-02c5b278-a6d4-4776-8278-78077b1ab3e7 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Took 0.20 seconds to detach 1 volumes for instance.
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.157 2 DEBUG os_brick.initiator.connectors.lightos [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.158 2 DEBUG os_brick.initiator.connectors.lightos [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.158 2 DEBUG os_brick.initiator.connectors.lightos [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.159 2 DEBUG os_brick.utils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] <== get_connector_properties: return (100ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e727c2bd432c', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.159 2 DEBUG nova.virt.block_device [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Updating existing volume attachment record: aabb3d08-3f0a-45f7-b3d7-015e23bd80f0 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:17:36 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3665525448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.231 2 DEBUG oslo_concurrency.lockutils [None req-02c5b278-a6d4-4776-8278-78077b1ab3e7 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.231 2 DEBUG oslo_concurrency.lockutils [None req-02c5b278-a6d4-4776-8278-78077b1ab3e7 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.307 2 DEBUG oslo_concurrency.processutils [None req-02c5b278-a6d4-4776-8278-78077b1ab3e7 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:17:36 compute-0 sudo[295738]: pam_unix(sudo:session): session closed for user root
Oct 11 04:17:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:17:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:17:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:17:36 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:17:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:17:36 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:17:36 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev e11c8333-ec30-4c12-b452-0277ce418227 does not exist
Oct 11 04:17:36 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 4cc8b3cd-eb48-4ebc-a8bf-956a68330a01 does not exist
Oct 11 04:17:36 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 25c3ed16-5220-4472-9099-b2177b8887dd does not exist
Oct 11 04:17:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:17:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:17:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:17:36 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:17:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:17:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:17:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:17:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1060736512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.785 2 DEBUG oslo_concurrency.processutils [None req-02c5b278-a6d4-4776-8278-78077b1ab3e7 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:17:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:17:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2718697562' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.795 2 DEBUG nova.compute.provider_tree [None req-02c5b278-a6d4-4776-8278-78077b1ab3e7 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.801 2 DEBUG nova.network.neutron [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Successfully created port: 86c2cef2-4a07-459b-8237-e7fda4a39f81 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.814 2 DEBUG nova.compute.manager [req-a14a0c3b-ea68-4924-91cc-814fe5d40542 req-aaebdb4b-52f2-48f9-b08c-128355a12eb8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Received event network-vif-plugged-4c25174b-7eef-47cc-9c7d-618a905c5e5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.814 2 DEBUG oslo_concurrency.lockutils [req-a14a0c3b-ea68-4924-91cc-814fe5d40542 req-aaebdb4b-52f2-48f9-b08c-128355a12eb8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "170beb52-e998-40b5-8315-a0d138f2cbf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:17:36 compute-0 sudo[295821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.815 2 DEBUG oslo_concurrency.lockutils [req-a14a0c3b-ea68-4924-91cc-814fe5d40542 req-aaebdb4b-52f2-48f9-b08c-128355a12eb8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "170beb52-e998-40b5-8315-a0d138f2cbf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.815 2 DEBUG oslo_concurrency.lockutils [req-a14a0c3b-ea68-4924-91cc-814fe5d40542 req-aaebdb4b-52f2-48f9-b08c-128355a12eb8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "170beb52-e998-40b5-8315-a0d138f2cbf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.816 2 DEBUG nova.compute.manager [req-a14a0c3b-ea68-4924-91cc-814fe5d40542 req-aaebdb4b-52f2-48f9-b08c-128355a12eb8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] No waiting events found dispatching network-vif-plugged-4c25174b-7eef-47cc-9c7d-618a905c5e5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.816 2 WARNING nova.compute.manager [req-a14a0c3b-ea68-4924-91cc-814fe5d40542 req-aaebdb4b-52f2-48f9-b08c-128355a12eb8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Received unexpected event network-vif-plugged-4c25174b-7eef-47cc-9c7d-618a905c5e5e for instance with vm_state deleted and task_state None.
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.817 2 DEBUG nova.compute.manager [req-a14a0c3b-ea68-4924-91cc-814fe5d40542 req-aaebdb4b-52f2-48f9-b08c-128355a12eb8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Received event network-vif-deleted-4c25174b-7eef-47cc-9c7d-618a905c5e5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:17:36 compute-0 sudo[295821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.822 2 DEBUG nova.scheduler.client.report [None req-02c5b278-a6d4-4776-8278-78077b1ab3e7 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:17:36 compute-0 sudo[295821]: pam_unix(sudo:session): session closed for user root
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.861 2 DEBUG oslo_concurrency.lockutils [None req-02c5b278-a6d4-4776-8278-78077b1ab3e7 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.890 2 INFO nova.scheduler.client.report [None req-02c5b278-a6d4-4776-8278-78077b1ab3e7 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Deleted allocations for instance 170beb52-e998-40b5-8315-a0d138f2cbf6
Oct 11 04:17:36 compute-0 sudo[295848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:17:36 compute-0 sudo[295848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:17:36 compute-0 sudo[295848]: pam_unix(sudo:session): session closed for user root
Oct 11 04:17:36 compute-0 nova_compute[259850]: 2025-10-11 04:17:36.984 2 DEBUG oslo_concurrency.lockutils [None req-02c5b278-a6d4-4776-8278-78077b1ab3e7 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "170beb52-e998-40b5-8315-a0d138f2cbf6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:17:37 compute-0 sudo[295873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:17:37 compute-0 sudo[295873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:17:37 compute-0 sudo[295873]: pam_unix(sudo:session): session closed for user root
Oct 11 04:17:37 compute-0 sudo[295898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:17:37 compute-0 sudo[295898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:17:37 compute-0 nova_compute[259850]: 2025-10-11 04:17:37.165 2 DEBUG nova.compute.manager [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:17:37 compute-0 nova_compute[259850]: 2025-10-11 04:17:37.168 2 DEBUG nova.virt.libvirt.driver [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:17:37 compute-0 nova_compute[259850]: 2025-10-11 04:17:37.168 2 INFO nova.virt.libvirt.driver [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Creating image(s)
Oct 11 04:17:37 compute-0 nova_compute[259850]: 2025-10-11 04:17:37.169 2 DEBUG nova.virt.libvirt.driver [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 11 04:17:37 compute-0 nova_compute[259850]: 2025-10-11 04:17:37.170 2 DEBUG nova.virt.libvirt.driver [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Ensure instance console log exists: /var/lib/nova/instances/a3d4ef44-fada-41fc-9a12-641bff0536a4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:17:37 compute-0 nova_compute[259850]: 2025-10-11 04:17:37.170 2 DEBUG oslo_concurrency.lockutils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:17:37 compute-0 nova_compute[259850]: 2025-10-11 04:17:37.171 2 DEBUG oslo_concurrency.lockutils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:17:37 compute-0 nova_compute[259850]: 2025-10-11 04:17:37.172 2 DEBUG oslo_concurrency.lockutils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:17:37 compute-0 ceph-mon[74273]: pgmap v1617: 305 pgs: 305 active+clean; 281 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 350 KiB/s rd, 11 MiB/s wr, 107 op/s
Oct 11 04:17:37 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:17:37 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:17:37 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:17:37 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:17:37 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:17:37 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:17:37 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1060736512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:17:37 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2718697562' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:17:37 compute-0 podman[295964]: 2025-10-11 04:17:37.553804982 +0000 UTC m=+0.058389744 container create 44550dad67c2b90087fd82127e78a8c63c2ae519dff2612f1fc9b5c0e52047b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_jones, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:17:37 compute-0 nova_compute[259850]: 2025-10-11 04:17:37.594 2 DEBUG nova.network.neutron [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Successfully updated port: 86c2cef2-4a07-459b-8237-e7fda4a39f81 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:17:37 compute-0 systemd[1]: Started libpod-conmon-44550dad67c2b90087fd82127e78a8c63c2ae519dff2612f1fc9b5c0e52047b1.scope.
Oct 11 04:17:37 compute-0 nova_compute[259850]: 2025-10-11 04:17:37.609 2 DEBUG oslo_concurrency.lockutils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "refresh_cache-a3d4ef44-fada-41fc-9a12-641bff0536a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:17:37 compute-0 nova_compute[259850]: 2025-10-11 04:17:37.609 2 DEBUG oslo_concurrency.lockutils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquired lock "refresh_cache-a3d4ef44-fada-41fc-9a12-641bff0536a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:17:37 compute-0 nova_compute[259850]: 2025-10-11 04:17:37.610 2 DEBUG nova.network.neutron [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:17:37 compute-0 podman[295964]: 2025-10-11 04:17:37.524741869 +0000 UTC m=+0.029326711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:17:37 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:17:37 compute-0 podman[295964]: 2025-10-11 04:17:37.664494986 +0000 UTC m=+0.169079798 container init 44550dad67c2b90087fd82127e78a8c63c2ae519dff2612f1fc9b5c0e52047b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_jones, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 04:17:37 compute-0 podman[295964]: 2025-10-11 04:17:37.672944665 +0000 UTC m=+0.177529427 container start 44550dad67c2b90087fd82127e78a8c63c2ae519dff2612f1fc9b5c0e52047b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_jones, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Oct 11 04:17:37 compute-0 podman[295964]: 2025-10-11 04:17:37.67769938 +0000 UTC m=+0.182284232 container attach 44550dad67c2b90087fd82127e78a8c63c2ae519dff2612f1fc9b5c0e52047b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:17:37 compute-0 heuristic_jones[295980]: 167 167
Oct 11 04:17:37 compute-0 systemd[1]: libpod-44550dad67c2b90087fd82127e78a8c63c2ae519dff2612f1fc9b5c0e52047b1.scope: Deactivated successfully.
Oct 11 04:17:37 compute-0 podman[295964]: 2025-10-11 04:17:37.682874246 +0000 UTC m=+0.187459018 container died 44550dad67c2b90087fd82127e78a8c63c2ae519dff2612f1fc9b5c0e52047b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_jones, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:17:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd070a61e6d3bf218e61d9b29c9fe5cded9637b18bac9b4bfed1b3aa7076d26d-merged.mount: Deactivated successfully.
Oct 11 04:17:37 compute-0 podman[295964]: 2025-10-11 04:17:37.727255973 +0000 UTC m=+0.231840735 container remove 44550dad67c2b90087fd82127e78a8c63c2ae519dff2612f1fc9b5c0e52047b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 04:17:37 compute-0 systemd[1]: libpod-conmon-44550dad67c2b90087fd82127e78a8c63c2ae519dff2612f1fc9b5c0e52047b1.scope: Deactivated successfully.
Oct 11 04:17:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 281 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 11 MiB/s wr, 108 op/s
Oct 11 04:17:37 compute-0 nova_compute[259850]: 2025-10-11 04:17:37.902 2 DEBUG nova.network.neutron [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:17:37 compute-0 podman[296004]: 2025-10-11 04:17:37.99276489 +0000 UTC m=+0.077306760 container create ce989d1a04bbcfc429156a43b9262a55ef4c5f1ad5261b97a5963c963f56c612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hugle, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:17:38 compute-0 systemd[1]: Started libpod-conmon-ce989d1a04bbcfc429156a43b9262a55ef4c5f1ad5261b97a5963c963f56c612.scope.
Oct 11 04:17:38 compute-0 podman[296004]: 2025-10-11 04:17:37.962083292 +0000 UTC m=+0.046625202 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:17:38 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:17:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bb946a381496d38b5de366fe59ba5776f7fc28e8c31c52dc8ed0a27a05459cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:17:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bb946a381496d38b5de366fe59ba5776f7fc28e8c31c52dc8ed0a27a05459cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:17:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bb946a381496d38b5de366fe59ba5776f7fc28e8c31c52dc8ed0a27a05459cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:17:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bb946a381496d38b5de366fe59ba5776f7fc28e8c31c52dc8ed0a27a05459cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:17:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bb946a381496d38b5de366fe59ba5776f7fc28e8c31c52dc8ed0a27a05459cb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:17:38 compute-0 podman[296004]: 2025-10-11 04:17:38.100915562 +0000 UTC m=+0.185457492 container init ce989d1a04bbcfc429156a43b9262a55ef4c5f1ad5261b97a5963c963f56c612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hugle, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:17:38 compute-0 podman[296004]: 2025-10-11 04:17:38.113802367 +0000 UTC m=+0.198344237 container start ce989d1a04bbcfc429156a43b9262a55ef4c5f1ad5261b97a5963c963f56c612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hugle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:17:38 compute-0 podman[296004]: 2025-10-11 04:17:38.11848395 +0000 UTC m=+0.203025810 container attach ce989d1a04bbcfc429156a43b9262a55ef4c5f1ad5261b97a5963c963f56c612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:17:38 compute-0 nova_compute[259850]: 2025-10-11 04:17:38.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:38 compute-0 nova_compute[259850]: 2025-10-11 04:17:38.655 2 DEBUG nova.network.neutron [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Updating instance_info_cache with network_info: [{"id": "86c2cef2-4a07-459b-8237-e7fda4a39f81", "address": "fa:16:3e:f3:89:d6", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86c2cef2-4a", "ovs_interfaceid": "86c2cef2-4a07-459b-8237-e7fda4a39f81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:17:38 compute-0 nova_compute[259850]: 2025-10-11 04:17:38.677 2 DEBUG oslo_concurrency.lockutils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Releasing lock "refresh_cache-a3d4ef44-fada-41fc-9a12-641bff0536a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:17:38 compute-0 nova_compute[259850]: 2025-10-11 04:17:38.677 2 DEBUG nova.compute.manager [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Instance network_info: |[{"id": "86c2cef2-4a07-459b-8237-e7fda4a39f81", "address": "fa:16:3e:f3:89:d6", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86c2cef2-4a", "ovs_interfaceid": "86c2cef2-4a07-459b-8237-e7fda4a39f81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:17:38 compute-0 nova_compute[259850]: 2025-10-11 04:17:38.682 2 DEBUG nova.virt.libvirt.driver [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Start _get_guest_xml network_info=[{"id": "86c2cef2-4a07-459b-8237-e7fda4a39f81", "address": "fa:16:3e:f3:89:d6", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86c2cef2-4a", "ovs_interfaceid": "86c2cef2-4a07-459b-8237-e7fda4a39f81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-b02ce934-9de7-422d-b3ba-5ade72993920', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'b02ce934-9de7-422d-b3ba-5ade72993920', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'a3d4ef44-fada-41fc-9a12-641bff0536a4', 'attached_at': '', 'detached_at': '', 'volume_id': 'b02ce934-9de7-422d-b3ba-5ade72993920', 'serial': 'b02ce934-9de7-422d-b3ba-5ade72993920'}, 'boot_index': 0, 'guest_format': None, 'attachment_id': 'aabb3d08-3f0a-45f7-b3d7-015e23bd80f0', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:17:38 compute-0 nova_compute[259850]: 2025-10-11 04:17:38.688 2 WARNING nova.virt.libvirt.driver [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:17:38 compute-0 nova_compute[259850]: 2025-10-11 04:17:38.702 2 DEBUG nova.virt.libvirt.host [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:17:38 compute-0 nova_compute[259850]: 2025-10-11 04:17:38.703 2 DEBUG nova.virt.libvirt.host [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:17:38 compute-0 nova_compute[259850]: 2025-10-11 04:17:38.707 2 DEBUG nova.virt.libvirt.host [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:17:38 compute-0 nova_compute[259850]: 2025-10-11 04:17:38.708 2 DEBUG nova.virt.libvirt.host [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:17:38 compute-0 nova_compute[259850]: 2025-10-11 04:17:38.709 2 DEBUG nova.virt.libvirt.driver [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:17:38 compute-0 nova_compute[259850]: 2025-10-11 04:17:38.709 2 DEBUG nova.virt.hardware [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:17:38 compute-0 nova_compute[259850]: 2025-10-11 04:17:38.710 2 DEBUG nova.virt.hardware [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:17:38 compute-0 nova_compute[259850]: 2025-10-11 04:17:38.710 2 DEBUG nova.virt.hardware [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:17:38 compute-0 nova_compute[259850]: 2025-10-11 04:17:38.711 2 DEBUG nova.virt.hardware [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:17:38 compute-0 nova_compute[259850]: 2025-10-11 04:17:38.711 2 DEBUG nova.virt.hardware [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:17:38 compute-0 nova_compute[259850]: 2025-10-11 04:17:38.711 2 DEBUG nova.virt.hardware [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:17:38 compute-0 nova_compute[259850]: 2025-10-11 04:17:38.712 2 DEBUG nova.virt.hardware [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:17:38 compute-0 nova_compute[259850]: 2025-10-11 04:17:38.712 2 DEBUG nova.virt.hardware [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:17:38 compute-0 nova_compute[259850]: 2025-10-11 04:17:38.713 2 DEBUG nova.virt.hardware [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:17:38 compute-0 nova_compute[259850]: 2025-10-11 04:17:38.713 2 DEBUG nova.virt.hardware [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:17:38 compute-0 nova_compute[259850]: 2025-10-11 04:17:38.714 2 DEBUG nova.virt.hardware [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:17:38 compute-0 nova_compute[259850]: 2025-10-11 04:17:38.760 2 DEBUG nova.storage.rbd_utils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] rbd image a3d4ef44-fada-41fc-9a12-641bff0536a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:17:38 compute-0 nova_compute[259850]: 2025-10-11 04:17:38.768 2 DEBUG oslo_concurrency.processutils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:17:38 compute-0 nova_compute[259850]: 2025-10-11 04:17:38.904 2 DEBUG nova.compute.manager [req-20506f81-2cf3-4329-bf76-11317a90a5eb req-cceeedf6-cbe7-4d88-bd7b-d00b77db1363 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Received event network-changed-86c2cef2-4a07-459b-8237-e7fda4a39f81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:17:38 compute-0 nova_compute[259850]: 2025-10-11 04:17:38.905 2 DEBUG nova.compute.manager [req-20506f81-2cf3-4329-bf76-11317a90a5eb req-cceeedf6-cbe7-4d88-bd7b-d00b77db1363 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Refreshing instance network info cache due to event network-changed-86c2cef2-4a07-459b-8237-e7fda4a39f81. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:17:38 compute-0 nova_compute[259850]: 2025-10-11 04:17:38.905 2 DEBUG oslo_concurrency.lockutils [req-20506f81-2cf3-4329-bf76-11317a90a5eb req-cceeedf6-cbe7-4d88-bd7b-d00b77db1363 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-a3d4ef44-fada-41fc-9a12-641bff0536a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:17:38 compute-0 nova_compute[259850]: 2025-10-11 04:17:38.906 2 DEBUG oslo_concurrency.lockutils [req-20506f81-2cf3-4329-bf76-11317a90a5eb req-cceeedf6-cbe7-4d88-bd7b-d00b77db1363 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-a3d4ef44-fada-41fc-9a12-641bff0536a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:17:38 compute-0 nova_compute[259850]: 2025-10-11 04:17:38.906 2 DEBUG nova.network.neutron [req-20506f81-2cf3-4329-bf76-11317a90a5eb req-cceeedf6-cbe7-4d88-bd7b-d00b77db1363 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Refreshing network info cache for port 86c2cef2-4a07-459b-8237-e7fda4a39f81 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:17:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:17:39 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3475605989' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:17:39 compute-0 ceph-mon[74273]: pgmap v1618: 305 pgs: 305 active+clean; 281 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 11 MiB/s wr, 108 op/s
Oct 11 04:17:39 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3475605989' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.245 2 DEBUG oslo_concurrency.processutils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:17:39 compute-0 nervous_hugle[296020]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:17:39 compute-0 nervous_hugle[296020]: --> relative data size: 1.0
Oct 11 04:17:39 compute-0 nervous_hugle[296020]: --> All data devices are unavailable
Oct 11 04:17:39 compute-0 systemd[1]: libpod-ce989d1a04bbcfc429156a43b9262a55ef4c5f1ad5261b97a5963c963f56c612.scope: Deactivated successfully.
Oct 11 04:17:39 compute-0 systemd[1]: libpod-ce989d1a04bbcfc429156a43b9262a55ef4c5f1ad5261b97a5963c963f56c612.scope: Consumed 1.141s CPU time.
Oct 11 04:17:39 compute-0 podman[296089]: 2025-10-11 04:17:39.371732581 +0000 UTC m=+0.032791539 container died ce989d1a04bbcfc429156a43b9262a55ef4c5f1ad5261b97a5963c963f56c612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:17:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bb946a381496d38b5de366fe59ba5776f7fc28e8c31c52dc8ed0a27a05459cb-merged.mount: Deactivated successfully.
Oct 11 04:17:39 compute-0 podman[296089]: 2025-10-11 04:17:39.421460599 +0000 UTC m=+0.082519537 container remove ce989d1a04bbcfc429156a43b9262a55ef4c5f1ad5261b97a5963c963f56c612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:17:39 compute-0 systemd[1]: libpod-conmon-ce989d1a04bbcfc429156a43b9262a55ef4c5f1ad5261b97a5963c963f56c612.scope: Deactivated successfully.
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.444 2 DEBUG os_brick.encryptors [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Using volume encryption metadata '{'encryption_key_id': '149f439d-9949-42e8-80e6-afea3378dd73', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-b02ce934-9de7-422d-b3ba-5ade72993920', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'b02ce934-9de7-422d-b3ba-5ade72993920', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'a3d4ef44-fada-41fc-9a12-641bff0536a4', 'attached_at': '', 'detached_at': '', 'volume_id': 'b02ce934-9de7-422d-b3ba-5ade72993920', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.447 2 DEBUG barbicanclient.client [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Oct 11 04:17:39 compute-0 sudo[295898]: pam_unix(sudo:session): session closed for user root
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.466 2 DEBUG barbicanclient.v1.secrets [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/149f439d-9949-42e8-80e6-afea3378dd73 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.467 2 INFO barbicanclient.base [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/149f439d-9949-42e8-80e6-afea3378dd73
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.502 2 DEBUG barbicanclient.client [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.503 2 INFO barbicanclient.base [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/149f439d-9949-42e8-80e6-afea3378dd73
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.531 2 DEBUG barbicanclient.client [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.531 2 INFO barbicanclient.base [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/149f439d-9949-42e8-80e6-afea3378dd73
Oct 11 04:17:39 compute-0 sudo[296104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:17:39 compute-0 sudo[296104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:17:39 compute-0 sudo[296104]: pam_unix(sudo:session): session closed for user root
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.554 2 DEBUG barbicanclient.client [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.555 2 INFO barbicanclient.base [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/149f439d-9949-42e8-80e6-afea3378dd73
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.585 2 DEBUG barbicanclient.client [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.586 2 INFO barbicanclient.base [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/149f439d-9949-42e8-80e6-afea3378dd73
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.617 2 DEBUG barbicanclient.client [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.618 2 INFO barbicanclient.base [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/149f439d-9949-42e8-80e6-afea3378dd73
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:39 compute-0 sudo[296129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:17:39 compute-0 sudo[296129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:17:39 compute-0 sudo[296129]: pam_unix(sudo:session): session closed for user root
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.650 2 DEBUG barbicanclient.client [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.651 2 INFO barbicanclient.base [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/149f439d-9949-42e8-80e6-afea3378dd73
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.688 2 DEBUG barbicanclient.client [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.689 2 INFO barbicanclient.base [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/149f439d-9949-42e8-80e6-afea3378dd73
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.715 2 DEBUG barbicanclient.client [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.716 2 INFO barbicanclient.base [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/149f439d-9949-42e8-80e6-afea3378dd73
Oct 11 04:17:39 compute-0 sudo[296154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:17:39 compute-0 sudo[296154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:17:39 compute-0 sudo[296154]: pam_unix(sudo:session): session closed for user root
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.745 2 DEBUG barbicanclient.client [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.746 2 INFO barbicanclient.base [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/149f439d-9949-42e8-80e6-afea3378dd73
Oct 11 04:17:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1619: 305 pgs: 305 active+clean; 281 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 11 MiB/s wr, 122 op/s
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.799 2 DEBUG barbicanclient.client [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.800 2 INFO barbicanclient.base [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/149f439d-9949-42e8-80e6-afea3378dd73
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.830 2 DEBUG barbicanclient.client [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.831 2 INFO barbicanclient.base [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/149f439d-9949-42e8-80e6-afea3378dd73
Oct 11 04:17:39 compute-0 sudo[296179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 04:17:39 compute-0 sudo[296179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.863 2 DEBUG barbicanclient.client [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.864 2 INFO barbicanclient.base [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/149f439d-9949-42e8-80e6-afea3378dd73
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.904 2 DEBUG barbicanclient.client [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.904 2 INFO barbicanclient.base [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/149f439d-9949-42e8-80e6-afea3378dd73
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.928 2 DEBUG barbicanclient.client [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.929 2 INFO barbicanclient.base [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/149f439d-9949-42e8-80e6-afea3378dd73
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.947 2 DEBUG barbicanclient.client [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.948 2 DEBUG nova.virt.libvirt.host [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct 11 04:17:39 compute-0 nova_compute[259850]:   <usage type="volume">
Oct 11 04:17:39 compute-0 nova_compute[259850]:     <volume>b02ce934-9de7-422d-b3ba-5ade72993920</volume>
Oct 11 04:17:39 compute-0 nova_compute[259850]:   </usage>
Oct 11 04:17:39 compute-0 nova_compute[259850]: </secret>
Oct 11 04:17:39 compute-0 nova_compute[259850]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.967 2 DEBUG oslo_concurrency.lockutils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "a5deabc3-2396-4c23-81c2-959d49bb6da1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.969 2 DEBUG oslo_concurrency.lockutils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "a5deabc3-2396-4c23-81c2-959d49bb6da1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:17:39 compute-0 nova_compute[259850]: 2025-10-11 04:17:39.994 2 DEBUG nova.compute.manager [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.000 2 DEBUG nova.virt.libvirt.vif [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:17:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-87844196',display_name='tempest-TransferEncryptedVolumeTest-server-87844196',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-87844196',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD3jnhyRBlsX5VUAbGtWGwnjXDJ0mJnyIiUqsAyoyyDd6H6M/5DSgSJwDh4tkaNqmtKzFuE8XyeYbmLUFFbEZUE8j9mB2B0zj5nn/QlG6TOs2XcStAmJ+ejUjSzP7rh2Lg==',key_name='tempest-TransferEncryptedVolumeTest-513808347',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfcc78a613a4442d88231798d10634c9',ramdisk_id='',reservation_id='r-4imm6rsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1941581237',owner_user_name='tempest-TransferEncryptedVolumeTest-1941581237-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:17:35Z,user_data=None,user_id='77d11e860ca1460cab1c20bca4d4c0ea',uuid=a3d4ef44-fada-41fc-9a12-641bff0536a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "86c2cef2-4a07-459b-8237-e7fda4a39f81", "address": "fa:16:3e:f3:89:d6", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86c2cef2-4a", "ovs_interfaceid": "86c2cef2-4a07-459b-8237-e7fda4a39f81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.001 2 DEBUG nova.network.os_vif_util [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Converting VIF {"id": "86c2cef2-4a07-459b-8237-e7fda4a39f81", "address": "fa:16:3e:f3:89:d6", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86c2cef2-4a", "ovs_interfaceid": "86c2cef2-4a07-459b-8237-e7fda4a39f81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.002 2 DEBUG nova.network.os_vif_util [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:89:d6,bridge_name='br-int',has_traffic_filtering=True,id=86c2cef2-4a07-459b-8237-e7fda4a39f81,network=Network(1c86b315-3a4b-4db0-8b3c-39658c19ef9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86c2cef2-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.005 2 DEBUG nova.objects.instance [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid a3d4ef44-fada-41fc-9a12-641bff0536a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:17:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.038 2 DEBUG nova.virt.libvirt.driver [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:17:40 compute-0 nova_compute[259850]:   <uuid>a3d4ef44-fada-41fc-9a12-641bff0536a4</uuid>
Oct 11 04:17:40 compute-0 nova_compute[259850]:   <name>instance-00000017</name>
Oct 11 04:17:40 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:17:40 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:17:40 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <nova:name>tempest-TransferEncryptedVolumeTest-server-87844196</nova:name>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:17:38</nova:creationTime>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:17:40 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:17:40 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:17:40 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:17:40 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:17:40 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:17:40 compute-0 nova_compute[259850]:         <nova:user uuid="77d11e860ca1460cab1c20bca4d4c0ea">tempest-TransferEncryptedVolumeTest-1941581237-project-member</nova:user>
Oct 11 04:17:40 compute-0 nova_compute[259850]:         <nova:project uuid="bfcc78a613a4442d88231798d10634c9">tempest-TransferEncryptedVolumeTest-1941581237</nova:project>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <nova:ports>
Oct 11 04:17:40 compute-0 nova_compute[259850]:         <nova:port uuid="86c2cef2-4a07-459b-8237-e7fda4a39f81">
Oct 11 04:17:40 compute-0 nova_compute[259850]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:         </nova:port>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       </nova:ports>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:17:40 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:17:40 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <system>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <entry name="serial">a3d4ef44-fada-41fc-9a12-641bff0536a4</entry>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <entry name="uuid">a3d4ef44-fada-41fc-9a12-641bff0536a4</entry>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     </system>
Oct 11 04:17:40 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:17:40 compute-0 nova_compute[259850]:   <os>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:   </os>
Oct 11 04:17:40 compute-0 nova_compute[259850]:   <features>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:   </features>
Oct 11 04:17:40 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:17:40 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:17:40 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/a3d4ef44-fada-41fc-9a12-641bff0536a4_disk.config">
Oct 11 04:17:40 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       </source>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:17:40 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <source protocol="rbd" name="volumes/volume-b02ce934-9de7-422d-b3ba-5ade72993920">
Oct 11 04:17:40 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       </source>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:17:40 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <serial>b02ce934-9de7-422d-b3ba-5ade72993920</serial>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <encryption format="luks">
Oct 11 04:17:40 compute-0 nova_compute[259850]:         <secret type="passphrase" uuid="b14127a8-db7e-421e-b4f8-b82f228b9bec"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       </encryption>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <interface type="ethernet">
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <mac address="fa:16:3e:f3:89:d6"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <mtu size="1442"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <target dev="tap86c2cef2-4a"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     </interface>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/a3d4ef44-fada-41fc-9a12-641bff0536a4/console.log" append="off"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <video>
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     </video>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:17:40 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:17:40 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:17:40 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:17:40 compute-0 nova_compute[259850]: </domain>
Oct 11 04:17:40 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.041 2 DEBUG nova.compute.manager [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Preparing to wait for external event network-vif-plugged-86c2cef2-4a07-459b-8237-e7fda4a39f81 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.042 2 DEBUG oslo_concurrency.lockutils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "a3d4ef44-fada-41fc-9a12-641bff0536a4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.042 2 DEBUG oslo_concurrency.lockutils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "a3d4ef44-fada-41fc-9a12-641bff0536a4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.043 2 DEBUG oslo_concurrency.lockutils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "a3d4ef44-fada-41fc-9a12-641bff0536a4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.044 2 DEBUG nova.virt.libvirt.vif [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:17:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-87844196',display_name='tempest-TransferEncryptedVolumeTest-server-87844196',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-87844196',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD3jnhyRBlsX5VUAbGtWGwnjXDJ0mJnyIiUqsAyoyyDd6H6M/5DSgSJwDh4tkaNqmtKzFuE8XyeYbmLUFFbEZUE8j9mB2B0zj5nn/QlG6TOs2XcStAmJ+ejUjSzP7rh2Lg==',key_name='tempest-TransferEncryptedVolumeTest-513808347',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfcc78a613a4442d88231798d10634c9',ramdisk_id='',reservation_id='r-4imm6rsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1941581237',owner_user_name='tempest-TransferEncryptedVolumeTest-1941581237-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:17:35Z,user_data=None,user_id='77d11e860ca1460cab1c20bca4d4c0ea',uuid=a3d4ef44-fada-41fc-9a12-641bff0536a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "86c2cef2-4a07-459b-8237-e7fda4a39f81", "address": "fa:16:3e:f3:89:d6", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86c2cef2-4a", "ovs_interfaceid": "86c2cef2-4a07-459b-8237-e7fda4a39f81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.045 2 DEBUG nova.network.os_vif_util [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Converting VIF {"id": "86c2cef2-4a07-459b-8237-e7fda4a39f81", "address": "fa:16:3e:f3:89:d6", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86c2cef2-4a", "ovs_interfaceid": "86c2cef2-4a07-459b-8237-e7fda4a39f81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.046 2 DEBUG nova.network.os_vif_util [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:89:d6,bridge_name='br-int',has_traffic_filtering=True,id=86c2cef2-4a07-459b-8237-e7fda4a39f81,network=Network(1c86b315-3a4b-4db0-8b3c-39658c19ef9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86c2cef2-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.047 2 DEBUG os_vif [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:89:d6,bridge_name='br-int',has_traffic_filtering=True,id=86c2cef2-4a07-459b-8237-e7fda4a39f81,network=Network(1c86b315-3a4b-4db0-8b3c-39658c19ef9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86c2cef2-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.049 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.050 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.055 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap86c2cef2-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.056 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap86c2cef2-4a, col_values=(('external_ids', {'iface-id': '86c2cef2-4a07-459b-8237-e7fda4a39f81', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:89:d6', 'vm-uuid': 'a3d4ef44-fada-41fc-9a12-641bff0536a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:40 compute-0 NetworkManager[44920]: <info>  [1760156260.0594] manager: (tap86c2cef2-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/118)
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.068 2 INFO os_vif [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:89:d6,bridge_name='br-int',has_traffic_filtering=True,id=86c2cef2-4a07-459b-8237-e7fda4a39f81,network=Network(1c86b315-3a4b-4db0-8b3c-39658c19ef9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86c2cef2-4a')
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.101 2 DEBUG oslo_concurrency.lockutils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.101 2 DEBUG oslo_concurrency.lockutils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.107 2 DEBUG nova.virt.hardware [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.107 2 INFO nova.compute.claims [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.171 2 DEBUG nova.virt.libvirt.driver [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.171 2 DEBUG nova.virt.libvirt.driver [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.172 2 DEBUG nova.virt.libvirt.driver [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] No VIF found with MAC fa:16:3e:f3:89:d6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.173 2 INFO nova.virt.libvirt.driver [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Using config drive
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.212 2 DEBUG nova.storage.rbd_utils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] rbd image a3d4ef44-fada-41fc-9a12-641bff0536a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:17:40 compute-0 podman[296246]: 2025-10-11 04:17:40.219686019 +0000 UTC m=+0.040822717 container create f4a31c0724a3b18b45e27e40693209caae8bbd145589d39633df3dfd557b3c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.220 2 DEBUG nova.network.neutron [req-20506f81-2cf3-4329-bf76-11317a90a5eb req-cceeedf6-cbe7-4d88-bd7b-d00b77db1363 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Updated VIF entry in instance network info cache for port 86c2cef2-4a07-459b-8237-e7fda4a39f81. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.222 2 DEBUG nova.network.neutron [req-20506f81-2cf3-4329-bf76-11317a90a5eb req-cceeedf6-cbe7-4d88-bd7b-d00b77db1363 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Updating instance_info_cache with network_info: [{"id": "86c2cef2-4a07-459b-8237-e7fda4a39f81", "address": "fa:16:3e:f3:89:d6", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86c2cef2-4a", "ovs_interfaceid": "86c2cef2-4a07-459b-8237-e7fda4a39f81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:17:40 compute-0 systemd[1]: Started libpod-conmon-f4a31c0724a3b18b45e27e40693209caae8bbd145589d39633df3dfd557b3c15.scope.
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.272 2 DEBUG oslo_concurrency.lockutils [req-20506f81-2cf3-4329-bf76-11317a90a5eb req-cceeedf6-cbe7-4d88-bd7b-d00b77db1363 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-a3d4ef44-fada-41fc-9a12-641bff0536a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:17:40 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:17:40 compute-0 podman[296246]: 2025-10-11 04:17:40.200868406 +0000 UTC m=+0.022005154 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:17:40 compute-0 podman[296246]: 2025-10-11 04:17:40.3136926 +0000 UTC m=+0.134829328 container init f4a31c0724a3b18b45e27e40693209caae8bbd145589d39633df3dfd557b3c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:17:40 compute-0 podman[296246]: 2025-10-11 04:17:40.321855921 +0000 UTC m=+0.142992629 container start f4a31c0724a3b18b45e27e40693209caae8bbd145589d39633df3dfd557b3c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mayer, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 04:17:40 compute-0 podman[296246]: 2025-10-11 04:17:40.325453883 +0000 UTC m=+0.146590621 container attach f4a31c0724a3b18b45e27e40693209caae8bbd145589d39633df3dfd557b3c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mayer, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:17:40 compute-0 brave_mayer[296280]: 167 167
Oct 11 04:17:40 compute-0 systemd[1]: libpod-f4a31c0724a3b18b45e27e40693209caae8bbd145589d39633df3dfd557b3c15.scope: Deactivated successfully.
Oct 11 04:17:40 compute-0 podman[296246]: 2025-10-11 04:17:40.330514877 +0000 UTC m=+0.151651615 container died f4a31c0724a3b18b45e27e40693209caae8bbd145589d39633df3dfd557b3c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 04:17:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-554f01134d93ec512cad140334c7a71499cba81834d1cebb4ac178e6bc0d1e08-merged.mount: Deactivated successfully.
Oct 11 04:17:40 compute-0 podman[296246]: 2025-10-11 04:17:40.382770756 +0000 UTC m=+0.203907464 container remove f4a31c0724a3b18b45e27e40693209caae8bbd145589d39633df3dfd557b3c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mayer, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.382 2 DEBUG oslo_concurrency.processutils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:17:40 compute-0 systemd[1]: libpod-conmon-f4a31c0724a3b18b45e27e40693209caae8bbd145589d39633df3dfd557b3c15.scope: Deactivated successfully.
Oct 11 04:17:40 compute-0 podman[296305]: 2025-10-11 04:17:40.588633135 +0000 UTC m=+0.078211016 container create cf0165dae4ca02d26a6c26295c7d2e075a41ab299c8dc47ee20a98e45f6937c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ishizaka, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.615 2 INFO nova.virt.libvirt.driver [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Creating config drive at /var/lib/nova/instances/a3d4ef44-fada-41fc-9a12-641bff0536a4/disk.config
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.625 2 DEBUG oslo_concurrency.processutils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a3d4ef44-fada-41fc-9a12-641bff0536a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm72jy1vv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:17:40 compute-0 systemd[1]: Started libpod-conmon-cf0165dae4ca02d26a6c26295c7d2e075a41ab299c8dc47ee20a98e45f6937c0.scope.
Oct 11 04:17:40 compute-0 podman[296305]: 2025-10-11 04:17:40.55598572 +0000 UTC m=+0.045563661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:17:40 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65dab28a643050e0c1788a6d0722343ab2f4ab1a6fa69fbbc1cfadd5c19648a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65dab28a643050e0c1788a6d0722343ab2f4ab1a6fa69fbbc1cfadd5c19648a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65dab28a643050e0c1788a6d0722343ab2f4ab1a6fa69fbbc1cfadd5c19648a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65dab28a643050e0c1788a6d0722343ab2f4ab1a6fa69fbbc1cfadd5c19648a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:17:40 compute-0 podman[296305]: 2025-10-11 04:17:40.713828689 +0000 UTC m=+0.203406590 container init cf0165dae4ca02d26a6c26295c7d2e075a41ab299c8dc47ee20a98e45f6937c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ishizaka, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:17:40 compute-0 podman[296305]: 2025-10-11 04:17:40.723893004 +0000 UTC m=+0.213470865 container start cf0165dae4ca02d26a6c26295c7d2e075a41ab299c8dc47ee20a98e45f6937c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:17:40 compute-0 podman[296305]: 2025-10-11 04:17:40.727424794 +0000 UTC m=+0.217002745 container attach cf0165dae4ca02d26a6c26295c7d2e075a41ab299c8dc47ee20a98e45f6937c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.780 2 DEBUG oslo_concurrency.processutils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a3d4ef44-fada-41fc-9a12-641bff0536a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm72jy1vv" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.807 2 DEBUG nova.storage.rbd_utils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] rbd image a3d4ef44-fada-41fc-9a12-641bff0536a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.811 2 DEBUG oslo_concurrency.processutils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a3d4ef44-fada-41fc-9a12-641bff0536a4/disk.config a3d4ef44-fada-41fc-9a12-641bff0536a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:17:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:17:40 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2981913008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.863 2 DEBUG oslo_concurrency.processutils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.874 2 DEBUG nova.compute.provider_tree [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.893 2 DEBUG nova.scheduler.client.report [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.917 2 DEBUG oslo_concurrency.lockutils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.918 2 DEBUG nova.compute.manager [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.966 2 DEBUG nova.compute.manager [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.967 2 DEBUG nova.network.neutron [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.985 2 INFO nova.virt.libvirt.driver [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.999 2 DEBUG oslo_concurrency.processutils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a3d4ef44-fada-41fc-9a12-641bff0536a4/disk.config a3d4ef44-fada-41fc-9a12-641bff0536a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:17:40 compute-0 nova_compute[259850]: 2025-10-11 04:17:40.999 2 INFO nova.virt.libvirt.driver [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Deleting local config drive /var/lib/nova/instances/a3d4ef44-fada-41fc-9a12-641bff0536a4/disk.config because it was imported into RBD.
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.004 2 DEBUG nova.compute.manager [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:17:41 compute-0 kernel: tap86c2cef2-4a: entered promiscuous mode
Oct 11 04:17:41 compute-0 NetworkManager[44920]: <info>  [1760156261.0583] manager: (tap86c2cef2-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/119)
Oct 11 04:17:41 compute-0 ovn_controller[152025]: 2025-10-11T04:17:41Z|00230|binding|INFO|Claiming lport 86c2cef2-4a07-459b-8237-e7fda4a39f81 for this chassis.
Oct 11 04:17:41 compute-0 ovn_controller[152025]: 2025-10-11T04:17:41Z|00231|binding|INFO|86c2cef2-4a07-459b-8237-e7fda4a39f81: Claiming fa:16:3e:f3:89:d6 10.100.0.11
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:41.067 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:89:d6 10.100.0.11'], port_security=['fa:16:3e:f3:89:d6 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'a3d4ef44-fada-41fc-9a12-641bff0536a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfcc78a613a4442d88231798d10634c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8fd56502-e733-457c-89c4-96f24dc7f6d9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=756f4bd0-4cbc-4611-9397-52eb34ec09ab, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=86c2cef2-4a07-459b-8237-e7fda4a39f81) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:41.068 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 86c2cef2-4a07-459b-8237-e7fda4a39f81 in datapath 1c86b315-3a4b-4db0-8b3c-39658c19ef9c bound to our chassis
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:41.069 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1c86b315-3a4b-4db0-8b3c-39658c19ef9c
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.076 2 INFO nova.virt.block_device [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Booting with volume b1e9d80d-01ad-4211-b429-299f6fd98f5c at /dev/vda
Oct 11 04:17:41 compute-0 ovn_controller[152025]: 2025-10-11T04:17:41Z|00232|binding|INFO|Setting lport 86c2cef2-4a07-459b-8237-e7fda4a39f81 ovn-installed in OVS
Oct 11 04:17:41 compute-0 ovn_controller[152025]: 2025-10-11T04:17:41Z|00233|binding|INFO|Setting lport 86c2cef2-4a07-459b-8237-e7fda4a39f81 up in Southbound
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:41.084 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[080b13f0-8e96-4e2f-949d-0bf123a5cd00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:41.087 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1c86b315-31 in ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:41.091 267637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1c86b315-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:41.091 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[dc3b9f72-cfb9-4c7b-a1b7-9d633b960992]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:41.093 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[abcbb6f1-08fd-43d9-a1ea-b34ea4096a36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:41 compute-0 systemd-machined[214869]: New machine qemu-23-instance-00000017.
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:41.113 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[258bb282-87d4-4032-aa04-e695b025f253]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:41 compute-0 systemd[1]: Started Virtual Machine qemu-23-instance-00000017.
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:41.149 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[dea75149-1a6e-4afb-a972-55a8a073aa1d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:41 compute-0 systemd-udevd[296402]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:17:41 compute-0 NetworkManager[44920]: <info>  [1760156261.1704] device (tap86c2cef2-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:17:41 compute-0 NetworkManager[44920]: <info>  [1760156261.1718] device (tap86c2cef2-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.182 2 DEBUG nova.policy [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2a330a845d62440c871f80eda2546881', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '09ba33ef4bd447699d74946c58839b2d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:41.186 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[3eba8cbc-d67b-42e5-acd1-27c1c33033d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:41 compute-0 NetworkManager[44920]: <info>  [1760156261.1954] manager: (tap1c86b315-30): new Veth device (/org/freedesktop/NetworkManager/Devices/120)
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:41.193 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[d56e722e-97df-44bc-89fc-20a3a4919d7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.204 2 DEBUG os_brick.utils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.204 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.219 675 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.219 675 DEBUG oslo.privsep.daemon [-] privsep: reply[5caf23c6-676f-4f60-bf5d-8735c64e8a8b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.221 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.229 675 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.229 675 DEBUG oslo.privsep.daemon [-] privsep: reply[5bae645c-f738-4dfd-a96e-af1961ca6757]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e727c2bd432c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.232 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:41.243 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[1eb15d85-4d2f-4c47-b861-01027773c294]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.246 675 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:17:41 compute-0 ceph-mon[74273]: pgmap v1619: 305 pgs: 305 active+clean; 281 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 11 MiB/s wr, 122 op/s
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.246 675 DEBUG oslo.privsep.daemon [-] privsep: reply[95298ed1-7e37-4260-80cb-f4a5f1f91138]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:41 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2981913008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:41.247 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[35a39d58-adee-4914-a7c3-16d7e8074884]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.250 675 DEBUG oslo.privsep.daemon [-] privsep: reply[d966c56e-2cae-4b0b-bda5-c01b3f0a2385]: (4, 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.250 2 DEBUG oslo_concurrency.processutils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:17:41 compute-0 NetworkManager[44920]: <info>  [1760156261.2735] device (tap1c86b315-30): carrier: link connected
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:41.280 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[dc59e12b-2390-40c2-8385-20669d4e2bf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.281 2 DEBUG oslo_concurrency.processutils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.284 2 DEBUG os_brick.initiator.connectors.lightos [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.285 2 DEBUG os_brick.initiator.connectors.lightos [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.285 2 DEBUG os_brick.initiator.connectors.lightos [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.285 2 DEBUG os_brick.utils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] <== get_connector_properties: return (81ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e727c2bd432c', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.286 2 DEBUG nova.virt.block_device [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Updating existing volume attachment record: 5a59ebad-9d53-4c3a-ac4c-ea62cfc5fc2b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:41.303 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[a8e18219-c065-42ee-83ac-9a2dcda53a4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1c86b315-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:1b:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464630, 'reachable_time': 31581, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296439, 'error': None, 'target': 'ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:41.321 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[6c95a18b-96ae-4cbf-802e-23001b669058]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb2:1bd4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464630, 'tstamp': 464630}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296440, 'error': None, 'target': 'ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:41.341 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[50af8c71-1bbe-4f8c-8f58-01e7705662e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1c86b315-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:1b:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464630, 'reachable_time': 31581, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296441, 'error': None, 'target': 'ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:41.384 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[0b7766b5-7d39-4b62-a693-2534a174fee5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:41.474 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[8e897e29-b7c2-4a81-ab34-4633616c8cea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:41.477 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c86b315-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:41.478 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:41.479 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c86b315-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:17:41 compute-0 kernel: tap1c86b315-30: entered promiscuous mode
Oct 11 04:17:41 compute-0 NetworkManager[44920]: <info>  [1760156261.4828] manager: (tap1c86b315-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/121)
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:41.489 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1c86b315-30, col_values=(('external_ids', {'iface-id': '075f096d-d25a-4cca-804c-0df80c22a72a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:41 compute-0 ovn_controller[152025]: 2025-10-11T04:17:41Z|00234|binding|INFO|Releasing lport 075f096d-d25a-4cca-804c-0df80c22a72a from this chassis (sb_readonly=0)
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:41.494 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1c86b315-3a4b-4db0-8b3c-39658c19ef9c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1c86b315-3a4b-4db0-8b3c-39658c19ef9c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:41.495 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d52f62-6d6b-4fcb-bdcc-7564eebda595]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:41.496 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: global
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]:     log         /dev/log local0 debug
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]:     log-tag     haproxy-metadata-proxy-1c86b315-3a4b-4db0-8b3c-39658c19ef9c
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]:     user        root
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]:     group       root
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]:     maxconn     1024
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]:     pidfile     /var/lib/neutron/external/pids/1c86b315-3a4b-4db0-8b3c-39658c19ef9c.pid.haproxy
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]:     daemon
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: defaults
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]:     log global
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]:     mode http
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]:     option httplog
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]:     option dontlognull
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]:     option http-server-close
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]:     option forwardfor
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]:     retries                 3
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]:     timeout http-request    30s
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]:     timeout connect         30s
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]:     timeout client          32s
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]:     timeout server          32s
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]:     timeout http-keep-alive 30s
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: listen listener
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]:     bind 169.254.169.254:80
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]:     http-request add-header X-OVN-Network-ID 1c86b315-3a4b-4db0-8b3c-39658c19ef9c
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:41.500 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'env', 'PROCESS_TAG=haproxy-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1c86b315-3a4b-4db0-8b3c-39658c19ef9c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]: {
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:     "0": [
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:         {
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "devices": [
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "/dev/loop3"
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             ],
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "lv_name": "ceph_lv0",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "lv_size": "21470642176",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "name": "ceph_lv0",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "tags": {
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.cluster_name": "ceph",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.crush_device_class": "",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.encrypted": "0",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.osd_id": "0",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.type": "block",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.vdo": "0"
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             },
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "type": "block",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "vg_name": "ceph_vg0"
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:         }
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:     ],
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:     "1": [
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:         {
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "devices": [
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "/dev/loop4"
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             ],
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "lv_name": "ceph_lv1",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "lv_size": "21470642176",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "name": "ceph_lv1",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "tags": {
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.cluster_name": "ceph",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.crush_device_class": "",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.encrypted": "0",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.osd_id": "1",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.type": "block",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.vdo": "0"
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             },
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "type": "block",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "vg_name": "ceph_vg1"
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:         }
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:     ],
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:     "2": [
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:         {
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "devices": [
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "/dev/loop5"
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             ],
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "lv_name": "ceph_lv2",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "lv_size": "21470642176",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "name": "ceph_lv2",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "tags": {
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.cluster_name": "ceph",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.crush_device_class": "",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.encrypted": "0",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.osd_id": "2",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.type": "block",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:                 "ceph.vdo": "0"
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             },
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "type": "block",
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:             "vg_name": "ceph_vg2"
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:         }
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]:     ]
Oct 11 04:17:41 compute-0 frosty_ishizaka[296341]: }
Oct 11 04:17:41 compute-0 systemd[1]: libpod-cf0165dae4ca02d26a6c26295c7d2e075a41ab299c8dc47ee20a98e45f6937c0.scope: Deactivated successfully.
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.593 2 DEBUG nova.compute.manager [req-43e85443-7209-4e78-a435-4584d924a442 req-0bdaad7d-a656-4d4b-8f94-b33942abbd0f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Received event network-vif-plugged-86c2cef2-4a07-459b-8237-e7fda4a39f81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.594 2 DEBUG oslo_concurrency.lockutils [req-43e85443-7209-4e78-a435-4584d924a442 req-0bdaad7d-a656-4d4b-8f94-b33942abbd0f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "a3d4ef44-fada-41fc-9a12-641bff0536a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.594 2 DEBUG oslo_concurrency.lockutils [req-43e85443-7209-4e78-a435-4584d924a442 req-0bdaad7d-a656-4d4b-8f94-b33942abbd0f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "a3d4ef44-fada-41fc-9a12-641bff0536a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.594 2 DEBUG oslo_concurrency.lockutils [req-43e85443-7209-4e78-a435-4584d924a442 req-0bdaad7d-a656-4d4b-8f94-b33942abbd0f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "a3d4ef44-fada-41fc-9a12-641bff0536a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.594 2 DEBUG nova.compute.manager [req-43e85443-7209-4e78-a435-4584d924a442 req-0bdaad7d-a656-4d4b-8f94-b33942abbd0f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Processing event network-vif-plugged-86c2cef2-4a07-459b-8237-e7fda4a39f81 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:17:41 compute-0 podman[296455]: 2025-10-11 04:17:41.629091933 +0000 UTC m=+0.038379898 container died cf0165dae4ca02d26a6c26295c7d2e075a41ab299c8dc47ee20a98e45f6937c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:17:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-65dab28a643050e0c1788a6d0722343ab2f4ab1a6fa69fbbc1cfadd5c19648a3-merged.mount: Deactivated successfully.
Oct 11 04:17:41 compute-0 podman[296455]: 2025-10-11 04:17:41.700830974 +0000 UTC m=+0.110118879 container remove cf0165dae4ca02d26a6c26295c7d2e075a41ab299c8dc47ee20a98e45f6937c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ishizaka, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 04:17:41 compute-0 systemd[1]: libpod-conmon-cf0165dae4ca02d26a6c26295c7d2e075a41ab299c8dc47ee20a98e45f6937c0.scope: Deactivated successfully.
Oct 11 04:17:41 compute-0 sudo[296179]: pam_unix(sudo:session): session closed for user root
Oct 11 04:17:41 compute-0 nova_compute[259850]: 2025-10-11 04:17:41.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:41.768 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:61:6f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '92:f1:b6:e4:f1:16'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:17:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 281 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 9.4 MiB/s wr, 60 op/s
Oct 11 04:17:41 compute-0 sudo[296477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:17:41 compute-0 sudo[296477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:17:41 compute-0 sudo[296477]: pam_unix(sudo:session): session closed for user root
Oct 11 04:17:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:17:41 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/34861257' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:17:41 compute-0 sudo[296545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:17:41 compute-0 sudo[296545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:17:41 compute-0 sudo[296545]: pam_unix(sudo:session): session closed for user root
Oct 11 04:17:41 compute-0 podman[296566]: 2025-10-11 04:17:41.942551976 +0000 UTC m=+0.067512952 container create c927a2ab9c402ba53f1c05f7c661c42ba3560042229956c8089c65a948666e72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:17:41 compute-0 systemd[1]: Started libpod-conmon-c927a2ab9c402ba53f1c05f7c661c42ba3560042229956c8089c65a948666e72.scope.
Oct 11 04:17:41 compute-0 sudo[296590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:17:41 compute-0 sudo[296590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:17:41 compute-0 podman[296566]: 2025-10-11 04:17:41.909521951 +0000 UTC m=+0.034483017 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 04:17:41 compute-0 sudo[296590]: pam_unix(sudo:session): session closed for user root
Oct 11 04:17:42 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:17:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21fc82e825f2ec4313507a0e2a5b0b8b9f1f368111c16401dc76f5d46ce15fc6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:17:42 compute-0 podman[296566]: 2025-10-11 04:17:42.028099768 +0000 UTC m=+0.153060774 container init c927a2ab9c402ba53f1c05f7c661c42ba3560042229956c8089c65a948666e72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:17:42 compute-0 podman[296566]: 2025-10-11 04:17:42.039553913 +0000 UTC m=+0.164514889 container start c927a2ab9c402ba53f1c05f7c661c42ba3560042229956c8089c65a948666e72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 11 04:17:42 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[296617]: [NOTICE]   (296642) : New worker (296647) forked
Oct 11 04:17:42 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[296617]: [NOTICE]   (296642) : Loading success.
Oct 11 04:17:42 compute-0 sudo[296621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 04:17:42 compute-0 sudo[296621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:17:42 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:42.095 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:17:42 compute-0 nova_compute[259850]: 2025-10-11 04:17:42.163 2 DEBUG nova.network.neutron [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Successfully created port: 560c29a9-2a29-42bd-a75a-485874b2cbc8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:17:42 compute-0 nova_compute[259850]: 2025-10-11 04:17:42.238 2 DEBUG nova.compute.manager [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:17:42 compute-0 nova_compute[259850]: 2025-10-11 04:17:42.241 2 DEBUG nova.virt.libvirt.driver [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:17:42 compute-0 nova_compute[259850]: 2025-10-11 04:17:42.242 2 INFO nova.virt.libvirt.driver [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Creating image(s)
Oct 11 04:17:42 compute-0 nova_compute[259850]: 2025-10-11 04:17:42.243 2 DEBUG nova.virt.libvirt.driver [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 11 04:17:42 compute-0 nova_compute[259850]: 2025-10-11 04:17:42.243 2 DEBUG nova.virt.libvirt.driver [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Ensure instance console log exists: /var/lib/nova/instances/a5deabc3-2396-4c23-81c2-959d49bb6da1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:17:42 compute-0 nova_compute[259850]: 2025-10-11 04:17:42.244 2 DEBUG oslo_concurrency.lockutils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:17:42 compute-0 nova_compute[259850]: 2025-10-11 04:17:42.245 2 DEBUG oslo_concurrency.lockutils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:17:42 compute-0 nova_compute[259850]: 2025-10-11 04:17:42.245 2 DEBUG oslo_concurrency.lockutils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:17:42 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/34861257' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:17:42 compute-0 podman[296699]: 2025-10-11 04:17:42.467702475 +0000 UTC m=+0.047174987 container create 7bcd9e231d3dfc825513b74d15acc6caaf045183b6798e3c4fd93820c5e1120a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_blackburn, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:17:42 compute-0 systemd[1]: Started libpod-conmon-7bcd9e231d3dfc825513b74d15acc6caaf045183b6798e3c4fd93820c5e1120a.scope.
Oct 11 04:17:42 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:17:42 compute-0 podman[296699]: 2025-10-11 04:17:42.447647097 +0000 UTC m=+0.027119629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:17:42 compute-0 podman[296699]: 2025-10-11 04:17:42.555992364 +0000 UTC m=+0.135464906 container init 7bcd9e231d3dfc825513b74d15acc6caaf045183b6798e3c4fd93820c5e1120a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:17:42 compute-0 podman[296699]: 2025-10-11 04:17:42.567051527 +0000 UTC m=+0.146524029 container start 7bcd9e231d3dfc825513b74d15acc6caaf045183b6798e3c4fd93820c5e1120a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_blackburn, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 04:17:42 compute-0 podman[296699]: 2025-10-11 04:17:42.570387272 +0000 UTC m=+0.149859804 container attach 7bcd9e231d3dfc825513b74d15acc6caaf045183b6798e3c4fd93820c5e1120a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_blackburn, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:17:42 compute-0 compassionate_blackburn[296715]: 167 167
Oct 11 04:17:42 compute-0 systemd[1]: libpod-7bcd9e231d3dfc825513b74d15acc6caaf045183b6798e3c4fd93820c5e1120a.scope: Deactivated successfully.
Oct 11 04:17:42 compute-0 conmon[296715]: conmon 7bcd9e231d3dfc825513 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7bcd9e231d3dfc825513b74d15acc6caaf045183b6798e3c4fd93820c5e1120a.scope/container/memory.events
Oct 11 04:17:42 compute-0 podman[296699]: 2025-10-11 04:17:42.576529466 +0000 UTC m=+0.156002008 container died 7bcd9e231d3dfc825513b74d15acc6caaf045183b6798e3c4fd93820c5e1120a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 04:17:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-4650154bc467c10c08bd46c7aaebc6d5bbeb9edc206b98bdbc4be1efcecf1612-merged.mount: Deactivated successfully.
Oct 11 04:17:42 compute-0 podman[296699]: 2025-10-11 04:17:42.616285121 +0000 UTC m=+0.195757633 container remove 7bcd9e231d3dfc825513b74d15acc6caaf045183b6798e3c4fd93820c5e1120a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 04:17:42 compute-0 systemd[1]: libpod-conmon-7bcd9e231d3dfc825513b74d15acc6caaf045183b6798e3c4fd93820c5e1120a.scope: Deactivated successfully.
Oct 11 04:17:42 compute-0 nova_compute[259850]: 2025-10-11 04:17:42.833 2 DEBUG nova.network.neutron [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Successfully updated port: 560c29a9-2a29-42bd-a75a-485874b2cbc8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:17:42 compute-0 podman[296738]: 2025-10-11 04:17:42.846296783 +0000 UTC m=+0.058946389 container create a417f9c1db1ce2cffd42401c48d4da150f680a525846fd3918f0de781f7ccd28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 04:17:42 compute-0 nova_compute[259850]: 2025-10-11 04:17:42.848 2 DEBUG oslo_concurrency.lockutils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "refresh_cache-a5deabc3-2396-4c23-81c2-959d49bb6da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:17:42 compute-0 nova_compute[259850]: 2025-10-11 04:17:42.848 2 DEBUG oslo_concurrency.lockutils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquired lock "refresh_cache-a5deabc3-2396-4c23-81c2-959d49bb6da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:17:42 compute-0 nova_compute[259850]: 2025-10-11 04:17:42.848 2 DEBUG nova.network.neutron [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:17:42 compute-0 systemd[1]: Started libpod-conmon-a417f9c1db1ce2cffd42401c48d4da150f680a525846fd3918f0de781f7ccd28.scope.
Oct 11 04:17:42 compute-0 podman[296738]: 2025-10-11 04:17:42.818570648 +0000 UTC m=+0.031220284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:17:42 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:17:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16d2127b0c7e9cb73d729eac2ab5f1228d238740293db16362169763868f2bc4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:17:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16d2127b0c7e9cb73d729eac2ab5f1228d238740293db16362169763868f2bc4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:17:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16d2127b0c7e9cb73d729eac2ab5f1228d238740293db16362169763868f2bc4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:17:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16d2127b0c7e9cb73d729eac2ab5f1228d238740293db16362169763868f2bc4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:17:42 compute-0 podman[296738]: 2025-10-11 04:17:42.956587976 +0000 UTC m=+0.169237572 container init a417f9c1db1ce2cffd42401c48d4da150f680a525846fd3918f0de781f7ccd28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_easley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:17:42 compute-0 podman[296738]: 2025-10-11 04:17:42.967296699 +0000 UTC m=+0.179946305 container start a417f9c1db1ce2cffd42401c48d4da150f680a525846fd3918f0de781f7ccd28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_easley, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 04:17:42 compute-0 podman[296738]: 2025-10-11 04:17:42.974300358 +0000 UTC m=+0.186949954 container attach a417f9c1db1ce2cffd42401c48d4da150f680a525846fd3918f0de781f7ccd28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_easley, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 04:17:43 compute-0 nova_compute[259850]: 2025-10-11 04:17:43.009 2 DEBUG nova.network.neutron [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:17:43 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:43.098 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8a473e03-2208-47ae-afcd-05ad744a5969, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:17:43 compute-0 ceph-mon[74273]: pgmap v1620: 305 pgs: 305 active+clean; 281 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 9.4 MiB/s wr, 60 op/s
Oct 11 04:17:43 compute-0 nova_compute[259850]: 2025-10-11 04:17:43.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:43 compute-0 nova_compute[259850]: 2025-10-11 04:17:43.671 2 DEBUG nova.compute.manager [req-92bb88d8-eaaa-49a4-80c2-587ed4ab7201 req-03366c63-baeb-4f7e-a5f1-99e3d5ab7ac7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Received event network-vif-plugged-86c2cef2-4a07-459b-8237-e7fda4a39f81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:17:43 compute-0 nova_compute[259850]: 2025-10-11 04:17:43.672 2 DEBUG oslo_concurrency.lockutils [req-92bb88d8-eaaa-49a4-80c2-587ed4ab7201 req-03366c63-baeb-4f7e-a5f1-99e3d5ab7ac7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "a3d4ef44-fada-41fc-9a12-641bff0536a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:17:43 compute-0 nova_compute[259850]: 2025-10-11 04:17:43.672 2 DEBUG oslo_concurrency.lockutils [req-92bb88d8-eaaa-49a4-80c2-587ed4ab7201 req-03366c63-baeb-4f7e-a5f1-99e3d5ab7ac7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "a3d4ef44-fada-41fc-9a12-641bff0536a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:17:43 compute-0 nova_compute[259850]: 2025-10-11 04:17:43.673 2 DEBUG oslo_concurrency.lockutils [req-92bb88d8-eaaa-49a4-80c2-587ed4ab7201 req-03366c63-baeb-4f7e-a5f1-99e3d5ab7ac7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "a3d4ef44-fada-41fc-9a12-641bff0536a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:17:43 compute-0 nova_compute[259850]: 2025-10-11 04:17:43.674 2 DEBUG nova.compute.manager [req-92bb88d8-eaaa-49a4-80c2-587ed4ab7201 req-03366c63-baeb-4f7e-a5f1-99e3d5ab7ac7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] No waiting events found dispatching network-vif-plugged-86c2cef2-4a07-459b-8237-e7fda4a39f81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:17:43 compute-0 nova_compute[259850]: 2025-10-11 04:17:43.674 2 WARNING nova.compute.manager [req-92bb88d8-eaaa-49a4-80c2-587ed4ab7201 req-03366c63-baeb-4f7e-a5f1-99e3d5ab7ac7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Received unexpected event network-vif-plugged-86c2cef2-4a07-459b-8237-e7fda4a39f81 for instance with vm_state building and task_state spawning.
Oct 11 04:17:43 compute-0 nova_compute[259850]: 2025-10-11 04:17:43.675 2 DEBUG nova.compute.manager [req-92bb88d8-eaaa-49a4-80c2-587ed4ab7201 req-03366c63-baeb-4f7e-a5f1-99e3d5ab7ac7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Received event network-changed-560c29a9-2a29-42bd-a75a-485874b2cbc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:17:43 compute-0 nova_compute[259850]: 2025-10-11 04:17:43.676 2 DEBUG nova.compute.manager [req-92bb88d8-eaaa-49a4-80c2-587ed4ab7201 req-03366c63-baeb-4f7e-a5f1-99e3d5ab7ac7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Refreshing instance network info cache due to event network-changed-560c29a9-2a29-42bd-a75a-485874b2cbc8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:17:43 compute-0 nova_compute[259850]: 2025-10-11 04:17:43.676 2 DEBUG oslo_concurrency.lockutils [req-92bb88d8-eaaa-49a4-80c2-587ed4ab7201 req-03366c63-baeb-4f7e-a5f1-99e3d5ab7ac7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-a5deabc3-2396-4c23-81c2-959d49bb6da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:17:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 281 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 9.4 MiB/s wr, 70 op/s
Oct 11 04:17:44 compute-0 thirsty_easley[296755]: {
Oct 11 04:17:44 compute-0 thirsty_easley[296755]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 04:17:44 compute-0 thirsty_easley[296755]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:17:44 compute-0 thirsty_easley[296755]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:17:44 compute-0 thirsty_easley[296755]:         "osd_id": 1,
Oct 11 04:17:44 compute-0 thirsty_easley[296755]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:17:44 compute-0 thirsty_easley[296755]:         "type": "bluestore"
Oct 11 04:17:44 compute-0 thirsty_easley[296755]:     },
Oct 11 04:17:44 compute-0 thirsty_easley[296755]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 04:17:44 compute-0 thirsty_easley[296755]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:17:44 compute-0 thirsty_easley[296755]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:17:44 compute-0 thirsty_easley[296755]:         "osd_id": 2,
Oct 11 04:17:44 compute-0 thirsty_easley[296755]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:17:44 compute-0 thirsty_easley[296755]:         "type": "bluestore"
Oct 11 04:17:44 compute-0 thirsty_easley[296755]:     },
Oct 11 04:17:44 compute-0 thirsty_easley[296755]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 04:17:44 compute-0 thirsty_easley[296755]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:17:44 compute-0 thirsty_easley[296755]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:17:44 compute-0 thirsty_easley[296755]:         "osd_id": 0,
Oct 11 04:17:44 compute-0 thirsty_easley[296755]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:17:44 compute-0 thirsty_easley[296755]:         "type": "bluestore"
Oct 11 04:17:44 compute-0 thirsty_easley[296755]:     }
Oct 11 04:17:44 compute-0 thirsty_easley[296755]: }
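[Editor's note] The JSON emitted above by the one-shot cephadm container (`thirsty_easley`, a `ceph-volume` inventory run) maps each OSD UUID to its device and cluster fsid. A sketch of consuming that payload, using an abridged copy of the output as sample data:

```python
import json

# Abridged from the container output above: two of the three OSD entries.
raw_list = json.loads("""
{
    "38da774d-7ecf-442f-9a7a-97978287cff8": {
        "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
        "osd_id": 1,
        "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
        "type": "bluestore"
    },
    "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
        "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
        "type": "bluestore"
    }
}
""")

# Map osd_id -> device, and confirm every OSD belongs to one cluster.
by_id = {osd["osd_id"]: osd["device"] for osd in raw_list.values()}
fsids = {osd["ceph_fsid"] for osd in raw_list.values()}
assert len(fsids) == 1, "OSDs from more than one cluster on this host"
print(sorted(by_id))  # -> [0, 1]
```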
Oct 11 04:17:44 compute-0 systemd[1]: libpod-a417f9c1db1ce2cffd42401c48d4da150f680a525846fd3918f0de781f7ccd28.scope: Deactivated successfully.
Oct 11 04:17:44 compute-0 systemd[1]: libpod-a417f9c1db1ce2cffd42401c48d4da150f680a525846fd3918f0de781f7ccd28.scope: Consumed 1.136s CPU time.
Oct 11 04:17:44 compute-0 podman[296738]: 2025-10-11 04:17:44.09969397 +0000 UTC m=+1.312343576 container died a417f9c1db1ce2cffd42401c48d4da150f680a525846fd3918f0de781f7ccd28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_easley, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 04:17:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-16d2127b0c7e9cb73d729eac2ab5f1228d238740293db16362169763868f2bc4-merged.mount: Deactivated successfully.
Oct 11 04:17:44 compute-0 podman[296738]: 2025-10-11 04:17:44.188992488 +0000 UTC m=+1.401642064 container remove a417f9c1db1ce2cffd42401c48d4da150f680a525846fd3918f0de781f7ccd28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_easley, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 04:17:44 compute-0 systemd[1]: libpod-conmon-a417f9c1db1ce2cffd42401c48d4da150f680a525846fd3918f0de781f7ccd28.scope: Deactivated successfully.
Oct 11 04:17:44 compute-0 sudo[296621]: pam_unix(sudo:session): session closed for user root
Oct 11 04:17:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:17:44 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:17:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:17:44 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:17:44 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 418587bd-a466-4d00-90a6-5846c6093f73 does not exist
Oct 11 04:17:44 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 3482dc17-af57-4b6f-a31b-071e5e1e865d does not exist
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.297 2 DEBUG nova.network.neutron [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Updating instance_info_cache with network_info: [{"id": "560c29a9-2a29-42bd-a75a-485874b2cbc8", "address": "fa:16:3e:b2:35:03", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap560c29a9-2a", "ovs_interfaceid": "560c29a9-2a29-42bd-a75a-485874b2cbc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.318 2 DEBUG oslo_concurrency.lockutils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Releasing lock "refresh_cache-a5deabc3-2396-4c23-81c2-959d49bb6da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.319 2 DEBUG nova.compute.manager [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Instance network_info: |[{"id": "560c29a9-2a29-42bd-a75a-485874b2cbc8", "address": "fa:16:3e:b2:35:03", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap560c29a9-2a", "ovs_interfaceid": "560c29a9-2a29-42bd-a75a-485874b2cbc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.319 2 DEBUG oslo_concurrency.lockutils [req-92bb88d8-eaaa-49a4-80c2-587ed4ab7201 req-03366c63-baeb-4f7e-a5f1-99e3d5ab7ac7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-a5deabc3-2396-4c23-81c2-959d49bb6da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.319 2 DEBUG nova.network.neutron [req-92bb88d8-eaaa-49a4-80c2-587ed4ab7201 req-03366c63-baeb-4f7e-a5f1-99e3d5ab7ac7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Refreshing network info cache for port 560c29a9-2a29-42bd-a75a-485874b2cbc8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.325 2 DEBUG nova.virt.libvirt.driver [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Start _get_guest_xml network_info=[{"id": "560c29a9-2a29-42bd-a75a-485874b2cbc8", "address": "fa:16:3e:b2:35:03", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap560c29a9-2a", "ovs_interfaceid": "560c29a9-2a29-42bd-a75a-485874b2cbc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-b1e9d80d-01ad-4211-b429-299f6fd98f5c', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'b1e9d80d-01ad-4211-b429-299f6fd98f5c', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'a5deabc3-2396-4c23-81c2-959d49bb6da1', 'attached_at': '', 'detached_at': '', 'volume_id': 'b1e9d80d-01ad-4211-b429-299f6fd98f5c', 'serial': 'b1e9d80d-01ad-4211-b429-299f6fd98f5c'}, 'boot_index': 0, 'guest_format': None, 'attachment_id': '5a59ebad-9d53-4c3a-ac4c-ea62cfc5fc2b', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.333 2 WARNING nova.virt.libvirt.driver [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.344 2 DEBUG nova.virt.libvirt.host [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.347 2 DEBUG nova.virt.libvirt.host [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:17:44 compute-0 sudo[296808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:17:44 compute-0 sudo[296808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.355 2 DEBUG nova.virt.libvirt.host [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:17:44 compute-0 sudo[296808]: pam_unix(sudo:session): session closed for user root
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.356 2 DEBUG nova.virt.libvirt.host [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.357 2 DEBUG nova.virt.libvirt.driver [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.358 2 DEBUG nova.virt.hardware [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.359 2 DEBUG nova.virt.hardware [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.360 2 DEBUG nova.virt.hardware [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.361 2 DEBUG nova.virt.hardware [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.362 2 DEBUG nova.virt.hardware [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.364 2 DEBUG nova.virt.hardware [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.365 2 DEBUG nova.virt.hardware [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.366 2 DEBUG nova.virt.hardware [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.366 2 DEBUG nova.virt.hardware [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.367 2 DEBUG nova.virt.hardware [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.367 2 DEBUG nova.virt.hardware [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
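[Editor's note] The `nova.virt.hardware` debug run above enumerates (sockets, cores, threads) topologies for the flavor's 1 vCPU within the 65536/65536/65536 limits and lands on (1, 1, 1). A simplified sketch of that enumeration (illustrative only, not the actual nova code; `possible_topologies` is a hypothetical name):

```python
from itertools import product

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    """Enumerate (sockets, cores, threads) triples whose product equals
    vcpus, within the per-dimension limits (simplified from the behavior
    logged by nova.virt.hardware)."""
    dims = (min(vcpus, max_sockets),
            min(vcpus, max_cores),
            min(vcpus, max_threads))
    return [(s, c, t)
            for s, c, t in product(*(range(1, d + 1) for d in dims))
            if s * c * t == vcpus]

print(possible_topologies(1))  # -> [(1, 1, 1)]
```

For the 1-vCPU `m1.nano` flavor in the log there is exactly one candidate, which matches the "Got 1 possible topologies" line.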
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.398 2 DEBUG nova.storage.rbd_utils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] rbd image a5deabc3-2396-4c23-81c2-959d49bb6da1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.403 2 DEBUG oslo_concurrency.processutils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
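[Editor's note] The subprocess above (`ceph mon dump --format=json`) is how nova's rbd backend discovers monitor addresses; its result feeds the `'hosts': ['192.168.122.100'], 'ports': ['6789']` pair seen in the rbd connection_info earlier in the log. A sketch of extracting the legacy (v1) endpoints from that JSON, using an assumed/abridged sample of the mon dump shape (the real output carries more fields, e.g. epoch and fsid):

```python
import json

# Assumed, abridged shape of `ceph mon dump --format=json` output.
mon_dump = json.loads("""
{
    "mons": [
        {"name": "compute-0",
         "public_addrs": {"addrvec": [
             {"type": "v2", "addr": "192.168.122.100:3300", "nonce": 0},
             {"type": "v1", "addr": "192.168.122.100:6789", "nonce": 0}]}}
    ]
}
""")

# Collect v1 (port 6789) monitor endpoints as parallel host/port lists.
hosts, ports = [], []
for mon in mon_dump["mons"]:
    for addr in mon["public_addrs"]["addrvec"]:
        if addr["type"] == "v1":
            host, _, port = addr["addr"].rpartition(":")
            hosts.append(host)
            ports.append(port)
print(hosts, ports)  # -> ['192.168.122.100'] ['6789']
```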
Oct 11 04:17:44 compute-0 sudo[296833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:17:44 compute-0 sudo[296833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:17:44 compute-0 sudo[296833]: pam_unix(sudo:session): session closed for user root
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.688 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156264.6872902, a3d4ef44-fada-41fc-9a12-641bff0536a4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.689 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] VM Started (Lifecycle Event)
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.694 2 DEBUG nova.compute.manager [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.698 2 DEBUG nova.virt.libvirt.driver [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.708 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.711 2 INFO nova.virt.libvirt.driver [-] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Instance spawned successfully.
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.712 2 DEBUG nova.virt.libvirt.driver [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.717 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.741 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.742 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156264.6874752, a3d4ef44-fada-41fc-9a12-641bff0536a4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.743 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] VM Paused (Lifecycle Event)
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.751 2 DEBUG nova.virt.libvirt.driver [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.752 2 DEBUG nova.virt.libvirt.driver [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.753 2 DEBUG nova.virt.libvirt.driver [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.753 2 DEBUG nova.virt.libvirt.driver [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.755 2 DEBUG nova.virt.libvirt.driver [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.756 2 DEBUG nova.virt.libvirt.driver [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.762 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.767 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156264.697008, a3d4ef44-fada-41fc-9a12-641bff0536a4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.767 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] VM Resumed (Lifecycle Event)
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.786 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.792 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.813 2 INFO nova.compute.manager [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Took 7.65 seconds to spawn the instance on the hypervisor.
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.813 2 DEBUG nova.compute.manager [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:17:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:17:44 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/883406687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.822 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.834 2 DEBUG oslo_concurrency.processutils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.867 2 DEBUG nova.virt.libvirt.vif [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:17:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1344490819',display_name='tempest-TestVolumeBootPattern-server-1344490819',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1344490819',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPDNAGL8Dkg4WTlPf45cAzyjNlMaZ9CdFtcbPahhttGWfFDtL3wJAU2pqWIpDJ427A+TFzstq4HW+M8hdPFbiZnk9MFQHh3rRb7amRkcTpIWOFEgpDmf92zhQgzfL3p2ZA==',key_name='tempest-TestVolumeBootPattern-2018721323',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09ba33ef4bd447699d74946c58839b2d',ramdisk_id='',reservation_id='r-orrljhtm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-771726270',owner_user_name='tempest-TestVolumeBootPattern-771726270-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:17:41Z,user_data=None,user_id='2a330a845d62440c871f80eda2546881',uuid=a5deabc3-2396-4c23-81c2-959d49bb6da1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "560c29a9-2a29-42bd-a75a-485874b2cbc8", "address": "fa:16:3e:b2:35:03", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap560c29a9-2a", "ovs_interfaceid": "560c29a9-2a29-42bd-a75a-485874b2cbc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.868 2 DEBUG nova.network.os_vif_util [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converting VIF {"id": "560c29a9-2a29-42bd-a75a-485874b2cbc8", "address": "fa:16:3e:b2:35:03", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap560c29a9-2a", "ovs_interfaceid": "560c29a9-2a29-42bd-a75a-485874b2cbc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.869 2 DEBUG nova.network.os_vif_util [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:35:03,bridge_name='br-int',has_traffic_filtering=True,id=560c29a9-2a29-42bd-a75a-485874b2cbc8,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap560c29a9-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.871 2 DEBUG nova.objects.instance [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lazy-loading 'pci_devices' on Instance uuid a5deabc3-2396-4c23-81c2-959d49bb6da1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.881 2 INFO nova.compute.manager [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Took 9.94 seconds to build instance.
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.886 2 DEBUG nova.virt.libvirt.driver [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:17:44 compute-0 nova_compute[259850]:   <uuid>a5deabc3-2396-4c23-81c2-959d49bb6da1</uuid>
Oct 11 04:17:44 compute-0 nova_compute[259850]:   <name>instance-00000018</name>
Oct 11 04:17:44 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:17:44 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:17:44 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <nova:name>tempest-TestVolumeBootPattern-server-1344490819</nova:name>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:17:44</nova:creationTime>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:17:44 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:17:44 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:17:44 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:17:44 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:17:44 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:17:44 compute-0 nova_compute[259850]:         <nova:user uuid="2a330a845d62440c871f80eda2546881">tempest-TestVolumeBootPattern-771726270-project-member</nova:user>
Oct 11 04:17:44 compute-0 nova_compute[259850]:         <nova:project uuid="09ba33ef4bd447699d74946c58839b2d">tempest-TestVolumeBootPattern-771726270</nova:project>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <nova:ports>
Oct 11 04:17:44 compute-0 nova_compute[259850]:         <nova:port uuid="560c29a9-2a29-42bd-a75a-485874b2cbc8">
Oct 11 04:17:44 compute-0 nova_compute[259850]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:         </nova:port>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       </nova:ports>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:17:44 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:17:44 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <system>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <entry name="serial">a5deabc3-2396-4c23-81c2-959d49bb6da1</entry>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <entry name="uuid">a5deabc3-2396-4c23-81c2-959d49bb6da1</entry>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     </system>
Oct 11 04:17:44 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:17:44 compute-0 nova_compute[259850]:   <os>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:   </os>
Oct 11 04:17:44 compute-0 nova_compute[259850]:   <features>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:   </features>
Oct 11 04:17:44 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:17:44 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:17:44 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/a5deabc3-2396-4c23-81c2-959d49bb6da1_disk.config">
Oct 11 04:17:44 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       </source>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:17:44 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <source protocol="rbd" name="volumes/volume-b1e9d80d-01ad-4211-b429-299f6fd98f5c">
Oct 11 04:17:44 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       </source>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:17:44 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <serial>b1e9d80d-01ad-4211-b429-299f6fd98f5c</serial>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <interface type="ethernet">
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <mac address="fa:16:3e:b2:35:03"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <mtu size="1442"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <target dev="tap560c29a9-2a"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     </interface>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/a5deabc3-2396-4c23-81c2-959d49bb6da1/console.log" append="off"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <video>
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     </video>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:17:44 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:17:44 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:17:44 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:17:44 compute-0 nova_compute[259850]: </domain>
Oct 11 04:17:44 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.893 2 DEBUG nova.compute.manager [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Preparing to wait for external event network-vif-plugged-560c29a9-2a29-42bd-a75a-485874b2cbc8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.893 2 DEBUG oslo_concurrency.lockutils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "a5deabc3-2396-4c23-81c2-959d49bb6da1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.894 2 DEBUG oslo_concurrency.lockutils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "a5deabc3-2396-4c23-81c2-959d49bb6da1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.894 2 DEBUG oslo_concurrency.lockutils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "a5deabc3-2396-4c23-81c2-959d49bb6da1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.895 2 DEBUG nova.virt.libvirt.vif [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:17:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1344490819',display_name='tempest-TestVolumeBootPattern-server-1344490819',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1344490819',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPDNAGL8Dkg4WTlPf45cAzyjNlMaZ9CdFtcbPahhttGWfFDtL3wJAU2pqWIpDJ427A+TFzstq4HW+M8hdPFbiZnk9MFQHh3rRb7amRkcTpIWOFEgpDmf92zhQgzfL3p2ZA==',key_name='tempest-TestVolumeBootPattern-2018721323',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09ba33ef4bd447699d74946c58839b2d',ramdisk_id='',reservation_id='r-orrljhtm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-771726270',owner_user_name='tempest-TestVolumeBootPattern-771726270-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:17:41Z,user_data=None,user_id='2a330a845d62440c871f80eda2546881',uuid=a5deabc3-2396-4c23-81c2-959d49bb6da1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "560c29a9-2a29-42bd-a75a-485874b2cbc8", "address": "fa:16:3e:b2:35:03", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap560c29a9-2a", "ovs_interfaceid": "560c29a9-2a29-42bd-a75a-485874b2cbc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.895 2 DEBUG nova.network.os_vif_util [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converting VIF {"id": "560c29a9-2a29-42bd-a75a-485874b2cbc8", "address": "fa:16:3e:b2:35:03", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap560c29a9-2a", "ovs_interfaceid": "560c29a9-2a29-42bd-a75a-485874b2cbc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.896 2 DEBUG nova.network.os_vif_util [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:35:03,bridge_name='br-int',has_traffic_filtering=True,id=560c29a9-2a29-42bd-a75a-485874b2cbc8,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap560c29a9-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.896 2 DEBUG os_vif [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:35:03,bridge_name='br-int',has_traffic_filtering=True,id=560c29a9-2a29-42bd-a75a-485874b2cbc8,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap560c29a9-2a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.898 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.899 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.902 2 DEBUG oslo_concurrency.lockutils [None req-e48d9e10-8970-4316-8d21-d7f025ac5787 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "a3d4ef44-fada-41fc-9a12-641bff0536a4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.903 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap560c29a9-2a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.903 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap560c29a9-2a, col_values=(('external_ids', {'iface-id': '560c29a9-2a29-42bd-a75a-485874b2cbc8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b2:35:03', 'vm-uuid': 'a5deabc3-2396-4c23-81c2-959d49bb6da1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:44 compute-0 NetworkManager[44920]: <info>  [1760156264.9072] manager: (tap560c29a9-2a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/122)
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.916 2 INFO os_vif [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:35:03,bridge_name='br-int',has_traffic_filtering=True,id=560c29a9-2a29-42bd-a75a-485874b2cbc8,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap560c29a9-2a')
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.970 2 DEBUG nova.virt.libvirt.driver [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.971 2 DEBUG nova.virt.libvirt.driver [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.971 2 DEBUG nova.virt.libvirt.driver [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] No VIF found with MAC fa:16:3e:b2:35:03, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.971 2 INFO nova.virt.libvirt.driver [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Using config drive
Oct 11 04:17:44 compute-0 nova_compute[259850]: 2025-10-11 04:17:44.994 2 DEBUG nova.storage.rbd_utils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] rbd image a5deabc3-2396-4c23-81c2-959d49bb6da1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:17:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:17:45 compute-0 ceph-mon[74273]: pgmap v1621: 305 pgs: 305 active+clean; 281 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 9.4 MiB/s wr, 70 op/s
Oct 11 04:17:45 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:17:45 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:17:45 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/883406687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:17:45 compute-0 nova_compute[259850]: 2025-10-11 04:17:45.322 2 INFO nova.virt.libvirt.driver [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Creating config drive at /var/lib/nova/instances/a5deabc3-2396-4c23-81c2-959d49bb6da1/disk.config
Oct 11 04:17:45 compute-0 nova_compute[259850]: 2025-10-11 04:17:45.329 2 DEBUG oslo_concurrency.processutils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a5deabc3-2396-4c23-81c2-959d49bb6da1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzygv4ap_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:17:45 compute-0 nova_compute[259850]: 2025-10-11 04:17:45.467 2 DEBUG oslo_concurrency.processutils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a5deabc3-2396-4c23-81c2-959d49bb6da1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzygv4ap_" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:17:45 compute-0 nova_compute[259850]: 2025-10-11 04:17:45.499 2 DEBUG nova.storage.rbd_utils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] rbd image a5deabc3-2396-4c23-81c2-959d49bb6da1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:17:45 compute-0 nova_compute[259850]: 2025-10-11 04:17:45.503 2 DEBUG oslo_concurrency.processutils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a5deabc3-2396-4c23-81c2-959d49bb6da1/disk.config a5deabc3-2396-4c23-81c2-959d49bb6da1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:17:45 compute-0 nova_compute[259850]: 2025-10-11 04:17:45.694 2 DEBUG oslo_concurrency.processutils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a5deabc3-2396-4c23-81c2-959d49bb6da1/disk.config a5deabc3-2396-4c23-81c2-959d49bb6da1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:17:45 compute-0 nova_compute[259850]: 2025-10-11 04:17:45.695 2 INFO nova.virt.libvirt.driver [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Deleting local config drive /var/lib/nova/instances/a5deabc3-2396-4c23-81c2-959d49bb6da1/disk.config because it was imported into RBD.
Oct 11 04:17:45 compute-0 kernel: tap560c29a9-2a: entered promiscuous mode
Oct 11 04:17:45 compute-0 NetworkManager[44920]: <info>  [1760156265.7624] manager: (tap560c29a9-2a): new Tun device (/org/freedesktop/NetworkManager/Devices/123)
Oct 11 04:17:45 compute-0 systemd-udevd[296807]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:17:45 compute-0 NetworkManager[44920]: <info>  [1760156265.7841] device (tap560c29a9-2a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:17:45 compute-0 NetworkManager[44920]: <info>  [1760156265.7849] device (tap560c29a9-2a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:17:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 281 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 32 KiB/s wr, 25 op/s
Oct 11 04:17:45 compute-0 ovn_controller[152025]: 2025-10-11T04:17:45Z|00235|binding|INFO|Claiming lport 560c29a9-2a29-42bd-a75a-485874b2cbc8 for this chassis.
Oct 11 04:17:45 compute-0 ovn_controller[152025]: 2025-10-11T04:17:45Z|00236|binding|INFO|560c29a9-2a29-42bd-a75a-485874b2cbc8: Claiming fa:16:3e:b2:35:03 10.100.0.10
Oct 11 04:17:45 compute-0 nova_compute[259850]: 2025-10-11 04:17:45.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:45.809 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:35:03 10.100.0.10'], port_security=['fa:16:3e:b2:35:03 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a5deabc3-2396-4c23-81c2-959d49bb6da1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09ba33ef4bd447699d74946c58839b2d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '802c56f7-efb1-44ec-9107-b20b0a13ea5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27b77226-c1f8-485e-969b-bae9a3bf7ceb, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=560c29a9-2a29-42bd-a75a-485874b2cbc8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:17:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:45.809 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 560c29a9-2a29-42bd-a75a-485874b2cbc8 in datapath b6cd64a2-af0b-4f57-b84c-cbc9cde5251d bound to our chassis
Oct 11 04:17:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:45.811 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b6cd64a2-af0b-4f57-b84c-cbc9cde5251d
Oct 11 04:17:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:45.821 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[4015571a-582f-4e4b-81ad-c050edf86e85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:45.822 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb6cd64a2-a1 in ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 04:17:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:45.825 267637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb6cd64a2-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 04:17:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:45.825 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[4b5c83f6-b62c-40b3-aa4d-84048180e9bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:45.826 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[59158e0e-dbfb-4be5-8a5f-eb203398e2cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:45 compute-0 ovn_controller[152025]: 2025-10-11T04:17:45Z|00237|binding|INFO|Setting lport 560c29a9-2a29-42bd-a75a-485874b2cbc8 ovn-installed in OVS
Oct 11 04:17:45 compute-0 ovn_controller[152025]: 2025-10-11T04:17:45Z|00238|binding|INFO|Setting lport 560c29a9-2a29-42bd-a75a-485874b2cbc8 up in Southbound
Oct 11 04:17:45 compute-0 nova_compute[259850]: 2025-10-11 04:17:45.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:45.845 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[2a38705d-b61e-4f22-a35e-f3b38d6404d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:45 compute-0 systemd-machined[214869]: New machine qemu-24-instance-00000018.
Oct 11 04:17:45 compute-0 systemd[1]: Started Virtual Machine qemu-24-instance-00000018.
Oct 11 04:17:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:45.874 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[a0470646-aa4a-472b-9ae9-b92db952550d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:45 compute-0 podman[296971]: 2025-10-11 04:17:45.90707325 +0000 UTC m=+0.077644339 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 11 04:17:45 compute-0 podman[296970]: 2025-10-11 04:17:45.907094971 +0000 UTC m=+0.075321853 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 11 04:17:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:45.912 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[11c781c6-fc33-438f-8c4f-324fe76502eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:45 compute-0 NetworkManager[44920]: <info>  [1760156265.9197] manager: (tapb6cd64a2-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/124)
Oct 11 04:17:45 compute-0 systemd-udevd[296969]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:17:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:45.920 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[b23d2f61-d332-41c8-8318-6f3ea71db0d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:45.956 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[2b05df78-87e4-4df6-893d-25feb4434cc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:45.960 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[ecf54379-d437-443d-bd50-4331e887fb56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:45 compute-0 NetworkManager[44920]: <info>  [1760156265.9864] device (tapb6cd64a2-a0): carrier: link connected
Oct 11 04:17:45 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:45.991 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[feb23890-ba75-4f26-a8bb-35cb4140f05e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:46.009 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[32211a56-60a4-45c6-a85b-fb91cdbb8489]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6cd64a2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:9f:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465102, 'reachable_time': 34708, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297041, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:46.026 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[f49cdfd7-48e5-425f-92d2-f4ba28976b6e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:9f02'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465102, 'tstamp': 465102}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297042, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:46.045 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[ecd77fdf-69ed-4209-8f4a-e2b239975b72]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6cd64a2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:9f:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465102, 'reachable_time': 34708, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297043, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:46.082 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[35e88095-ffce-454c-ac99-6ab9c6792a3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:46.152 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[b6554581-1a37-4b34-a380-83944a45b5ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:46.153 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6cd64a2-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:46.154 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:46.154 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6cd64a2-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:17:46 compute-0 NetworkManager[44920]: <info>  [1760156266.1574] manager: (tapb6cd64a2-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/125)
Oct 11 04:17:46 compute-0 nova_compute[259850]: 2025-10-11 04:17:46.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:46 compute-0 kernel: tapb6cd64a2-a0: entered promiscuous mode
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:46.160 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb6cd64a2-a0, col_values=(('external_ids', {'iface-id': 'c2cbaf15-a50c-40b8-9f65-12b11618e7fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:17:46 compute-0 ovn_controller[152025]: 2025-10-11T04:17:46Z|00239|binding|INFO|Releasing lport c2cbaf15-a50c-40b8-9f65-12b11618e7fc from this chassis (sb_readonly=0)
Oct 11 04:17:46 compute-0 nova_compute[259850]: 2025-10-11 04:17:46.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:46.195 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b6cd64a2-af0b-4f57-b84c-cbc9cde5251d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b6cd64a2-af0b-4f57-b84c-cbc9cde5251d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:46.200 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[c6516028-9375-4108-b3e1-4c22887c12ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:46.201 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]: global
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]:     log         /dev/log local0 debug
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]:     log-tag     haproxy-metadata-proxy-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]:     user        root
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]:     group       root
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]:     maxconn     1024
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]:     pidfile     /var/lib/neutron/external/pids/b6cd64a2-af0b-4f57-b84c-cbc9cde5251d.pid.haproxy
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]:     daemon
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]: defaults
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]:     log global
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]:     mode http
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]:     option httplog
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]:     option dontlognull
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]:     option http-server-close
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]:     option forwardfor
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]:     retries                 3
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]:     timeout http-request    30s
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]:     timeout connect         30s
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]:     timeout client          32s
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]:     timeout server          32s
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]:     timeout http-keep-alive 30s
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]: listen listener
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]:     bind 169.254.169.254:80
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]:     http-request add-header X-OVN-Network-ID b6cd64a2-af0b-4f57-b84c-cbc9cde5251d
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 04:17:46 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:17:46.203 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'env', 'PROCESS_TAG=haproxy-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b6cd64a2-af0b-4f57-b84c-cbc9cde5251d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 04:17:46 compute-0 nova_compute[259850]: 2025-10-11 04:17:46.457 2 DEBUG nova.network.neutron [req-92bb88d8-eaaa-49a4-80c2-587ed4ab7201 req-03366c63-baeb-4f7e-a5f1-99e3d5ab7ac7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Updated VIF entry in instance network info cache for port 560c29a9-2a29-42bd-a75a-485874b2cbc8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:17:46 compute-0 nova_compute[259850]: 2025-10-11 04:17:46.458 2 DEBUG nova.network.neutron [req-92bb88d8-eaaa-49a4-80c2-587ed4ab7201 req-03366c63-baeb-4f7e-a5f1-99e3d5ab7ac7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Updating instance_info_cache with network_info: [{"id": "560c29a9-2a29-42bd-a75a-485874b2cbc8", "address": "fa:16:3e:b2:35:03", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap560c29a9-2a", "ovs_interfaceid": "560c29a9-2a29-42bd-a75a-485874b2cbc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:17:46 compute-0 nova_compute[259850]: 2025-10-11 04:17:46.475 2 DEBUG oslo_concurrency.lockutils [req-92bb88d8-eaaa-49a4-80c2-587ed4ab7201 req-03366c63-baeb-4f7e-a5f1-99e3d5ab7ac7 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-a5deabc3-2396-4c23-81c2-959d49bb6da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:17:46 compute-0 podman[297117]: 2025-10-11 04:17:46.720104619 +0000 UTC m=+0.076369683 container create c5ab96a640495b4ce5473c9c41e9eff99602f2f92dedd41740eafa8ad5e88b29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:17:46 compute-0 systemd[1]: Started libpod-conmon-c5ab96a640495b4ce5473c9c41e9eff99602f2f92dedd41740eafa8ad5e88b29.scope.
Oct 11 04:17:46 compute-0 podman[297117]: 2025-10-11 04:17:46.687136226 +0000 UTC m=+0.043401320 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 04:17:46 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:17:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7746dd53ae5b557815f637423b887d95b31ba92a9b79b687487574b0569cc81d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:17:46 compute-0 podman[297117]: 2025-10-11 04:17:46.810522639 +0000 UTC m=+0.166787703 container init c5ab96a640495b4ce5473c9c41e9eff99602f2f92dedd41740eafa8ad5e88b29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 04:17:46 compute-0 podman[297117]: 2025-10-11 04:17:46.81974863 +0000 UTC m=+0.176013694 container start c5ab96a640495b4ce5473c9c41e9eff99602f2f92dedd41740eafa8ad5e88b29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 11 04:17:46 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[297132]: [NOTICE]   (297136) : New worker (297138) forked
Oct 11 04:17:46 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[297132]: [NOTICE]   (297136) : Loading success.
Oct 11 04:17:47 compute-0 nova_compute[259850]: 2025-10-11 04:17:47.077 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156267.0766995, a5deabc3-2396-4c23-81c2-959d49bb6da1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:17:47 compute-0 nova_compute[259850]: 2025-10-11 04:17:47.077 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] VM Started (Lifecycle Event)
Oct 11 04:17:47 compute-0 nova_compute[259850]: 2025-10-11 04:17:47.097 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:17:47 compute-0 nova_compute[259850]: 2025-10-11 04:17:47.100 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156267.0768406, a5deabc3-2396-4c23-81c2-959d49bb6da1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:17:47 compute-0 nova_compute[259850]: 2025-10-11 04:17:47.101 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] VM Paused (Lifecycle Event)
Oct 11 04:17:47 compute-0 nova_compute[259850]: 2025-10-11 04:17:47.116 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:17:47 compute-0 nova_compute[259850]: 2025-10-11 04:17:47.118 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:17:47 compute-0 nova_compute[259850]: 2025-10-11 04:17:47.142 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:17:47 compute-0 ceph-mon[74273]: pgmap v1622: 305 pgs: 305 active+clean; 281 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 32 KiB/s wr, 25 op/s
Oct 11 04:17:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1623: 305 pgs: 305 active+clean; 281 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 423 KiB/s rd, 32 KiB/s wr, 43 op/s
Oct 11 04:17:48 compute-0 nova_compute[259850]: 2025-10-11 04:17:48.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:48 compute-0 nova_compute[259850]: 2025-10-11 04:17:48.384 2 DEBUG nova.compute.manager [req-661f35a5-6361-49b6-8d2b-9123c44b1b74 req-9604601b-c464-45b6-b24d-a6f992ed4fff f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Received event network-vif-plugged-560c29a9-2a29-42bd-a75a-485874b2cbc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:17:48 compute-0 nova_compute[259850]: 2025-10-11 04:17:48.385 2 DEBUG oslo_concurrency.lockutils [req-661f35a5-6361-49b6-8d2b-9123c44b1b74 req-9604601b-c464-45b6-b24d-a6f992ed4fff f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "a5deabc3-2396-4c23-81c2-959d49bb6da1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:17:48 compute-0 nova_compute[259850]: 2025-10-11 04:17:48.385 2 DEBUG oslo_concurrency.lockutils [req-661f35a5-6361-49b6-8d2b-9123c44b1b74 req-9604601b-c464-45b6-b24d-a6f992ed4fff f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "a5deabc3-2396-4c23-81c2-959d49bb6da1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:17:48 compute-0 nova_compute[259850]: 2025-10-11 04:17:48.385 2 DEBUG oslo_concurrency.lockutils [req-661f35a5-6361-49b6-8d2b-9123c44b1b74 req-9604601b-c464-45b6-b24d-a6f992ed4fff f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "a5deabc3-2396-4c23-81c2-959d49bb6da1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:17:48 compute-0 nova_compute[259850]: 2025-10-11 04:17:48.385 2 DEBUG nova.compute.manager [req-661f35a5-6361-49b6-8d2b-9123c44b1b74 req-9604601b-c464-45b6-b24d-a6f992ed4fff f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Processing event network-vif-plugged-560c29a9-2a29-42bd-a75a-485874b2cbc8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:17:48 compute-0 nova_compute[259850]: 2025-10-11 04:17:48.386 2 DEBUG nova.compute.manager [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:17:48 compute-0 nova_compute[259850]: 2025-10-11 04:17:48.407 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156268.3975368, a5deabc3-2396-4c23-81c2-959d49bb6da1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:17:48 compute-0 nova_compute[259850]: 2025-10-11 04:17:48.408 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] VM Resumed (Lifecycle Event)
Oct 11 04:17:48 compute-0 nova_compute[259850]: 2025-10-11 04:17:48.413 2 DEBUG nova.virt.libvirt.driver [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:17:48 compute-0 nova_compute[259850]: 2025-10-11 04:17:48.419 2 INFO nova.virt.libvirt.driver [-] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Instance spawned successfully.
Oct 11 04:17:48 compute-0 nova_compute[259850]: 2025-10-11 04:17:48.420 2 DEBUG nova.virt.libvirt.driver [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:17:48 compute-0 nova_compute[259850]: 2025-10-11 04:17:48.433 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:17:48 compute-0 nova_compute[259850]: 2025-10-11 04:17:48.443 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:17:48 compute-0 nova_compute[259850]: 2025-10-11 04:17:48.450 2 DEBUG nova.virt.libvirt.driver [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:17:48 compute-0 nova_compute[259850]: 2025-10-11 04:17:48.451 2 DEBUG nova.virt.libvirt.driver [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:17:48 compute-0 nova_compute[259850]: 2025-10-11 04:17:48.452 2 DEBUG nova.virt.libvirt.driver [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:17:48 compute-0 nova_compute[259850]: 2025-10-11 04:17:48.453 2 DEBUG nova.virt.libvirt.driver [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:17:48 compute-0 nova_compute[259850]: 2025-10-11 04:17:48.454 2 DEBUG nova.virt.libvirt.driver [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:17:48 compute-0 nova_compute[259850]: 2025-10-11 04:17:48.455 2 DEBUG nova.virt.libvirt.driver [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:17:48 compute-0 nova_compute[259850]: 2025-10-11 04:17:48.471 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:17:48 compute-0 nova_compute[259850]: 2025-10-11 04:17:48.517 2 INFO nova.compute.manager [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Took 6.28 seconds to spawn the instance on the hypervisor.
Oct 11 04:17:48 compute-0 nova_compute[259850]: 2025-10-11 04:17:48.518 2 DEBUG nova.compute.manager [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:17:48 compute-0 nova_compute[259850]: 2025-10-11 04:17:48.578 2 INFO nova.compute.manager [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Took 8.51 seconds to build instance.
Oct 11 04:17:48 compute-0 unix_chkpwd[297149]: password check failed for user (root)
Oct 11 04:17:48 compute-0 sshd-session[297147]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.7  user=root
Oct 11 04:17:48 compute-0 nova_compute[259850]: 2025-10-11 04:17:48.592 2 DEBUG oslo_concurrency.lockutils [None req-fa426191-cae8-4e3b-9372-779e15f9f60d 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "a5deabc3-2396-4c23-81c2-959d49bb6da1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:17:49 compute-0 ceph-mon[74273]: pgmap v1623: 305 pgs: 305 active+clean; 281 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 423 KiB/s rd, 32 KiB/s wr, 43 op/s
Oct 11 04:17:49 compute-0 nova_compute[259850]: 2025-10-11 04:17:49.592 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760156254.5911725, 170beb52-e998-40b5-8315-a0d138f2cbf6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:17:49 compute-0 nova_compute[259850]: 2025-10-11 04:17:49.592 2 INFO nova.compute.manager [-] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] VM Stopped (Lifecycle Event)
Oct 11 04:17:49 compute-0 nova_compute[259850]: 2025-10-11 04:17:49.615 2 DEBUG nova.compute.manager [None req-358779f1-866f-4608-bdf1-dbafaf934cbd - - - - - -] [instance: 170beb52-e998-40b5-8315-a0d138f2cbf6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:17:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 281 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 45 KiB/s wr, 98 op/s
Oct 11 04:17:49 compute-0 nova_compute[259850]: 2025-10-11 04:17:49.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:17:50 compute-0 sshd-session[297147]: Failed password for root from 193.46.255.7 port 15962 ssh2
Oct 11 04:17:50 compute-0 nova_compute[259850]: 2025-10-11 04:17:50.470 2 DEBUG nova.compute.manager [req-1d84f984-83ca-496c-a09d-7f327140c8a9 req-6f2d844b-d536-49fd-b91a-73d641fc4693 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Received event network-vif-plugged-560c29a9-2a29-42bd-a75a-485874b2cbc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:17:50 compute-0 nova_compute[259850]: 2025-10-11 04:17:50.470 2 DEBUG oslo_concurrency.lockutils [req-1d84f984-83ca-496c-a09d-7f327140c8a9 req-6f2d844b-d536-49fd-b91a-73d641fc4693 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "a5deabc3-2396-4c23-81c2-959d49bb6da1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:17:50 compute-0 nova_compute[259850]: 2025-10-11 04:17:50.471 2 DEBUG oslo_concurrency.lockutils [req-1d84f984-83ca-496c-a09d-7f327140c8a9 req-6f2d844b-d536-49fd-b91a-73d641fc4693 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "a5deabc3-2396-4c23-81c2-959d49bb6da1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:17:50 compute-0 nova_compute[259850]: 2025-10-11 04:17:50.471 2 DEBUG oslo_concurrency.lockutils [req-1d84f984-83ca-496c-a09d-7f327140c8a9 req-6f2d844b-d536-49fd-b91a-73d641fc4693 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "a5deabc3-2396-4c23-81c2-959d49bb6da1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:17:50 compute-0 nova_compute[259850]: 2025-10-11 04:17:50.472 2 DEBUG nova.compute.manager [req-1d84f984-83ca-496c-a09d-7f327140c8a9 req-6f2d844b-d536-49fd-b91a-73d641fc4693 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] No waiting events found dispatching network-vif-plugged-560c29a9-2a29-42bd-a75a-485874b2cbc8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:17:50 compute-0 nova_compute[259850]: 2025-10-11 04:17:50.472 2 WARNING nova.compute.manager [req-1d84f984-83ca-496c-a09d-7f327140c8a9 req-6f2d844b-d536-49fd-b91a-73d641fc4693 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Received unexpected event network-vif-plugged-560c29a9-2a29-42bd-a75a-485874b2cbc8 for instance with vm_state active and task_state None.
Oct 11 04:17:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:17:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3138127793' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:17:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:17:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3138127793' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:17:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:17:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:17:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:17:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:17:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:17:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:17:51 compute-0 ceph-mon[74273]: pgmap v1624: 305 pgs: 305 active+clean; 281 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 45 KiB/s wr, 98 op/s
Oct 11 04:17:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3138127793' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:17:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3138127793' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:17:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 305 active+clean; 281 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 25 KiB/s wr, 83 op/s
Oct 11 04:17:52 compute-0 unix_chkpwd[297150]: password check failed for user (root)
Oct 11 04:17:53 compute-0 ceph-mon[74273]: pgmap v1625: 305 pgs: 305 active+clean; 281 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 25 KiB/s wr, 83 op/s
Oct 11 04:17:53 compute-0 nova_compute[259850]: 2025-10-11 04:17:53.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:53 compute-0 nova_compute[259850]: 2025-10-11 04:17:53.452 2 DEBUG nova.compute.manager [req-d29f41f0-dead-4cee-a604-5e4ec33e8b92 req-54fd1c6b-2d2a-4e29-a090-62bf773ca186 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Received event network-changed-86c2cef2-4a07-459b-8237-e7fda4a39f81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:17:53 compute-0 nova_compute[259850]: 2025-10-11 04:17:53.452 2 DEBUG nova.compute.manager [req-d29f41f0-dead-4cee-a604-5e4ec33e8b92 req-54fd1c6b-2d2a-4e29-a090-62bf773ca186 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Refreshing instance network info cache due to event network-changed-86c2cef2-4a07-459b-8237-e7fda4a39f81. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:17:53 compute-0 nova_compute[259850]: 2025-10-11 04:17:53.453 2 DEBUG oslo_concurrency.lockutils [req-d29f41f0-dead-4cee-a604-5e4ec33e8b92 req-54fd1c6b-2d2a-4e29-a090-62bf773ca186 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-a3d4ef44-fada-41fc-9a12-641bff0536a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:17:53 compute-0 nova_compute[259850]: 2025-10-11 04:17:53.453 2 DEBUG oslo_concurrency.lockutils [req-d29f41f0-dead-4cee-a604-5e4ec33e8b92 req-54fd1c6b-2d2a-4e29-a090-62bf773ca186 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-a3d4ef44-fada-41fc-9a12-641bff0536a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:17:53 compute-0 nova_compute[259850]: 2025-10-11 04:17:53.453 2 DEBUG nova.network.neutron [req-d29f41f0-dead-4cee-a604-5e4ec33e8b92 req-54fd1c6b-2d2a-4e29-a090-62bf773ca186 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Refreshing network info cache for port 86c2cef2-4a07-459b-8237-e7fda4a39f81 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:17:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 305 active+clean; 281 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 25 KiB/s wr, 147 op/s
Oct 11 04:17:54 compute-0 sshd-session[297147]: Failed password for root from 193.46.255.7 port 15962 ssh2
Oct 11 04:17:54 compute-0 unix_chkpwd[297151]: password check failed for user (root)
Oct 11 04:17:54 compute-0 nova_compute[259850]: 2025-10-11 04:17:54.566 2 DEBUG nova.compute.manager [req-b06cd699-2f44-46c7-b3ed-121af8386127 req-288e43f5-ce63-4744-a121-ffb173ae4b94 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Received event network-changed-560c29a9-2a29-42bd-a75a-485874b2cbc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:17:54 compute-0 nova_compute[259850]: 2025-10-11 04:17:54.567 2 DEBUG nova.compute.manager [req-b06cd699-2f44-46c7-b3ed-121af8386127 req-288e43f5-ce63-4744-a121-ffb173ae4b94 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Refreshing instance network info cache due to event network-changed-560c29a9-2a29-42bd-a75a-485874b2cbc8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:17:54 compute-0 nova_compute[259850]: 2025-10-11 04:17:54.567 2 DEBUG oslo_concurrency.lockutils [req-b06cd699-2f44-46c7-b3ed-121af8386127 req-288e43f5-ce63-4744-a121-ffb173ae4b94 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-a5deabc3-2396-4c23-81c2-959d49bb6da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:17:54 compute-0 nova_compute[259850]: 2025-10-11 04:17:54.568 2 DEBUG oslo_concurrency.lockutils [req-b06cd699-2f44-46c7-b3ed-121af8386127 req-288e43f5-ce63-4744-a121-ffb173ae4b94 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-a5deabc3-2396-4c23-81c2-959d49bb6da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:17:54 compute-0 nova_compute[259850]: 2025-10-11 04:17:54.568 2 DEBUG nova.network.neutron [req-b06cd699-2f44-46c7-b3ed-121af8386127 req-288e43f5-ce63-4744-a121-ffb173ae4b94 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Refreshing network info cache for port 560c29a9-2a29-42bd-a75a-485874b2cbc8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:17:54 compute-0 nova_compute[259850]: 2025-10-11 04:17:54.856 2 DEBUG nova.network.neutron [req-d29f41f0-dead-4cee-a604-5e4ec33e8b92 req-54fd1c6b-2d2a-4e29-a090-62bf773ca186 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Updated VIF entry in instance network info cache for port 86c2cef2-4a07-459b-8237-e7fda4a39f81. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:17:54 compute-0 nova_compute[259850]: 2025-10-11 04:17:54.857 2 DEBUG nova.network.neutron [req-d29f41f0-dead-4cee-a604-5e4ec33e8b92 req-54fd1c6b-2d2a-4e29-a090-62bf773ca186 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Updating instance_info_cache with network_info: [{"id": "86c2cef2-4a07-459b-8237-e7fda4a39f81", "address": "fa:16:3e:f3:89:d6", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86c2cef2-4a", "ovs_interfaceid": "86c2cef2-4a07-459b-8237-e7fda4a39f81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:17:54 compute-0 nova_compute[259850]: 2025-10-11 04:17:54.889 2 DEBUG oslo_concurrency.lockutils [req-d29f41f0-dead-4cee-a604-5e4ec33e8b92 req-54fd1c6b-2d2a-4e29-a090-62bf773ca186 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-a3d4ef44-fada-41fc-9a12-641bff0536a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:17:54 compute-0 nova_compute[259850]: 2025-10-11 04:17:54.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:17:55 compute-0 ceph-mon[74273]: pgmap v1626: 305 pgs: 305 active+clean; 281 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 25 KiB/s wr, 147 op/s
Oct 11 04:17:55 compute-0 sshd-session[297147]: Failed password for root from 193.46.255.7 port 15962 ssh2
Oct 11 04:17:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1627: 305 pgs: 305 active+clean; 281 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 12 KiB/s wr, 137 op/s
Oct 11 04:17:56 compute-0 sshd-session[297147]: Received disconnect from 193.46.255.7 port 15962:11:  [preauth]
Oct 11 04:17:56 compute-0 sshd-session[297147]: Disconnected from authenticating user root 193.46.255.7 port 15962 [preauth]
Oct 11 04:17:56 compute-0 sshd-session[297147]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.7  user=root
Oct 11 04:17:56 compute-0 nova_compute[259850]: 2025-10-11 04:17:56.446 2 DEBUG nova.network.neutron [req-b06cd699-2f44-46c7-b3ed-121af8386127 req-288e43f5-ce63-4744-a121-ffb173ae4b94 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Updated VIF entry in instance network info cache for port 560c29a9-2a29-42bd-a75a-485874b2cbc8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:17:56 compute-0 nova_compute[259850]: 2025-10-11 04:17:56.447 2 DEBUG nova.network.neutron [req-b06cd699-2f44-46c7-b3ed-121af8386127 req-288e43f5-ce63-4744-a121-ffb173ae4b94 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Updating instance_info_cache with network_info: [{"id": "560c29a9-2a29-42bd-a75a-485874b2cbc8", "address": "fa:16:3e:b2:35:03", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap560c29a9-2a", "ovs_interfaceid": "560c29a9-2a29-42bd-a75a-485874b2cbc8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:17:56 compute-0 nova_compute[259850]: 2025-10-11 04:17:56.468 2 DEBUG oslo_concurrency.lockutils [req-b06cd699-2f44-46c7-b3ed-121af8386127 req-288e43f5-ce63-4744-a121-ffb173ae4b94 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-a5deabc3-2396-4c23-81c2-959d49bb6da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:17:57 compute-0 ovn_controller[152025]: 2025-10-11T04:17:57Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f3:89:d6 10.100.0.11
Oct 11 04:17:57 compute-0 ovn_controller[152025]: 2025-10-11T04:17:57Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f3:89:d6 10.100.0.11
Oct 11 04:17:57 compute-0 unix_chkpwd[297155]: password check failed for user (root)
Oct 11 04:17:57 compute-0 sshd-session[297153]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.7  user=root
Oct 11 04:17:57 compute-0 ceph-mon[74273]: pgmap v1627: 305 pgs: 305 active+clean; 281 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 12 KiB/s wr, 137 op/s
Oct 11 04:17:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 305 active+clean; 306 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 159 op/s
Oct 11 04:17:58 compute-0 nova_compute[259850]: 2025-10-11 04:17:58.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:17:58 compute-0 sshd-session[297153]: Failed password for root from 193.46.255.7 port 30210 ssh2
Oct 11 04:17:59 compute-0 unix_chkpwd[297156]: password check failed for user (root)
Oct 11 04:17:59 compute-0 ovn_controller[152025]: 2025-10-11T04:17:59Z|00050|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.5 does not match offer 10.100.0.10
Oct 11 04:17:59 compute-0 ovn_controller[152025]: 2025-10-11T04:17:59Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:b2:35:03 10.100.0.10
Oct 11 04:17:59 compute-0 ceph-mon[74273]: pgmap v1628: 305 pgs: 305 active+clean; 306 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 159 op/s
Oct 11 04:17:59 compute-0 podman[297157]: 2025-10-11 04:17:59.438468975 +0000 UTC m=+0.141910489 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 04:17:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 350 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.8 MiB/s wr, 196 op/s
Oct 11 04:17:59 compute-0 nova_compute[259850]: 2025-10-11 04:17:59.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:18:00 compute-0 ceph-mon[74273]: pgmap v1629: 305 pgs: 305 active+clean; 350 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.8 MiB/s wr, 196 op/s
Oct 11 04:18:01 compute-0 sshd-session[297153]: Failed password for root from 193.46.255.7 port 30210 ssh2
Oct 11 04:18:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 305 active+clean; 350 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.8 MiB/s wr, 140 op/s
Oct 11 04:18:02 compute-0 ceph-mon[74273]: pgmap v1630: 305 pgs: 305 active+clean; 350 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.8 MiB/s wr, 140 op/s
Oct 11 04:18:02 compute-0 unix_chkpwd[297183]: password check failed for user (root)
Oct 11 04:18:03 compute-0 nova_compute[259850]: 2025-10-11 04:18:03.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:03 compute-0 podman[297184]: 2025-10-11 04:18:03.387677212 +0000 UTC m=+0.088426344 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 11 04:18:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 350 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 5.8 MiB/s wr, 185 op/s
Oct 11 04:18:03 compute-0 ovn_controller[152025]: 2025-10-11T04:18:03Z|00052|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.5 does not match offer 10.100.0.10
Oct 11 04:18:03 compute-0 ovn_controller[152025]: 2025-10-11T04:18:03Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:b2:35:03 10.100.0.10
Oct 11 04:18:04 compute-0 ovn_controller[152025]: 2025-10-11T04:18:04Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b2:35:03 10.100.0.10
Oct 11 04:18:04 compute-0 ovn_controller[152025]: 2025-10-11T04:18:04Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b2:35:03 10.100.0.10
Oct 11 04:18:04 compute-0 sshd-session[297153]: Failed password for root from 193.46.255.7 port 30210 ssh2
Oct 11 04:18:04 compute-0 ceph-mon[74273]: pgmap v1631: 305 pgs: 305 active+clean; 350 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 5.8 MiB/s wr, 185 op/s
Oct 11 04:18:04 compute-0 sshd-session[297153]: Received disconnect from 193.46.255.7 port 30210:11:  [preauth]
Oct 11 04:18:04 compute-0 sshd-session[297153]: Disconnected from authenticating user root 193.46.255.7 port 30210 [preauth]
Oct 11 04:18:04 compute-0 sshd-session[297153]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.7  user=root
Oct 11 04:18:04 compute-0 nova_compute[259850]: 2025-10-11 04:18:04.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:18:05 compute-0 unix_chkpwd[297205]: password check failed for user (root)
Oct 11 04:18:05 compute-0 sshd-session[297203]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.7  user=root
Oct 11 04:18:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 305 active+clean; 350 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 5.8 MiB/s wr, 121 op/s
Oct 11 04:18:06 compute-0 ceph-mon[74273]: pgmap v1632: 305 pgs: 305 active+clean; 350 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 5.8 MiB/s wr, 121 op/s
Oct 11 04:18:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 352 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 5.8 MiB/s wr, 122 op/s
Oct 11 04:18:07 compute-0 sshd-session[297203]: Failed password for root from 193.46.255.7 port 62190 ssh2
Oct 11 04:18:08 compute-0 nova_compute[259850]: 2025-10-11 04:18:08.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:08 compute-0 ceph-mon[74273]: pgmap v1633: 305 pgs: 305 active+clean; 352 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 5.8 MiB/s wr, 122 op/s
Oct 11 04:18:09 compute-0 unix_chkpwd[297206]: password check failed for user (root)
Oct 11 04:18:09 compute-0 nova_compute[259850]: 2025-10-11 04:18:09.562 2 DEBUG oslo_concurrency.lockutils [None req-a5999865-6d6d-4980-bae6-d56b6b831b10 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "a3d4ef44-fada-41fc-9a12-641bff0536a4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:18:09 compute-0 nova_compute[259850]: 2025-10-11 04:18:09.563 2 DEBUG oslo_concurrency.lockutils [None req-a5999865-6d6d-4980-bae6-d56b6b831b10 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "a3d4ef44-fada-41fc-9a12-641bff0536a4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:18:09 compute-0 nova_compute[259850]: 2025-10-11 04:18:09.564 2 DEBUG oslo_concurrency.lockutils [None req-a5999865-6d6d-4980-bae6-d56b6b831b10 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "a3d4ef44-fada-41fc-9a12-641bff0536a4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:18:09 compute-0 nova_compute[259850]: 2025-10-11 04:18:09.564 2 DEBUG oslo_concurrency.lockutils [None req-a5999865-6d6d-4980-bae6-d56b6b831b10 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "a3d4ef44-fada-41fc-9a12-641bff0536a4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:18:09 compute-0 nova_compute[259850]: 2025-10-11 04:18:09.565 2 DEBUG oslo_concurrency.lockutils [None req-a5999865-6d6d-4980-bae6-d56b6b831b10 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "a3d4ef44-fada-41fc-9a12-641bff0536a4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:18:09 compute-0 nova_compute[259850]: 2025-10-11 04:18:09.567 2 INFO nova.compute.manager [None req-a5999865-6d6d-4980-bae6-d56b6b831b10 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Terminating instance
Oct 11 04:18:09 compute-0 nova_compute[259850]: 2025-10-11 04:18:09.569 2 DEBUG nova.compute.manager [None req-a5999865-6d6d-4980-bae6-d56b6b831b10 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:18:09 compute-0 kernel: tap86c2cef2-4a (unregistering): left promiscuous mode
Oct 11 04:18:09 compute-0 NetworkManager[44920]: <info>  [1760156289.6395] device (tap86c2cef2-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:18:09 compute-0 ovn_controller[152025]: 2025-10-11T04:18:09Z|00240|binding|INFO|Releasing lport 86c2cef2-4a07-459b-8237-e7fda4a39f81 from this chassis (sb_readonly=0)
Oct 11 04:18:09 compute-0 ovn_controller[152025]: 2025-10-11T04:18:09Z|00241|binding|INFO|Setting lport 86c2cef2-4a07-459b-8237-e7fda4a39f81 down in Southbound
Oct 11 04:18:09 compute-0 nova_compute[259850]: 2025-10-11 04:18:09.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:09 compute-0 ovn_controller[152025]: 2025-10-11T04:18:09Z|00242|binding|INFO|Removing iface tap86c2cef2-4a ovn-installed in OVS
Oct 11 04:18:09 compute-0 nova_compute[259850]: 2025-10-11 04:18:09.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:09.706 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:89:d6 10.100.0.11'], port_security=['fa:16:3e:f3:89:d6 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'a3d4ef44-fada-41fc-9a12-641bff0536a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfcc78a613a4442d88231798d10634c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8fd56502-e733-457c-89c4-96f24dc7f6d9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=756f4bd0-4cbc-4611-9397-52eb34ec09ab, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=86c2cef2-4a07-459b-8237-e7fda4a39f81) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:18:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:09.709 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 86c2cef2-4a07-459b-8237-e7fda4a39f81 in datapath 1c86b315-3a4b-4db0-8b3c-39658c19ef9c unbound from our chassis
Oct 11 04:18:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:09.712 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1c86b315-3a4b-4db0-8b3c-39658c19ef9c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:18:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:09.715 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[39000bb7-7578-4d60-a98d-c15af21adb16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:09.715 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c namespace which is not needed anymore
Oct 11 04:18:09 compute-0 nova_compute[259850]: 2025-10-11 04:18:09.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:09 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000017.scope: Deactivated successfully.
Oct 11 04:18:09 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000017.scope: Consumed 16.063s CPU time.
Oct 11 04:18:09 compute-0 systemd-machined[214869]: Machine qemu-23-instance-00000017 terminated.
Oct 11 04:18:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1634: 305 pgs: 305 active+clean; 352 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 994 KiB/s rd, 3.7 MiB/s wr, 100 op/s
Oct 11 04:18:09 compute-0 nova_compute[259850]: 2025-10-11 04:18:09.819 2 INFO nova.virt.libvirt.driver [-] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Instance destroyed successfully.
Oct 11 04:18:09 compute-0 nova_compute[259850]: 2025-10-11 04:18:09.820 2 DEBUG nova.objects.instance [None req-a5999865-6d6d-4980-bae6-d56b6b831b10 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lazy-loading 'resources' on Instance uuid a3d4ef44-fada-41fc-9a12-641bff0536a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:18:09 compute-0 nova_compute[259850]: 2025-10-11 04:18:09.836 2 DEBUG nova.virt.libvirt.vif [None req-a5999865-6d6d-4980-bae6-d56b6b831b10 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:17:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-87844196',display_name='tempest-TransferEncryptedVolumeTest-server-87844196',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-87844196',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD3jnhyRBlsX5VUAbGtWGwnjXDJ0mJnyIiUqsAyoyyDd6H6M/5DSgSJwDh4tkaNqmtKzFuE8XyeYbmLUFFbEZUE8j9mB2B0zj5nn/QlG6TOs2XcStAmJ+ejUjSzP7rh2Lg==',key_name='tempest-TransferEncryptedVolumeTest-513808347',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:17:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bfcc78a613a4442d88231798d10634c9',ramdisk_id='',reservation_id='r-4imm6rsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1941581237',owner_user_name='tempest-TransferEncryptedVolumeTest-1941581237-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:17:44Z,user_data=None,user_id='77d11e860ca1460cab1c20bca4d4c0ea',uuid=a3d4ef44-fada-41fc-9a12-641bff0536a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "86c2cef2-4a07-459b-8237-e7fda4a39f81", "address": "fa:16:3e:f3:89:d6", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86c2cef2-4a", "ovs_interfaceid": "86c2cef2-4a07-459b-8237-e7fda4a39f81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:18:09 compute-0 nova_compute[259850]: 2025-10-11 04:18:09.836 2 DEBUG nova.network.os_vif_util [None req-a5999865-6d6d-4980-bae6-d56b6b831b10 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Converting VIF {"id": "86c2cef2-4a07-459b-8237-e7fda4a39f81", "address": "fa:16:3e:f3:89:d6", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86c2cef2-4a", "ovs_interfaceid": "86c2cef2-4a07-459b-8237-e7fda4a39f81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:18:09 compute-0 nova_compute[259850]: 2025-10-11 04:18:09.838 2 DEBUG nova.network.os_vif_util [None req-a5999865-6d6d-4980-bae6-d56b6b831b10 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f3:89:d6,bridge_name='br-int',has_traffic_filtering=True,id=86c2cef2-4a07-459b-8237-e7fda4a39f81,network=Network(1c86b315-3a4b-4db0-8b3c-39658c19ef9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86c2cef2-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:18:09 compute-0 nova_compute[259850]: 2025-10-11 04:18:09.839 2 DEBUG os_vif [None req-a5999865-6d6d-4980-bae6-d56b6b831b10 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:89:d6,bridge_name='br-int',has_traffic_filtering=True,id=86c2cef2-4a07-459b-8237-e7fda4a39f81,network=Network(1c86b315-3a4b-4db0-8b3c-39658c19ef9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86c2cef2-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:18:09 compute-0 nova_compute[259850]: 2025-10-11 04:18:09.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:09 compute-0 nova_compute[259850]: 2025-10-11 04:18:09.844 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86c2cef2-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:18:09 compute-0 nova_compute[259850]: 2025-10-11 04:18:09.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:09 compute-0 nova_compute[259850]: 2025-10-11 04:18:09.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:18:09 compute-0 nova_compute[259850]: 2025-10-11 04:18:09.854 2 INFO os_vif [None req-a5999865-6d6d-4980-bae6-d56b6b831b10 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:89:d6,bridge_name='br-int',has_traffic_filtering=True,id=86c2cef2-4a07-459b-8237-e7fda4a39f81,network=Network(1c86b315-3a4b-4db0-8b3c-39658c19ef9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86c2cef2-4a')
Oct 11 04:18:09 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[296617]: [NOTICE]   (296642) : haproxy version is 2.8.14-c23fe91
Oct 11 04:18:09 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[296617]: [NOTICE]   (296642) : path to executable is /usr/sbin/haproxy
Oct 11 04:18:09 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[296617]: [ALERT]    (296642) : Current worker (296647) exited with code 143 (Terminated)
Oct 11 04:18:09 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[296617]: [WARNING]  (296642) : All workers exited. Exiting... (0)
Oct 11 04:18:09 compute-0 systemd[1]: libpod-c927a2ab9c402ba53f1c05f7c661c42ba3560042229956c8089c65a948666e72.scope: Deactivated successfully.
Oct 11 04:18:09 compute-0 podman[297238]: 2025-10-11 04:18:09.928749286 +0000 UTC m=+0.054607467 container died c927a2ab9c402ba53f1c05f7c661c42ba3560042229956c8089c65a948666e72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:18:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c927a2ab9c402ba53f1c05f7c661c42ba3560042229956c8089c65a948666e72-userdata-shm.mount: Deactivated successfully.
Oct 11 04:18:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-21fc82e825f2ec4313507a0e2a5b0b8b9f1f368111c16401dc76f5d46ce15fc6-merged.mount: Deactivated successfully.
Oct 11 04:18:09 compute-0 podman[297238]: 2025-10-11 04:18:09.979908524 +0000 UTC m=+0.105766705 container cleanup c927a2ab9c402ba53f1c05f7c661c42ba3560042229956c8089c65a948666e72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:18:09 compute-0 systemd[1]: libpod-conmon-c927a2ab9c402ba53f1c05f7c661c42ba3560042229956c8089c65a948666e72.scope: Deactivated successfully.
Oct 11 04:18:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:18:10 compute-0 nova_compute[259850]: 2025-10-11 04:18:10.076 2 INFO nova.virt.libvirt.driver [None req-a5999865-6d6d-4980-bae6-d56b6b831b10 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Deleting instance files /var/lib/nova/instances/a3d4ef44-fada-41fc-9a12-641bff0536a4_del
Oct 11 04:18:10 compute-0 nova_compute[259850]: 2025-10-11 04:18:10.077 2 INFO nova.virt.libvirt.driver [None req-a5999865-6d6d-4980-bae6-d56b6b831b10 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Deletion of /var/lib/nova/instances/a3d4ef44-fada-41fc-9a12-641bff0536a4_del complete
Oct 11 04:18:10 compute-0 podman[297285]: 2025-10-11 04:18:10.1090444 +0000 UTC m=+0.082431365 container remove c927a2ab9c402ba53f1c05f7c661c42ba3560042229956c8089c65a948666e72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 11 04:18:10 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:10.116 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[6ddf4e07-ebfa-43ce-a854-a825b3210139]: (4, ('Sat Oct 11 04:18:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c (c927a2ab9c402ba53f1c05f7c661c42ba3560042229956c8089c65a948666e72)\nc927a2ab9c402ba53f1c05f7c661c42ba3560042229956c8089c65a948666e72\nSat Oct 11 04:18:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c (c927a2ab9c402ba53f1c05f7c661c42ba3560042229956c8089c65a948666e72)\nc927a2ab9c402ba53f1c05f7c661c42ba3560042229956c8089c65a948666e72\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:10 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:10.119 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[786917c3-bd6c-40d3-b48f-12e17e53a6c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:10 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:10.120 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c86b315-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:18:10 compute-0 nova_compute[259850]: 2025-10-11 04:18:10.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:10 compute-0 kernel: tap1c86b315-30: left promiscuous mode
Oct 11 04:18:10 compute-0 nova_compute[259850]: 2025-10-11 04:18:10.136 2 INFO nova.compute.manager [None req-a5999865-6d6d-4980-bae6-d56b6b831b10 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Took 0.57 seconds to destroy the instance on the hypervisor.
Oct 11 04:18:10 compute-0 nova_compute[259850]: 2025-10-11 04:18:10.137 2 DEBUG oslo.service.loopingcall [None req-a5999865-6d6d-4980-bae6-d56b6b831b10 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:18:10 compute-0 nova_compute[259850]: 2025-10-11 04:18:10.138 2 DEBUG nova.compute.manager [-] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:18:10 compute-0 nova_compute[259850]: 2025-10-11 04:18:10.138 2 DEBUG nova.network.neutron [-] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:18:10 compute-0 nova_compute[259850]: 2025-10-11 04:18:10.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:10 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:10.146 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[04a99fc1-b47f-48e9-b189-68d22ffe7e74]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:10 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:10.178 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[17f8995c-8dc4-4e5f-ad94-794b3d8b44ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:10 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:10.180 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[84f5282b-9ab6-43ed-a054-b23e347c38e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:10 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:10.201 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[84ddcb91-bf70-4a9c-a17d-629b05d5c877]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464621, 'reachable_time': 27931, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297301, 'error': None, 'target': 'ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:10 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:10.203 162015 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 04:18:10 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:10.204 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[3a35eab2-b88d-4d71-b043-f59d7426e66a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:10 compute-0 systemd[1]: run-netns-ovnmeta\x2d1c86b315\x2d3a4b\x2d4db0\x2d8b3c\x2d39658c19ef9c.mount: Deactivated successfully.
Oct 11 04:18:10 compute-0 nova_compute[259850]: 2025-10-11 04:18:10.746 2 DEBUG nova.compute.manager [req-52e307ed-c346-4465-827f-971a74c7c04d req-0b1f0cf3-7578-4653-b9b2-03c2ea602859 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Received event network-vif-unplugged-86c2cef2-4a07-459b-8237-e7fda4a39f81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:18:10 compute-0 nova_compute[259850]: 2025-10-11 04:18:10.747 2 DEBUG oslo_concurrency.lockutils [req-52e307ed-c346-4465-827f-971a74c7c04d req-0b1f0cf3-7578-4653-b9b2-03c2ea602859 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "a3d4ef44-fada-41fc-9a12-641bff0536a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:18:10 compute-0 nova_compute[259850]: 2025-10-11 04:18:10.747 2 DEBUG oslo_concurrency.lockutils [req-52e307ed-c346-4465-827f-971a74c7c04d req-0b1f0cf3-7578-4653-b9b2-03c2ea602859 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "a3d4ef44-fada-41fc-9a12-641bff0536a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:18:10 compute-0 nova_compute[259850]: 2025-10-11 04:18:10.748 2 DEBUG oslo_concurrency.lockutils [req-52e307ed-c346-4465-827f-971a74c7c04d req-0b1f0cf3-7578-4653-b9b2-03c2ea602859 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "a3d4ef44-fada-41fc-9a12-641bff0536a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:18:10 compute-0 nova_compute[259850]: 2025-10-11 04:18:10.748 2 DEBUG nova.compute.manager [req-52e307ed-c346-4465-827f-971a74c7c04d req-0b1f0cf3-7578-4653-b9b2-03c2ea602859 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] No waiting events found dispatching network-vif-unplugged-86c2cef2-4a07-459b-8237-e7fda4a39f81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:18:10 compute-0 nova_compute[259850]: 2025-10-11 04:18:10.749 2 DEBUG nova.compute.manager [req-52e307ed-c346-4465-827f-971a74c7c04d req-0b1f0cf3-7578-4653-b9b2-03c2ea602859 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Received event network-vif-unplugged-86c2cef2-4a07-459b-8237-e7fda4a39f81 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 04:18:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e389 do_prune osdmap full prune enabled
Oct 11 04:18:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e390 e390: 3 total, 3 up, 3 in
Oct 11 04:18:10 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e390: 3 total, 3 up, 3 in
Oct 11 04:18:10 compute-0 ceph-mon[74273]: pgmap v1634: 305 pgs: 305 active+clean; 352 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 994 KiB/s rd, 3.7 MiB/s wr, 100 op/s
Oct 11 04:18:11 compute-0 sshd-session[297203]: Failed password for root from 193.46.255.7 port 62190 ssh2
Oct 11 04:18:11 compute-0 unix_chkpwd[297302]: password check failed for user (root)
Oct 11 04:18:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1636: 305 pgs: 305 active+clean; 352 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 646 KiB/s rd, 32 KiB/s wr, 55 op/s
Oct 11 04:18:11 compute-0 nova_compute[259850]: 2025-10-11 04:18:11.910 2 DEBUG nova.network.neutron [-] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:18:11 compute-0 ceph-mon[74273]: osdmap e390: 3 total, 3 up, 3 in
Oct 11 04:18:11 compute-0 nova_compute[259850]: 2025-10-11 04:18:11.946 2 INFO nova.compute.manager [-] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Took 1.81 seconds to deallocate network for instance.
Oct 11 04:18:12 compute-0 nova_compute[259850]: 2025-10-11 04:18:12.222 2 INFO nova.compute.manager [None req-a5999865-6d6d-4980-bae6-d56b6b831b10 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Took 0.28 seconds to detach 1 volumes for instance.
Oct 11 04:18:12 compute-0 nova_compute[259850]: 2025-10-11 04:18:12.271 2 DEBUG oslo_concurrency.lockutils [None req-a5999865-6d6d-4980-bae6-d56b6b831b10 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:18:12 compute-0 nova_compute[259850]: 2025-10-11 04:18:12.271 2 DEBUG oslo_concurrency.lockutils [None req-a5999865-6d6d-4980-bae6-d56b6b831b10 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:18:12 compute-0 nova_compute[259850]: 2025-10-11 04:18:12.349 2 DEBUG oslo_concurrency.processutils [None req-a5999865-6d6d-4980-bae6-d56b6b831b10 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:18:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:18:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3402647054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:18:12 compute-0 nova_compute[259850]: 2025-10-11 04:18:12.826 2 DEBUG nova.compute.manager [req-02d0399e-f01f-4d4d-a015-d615a4c73a64 req-7bbe89ad-9f29-4ed9-a245-76d18be39059 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Received event network-vif-plugged-86c2cef2-4a07-459b-8237-e7fda4a39f81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:18:12 compute-0 nova_compute[259850]: 2025-10-11 04:18:12.826 2 DEBUG oslo_concurrency.lockutils [req-02d0399e-f01f-4d4d-a015-d615a4c73a64 req-7bbe89ad-9f29-4ed9-a245-76d18be39059 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "a3d4ef44-fada-41fc-9a12-641bff0536a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:18:12 compute-0 nova_compute[259850]: 2025-10-11 04:18:12.827 2 DEBUG oslo_concurrency.lockutils [req-02d0399e-f01f-4d4d-a015-d615a4c73a64 req-7bbe89ad-9f29-4ed9-a245-76d18be39059 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "a3d4ef44-fada-41fc-9a12-641bff0536a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:18:12 compute-0 nova_compute[259850]: 2025-10-11 04:18:12.827 2 DEBUG oslo_concurrency.lockutils [req-02d0399e-f01f-4d4d-a015-d615a4c73a64 req-7bbe89ad-9f29-4ed9-a245-76d18be39059 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "a3d4ef44-fada-41fc-9a12-641bff0536a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:18:12 compute-0 nova_compute[259850]: 2025-10-11 04:18:12.827 2 DEBUG nova.compute.manager [req-02d0399e-f01f-4d4d-a015-d615a4c73a64 req-7bbe89ad-9f29-4ed9-a245-76d18be39059 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] No waiting events found dispatching network-vif-plugged-86c2cef2-4a07-459b-8237-e7fda4a39f81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:18:12 compute-0 nova_compute[259850]: 2025-10-11 04:18:12.827 2 WARNING nova.compute.manager [req-02d0399e-f01f-4d4d-a015-d615a4c73a64 req-7bbe89ad-9f29-4ed9-a245-76d18be39059 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Received unexpected event network-vif-plugged-86c2cef2-4a07-459b-8237-e7fda4a39f81 for instance with vm_state deleted and task_state None.
Oct 11 04:18:12 compute-0 nova_compute[259850]: 2025-10-11 04:18:12.828 2 DEBUG nova.compute.manager [req-02d0399e-f01f-4d4d-a015-d615a4c73a64 req-7bbe89ad-9f29-4ed9-a245-76d18be39059 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Received event network-vif-deleted-86c2cef2-4a07-459b-8237-e7fda4a39f81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:18:12 compute-0 nova_compute[259850]: 2025-10-11 04:18:12.833 2 DEBUG oslo_concurrency.processutils [None req-a5999865-6d6d-4980-bae6-d56b6b831b10 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:18:12 compute-0 nova_compute[259850]: 2025-10-11 04:18:12.839 2 DEBUG nova.compute.provider_tree [None req-a5999865-6d6d-4980-bae6-d56b6b831b10 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:18:12 compute-0 nova_compute[259850]: 2025-10-11 04:18:12.860 2 DEBUG nova.scheduler.client.report [None req-a5999865-6d6d-4980-bae6-d56b6b831b10 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:18:12 compute-0 nova_compute[259850]: 2025-10-11 04:18:12.891 2 DEBUG oslo_concurrency.lockutils [None req-a5999865-6d6d-4980-bae6-d56b6b831b10 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:18:12 compute-0 nova_compute[259850]: 2025-10-11 04:18:12.913 2 INFO nova.scheduler.client.report [None req-a5999865-6d6d-4980-bae6-d56b6b831b10 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Deleted allocations for instance a3d4ef44-fada-41fc-9a12-641bff0536a4
Oct 11 04:18:12 compute-0 ceph-mon[74273]: pgmap v1636: 305 pgs: 305 active+clean; 352 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 646 KiB/s rd, 32 KiB/s wr, 55 op/s
Oct 11 04:18:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3402647054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:18:13 compute-0 nova_compute[259850]: 2025-10-11 04:18:13.011 2 DEBUG oslo_concurrency.lockutils [None req-a5999865-6d6d-4980-bae6-d56b6b831b10 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "a3d4ef44-fada-41fc-9a12-641bff0536a4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.448s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:18:13 compute-0 nova_compute[259850]: 2025-10-11 04:18:13.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:13 compute-0 sshd-session[297203]: Failed password for root from 193.46.255.7 port 62190 ssh2
Oct 11 04:18:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 283 KiB/s rd, 43 KiB/s wr, 47 op/s
Oct 11 04:18:14 compute-0 nova_compute[259850]: 2025-10-11 04:18:14.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:14 compute-0 ceph-mon[74273]: pgmap v1637: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 283 KiB/s rd, 43 KiB/s wr, 47 op/s
Oct 11 04:18:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:18:15 compute-0 sshd-session[297203]: Received disconnect from 193.46.255.7 port 62190:11:  [preauth]
Oct 11 04:18:15 compute-0 sshd-session[297203]: Disconnected from authenticating user root 193.46.255.7 port 62190 [preauth]
Oct 11 04:18:15 compute-0 sshd-session[297203]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.7  user=root
Oct 11 04:18:15 compute-0 nova_compute[259850]: 2025-10-11 04:18:15.726 2 DEBUG oslo_concurrency.lockutils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "68b44a2f-a694-4458-9a40-89e194a02624" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:18:15 compute-0 nova_compute[259850]: 2025-10-11 04:18:15.726 2 DEBUG oslo_concurrency.lockutils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "68b44a2f-a694-4458-9a40-89e194a02624" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:18:15 compute-0 nova_compute[259850]: 2025-10-11 04:18:15.752 2 DEBUG nova.compute.manager [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:18:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 283 KiB/s rd, 43 KiB/s wr, 47 op/s
Oct 11 04:18:15 compute-0 nova_compute[259850]: 2025-10-11 04:18:15.832 2 DEBUG oslo_concurrency.lockutils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:18:15 compute-0 nova_compute[259850]: 2025-10-11 04:18:15.833 2 DEBUG oslo_concurrency.lockutils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:18:15 compute-0 nova_compute[259850]: 2025-10-11 04:18:15.843 2 DEBUG nova.virt.hardware [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:18:15 compute-0 nova_compute[259850]: 2025-10-11 04:18:15.843 2 INFO nova.compute.claims [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:18:15 compute-0 nova_compute[259850]: 2025-10-11 04:18:15.990 2 DEBUG oslo_concurrency.processutils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:18:16 compute-0 podman[297346]: 2025-10-11 04:18:16.40526421 +0000 UTC m=+0.103776489 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, tcib_managed=true, config_id=multipathd)
Oct 11 04:18:16 compute-0 podman[297347]: 2025-10-11 04:18:16.429585078 +0000 UTC m=+0.123467386 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 11 04:18:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:18:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/578546283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:18:16 compute-0 nova_compute[259850]: 2025-10-11 04:18:16.504 2 DEBUG oslo_concurrency.processutils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:18:16 compute-0 nova_compute[259850]: 2025-10-11 04:18:16.514 2 DEBUG nova.compute.provider_tree [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:18:16 compute-0 nova_compute[259850]: 2025-10-11 04:18:16.531 2 DEBUG nova.scheduler.client.report [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:18:16 compute-0 nova_compute[259850]: 2025-10-11 04:18:16.558 2 DEBUG oslo_concurrency.lockutils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:18:16 compute-0 nova_compute[259850]: 2025-10-11 04:18:16.559 2 DEBUG nova.compute.manager [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:18:16 compute-0 nova_compute[259850]: 2025-10-11 04:18:16.618 2 DEBUG nova.compute.manager [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:18:16 compute-0 nova_compute[259850]: 2025-10-11 04:18:16.619 2 DEBUG nova.network.neutron [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:18:16 compute-0 nova_compute[259850]: 2025-10-11 04:18:16.643 2 INFO nova.virt.libvirt.driver [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:18:16 compute-0 nova_compute[259850]: 2025-10-11 04:18:16.666 2 DEBUG nova.compute.manager [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:18:16 compute-0 nova_compute[259850]: 2025-10-11 04:18:16.715 2 INFO nova.virt.block_device [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Booting with volume cc426753-0683-44d8-a993-700a8a812cbd at /dev/vda
Oct 11 04:18:16 compute-0 nova_compute[259850]: 2025-10-11 04:18:16.782 2 DEBUG nova.policy [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2a330a845d62440c871f80eda2546881', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '09ba33ef4bd447699d74946c58839b2d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:18:16 compute-0 nova_compute[259850]: 2025-10-11 04:18:16.822 2 DEBUG os_brick.utils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:18:16 compute-0 nova_compute[259850]: 2025-10-11 04:18:16.824 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:18:16 compute-0 nova_compute[259850]: 2025-10-11 04:18:16.837 675 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:18:16 compute-0 nova_compute[259850]: 2025-10-11 04:18:16.838 675 DEBUG oslo.privsep.daemon [-] privsep: reply[b348bb7c-5b21-429b-b711-a255389a17d7]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:16 compute-0 nova_compute[259850]: 2025-10-11 04:18:16.839 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:18:16 compute-0 nova_compute[259850]: 2025-10-11 04:18:16.854 675 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:18:16 compute-0 nova_compute[259850]: 2025-10-11 04:18:16.854 675 DEBUG oslo.privsep.daemon [-] privsep: reply[1e41fd07-cc42-474a-b7b6-4ec63206943d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e727c2bd432c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:16 compute-0 nova_compute[259850]: 2025-10-11 04:18:16.856 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:18:16 compute-0 nova_compute[259850]: 2025-10-11 04:18:16.868 675 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:18:16 compute-0 nova_compute[259850]: 2025-10-11 04:18:16.868 675 DEBUG oslo.privsep.daemon [-] privsep: reply[6738b079-dca0-403a-b339-1dccf2721951]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:16 compute-0 nova_compute[259850]: 2025-10-11 04:18:16.870 675 DEBUG oslo.privsep.daemon [-] privsep: reply[8ea16cc8-6903-475d-a12a-341c776447a5]: (4, 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:16 compute-0 nova_compute[259850]: 2025-10-11 04:18:16.871 2 DEBUG oslo_concurrency.processutils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:18:16 compute-0 nova_compute[259850]: 2025-10-11 04:18:16.913 2 DEBUG oslo_concurrency.processutils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "nvme version" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:18:16 compute-0 nova_compute[259850]: 2025-10-11 04:18:16.917 2 DEBUG os_brick.initiator.connectors.lightos [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:18:16 compute-0 nova_compute[259850]: 2025-10-11 04:18:16.917 2 DEBUG os_brick.initiator.connectors.lightos [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:18:16 compute-0 nova_compute[259850]: 2025-10-11 04:18:16.918 2 DEBUG os_brick.initiator.connectors.lightos [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:18:16 compute-0 nova_compute[259850]: 2025-10-11 04:18:16.919 2 DEBUG os_brick.utils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] <== get_connector_properties: return (95ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e727c2bd432c', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:18:16 compute-0 nova_compute[259850]: 2025-10-11 04:18:16.920 2 DEBUG nova.virt.block_device [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Updating existing volume attachment record: 6a20a897-f317-411e-9d92-3eae07c8723d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:18:16 compute-0 ceph-mon[74273]: pgmap v1638: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 283 KiB/s rd, 43 KiB/s wr, 47 op/s
Oct 11 04:18:16 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/578546283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:18:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:18:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4120639796' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:18:17 compute-0 nova_compute[259850]: 2025-10-11 04:18:17.581 2 DEBUG nova.network.neutron [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Successfully created port: ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:18:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 283 KiB/s rd, 38 KiB/s wr, 46 op/s
Oct 11 04:18:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4120639796' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:18:17 compute-0 nova_compute[259850]: 2025-10-11 04:18:17.982 2 DEBUG nova.compute.manager [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:18:17 compute-0 nova_compute[259850]: 2025-10-11 04:18:17.985 2 DEBUG nova.virt.libvirt.driver [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:18:17 compute-0 nova_compute[259850]: 2025-10-11 04:18:17.986 2 INFO nova.virt.libvirt.driver [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Creating image(s)
Oct 11 04:18:17 compute-0 nova_compute[259850]: 2025-10-11 04:18:17.986 2 DEBUG nova.virt.libvirt.driver [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 11 04:18:17 compute-0 nova_compute[259850]: 2025-10-11 04:18:17.987 2 DEBUG nova.virt.libvirt.driver [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Ensure instance console log exists: /var/lib/nova/instances/68b44a2f-a694-4458-9a40-89e194a02624/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:18:17 compute-0 nova_compute[259850]: 2025-10-11 04:18:17.988 2 DEBUG oslo_concurrency.lockutils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:18:17 compute-0 nova_compute[259850]: 2025-10-11 04:18:17.988 2 DEBUG oslo_concurrency.lockutils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:18:17 compute-0 nova_compute[259850]: 2025-10-11 04:18:17.989 2 DEBUG oslo_concurrency.lockutils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:18:18 compute-0 nova_compute[259850]: 2025-10-11 04:18:18.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:18 compute-0 ceph-mon[74273]: pgmap v1639: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 283 KiB/s rd, 38 KiB/s wr, 46 op/s
Oct 11 04:18:19 compute-0 nova_compute[259850]: 2025-10-11 04:18:19.493 2 DEBUG nova.network.neutron [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Successfully updated port: ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:18:19 compute-0 nova_compute[259850]: 2025-10-11 04:18:19.527 2 DEBUG oslo_concurrency.lockutils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "refresh_cache-68b44a2f-a694-4458-9a40-89e194a02624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:18:19 compute-0 nova_compute[259850]: 2025-10-11 04:18:19.528 2 DEBUG oslo_concurrency.lockutils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquired lock "refresh_cache-68b44a2f-a694-4458-9a40-89e194a02624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:18:19 compute-0 nova_compute[259850]: 2025-10-11 04:18:19.529 2 DEBUG nova.network.neutron [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:18:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 28 KiB/s wr, 49 op/s
Oct 11 04:18:19 compute-0 nova_compute[259850]: 2025-10-11 04:18:19.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:19 compute-0 nova_compute[259850]: 2025-10-11 04:18:19.963 2 DEBUG nova.network.neutron [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:18:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:18:20 compute-0 nova_compute[259850]: 2025-10-11 04:18:20.100 2 DEBUG nova.compute.manager [req-a9d592c1-52d2-48a5-9318-ff7564b234bb req-0e7e016c-80f2-4278-a3e6-81d69662fcfb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Received event network-changed-ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:18:20 compute-0 nova_compute[259850]: 2025-10-11 04:18:20.101 2 DEBUG nova.compute.manager [req-a9d592c1-52d2-48a5-9318-ff7564b234bb req-0e7e016c-80f2-4278-a3e6-81d69662fcfb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Refreshing instance network info cache due to event network-changed-ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:18:20 compute-0 nova_compute[259850]: 2025-10-11 04:18:20.101 2 DEBUG oslo_concurrency.lockutils [req-a9d592c1-52d2-48a5-9318-ff7564b234bb req-0e7e016c-80f2-4278-a3e6-81d69662fcfb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-68b44a2f-a694-4458-9a40-89e194a02624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:18:20 compute-0 nova_compute[259850]: 2025-10-11 04:18:20.735 2 DEBUG oslo_concurrency.lockutils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "8bfdc99e-9df9-4825-a631-7cd07eff5dfb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:18:20 compute-0 nova_compute[259850]: 2025-10-11 04:18:20.736 2 DEBUG oslo_concurrency.lockutils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "8bfdc99e-9df9-4825-a631-7cd07eff5dfb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:18:20 compute-0 nova_compute[259850]: 2025-10-11 04:18:20.761 2 DEBUG nova.compute.manager [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:18:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:18:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:18:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:18:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:18:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:18:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:18:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:18:20
Oct 11 04:18:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:18:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:18:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'backups', 'vms', 'default.rgw.control', '.rgw.root', 'images', 'default.rgw.meta']
Oct 11 04:18:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:18:20 compute-0 nova_compute[259850]: 2025-10-11 04:18:20.835 2 DEBUG oslo_concurrency.lockutils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:18:20 compute-0 nova_compute[259850]: 2025-10-11 04:18:20.836 2 DEBUG oslo_concurrency.lockutils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:18:20 compute-0 nova_compute[259850]: 2025-10-11 04:18:20.858 2 DEBUG nova.virt.hardware [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:18:20 compute-0 nova_compute[259850]: 2025-10-11 04:18:20.859 2 INFO nova.compute.claims [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:18:20 compute-0 ceph-mon[74273]: pgmap v1640: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 28 KiB/s wr, 49 op/s
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.047 2 DEBUG oslo_concurrency.processutils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:18:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:18:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:18:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:18:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:18:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:18:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:18:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:18:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:18:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:18:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:18:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:18:21 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3024946841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.546 2 DEBUG oslo_concurrency.processutils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.552 2 DEBUG nova.compute.provider_tree [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.556 2 DEBUG nova.network.neutron [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Updating instance_info_cache with network_info: [{"id": "ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3", "address": "fa:16:3e:78:cd:05", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae4b6054-7d", "ovs_interfaceid": "ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.615 2 DEBUG oslo_concurrency.lockutils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Releasing lock "refresh_cache-68b44a2f-a694-4458-9a40-89e194a02624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.616 2 DEBUG nova.compute.manager [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Instance network_info: |[{"id": "ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3", "address": "fa:16:3e:78:cd:05", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae4b6054-7d", "ovs_interfaceid": "ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.616 2 DEBUG oslo_concurrency.lockutils [req-a9d592c1-52d2-48a5-9318-ff7564b234bb req-0e7e016c-80f2-4278-a3e6-81d69662fcfb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-68b44a2f-a694-4458-9a40-89e194a02624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.616 2 DEBUG nova.network.neutron [req-a9d592c1-52d2-48a5-9318-ff7564b234bb req-0e7e016c-80f2-4278-a3e6-81d69662fcfb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Refreshing network info cache for port ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.620 2 DEBUG nova.virt.libvirt.driver [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Start _get_guest_xml network_info=[{"id": "ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3", "address": "fa:16:3e:78:cd:05", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae4b6054-7d", "ovs_interfaceid": "ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 
'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-cc426753-0683-44d8-a993-700a8a812cbd', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'cc426753-0683-44d8-a993-700a8a812cbd', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '68b44a2f-a694-4458-9a40-89e194a02624', 'attached_at': '', 'detached_at': '', 'volume_id': 'cc426753-0683-44d8-a993-700a8a812cbd', 'serial': 'cc426753-0683-44d8-a993-700a8a812cbd'}, 'boot_index': 0, 'guest_format': None, 'attachment_id': '6a20a897-f317-411e-9d92-3eae07c8723d', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.625 2 WARNING nova.virt.libvirt.driver [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.679 2 DEBUG nova.virt.libvirt.host [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.681 2 DEBUG nova.virt.libvirt.host [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.722 2 DEBUG nova.scheduler.client.report [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.752 2 DEBUG nova.virt.libvirt.host [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.753 2 DEBUG nova.virt.libvirt.host [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.754 2 DEBUG nova.virt.libvirt.driver [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.755 2 DEBUG nova.virt.hardware [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.756 2 DEBUG nova.virt.hardware [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.756 2 DEBUG nova.virt.hardware [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.757 2 DEBUG nova.virt.hardware [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.757 2 DEBUG nova.virt.hardware [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.758 2 DEBUG nova.virt.hardware [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.758 2 DEBUG nova.virt.hardware [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.759 2 DEBUG nova.virt.hardware [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.759 2 DEBUG nova.virt.hardware [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.760 2 DEBUG nova.virt.hardware [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.760 2 DEBUG nova.virt.hardware [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.793 2 DEBUG nova.storage.rbd_utils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] rbd image 68b44a2f-a694-4458-9a40-89e194a02624_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.800 2 DEBUG oslo_concurrency.processutils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:18:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 262 KiB/s rd, 26 KiB/s wr, 45 op/s
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.829 2 DEBUG oslo_concurrency.lockutils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.831 2 DEBUG nova.compute.manager [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.880 2 DEBUG nova.compute.manager [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.881 2 DEBUG nova.network.neutron [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.920 2 INFO nova.virt.libvirt.driver [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.940 2 DEBUG nova.compute.manager [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:18:21 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3024946841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:18:21 compute-0 nova_compute[259850]: 2025-10-11 04:18:21.996 2 INFO nova.virt.block_device [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Booting with volume b02ce934-9de7-422d-b3ba-5ade72993920 at /dev/vda
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.124 2 DEBUG os_brick.utils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.125 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.143 675 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.144 675 DEBUG oslo.privsep.daemon [-] privsep: reply[04d4553a-e437-4c89-913f-cb2d797916e5]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.146 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.158 675 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.159 675 DEBUG oslo.privsep.daemon [-] privsep: reply[e5dbdbd6-4468-4223-9453-4b3f16335bcb]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e727c2bd432c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.161 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.173 675 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.173 675 DEBUG oslo.privsep.daemon [-] privsep: reply[35459a1d-80d9-4aad-a5d2-64ab5e6456ba]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.175 675 DEBUG oslo.privsep.daemon [-] privsep: reply[c9abc2e4-615d-4647-8afd-3b01808eb06e]: (4, 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.175 2 DEBUG oslo_concurrency.processutils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.213 2 DEBUG oslo_concurrency.processutils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CMD "nvme version" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.217 2 DEBUG os_brick.initiator.connectors.lightos [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.217 2 DEBUG os_brick.initiator.connectors.lightos [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.218 2 DEBUG os_brick.initiator.connectors.lightos [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.218 2 DEBUG os_brick.utils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] <== get_connector_properties: return (94ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e727c2bd432c', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.219 2 DEBUG nova.virt.block_device [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Updating existing volume attachment record: 4af325f2-23bc-4049-a84c-461597100e68 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:18:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:18:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3163711870' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.284 2 DEBUG oslo_concurrency.processutils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.319 2 DEBUG nova.virt.libvirt.vif [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:18:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-592058258',display_name='tempest-TestVolumeBootPattern-server-592058258',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-592058258',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPDNAGL8Dkg4WTlPf45cAzyjNlMaZ9CdFtcbPahhttGWfFDtL3wJAU2pqWIpDJ427A+TFzstq4HW+M8hdPFbiZnk9MFQHh3rRb7amRkcTpIWOFEgpDmf92zhQgzfL3p2ZA==',key_name='tempest-TestVolumeBootPattern-2018721323',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09ba33ef4bd447699d74946c58839b2d',ramdisk_id='',reservation_id='r-zxt09pvy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-771726270',owner_user_name='tempest-TestVolumeBootPattern-771726270-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:18:16Z,user_data=None,user_id='2a330a845d62440c871f80eda2546881',uuid=68b44a2f-a694-4458-9a40-89e194a02624,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3", "address": "fa:16:3e:78:cd:05", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae4b6054-7d", "ovs_interfaceid": "ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.320 2 DEBUG nova.network.os_vif_util [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converting VIF {"id": "ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3", "address": "fa:16:3e:78:cd:05", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae4b6054-7d", "ovs_interfaceid": "ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.322 2 DEBUG nova.network.os_vif_util [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:cd:05,bridge_name='br-int',has_traffic_filtering=True,id=ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae4b6054-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.324 2 DEBUG nova.objects.instance [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lazy-loading 'pci_devices' on Instance uuid 68b44a2f-a694-4458-9a40-89e194a02624 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.361 2 DEBUG nova.virt.libvirt.driver [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:18:22 compute-0 nova_compute[259850]:   <uuid>68b44a2f-a694-4458-9a40-89e194a02624</uuid>
Oct 11 04:18:22 compute-0 nova_compute[259850]:   <name>instance-00000019</name>
Oct 11 04:18:22 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:18:22 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:18:22 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <nova:name>tempest-TestVolumeBootPattern-server-592058258</nova:name>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:18:21</nova:creationTime>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:18:22 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:18:22 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:18:22 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:18:22 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:18:22 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:18:22 compute-0 nova_compute[259850]:         <nova:user uuid="2a330a845d62440c871f80eda2546881">tempest-TestVolumeBootPattern-771726270-project-member</nova:user>
Oct 11 04:18:22 compute-0 nova_compute[259850]:         <nova:project uuid="09ba33ef4bd447699d74946c58839b2d">tempest-TestVolumeBootPattern-771726270</nova:project>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <nova:ports>
Oct 11 04:18:22 compute-0 nova_compute[259850]:         <nova:port uuid="ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3">
Oct 11 04:18:22 compute-0 nova_compute[259850]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:         </nova:port>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       </nova:ports>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:18:22 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:18:22 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <system>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <entry name="serial">68b44a2f-a694-4458-9a40-89e194a02624</entry>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <entry name="uuid">68b44a2f-a694-4458-9a40-89e194a02624</entry>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     </system>
Oct 11 04:18:22 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:18:22 compute-0 nova_compute[259850]:   <os>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:   </os>
Oct 11 04:18:22 compute-0 nova_compute[259850]:   <features>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:   </features>
Oct 11 04:18:22 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:18:22 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:18:22 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/68b44a2f-a694-4458-9a40-89e194a02624_disk.config">
Oct 11 04:18:22 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       </source>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:18:22 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <source protocol="rbd" name="volumes/volume-cc426753-0683-44d8-a993-700a8a812cbd">
Oct 11 04:18:22 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       </source>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:18:22 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <serial>cc426753-0683-44d8-a993-700a8a812cbd</serial>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <interface type="ethernet">
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <mac address="fa:16:3e:78:cd:05"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <mtu size="1442"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <target dev="tapae4b6054-7d"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     </interface>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/68b44a2f-a694-4458-9a40-89e194a02624/console.log" append="off"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <video>
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     </video>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:18:22 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:18:22 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:18:22 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:18:22 compute-0 nova_compute[259850]: </domain>
Oct 11 04:18:22 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.363 2 DEBUG nova.compute.manager [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Preparing to wait for external event network-vif-plugged-ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.364 2 DEBUG oslo_concurrency.lockutils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "68b44a2f-a694-4458-9a40-89e194a02624-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.364 2 DEBUG oslo_concurrency.lockutils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "68b44a2f-a694-4458-9a40-89e194a02624-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.365 2 DEBUG oslo_concurrency.lockutils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "68b44a2f-a694-4458-9a40-89e194a02624-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.366 2 DEBUG nova.virt.libvirt.vif [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:18:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-592058258',display_name='tempest-TestVolumeBootPattern-server-592058258',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-592058258',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPDNAGL8Dkg4WTlPf45cAzyjNlMaZ9CdFtcbPahhttGWfFDtL3wJAU2pqWIpDJ427A+TFzstq4HW+M8hdPFbiZnk9MFQHh3rRb7amRkcTpIWOFEgpDmf92zhQgzfL3p2ZA==',key_name='tempest-TestVolumeBootPattern-2018721323',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09ba33ef4bd447699d74946c58839b2d',ramdisk_id='',reservation_id='r-zxt09pvy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-771726270',owner_user_name='tempest-TestVolumeBootPattern-771726270-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:18:16Z,user_data=None,user_id='2a330a845d62440c871f80eda2546881',uuid=68b44a2f-a694-4458-9a40-89e194a02624,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3", "address": "fa:16:3e:78:cd:05", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae4b6054-7d", "ovs_interfaceid": "ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.367 2 DEBUG nova.network.os_vif_util [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converting VIF {"id": "ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3", "address": "fa:16:3e:78:cd:05", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae4b6054-7d", "ovs_interfaceid": "ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.368 2 DEBUG nova.network.os_vif_util [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:cd:05,bridge_name='br-int',has_traffic_filtering=True,id=ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae4b6054-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.369 2 DEBUG os_vif [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:cd:05,bridge_name='br-int',has_traffic_filtering=True,id=ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae4b6054-7d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.370 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.371 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.375 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapae4b6054-7d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.376 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapae4b6054-7d, col_values=(('external_ids', {'iface-id': 'ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:78:cd:05', 'vm-uuid': '68b44a2f-a694-4458-9a40-89e194a02624'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:22 compute-0 NetworkManager[44920]: <info>  [1760156302.3810] manager: (tapae4b6054-7d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/126)
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.388 2 INFO os_vif [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:cd:05,bridge_name='br-int',has_traffic_filtering=True,id=ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae4b6054-7d')
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.467 2 DEBUG nova.policy [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '77d11e860ca1460cab1c20bca4d4c0ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bfcc78a613a4442d88231798d10634c9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.477 2 DEBUG nova.virt.libvirt.driver [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.477 2 DEBUG nova.virt.libvirt.driver [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.477 2 DEBUG nova.virt.libvirt.driver [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] No VIF found with MAC fa:16:3e:78:cd:05, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.478 2 INFO nova.virt.libvirt.driver [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Using config drive
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.501 2 DEBUG nova.storage.rbd_utils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] rbd image 68b44a2f-a694-4458-9a40-89e194a02624_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:18:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:18:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1488770007' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:18:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:22.967 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:18:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:22.968 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:18:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:22.969 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.978 2 INFO nova.virt.libvirt.driver [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Creating config drive at /var/lib/nova/instances/68b44a2f-a694-4458-9a40-89e194a02624/disk.config
Oct 11 04:18:22 compute-0 nova_compute[259850]: 2025-10-11 04:18:22.988 2 DEBUG oslo_concurrency.processutils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/68b44a2f-a694-4458-9a40-89e194a02624/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdnlgwge9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:18:22 compute-0 ceph-mon[74273]: pgmap v1641: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 262 KiB/s rd, 26 KiB/s wr, 45 op/s
Oct 11 04:18:22 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3163711870' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:18:22 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1488770007' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:18:23 compute-0 nova_compute[259850]: 2025-10-11 04:18:23.138 2 DEBUG oslo_concurrency.processutils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/68b44a2f-a694-4458-9a40-89e194a02624/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdnlgwge9" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:18:23 compute-0 nova_compute[259850]: 2025-10-11 04:18:23.181 2 DEBUG nova.storage.rbd_utils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] rbd image 68b44a2f-a694-4458-9a40-89e194a02624_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:18:23 compute-0 nova_compute[259850]: 2025-10-11 04:18:23.186 2 DEBUG oslo_concurrency.processutils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/68b44a2f-a694-4458-9a40-89e194a02624/disk.config 68b44a2f-a694-4458-9a40-89e194a02624_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:18:23 compute-0 nova_compute[259850]: 2025-10-11 04:18:23.329 2 DEBUG nova.compute.manager [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:18:23 compute-0 nova_compute[259850]: 2025-10-11 04:18:23.332 2 DEBUG nova.virt.libvirt.driver [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:18:23 compute-0 nova_compute[259850]: 2025-10-11 04:18:23.332 2 INFO nova.virt.libvirt.driver [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Creating image(s)
Oct 11 04:18:23 compute-0 nova_compute[259850]: 2025-10-11 04:18:23.333 2 DEBUG nova.virt.libvirt.driver [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 11 04:18:23 compute-0 nova_compute[259850]: 2025-10-11 04:18:23.333 2 DEBUG nova.virt.libvirt.driver [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Ensure instance console log exists: /var/lib/nova/instances/8bfdc99e-9df9-4825-a631-7cd07eff5dfb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:18:23 compute-0 nova_compute[259850]: 2025-10-11 04:18:23.334 2 DEBUG oslo_concurrency.lockutils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:18:23 compute-0 nova_compute[259850]: 2025-10-11 04:18:23.334 2 DEBUG oslo_concurrency.lockutils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:18:23 compute-0 nova_compute[259850]: 2025-10-11 04:18:23.335 2 DEBUG oslo_concurrency.lockutils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:18:23 compute-0 nova_compute[259850]: 2025-10-11 04:18:23.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:23 compute-0 nova_compute[259850]: 2025-10-11 04:18:23.372 2 DEBUG nova.network.neutron [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Successfully created port: 04efb511-1fd7-4507-91a8-508780bc5e8d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:18:23 compute-0 nova_compute[259850]: 2025-10-11 04:18:23.411 2 DEBUG oslo_concurrency.processutils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/68b44a2f-a694-4458-9a40-89e194a02624/disk.config 68b44a2f-a694-4458-9a40-89e194a02624_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:18:23 compute-0 nova_compute[259850]: 2025-10-11 04:18:23.412 2 INFO nova.virt.libvirt.driver [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Deleting local config drive /var/lib/nova/instances/68b44a2f-a694-4458-9a40-89e194a02624/disk.config because it was imported into RBD.
Oct 11 04:18:23 compute-0 kernel: tapae4b6054-7d: entered promiscuous mode
Oct 11 04:18:23 compute-0 ovn_controller[152025]: 2025-10-11T04:18:23Z|00243|binding|INFO|Claiming lport ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3 for this chassis.
Oct 11 04:18:23 compute-0 ovn_controller[152025]: 2025-10-11T04:18:23Z|00244|binding|INFO|ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3: Claiming fa:16:3e:78:cd:05 10.100.0.8
Oct 11 04:18:23 compute-0 NetworkManager[44920]: <info>  [1760156303.4876] manager: (tapae4b6054-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/127)
Oct 11 04:18:23 compute-0 nova_compute[259850]: 2025-10-11 04:18:23.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:23 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:23.496 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:cd:05 10.100.0.8'], port_security=['fa:16:3e:78:cd:05 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '68b44a2f-a694-4458-9a40-89e194a02624', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09ba33ef4bd447699d74946c58839b2d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '802c56f7-efb1-44ec-9107-b20b0a13ea5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27b77226-c1f8-485e-969b-bae9a3bf7ceb, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:18:23 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:23.498 161902 INFO neutron.agent.ovn.metadata.agent [-] Port ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3 in datapath b6cd64a2-af0b-4f57-b84c-cbc9cde5251d bound to our chassis
Oct 11 04:18:23 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:23.500 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b6cd64a2-af0b-4f57-b84c-cbc9cde5251d
Oct 11 04:18:23 compute-0 nova_compute[259850]: 2025-10-11 04:18:23.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:23 compute-0 ovn_controller[152025]: 2025-10-11T04:18:23Z|00245|binding|INFO|Setting lport ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3 ovn-installed in OVS
Oct 11 04:18:23 compute-0 ovn_controller[152025]: 2025-10-11T04:18:23Z|00246|binding|INFO|Setting lport ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3 up in Southbound
Oct 11 04:18:23 compute-0 nova_compute[259850]: 2025-10-11 04:18:23.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:23 compute-0 nova_compute[259850]: 2025-10-11 04:18:23.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:23 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:23.525 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[3e5f63ac-020a-4d19-8b9e-37d712f48dae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:23 compute-0 systemd-udevd[297539]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:18:23 compute-0 systemd-machined[214869]: New machine qemu-25-instance-00000019.
Oct 11 04:18:23 compute-0 NetworkManager[44920]: <info>  [1760156303.5543] device (tapae4b6054-7d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:18:23 compute-0 NetworkManager[44920]: <info>  [1760156303.5586] device (tapae4b6054-7d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:18:23 compute-0 systemd[1]: Started Virtual Machine qemu-25-instance-00000019.
Oct 11 04:18:23 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:23.568 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[8107744a-478a-4f84-b8a4-2b699c71ff47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:23 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:23.573 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[7db693c6-7188-4dc1-b928-a34d29f48e4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:23 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:23.618 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[6d87640b-d3e6-4141-bf82-73d515dcc367]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:23 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:23.643 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[1b7d4aee-0845-413f-bdf9-be8ed0f43b0b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6cd64a2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:9f:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465102, 'reachable_time': 34708, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297550, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:23 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:23.666 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[924ec7ec-4884-44cd-93ac-50aa6da54439]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb6cd64a2-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465114, 'tstamp': 465114}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297553, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb6cd64a2-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465118, 'tstamp': 465118}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297553, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:23 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:23.669 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6cd64a2-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:18:23 compute-0 nova_compute[259850]: 2025-10-11 04:18:23.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:23 compute-0 nova_compute[259850]: 2025-10-11 04:18:23.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:23 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:23.672 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6cd64a2-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:18:23 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:23.672 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:18:23 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:23.673 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb6cd64a2-a0, col_values=(('external_ids', {'iface-id': 'c2cbaf15-a50c-40b8-9f65-12b11618e7fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:18:23 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:23.673 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:18:23 compute-0 nova_compute[259850]: 2025-10-11 04:18:23.743 2 DEBUG nova.network.neutron [req-a9d592c1-52d2-48a5-9318-ff7564b234bb req-0e7e016c-80f2-4278-a3e6-81d69662fcfb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Updated VIF entry in instance network info cache for port ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:18:23 compute-0 nova_compute[259850]: 2025-10-11 04:18:23.744 2 DEBUG nova.network.neutron [req-a9d592c1-52d2-48a5-9318-ff7564b234bb req-0e7e016c-80f2-4278-a3e6-81d69662fcfb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Updating instance_info_cache with network_info: [{"id": "ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3", "address": "fa:16:3e:78:cd:05", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae4b6054-7d", "ovs_interfaceid": "ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:18:23 compute-0 nova_compute[259850]: 2025-10-11 04:18:23.762 2 DEBUG oslo_concurrency.lockutils [req-a9d592c1-52d2-48a5-9318-ff7564b234bb req-0e7e016c-80f2-4278-a3e6-81d69662fcfb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-68b44a2f-a694-4458-9a40-89e194a02624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:18:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 305 active+clean; 352 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 251 KiB/s rd, 34 KiB/s wr, 64 op/s
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.107 2 DEBUG nova.compute.manager [req-ee90c6d2-be2d-4635-85df-97ed73042bcb req-82bd50cf-7785-4fa1-bf16-0a0059db61f8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Received event network-vif-plugged-ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.108 2 DEBUG oslo_concurrency.lockutils [req-ee90c6d2-be2d-4635-85df-97ed73042bcb req-82bd50cf-7785-4fa1-bf16-0a0059db61f8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "68b44a2f-a694-4458-9a40-89e194a02624-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.108 2 DEBUG oslo_concurrency.lockutils [req-ee90c6d2-be2d-4635-85df-97ed73042bcb req-82bd50cf-7785-4fa1-bf16-0a0059db61f8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "68b44a2f-a694-4458-9a40-89e194a02624-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.108 2 DEBUG oslo_concurrency.lockutils [req-ee90c6d2-be2d-4635-85df-97ed73042bcb req-82bd50cf-7785-4fa1-bf16-0a0059db61f8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "68b44a2f-a694-4458-9a40-89e194a02624-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.109 2 DEBUG nova.compute.manager [req-ee90c6d2-be2d-4635-85df-97ed73042bcb req-82bd50cf-7785-4fa1-bf16-0a0059db61f8 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Processing event network-vif-plugged-ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.296 2 DEBUG nova.network.neutron [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Successfully updated port: 04efb511-1fd7-4507-91a8-508780bc5e8d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.311 2 DEBUG oslo_concurrency.lockutils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "refresh_cache-8bfdc99e-9df9-4825-a631-7cd07eff5dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.312 2 DEBUG oslo_concurrency.lockutils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquired lock "refresh_cache-8bfdc99e-9df9-4825-a631-7cd07eff5dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.312 2 DEBUG nova.network.neutron [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.447 2 DEBUG nova.network.neutron [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.816 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760156289.8147833, a3d4ef44-fada-41fc-9a12-641bff0536a4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.816 2 INFO nova.compute.manager [-] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] VM Stopped (Lifecycle Event)
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.837 2 DEBUG nova.compute.manager [None req-832e5387-6d0f-4d61-8138-2e2ed5778c7f - - - - - -] [instance: a3d4ef44-fada-41fc-9a12-641bff0536a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.877 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156304.8770933, 68b44a2f-a694-4458-9a40-89e194a02624 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.878 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] VM Started (Lifecycle Event)
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.880 2 DEBUG nova.compute.manager [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.886 2 DEBUG nova.virt.libvirt.driver [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.890 2 INFO nova.virt.libvirt.driver [-] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Instance spawned successfully.
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.890 2 DEBUG nova.virt.libvirt.driver [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.901 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.906 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.919 2 DEBUG nova.virt.libvirt.driver [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.919 2 DEBUG nova.virt.libvirt.driver [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.920 2 DEBUG nova.virt.libvirt.driver [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.921 2 DEBUG nova.virt.libvirt.driver [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.922 2 DEBUG nova.virt.libvirt.driver [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.922 2 DEBUG nova.virt.libvirt.driver [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.929 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.929 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156304.8801825, 68b44a2f-a694-4458-9a40-89e194a02624 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.930 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] VM Paused (Lifecycle Event)
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.959 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.963 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156304.8842041, 68b44a2f-a694-4458-9a40-89e194a02624 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.964 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] VM Resumed (Lifecycle Event)
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.990 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.994 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.999 2 INFO nova.compute.manager [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Took 7.02 seconds to spawn the instance on the hypervisor.
Oct 11 04:18:24 compute-0 nova_compute[259850]: 2025-10-11 04:18:24.999 2 DEBUG nova.compute.manager [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:18:25 compute-0 ceph-mon[74273]: pgmap v1642: 305 pgs: 305 active+clean; 352 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 251 KiB/s rd, 34 KiB/s wr, 64 op/s
Oct 11 04:18:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:18:25 compute-0 nova_compute[259850]: 2025-10-11 04:18:25.034 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:18:25 compute-0 nova_compute[259850]: 2025-10-11 04:18:25.084 2 INFO nova.compute.manager [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Took 9.28 seconds to build instance.
Oct 11 04:18:25 compute-0 nova_compute[259850]: 2025-10-11 04:18:25.103 2 DEBUG oslo_concurrency.lockutils [None req-b187c7d5-55d6-40a2-bba4-0852f73acffa 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "68b44a2f-a694-4458-9a40-89e194a02624" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.377s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:18:25 compute-0 nova_compute[259850]: 2025-10-11 04:18:25.337 2 DEBUG nova.network.neutron [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Updating instance_info_cache with network_info: [{"id": "04efb511-1fd7-4507-91a8-508780bc5e8d", "address": "fa:16:3e:e9:ef:be", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap04efb511-1f", "ovs_interfaceid": "04efb511-1fd7-4507-91a8-508780bc5e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:18:25 compute-0 nova_compute[259850]: 2025-10-11 04:18:25.366 2 DEBUG oslo_concurrency.lockutils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Releasing lock "refresh_cache-8bfdc99e-9df9-4825-a631-7cd07eff5dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:18:25 compute-0 nova_compute[259850]: 2025-10-11 04:18:25.367 2 DEBUG nova.compute.manager [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Instance network_info: |[{"id": "04efb511-1fd7-4507-91a8-508780bc5e8d", "address": "fa:16:3e:e9:ef:be", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap04efb511-1f", "ovs_interfaceid": "04efb511-1fd7-4507-91a8-508780bc5e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:18:25 compute-0 nova_compute[259850]: 2025-10-11 04:18:25.371 2 DEBUG nova.virt.libvirt.driver [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Start _get_guest_xml network_info=[{"id": "04efb511-1fd7-4507-91a8-508780bc5e8d", "address": "fa:16:3e:e9:ef:be", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap04efb511-1f", "ovs_interfaceid": "04efb511-1fd7-4507-91a8-508780bc5e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-b02ce934-9de7-422d-b3ba-5ade72993920', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'b02ce934-9de7-422d-b3ba-5ade72993920', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '8bfdc99e-9df9-4825-a631-7cd07eff5dfb', 'attached_at': '', 'detached_at': '', 'volume_id': 'b02ce934-9de7-422d-b3ba-5ade72993920', 'serial': 'b02ce934-9de7-422d-b3ba-5ade72993920'}, 'boot_index': 0, 'guest_format': None, 'attachment_id': '4af325f2-23bc-4049-a84c-461597100e68', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:18:25 compute-0 nova_compute[259850]: 2025-10-11 04:18:25.377 2 WARNING nova.virt.libvirt.driver [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:18:25 compute-0 nova_compute[259850]: 2025-10-11 04:18:25.384 2 DEBUG nova.virt.libvirt.host [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:18:25 compute-0 nova_compute[259850]: 2025-10-11 04:18:25.385 2 DEBUG nova.virt.libvirt.host [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:18:25 compute-0 nova_compute[259850]: 2025-10-11 04:18:25.389 2 DEBUG nova.virt.libvirt.host [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:18:25 compute-0 nova_compute[259850]: 2025-10-11 04:18:25.390 2 DEBUG nova.virt.libvirt.host [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:18:25 compute-0 nova_compute[259850]: 2025-10-11 04:18:25.390 2 DEBUG nova.virt.libvirt.driver [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:18:25 compute-0 nova_compute[259850]: 2025-10-11 04:18:25.391 2 DEBUG nova.virt.hardware [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:18:25 compute-0 nova_compute[259850]: 2025-10-11 04:18:25.391 2 DEBUG nova.virt.hardware [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:18:25 compute-0 nova_compute[259850]: 2025-10-11 04:18:25.392 2 DEBUG nova.virt.hardware [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:18:25 compute-0 nova_compute[259850]: 2025-10-11 04:18:25.392 2 DEBUG nova.virt.hardware [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:18:25 compute-0 nova_compute[259850]: 2025-10-11 04:18:25.392 2 DEBUG nova.virt.hardware [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:18:25 compute-0 nova_compute[259850]: 2025-10-11 04:18:25.392 2 DEBUG nova.virt.hardware [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:18:25 compute-0 nova_compute[259850]: 2025-10-11 04:18:25.393 2 DEBUG nova.virt.hardware [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:18:25 compute-0 nova_compute[259850]: 2025-10-11 04:18:25.393 2 DEBUG nova.virt.hardware [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:18:25 compute-0 nova_compute[259850]: 2025-10-11 04:18:25.393 2 DEBUG nova.virt.hardware [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:18:25 compute-0 nova_compute[259850]: 2025-10-11 04:18:25.393 2 DEBUG nova.virt.hardware [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:18:25 compute-0 nova_compute[259850]: 2025-10-11 04:18:25.394 2 DEBUG nova.virt.hardware [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:18:25 compute-0 nova_compute[259850]: 2025-10-11 04:18:25.432 2 DEBUG nova.storage.rbd_utils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] rbd image 8bfdc99e-9df9-4825-a631-7cd07eff5dfb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:18:25 compute-0 nova_compute[259850]: 2025-10-11 04:18:25.438 2 DEBUG oslo_concurrency.processutils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:18:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1643: 305 pgs: 305 active+clean; 352 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 12 KiB/s wr, 26 op/s
Oct 11 04:18:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:18:25 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/875552719' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:18:25 compute-0 nova_compute[259850]: 2025-10-11 04:18:25.953 2 DEBUG oslo_concurrency.processutils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:18:26 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/875552719' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.096 2 DEBUG os_brick.encryptors [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Using volume encryption metadata '{'encryption_key_id': 'edee8ecf-5890-4919-979b-2e61d58fb9b1', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-b02ce934-9de7-422d-b3ba-5ade72993920', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'b02ce934-9de7-422d-b3ba-5ade72993920', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '8bfdc99e-9df9-4825-a631-7cd07eff5dfb', 'attached_at': '', 'detached_at': '', 'volume_id': 'b02ce934-9de7-422d-b3ba-5ade72993920', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.100 2 DEBUG barbicanclient.client [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.120 2 DEBUG barbicanclient.v1.secrets [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/edee8ecf-5890-4919-979b-2e61d58fb9b1 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.121 2 INFO barbicanclient.base [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/edee8ecf-5890-4919-979b-2e61d58fb9b1
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.148 2 DEBUG barbicanclient.client [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.149 2 INFO barbicanclient.base [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/edee8ecf-5890-4919-979b-2e61d58fb9b1
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.194 2 DEBUG barbicanclient.client [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.195 2 INFO barbicanclient.base [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/edee8ecf-5890-4919-979b-2e61d58fb9b1
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.217 2 DEBUG nova.compute.manager [req-aecd4c77-64b7-4e06-94d7-e55825df6b9e req-e522e665-107f-4580-b416-86bf10979331 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Received event network-vif-plugged-ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.218 2 DEBUG oslo_concurrency.lockutils [req-aecd4c77-64b7-4e06-94d7-e55825df6b9e req-e522e665-107f-4580-b416-86bf10979331 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "68b44a2f-a694-4458-9a40-89e194a02624-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.218 2 DEBUG oslo_concurrency.lockutils [req-aecd4c77-64b7-4e06-94d7-e55825df6b9e req-e522e665-107f-4580-b416-86bf10979331 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "68b44a2f-a694-4458-9a40-89e194a02624-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.218 2 DEBUG oslo_concurrency.lockutils [req-aecd4c77-64b7-4e06-94d7-e55825df6b9e req-e522e665-107f-4580-b416-86bf10979331 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "68b44a2f-a694-4458-9a40-89e194a02624-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.219 2 DEBUG nova.compute.manager [req-aecd4c77-64b7-4e06-94d7-e55825df6b9e req-e522e665-107f-4580-b416-86bf10979331 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] No waiting events found dispatching network-vif-plugged-ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.219 2 WARNING nova.compute.manager [req-aecd4c77-64b7-4e06-94d7-e55825df6b9e req-e522e665-107f-4580-b416-86bf10979331 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Received unexpected event network-vif-plugged-ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3 for instance with vm_state active and task_state None.
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.219 2 DEBUG nova.compute.manager [req-aecd4c77-64b7-4e06-94d7-e55825df6b9e req-e522e665-107f-4580-b416-86bf10979331 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Received event network-changed-04efb511-1fd7-4507-91a8-508780bc5e8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.219 2 DEBUG nova.compute.manager [req-aecd4c77-64b7-4e06-94d7-e55825df6b9e req-e522e665-107f-4580-b416-86bf10979331 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Refreshing instance network info cache due to event network-changed-04efb511-1fd7-4507-91a8-508780bc5e8d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.219 2 DEBUG oslo_concurrency.lockutils [req-aecd4c77-64b7-4e06-94d7-e55825df6b9e req-e522e665-107f-4580-b416-86bf10979331 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-8bfdc99e-9df9-4825-a631-7cd07eff5dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.220 2 DEBUG oslo_concurrency.lockutils [req-aecd4c77-64b7-4e06-94d7-e55825df6b9e req-e522e665-107f-4580-b416-86bf10979331 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-8bfdc99e-9df9-4825-a631-7cd07eff5dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.220 2 DEBUG nova.network.neutron [req-aecd4c77-64b7-4e06-94d7-e55825df6b9e req-e522e665-107f-4580-b416-86bf10979331 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Refreshing network info cache for port 04efb511-1fd7-4507-91a8-508780bc5e8d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.236 2 DEBUG barbicanclient.client [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.237 2 INFO barbicanclient.base [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/edee8ecf-5890-4919-979b-2e61d58fb9b1
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.275 2 DEBUG barbicanclient.client [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.276 2 INFO barbicanclient.base [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/edee8ecf-5890-4919-979b-2e61d58fb9b1
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.306 2 DEBUG barbicanclient.client [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.306 2 INFO barbicanclient.base [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/edee8ecf-5890-4919-979b-2e61d58fb9b1
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.332 2 DEBUG barbicanclient.client [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.333 2 INFO barbicanclient.base [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/edee8ecf-5890-4919-979b-2e61d58fb9b1
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.358 2 DEBUG barbicanclient.client [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.359 2 INFO barbicanclient.base [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/edee8ecf-5890-4919-979b-2e61d58fb9b1
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.389 2 DEBUG barbicanclient.client [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.390 2 INFO barbicanclient.base [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/edee8ecf-5890-4919-979b-2e61d58fb9b1
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.415 2 DEBUG barbicanclient.client [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.415 2 INFO barbicanclient.base [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/edee8ecf-5890-4919-979b-2e61d58fb9b1
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.438 2 DEBUG barbicanclient.client [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.439 2 INFO barbicanclient.base [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/edee8ecf-5890-4919-979b-2e61d58fb9b1
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.463 2 DEBUG barbicanclient.client [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.464 2 INFO barbicanclient.base [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/edee8ecf-5890-4919-979b-2e61d58fb9b1
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.497 2 DEBUG barbicanclient.client [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.498 2 INFO barbicanclient.base [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/edee8ecf-5890-4919-979b-2e61d58fb9b1
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.526 2 DEBUG barbicanclient.client [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.527 2 INFO barbicanclient.base [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/edee8ecf-5890-4919-979b-2e61d58fb9b1
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.556 2 DEBUG barbicanclient.client [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.557 2 INFO barbicanclient.base [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Calculated Secrets uuid ref: secrets/edee8ecf-5890-4919-979b-2e61d58fb9b1
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.588 2 DEBUG barbicanclient.client [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.589 2 DEBUG nova.virt.libvirt.host [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct 11 04:18:26 compute-0 nova_compute[259850]:   <usage type="volume">
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <volume>b02ce934-9de7-422d-b3ba-5ade72993920</volume>
Oct 11 04:18:26 compute-0 nova_compute[259850]:   </usage>
Oct 11 04:18:26 compute-0 nova_compute[259850]: </secret>
Oct 11 04:18:26 compute-0 nova_compute[259850]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.623 2 DEBUG nova.virt.libvirt.vif [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:18:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-2102536866',display_name='tempest-TransferEncryptedVolumeTest-server-2102536866',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-2102536866',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD3jnhyRBlsX5VUAbGtWGwnjXDJ0mJnyIiUqsAyoyyDd6H6M/5DSgSJwDh4tkaNqmtKzFuE8XyeYbmLUFFbEZUE8j9mB2B0zj5nn/QlG6TOs2XcStAmJ+ejUjSzP7rh2Lg==',key_name='tempest-TransferEncryptedVolumeTest-513808347',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfcc78a613a4442d88231798d10634c9',ramdisk_id='',reservation_id='r-rb87z0iv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1941581237',owner_user_name='tempest-TransferEncryptedVolumeTest-1941581237-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:18:21Z,user_data=None,user_id='77d11e860ca1460cab1c20bca4d4c0ea',uuid=8bfdc99e-9df9-4825-a631-7cd07eff5dfb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "04efb511-1fd7-4507-91a8-508780bc5e8d", "address": "fa:16:3e:e9:ef:be", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap04efb511-1f", "ovs_interfaceid": "04efb511-1fd7-4507-91a8-508780bc5e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.623 2 DEBUG nova.network.os_vif_util [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Converting VIF {"id": "04efb511-1fd7-4507-91a8-508780bc5e8d", "address": "fa:16:3e:e9:ef:be", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap04efb511-1f", "ovs_interfaceid": "04efb511-1fd7-4507-91a8-508780bc5e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.625 2 DEBUG nova.network.os_vif_util [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:ef:be,bridge_name='br-int',has_traffic_filtering=True,id=04efb511-1fd7-4507-91a8-508780bc5e8d,network=Network(1c86b315-3a4b-4db0-8b3c-39658c19ef9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap04efb511-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.627 2 DEBUG nova.objects.instance [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8bfdc99e-9df9-4825-a631-7cd07eff5dfb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.642 2 DEBUG nova.virt.libvirt.driver [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:18:26 compute-0 nova_compute[259850]:   <uuid>8bfdc99e-9df9-4825-a631-7cd07eff5dfb</uuid>
Oct 11 04:18:26 compute-0 nova_compute[259850]:   <name>instance-0000001a</name>
Oct 11 04:18:26 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:18:26 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:18:26 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <nova:name>tempest-TransferEncryptedVolumeTest-server-2102536866</nova:name>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:18:25</nova:creationTime>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:18:26 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:18:26 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:18:26 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:18:26 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:18:26 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:18:26 compute-0 nova_compute[259850]:         <nova:user uuid="77d11e860ca1460cab1c20bca4d4c0ea">tempest-TransferEncryptedVolumeTest-1941581237-project-member</nova:user>
Oct 11 04:18:26 compute-0 nova_compute[259850]:         <nova:project uuid="bfcc78a613a4442d88231798d10634c9">tempest-TransferEncryptedVolumeTest-1941581237</nova:project>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <nova:ports>
Oct 11 04:18:26 compute-0 nova_compute[259850]:         <nova:port uuid="04efb511-1fd7-4507-91a8-508780bc5e8d">
Oct 11 04:18:26 compute-0 nova_compute[259850]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:         </nova:port>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       </nova:ports>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:18:26 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:18:26 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <system>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <entry name="serial">8bfdc99e-9df9-4825-a631-7cd07eff5dfb</entry>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <entry name="uuid">8bfdc99e-9df9-4825-a631-7cd07eff5dfb</entry>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     </system>
Oct 11 04:18:26 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:18:26 compute-0 nova_compute[259850]:   <os>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:   </os>
Oct 11 04:18:26 compute-0 nova_compute[259850]:   <features>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:   </features>
Oct 11 04:18:26 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:18:26 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:18:26 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/8bfdc99e-9df9-4825-a631-7cd07eff5dfb_disk.config">
Oct 11 04:18:26 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       </source>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:18:26 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <source protocol="rbd" name="volumes/volume-b02ce934-9de7-422d-b3ba-5ade72993920">
Oct 11 04:18:26 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       </source>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:18:26 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <serial>b02ce934-9de7-422d-b3ba-5ade72993920</serial>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <encryption format="luks">
Oct 11 04:18:26 compute-0 nova_compute[259850]:         <secret type="passphrase" uuid="cc60aed7-1549-47a1-b801-57b5aaa761ac"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       </encryption>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <interface type="ethernet">
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <mac address="fa:16:3e:e9:ef:be"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <mtu size="1442"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <target dev="tap04efb511-1f"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     </interface>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/8bfdc99e-9df9-4825-a631-7cd07eff5dfb/console.log" append="off"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <video>
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     </video>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:18:26 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:18:26 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:18:26 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:18:26 compute-0 nova_compute[259850]: </domain>
Oct 11 04:18:26 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.645 2 DEBUG nova.compute.manager [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Preparing to wait for external event network-vif-plugged-04efb511-1fd7-4507-91a8-508780bc5e8d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.646 2 DEBUG oslo_concurrency.lockutils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "8bfdc99e-9df9-4825-a631-7cd07eff5dfb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.647 2 DEBUG oslo_concurrency.lockutils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "8bfdc99e-9df9-4825-a631-7cd07eff5dfb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.647 2 DEBUG oslo_concurrency.lockutils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "8bfdc99e-9df9-4825-a631-7cd07eff5dfb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.649 2 DEBUG nova.virt.libvirt.vif [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:18:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-2102536866',display_name='tempest-TransferEncryptedVolumeTest-server-2102536866',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-2102536866',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD3jnhyRBlsX5VUAbGtWGwnjXDJ0mJnyIiUqsAyoyyDd6H6M/5DSgSJwDh4tkaNqmtKzFuE8XyeYbmLUFFbEZUE8j9mB2B0zj5nn/QlG6TOs2XcStAmJ+ejUjSzP7rh2Lg==',key_name='tempest-TransferEncryptedVolumeTest-513808347',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfcc78a613a4442d88231798d10634c9',ramdisk_id='',reservation_id='r-rb87z0iv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1941581237',owner_user_name='tempest-TransferEncryptedVolumeTest-1941581237-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:18:21Z,user_data=None,user_id='77d11e860ca1460cab1c20bca4d4c0ea',uuid=8bfdc99e-9df9-4825-a631-7cd07eff5dfb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "04efb511-1fd7-4507-91a8-508780bc5e8d", "address": "fa:16:3e:e9:ef:be", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap04efb511-1f", "ovs_interfaceid": "04efb511-1fd7-4507-91a8-508780bc5e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.649 2 DEBUG nova.network.os_vif_util [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Converting VIF {"id": "04efb511-1fd7-4507-91a8-508780bc5e8d", "address": "fa:16:3e:e9:ef:be", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap04efb511-1f", "ovs_interfaceid": "04efb511-1fd7-4507-91a8-508780bc5e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.651 2 DEBUG nova.network.os_vif_util [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:ef:be,bridge_name='br-int',has_traffic_filtering=True,id=04efb511-1fd7-4507-91a8-508780bc5e8d,network=Network(1c86b315-3a4b-4db0-8b3c-39658c19ef9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap04efb511-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.652 2 DEBUG os_vif [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:ef:be,bridge_name='br-int',has_traffic_filtering=True,id=04efb511-1fd7-4507-91a8-508780bc5e8d,network=Network(1c86b315-3a4b-4db0-8b3c-39658c19ef9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap04efb511-1f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.654 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.655 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.660 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap04efb511-1f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.661 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap04efb511-1f, col_values=(('external_ids', {'iface-id': '04efb511-1fd7-4507-91a8-508780bc5e8d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e9:ef:be', 'vm-uuid': '8bfdc99e-9df9-4825-a631-7cd07eff5dfb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:18:26 compute-0 NetworkManager[44920]: <info>  [1760156306.6649] manager: (tap04efb511-1f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.675 2 INFO os_vif [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:ef:be,bridge_name='br-int',has_traffic_filtering=True,id=04efb511-1fd7-4507-91a8-508780bc5e8d,network=Network(1c86b315-3a4b-4db0-8b3c-39658c19ef9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap04efb511-1f')
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.743 2 DEBUG nova.virt.libvirt.driver [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.743 2 DEBUG nova.virt.libvirt.driver [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.744 2 DEBUG nova.virt.libvirt.driver [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] No VIF found with MAC fa:16:3e:e9:ef:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.744 2 INFO nova.virt.libvirt.driver [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Using config drive
Oct 11 04:18:26 compute-0 nova_compute[259850]: 2025-10-11 04:18:26.775 2 DEBUG nova.storage.rbd_utils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] rbd image 8bfdc99e-9df9-4825-a631-7cd07eff5dfb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:18:27 compute-0 ceph-mon[74273]: pgmap v1643: 305 pgs: 305 active+clean; 352 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 12 KiB/s wr, 26 op/s
Oct 11 04:18:27 compute-0 nova_compute[259850]: 2025-10-11 04:18:27.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:18:27 compute-0 nova_compute[259850]: 2025-10-11 04:18:27.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:18:27 compute-0 nova_compute[259850]: 2025-10-11 04:18:27.143 2 INFO nova.virt.libvirt.driver [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Creating config drive at /var/lib/nova/instances/8bfdc99e-9df9-4825-a631-7cd07eff5dfb/disk.config
Oct 11 04:18:27 compute-0 nova_compute[259850]: 2025-10-11 04:18:27.153 2 DEBUG oslo_concurrency.processutils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8bfdc99e-9df9-4825-a631-7cd07eff5dfb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp46mau2tl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:18:27 compute-0 nova_compute[259850]: 2025-10-11 04:18:27.294 2 DEBUG oslo_concurrency.processutils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8bfdc99e-9df9-4825-a631-7cd07eff5dfb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp46mau2tl" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:18:27 compute-0 nova_compute[259850]: 2025-10-11 04:18:27.333 2 DEBUG nova.storage.rbd_utils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] rbd image 8bfdc99e-9df9-4825-a631-7cd07eff5dfb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:18:27 compute-0 nova_compute[259850]: 2025-10-11 04:18:27.338 2 DEBUG oslo_concurrency.processutils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8bfdc99e-9df9-4825-a631-7cd07eff5dfb/disk.config 8bfdc99e-9df9-4825-a631-7cd07eff5dfb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:18:27 compute-0 nova_compute[259850]: 2025-10-11 04:18:27.534 2 DEBUG oslo_concurrency.processutils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8bfdc99e-9df9-4825-a631-7cd07eff5dfb/disk.config 8bfdc99e-9df9-4825-a631-7cd07eff5dfb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:18:27 compute-0 nova_compute[259850]: 2025-10-11 04:18:27.536 2 INFO nova.virt.libvirt.driver [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Deleting local config drive /var/lib/nova/instances/8bfdc99e-9df9-4825-a631-7cd07eff5dfb/disk.config because it was imported into RBD.
Oct 11 04:18:27 compute-0 kernel: tap04efb511-1f: entered promiscuous mode
Oct 11 04:18:27 compute-0 NetworkManager[44920]: <info>  [1760156307.6130] manager: (tap04efb511-1f): new Tun device (/org/freedesktop/NetworkManager/Devices/129)
Oct 11 04:18:27 compute-0 ovn_controller[152025]: 2025-10-11T04:18:27Z|00247|binding|INFO|Claiming lport 04efb511-1fd7-4507-91a8-508780bc5e8d for this chassis.
Oct 11 04:18:27 compute-0 ovn_controller[152025]: 2025-10-11T04:18:27Z|00248|binding|INFO|04efb511-1fd7-4507-91a8-508780bc5e8d: Claiming fa:16:3e:e9:ef:be 10.100.0.4
Oct 11 04:18:27 compute-0 nova_compute[259850]: 2025-10-11 04:18:27.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:27 compute-0 systemd-udevd[297709]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:18:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:27.660 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:ef:be 10.100.0.4'], port_security=['fa:16:3e:e9:ef:be 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '8bfdc99e-9df9-4825-a631-7cd07eff5dfb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfcc78a613a4442d88231798d10634c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8fd56502-e733-457c-89c4-96f24dc7f6d9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=756f4bd0-4cbc-4611-9397-52eb34ec09ab, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=04efb511-1fd7-4507-91a8-508780bc5e8d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:18:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:27.662 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 04efb511-1fd7-4507-91a8-508780bc5e8d in datapath 1c86b315-3a4b-4db0-8b3c-39658c19ef9c bound to our chassis
Oct 11 04:18:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:27.665 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1c86b315-3a4b-4db0-8b3c-39658c19ef9c
Oct 11 04:18:27 compute-0 NetworkManager[44920]: <info>  [1760156307.6806] device (tap04efb511-1f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:18:27 compute-0 NetworkManager[44920]: <info>  [1760156307.6829] device (tap04efb511-1f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:18:27 compute-0 ovn_controller[152025]: 2025-10-11T04:18:27Z|00249|binding|INFO|Setting lport 04efb511-1fd7-4507-91a8-508780bc5e8d ovn-installed in OVS
Oct 11 04:18:27 compute-0 ovn_controller[152025]: 2025-10-11T04:18:27Z|00250|binding|INFO|Setting lport 04efb511-1fd7-4507-91a8-508780bc5e8d up in Southbound
Oct 11 04:18:27 compute-0 nova_compute[259850]: 2025-10-11 04:18:27.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:27.688 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[d68f1882-3f1d-416a-9732-5ec24b1d65b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:27.689 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1c86b315-31 in ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 04:18:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:27.691 267637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1c86b315-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 04:18:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:27.691 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[2a9da2f5-8af7-496b-a83e-68e011b02bd2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:27 compute-0 systemd-machined[214869]: New machine qemu-26-instance-0000001a.
Oct 11 04:18:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:27.693 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[e013925a-1e27-452c-89cc-15693618243a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:27 compute-0 systemd[1]: Started Virtual Machine qemu-26-instance-0000001a.
Oct 11 04:18:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:27.705 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[9c0fa40c-e318-4e3d-a7f3-728958c43498]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:27.737 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[cbdfe973-3d84-497d-87ed-01453df7879a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:27.776 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[d0b79bd6-b996-450a-8bcc-76e36a5a4e95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:27.790 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[78f566d2-664a-47ad-9808-2b8d6bbb5766]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:27 compute-0 NetworkManager[44920]: <info>  [1760156307.7913] manager: (tap1c86b315-30): new Veth device (/org/freedesktop/NetworkManager/Devices/130)
Oct 11 04:18:27 compute-0 systemd-udevd[297713]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:18:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 352 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 24 KiB/s wr, 44 op/s
Oct 11 04:18:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:27.818 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[17712664-6c0b-491c-a940-f5d9b4e89bfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:27.824 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[1067740f-3659-487c-a1ee-8c92253e298c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:27 compute-0 NetworkManager[44920]: <info>  [1760156307.8471] device (tap1c86b315-30): carrier: link connected
Oct 11 04:18:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:27.851 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[0cb6583c-33fa-422a-81ea-fedd8b096264]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:27.867 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[4534acae-4f73-40c0-be0d-0ff32b6b1c86]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1c86b315-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:1b:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469288, 'reachable_time': 22961, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297746, 'error': None, 'target': 'ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:27.891 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[e5b83bdc-0303-419f-93b2-49d860cf342a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb2:1bd4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469288, 'tstamp': 469288}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297747, 'error': None, 'target': 'ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:27 compute-0 nova_compute[259850]: 2025-10-11 04:18:27.889 2 DEBUG nova.network.neutron [req-aecd4c77-64b7-4e06-94d7-e55825df6b9e req-e522e665-107f-4580-b416-86bf10979331 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Updated VIF entry in instance network info cache for port 04efb511-1fd7-4507-91a8-508780bc5e8d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:18:27 compute-0 nova_compute[259850]: 2025-10-11 04:18:27.890 2 DEBUG nova.network.neutron [req-aecd4c77-64b7-4e06-94d7-e55825df6b9e req-e522e665-107f-4580-b416-86bf10979331 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Updating instance_info_cache with network_info: [{"id": "04efb511-1fd7-4507-91a8-508780bc5e8d", "address": "fa:16:3e:e9:ef:be", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap04efb511-1f", "ovs_interfaceid": "04efb511-1fd7-4507-91a8-508780bc5e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:18:27 compute-0 nova_compute[259850]: 2025-10-11 04:18:27.902 2 DEBUG nova.compute.manager [req-f6b32681-7dfa-4ffc-b7e9-e6d875d9b3b5 req-165f5c80-20af-4e98-9512-3843eac2b436 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Received event network-vif-plugged-04efb511-1fd7-4507-91a8-508780bc5e8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:18:27 compute-0 nova_compute[259850]: 2025-10-11 04:18:27.903 2 DEBUG oslo_concurrency.lockutils [req-f6b32681-7dfa-4ffc-b7e9-e6d875d9b3b5 req-165f5c80-20af-4e98-9512-3843eac2b436 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "8bfdc99e-9df9-4825-a631-7cd07eff5dfb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:18:27 compute-0 nova_compute[259850]: 2025-10-11 04:18:27.903 2 DEBUG oslo_concurrency.lockutils [req-f6b32681-7dfa-4ffc-b7e9-e6d875d9b3b5 req-165f5c80-20af-4e98-9512-3843eac2b436 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "8bfdc99e-9df9-4825-a631-7cd07eff5dfb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:18:27 compute-0 nova_compute[259850]: 2025-10-11 04:18:27.904 2 DEBUG oslo_concurrency.lockutils [req-f6b32681-7dfa-4ffc-b7e9-e6d875d9b3b5 req-165f5c80-20af-4e98-9512-3843eac2b436 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "8bfdc99e-9df9-4825-a631-7cd07eff5dfb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:18:27 compute-0 nova_compute[259850]: 2025-10-11 04:18:27.904 2 DEBUG nova.compute.manager [req-f6b32681-7dfa-4ffc-b7e9-e6d875d9b3b5 req-165f5c80-20af-4e98-9512-3843eac2b436 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Processing event network-vif-plugged-04efb511-1fd7-4507-91a8-508780bc5e8d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:18:27 compute-0 nova_compute[259850]: 2025-10-11 04:18:27.907 2 DEBUG oslo_concurrency.lockutils [req-aecd4c77-64b7-4e06-94d7-e55825df6b9e req-e522e665-107f-4580-b416-86bf10979331 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-8bfdc99e-9df9-4825-a631-7cd07eff5dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:18:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:27.910 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[1172c23c-3785-4f53-8b19-d8c14e3939be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1c86b315-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:1b:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469288, 'reachable_time': 22961, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297748, 'error': None, 'target': 'ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:27 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:27.951 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[ba66b231-eeb5-4425-bc7d-756452686238]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:28.010 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[4c0a8baa-1cb2-4f72-bb33-a3927f10960e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:28.011 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c86b315-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:28.012 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:28.013 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c86b315-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:18:28 compute-0 nova_compute[259850]: 2025-10-11 04:18:28.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:28 compute-0 NetworkManager[44920]: <info>  [1760156308.0157] manager: (tap1c86b315-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Oct 11 04:18:28 compute-0 kernel: tap1c86b315-30: entered promiscuous mode
Oct 11 04:18:28 compute-0 nova_compute[259850]: 2025-10-11 04:18:28.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:28.023 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1c86b315-30, col_values=(('external_ids', {'iface-id': '075f096d-d25a-4cca-804c-0df80c22a72a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:18:28 compute-0 ovn_controller[152025]: 2025-10-11T04:18:28Z|00251|binding|INFO|Releasing lport 075f096d-d25a-4cca-804c-0df80c22a72a from this chassis (sb_readonly=0)
Oct 11 04:18:28 compute-0 nova_compute[259850]: 2025-10-11 04:18:28.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:28 compute-0 nova_compute[259850]: 2025-10-11 04:18:28.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:28.038 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1c86b315-3a4b-4db0-8b3c-39658c19ef9c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1c86b315-3a4b-4db0-8b3c-39658c19ef9c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:28.040 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[bce5041c-a0bf-41db-a8d7-3c73d3518a2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:28.041 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]: global
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]:     log         /dev/log local0 debug
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]:     log-tag     haproxy-metadata-proxy-1c86b315-3a4b-4db0-8b3c-39658c19ef9c
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]:     user        root
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]:     group       root
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]:     maxconn     1024
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]:     pidfile     /var/lib/neutron/external/pids/1c86b315-3a4b-4db0-8b3c-39658c19ef9c.pid.haproxy
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]:     daemon
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]: defaults
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]:     log global
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]:     mode http
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]:     option httplog
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]:     option dontlognull
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]:     option http-server-close
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]:     option forwardfor
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]:     retries                 3
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]:     timeout http-request    30s
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]:     timeout connect         30s
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]:     timeout client          32s
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]:     timeout server          32s
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]:     timeout http-keep-alive 30s
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]: listen listener
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]:     bind 169.254.169.254:80
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]:     http-request add-header X-OVN-Network-ID 1c86b315-3a4b-4db0-8b3c-39658c19ef9c
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 04:18:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:18:28.042 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'env', 'PROCESS_TAG=haproxy-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1c86b315-3a4b-4db0-8b3c-39658c19ef9c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 04:18:28 compute-0 nova_compute[259850]: 2025-10-11 04:18:28.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:18:28 compute-0 nova_compute[259850]: 2025-10-11 04:18:28.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:28 compute-0 nova_compute[259850]: 2025-10-11 04:18:28.472 2 DEBUG nova.compute.manager [req-7309a7c4-6bea-45e0-87a4-97bb48735e52 req-ceaee669-9388-47b9-97ca-68ff8b0efc48 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Received event network-changed-ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:18:28 compute-0 nova_compute[259850]: 2025-10-11 04:18:28.472 2 DEBUG nova.compute.manager [req-7309a7c4-6bea-45e0-87a4-97bb48735e52 req-ceaee669-9388-47b9-97ca-68ff8b0efc48 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Refreshing instance network info cache due to event network-changed-ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:18:28 compute-0 nova_compute[259850]: 2025-10-11 04:18:28.473 2 DEBUG oslo_concurrency.lockutils [req-7309a7c4-6bea-45e0-87a4-97bb48735e52 req-ceaee669-9388-47b9-97ca-68ff8b0efc48 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-68b44a2f-a694-4458-9a40-89e194a02624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:18:28 compute-0 nova_compute[259850]: 2025-10-11 04:18:28.473 2 DEBUG oslo_concurrency.lockutils [req-7309a7c4-6bea-45e0-87a4-97bb48735e52 req-ceaee669-9388-47b9-97ca-68ff8b0efc48 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-68b44a2f-a694-4458-9a40-89e194a02624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:18:28 compute-0 nova_compute[259850]: 2025-10-11 04:18:28.474 2 DEBUG nova.network.neutron [req-7309a7c4-6bea-45e0-87a4-97bb48735e52 req-ceaee669-9388-47b9-97ca-68ff8b0efc48 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Refreshing network info cache for port ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:18:28 compute-0 podman[297815]: 2025-10-11 04:18:28.513629294 +0000 UTC m=+0.063962372 container create 2d6bfc1415f2aa707a54c872db60426c4e3ba5967c6a388242f1ac4cc64ed303 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:18:28 compute-0 systemd[1]: Started libpod-conmon-2d6bfc1415f2aa707a54c872db60426c4e3ba5967c6a388242f1ac4cc64ed303.scope.
Oct 11 04:18:28 compute-0 podman[297815]: 2025-10-11 04:18:28.484924361 +0000 UTC m=+0.035257469 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 04:18:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:18:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52db8f7b4dc2610fb6451516909c887a612859a6ce3c5a771ad42fe20c355486/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:18:28 compute-0 podman[297815]: 2025-10-11 04:18:28.615488598 +0000 UTC m=+0.165821696 container init 2d6bfc1415f2aa707a54c872db60426c4e3ba5967c6a388242f1ac4cc64ed303 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 04:18:28 compute-0 podman[297815]: 2025-10-11 04:18:28.620381196 +0000 UTC m=+0.170714294 container start 2d6bfc1415f2aa707a54c872db60426c4e3ba5967c6a388242f1ac4cc64ed303 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 04:18:28 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[297830]: [NOTICE]   (297834) : New worker (297836) forked
Oct 11 04:18:28 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[297830]: [NOTICE]   (297834) : Loading success.
Oct 11 04:18:29 compute-0 ceph-mon[74273]: pgmap v1644: 305 pgs: 305 active+clean; 352 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 24 KiB/s wr, 44 op/s
Oct 11 04:18:29 compute-0 nova_compute[259850]: 2025-10-11 04:18:29.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:18:29 compute-0 nova_compute[259850]: 2025-10-11 04:18:29.612 2 DEBUG nova.network.neutron [req-7309a7c4-6bea-45e0-87a4-97bb48735e52 req-ceaee669-9388-47b9-97ca-68ff8b0efc48 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Updated VIF entry in instance network info cache for port ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:18:29 compute-0 nova_compute[259850]: 2025-10-11 04:18:29.613 2 DEBUG nova.network.neutron [req-7309a7c4-6bea-45e0-87a4-97bb48735e52 req-ceaee669-9388-47b9-97ca-68ff8b0efc48 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Updating instance_info_cache with network_info: [{"id": "ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3", "address": "fa:16:3e:78:cd:05", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae4b6054-7d", "ovs_interfaceid": "ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:18:29 compute-0 nova_compute[259850]: 2025-10-11 04:18:29.642 2 DEBUG oslo_concurrency.lockutils [req-7309a7c4-6bea-45e0-87a4-97bb48735e52 req-ceaee669-9388-47b9-97ca-68ff8b0efc48 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-68b44a2f-a694-4458-9a40-89e194a02624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:18:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1645: 305 pgs: 305 active+clean; 352 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 37 KiB/s wr, 149 op/s
Oct 11 04:18:29 compute-0 nova_compute[259850]: 2025-10-11 04:18:29.972 2 DEBUG nova.compute.manager [req-99738f51-c316-4e5b-af60-bbde2bb82063 req-69cdcd5f-9b15-45be-a63c-d79091f29419 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Received event network-vif-plugged-04efb511-1fd7-4507-91a8-508780bc5e8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:18:29 compute-0 nova_compute[259850]: 2025-10-11 04:18:29.973 2 DEBUG oslo_concurrency.lockutils [req-99738f51-c316-4e5b-af60-bbde2bb82063 req-69cdcd5f-9b15-45be-a63c-d79091f29419 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "8bfdc99e-9df9-4825-a631-7cd07eff5dfb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:18:29 compute-0 nova_compute[259850]: 2025-10-11 04:18:29.973 2 DEBUG oslo_concurrency.lockutils [req-99738f51-c316-4e5b-af60-bbde2bb82063 req-69cdcd5f-9b15-45be-a63c-d79091f29419 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "8bfdc99e-9df9-4825-a631-7cd07eff5dfb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:18:29 compute-0 nova_compute[259850]: 2025-10-11 04:18:29.974 2 DEBUG oslo_concurrency.lockutils [req-99738f51-c316-4e5b-af60-bbde2bb82063 req-69cdcd5f-9b15-45be-a63c-d79091f29419 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "8bfdc99e-9df9-4825-a631-7cd07eff5dfb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:18:29 compute-0 nova_compute[259850]: 2025-10-11 04:18:29.975 2 DEBUG nova.compute.manager [req-99738f51-c316-4e5b-af60-bbde2bb82063 req-69cdcd5f-9b15-45be-a63c-d79091f29419 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] No waiting events found dispatching network-vif-plugged-04efb511-1fd7-4507-91a8-508780bc5e8d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:18:29 compute-0 nova_compute[259850]: 2025-10-11 04:18:29.975 2 WARNING nova.compute.manager [req-99738f51-c316-4e5b-af60-bbde2bb82063 req-69cdcd5f-9b15-45be-a63c-d79091f29419 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Received unexpected event network-vif-plugged-04efb511-1fd7-4507-91a8-508780bc5e8d for instance with vm_state building and task_state spawning.
Oct 11 04:18:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:18:30 compute-0 nova_compute[259850]: 2025-10-11 04:18:30.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:18:30 compute-0 nova_compute[259850]: 2025-10-11 04:18:30.061 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:18:30 compute-0 nova_compute[259850]: 2025-10-11 04:18:30.061 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:18:30 compute-0 nova_compute[259850]: 2025-10-11 04:18:30.083 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 11 04:18:30 compute-0 podman[297845]: 2025-10-11 04:18:30.414827102 +0000 UTC m=+0.113833194 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 11 04:18:30 compute-0 nova_compute[259850]: 2025-10-11 04:18:30.431 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "refresh_cache-a5deabc3-2396-4c23-81c2-959d49bb6da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:18:30 compute-0 nova_compute[259850]: 2025-10-11 04:18:30.432 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquired lock "refresh_cache-a5deabc3-2396-4c23-81c2-959d49bb6da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:18:30 compute-0 nova_compute[259850]: 2025-10-11 04:18:30.433 2 DEBUG nova.network.neutron [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 04:18:30 compute-0 nova_compute[259850]: 2025-10-11 04:18:30.434 2 DEBUG nova.objects.instance [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a5deabc3-2396-4c23-81c2-959d49bb6da1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.034 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156311.03411, 8bfdc99e-9df9-4825-a631-7cd07eff5dfb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.035 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] VM Started (Lifecycle Event)
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.038 2 DEBUG nova.compute.manager [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:18:31 compute-0 ceph-mon[74273]: pgmap v1645: 305 pgs: 305 active+clean; 352 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 37 KiB/s wr, 149 op/s
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.049 2 DEBUG nova.virt.libvirt.driver [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.059 2 INFO nova.virt.libvirt.driver [-] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Instance spawned successfully.
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.062 2 DEBUG nova.virt.libvirt.driver [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.064 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.069 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.087 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.088 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156311.0353916, 8bfdc99e-9df9-4825-a631-7cd07eff5dfb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.089 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] VM Paused (Lifecycle Event)
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.092 2 DEBUG nova.virt.libvirt.driver [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.093 2 DEBUG nova.virt.libvirt.driver [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.093 2 DEBUG nova.virt.libvirt.driver [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.094 2 DEBUG nova.virt.libvirt.driver [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.094 2 DEBUG nova.virt.libvirt.driver [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.095 2 DEBUG nova.virt.libvirt.driver [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.123 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.128 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156311.0488985, 8bfdc99e-9df9-4825-a631-7cd07eff5dfb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.129 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] VM Resumed (Lifecycle Event)
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.151 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.156 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.168 2 INFO nova.compute.manager [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Took 7.84 seconds to spawn the instance on the hypervisor.
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.169 2 DEBUG nova.compute.manager [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.181 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.228 2 INFO nova.compute.manager [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Took 10.41 seconds to build instance.
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.246 2 DEBUG oslo_concurrency.lockutils [None req-dbf2f84f-58e2-48e2-a847-43729ecf620e 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "8bfdc99e-9df9-4825-a631-7cd07eff5dfb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.510s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:18:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:18:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:18:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:18:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:18:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 7.312931399361854e-06 of space, bias 1.0, pg target 0.002193879419808556 quantized to 32 (current 32)
Oct 11 04:18:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:18:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003650615354561438 of space, bias 1.0, pg target 1.0951846063684314 quantized to 32 (current 32)
Oct 11 04:18:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:18:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.9013621638340822e-05 quantized to 32 (current 32)
Oct 11 04:18:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:18:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.19918670028325844 quantized to 32 (current 32)
Oct 11 04:18:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:18:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 16)
Oct 11 04:18:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:18:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:18:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:18:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 04:18:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:18:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 04:18:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:18:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:18:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:18:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.447 2 DEBUG nova.network.neutron [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Updating instance_info_cache with network_info: [{"id": "560c29a9-2a29-42bd-a75a-485874b2cbc8", "address": "fa:16:3e:b2:35:03", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap560c29a9-2a", "ovs_interfaceid": "560c29a9-2a29-42bd-a75a-485874b2cbc8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.461 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Releasing lock "refresh_cache-a5deabc3-2396-4c23-81c2-959d49bb6da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.462 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.463 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.464 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.497 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.498 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.499 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.499 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.501 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:18:31 compute-0 nova_compute[259850]: 2025-10-11 04:18:31.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 352 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 36 KiB/s wr, 146 op/s
Oct 11 04:18:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:18:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/96124141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:18:32 compute-0 nova_compute[259850]: 2025-10-11 04:18:32.018 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:18:32 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/96124141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:18:32 compute-0 nova_compute[259850]: 2025-10-11 04:18:32.099 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:18:32 compute-0 nova_compute[259850]: 2025-10-11 04:18:32.101 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:18:32 compute-0 nova_compute[259850]: 2025-10-11 04:18:32.105 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:18:32 compute-0 nova_compute[259850]: 2025-10-11 04:18:32.105 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:18:32 compute-0 nova_compute[259850]: 2025-10-11 04:18:32.109 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:18:32 compute-0 nova_compute[259850]: 2025-10-11 04:18:32.109 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:18:32 compute-0 nova_compute[259850]: 2025-10-11 04:18:32.273 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:18:32 compute-0 nova_compute[259850]: 2025-10-11 04:18:32.274 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3890MB free_disk=59.98784255981445GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:18:32 compute-0 nova_compute[259850]: 2025-10-11 04:18:32.274 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:18:32 compute-0 nova_compute[259850]: 2025-10-11 04:18:32.275 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:18:32 compute-0 nova_compute[259850]: 2025-10-11 04:18:32.363 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Instance a5deabc3-2396-4c23-81c2-959d49bb6da1 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 04:18:32 compute-0 nova_compute[259850]: 2025-10-11 04:18:32.364 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Instance 68b44a2f-a694-4458-9a40-89e194a02624 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 04:18:32 compute-0 nova_compute[259850]: 2025-10-11 04:18:32.364 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Instance 8bfdc99e-9df9-4825-a631-7cd07eff5dfb actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 04:18:32 compute-0 nova_compute[259850]: 2025-10-11 04:18:32.364 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:18:32 compute-0 nova_compute[259850]: 2025-10-11 04:18:32.364 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:18:32 compute-0 nova_compute[259850]: 2025-10-11 04:18:32.423 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:18:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:18:32 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3030367576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:18:32 compute-0 nova_compute[259850]: 2025-10-11 04:18:32.873 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:18:32 compute-0 nova_compute[259850]: 2025-10-11 04:18:32.880 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:18:32 compute-0 nova_compute[259850]: 2025-10-11 04:18:32.899 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:18:32 compute-0 nova_compute[259850]: 2025-10-11 04:18:32.919 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:18:32 compute-0 nova_compute[259850]: 2025-10-11 04:18:32.920 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:18:33 compute-0 ceph-mon[74273]: pgmap v1646: 305 pgs: 305 active+clean; 352 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 36 KiB/s wr, 146 op/s
Oct 11 04:18:33 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3030367576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:18:33 compute-0 nova_compute[259850]: 2025-10-11 04:18:33.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:33 compute-0 nova_compute[259850]: 2025-10-11 04:18:33.515 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:18:33 compute-0 nova_compute[259850]: 2025-10-11 04:18:33.516 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:18:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 305 active+clean; 352 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 36 KiB/s wr, 215 op/s
Oct 11 04:18:34 compute-0 podman[297923]: 2025-10-11 04:18:34.343022457 +0000 UTC m=+0.054326269 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:18:34 compute-0 nova_compute[259850]: 2025-10-11 04:18:34.741 2 DEBUG nova.compute.manager [req-4e9f7358-fe97-4559-bb71-09fcfe1808f3 req-077060fb-bec2-4b5f-9e7e-f41056501e0d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Received event network-changed-04efb511-1fd7-4507-91a8-508780bc5e8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:18:34 compute-0 nova_compute[259850]: 2025-10-11 04:18:34.741 2 DEBUG nova.compute.manager [req-4e9f7358-fe97-4559-bb71-09fcfe1808f3 req-077060fb-bec2-4b5f-9e7e-f41056501e0d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Refreshing instance network info cache due to event network-changed-04efb511-1fd7-4507-91a8-508780bc5e8d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:18:34 compute-0 nova_compute[259850]: 2025-10-11 04:18:34.741 2 DEBUG oslo_concurrency.lockutils [req-4e9f7358-fe97-4559-bb71-09fcfe1808f3 req-077060fb-bec2-4b5f-9e7e-f41056501e0d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-8bfdc99e-9df9-4825-a631-7cd07eff5dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:18:34 compute-0 nova_compute[259850]: 2025-10-11 04:18:34.742 2 DEBUG oslo_concurrency.lockutils [req-4e9f7358-fe97-4559-bb71-09fcfe1808f3 req-077060fb-bec2-4b5f-9e7e-f41056501e0d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-8bfdc99e-9df9-4825-a631-7cd07eff5dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:18:34 compute-0 nova_compute[259850]: 2025-10-11 04:18:34.742 2 DEBUG nova.network.neutron [req-4e9f7358-fe97-4559-bb71-09fcfe1808f3 req-077060fb-bec2-4b5f-9e7e-f41056501e0d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Refreshing network info cache for port 04efb511-1fd7-4507-91a8-508780bc5e8d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:18:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:35.026129) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156315026207, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2040, "num_deletes": 252, "total_data_size": 3240514, "memory_usage": 3294176, "flush_reason": "Manual Compaction"}
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156315035173, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 1897876, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31734, "largest_seqno": 33773, "table_properties": {"data_size": 1891082, "index_size": 3548, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 18010, "raw_average_key_size": 21, "raw_value_size": 1875919, "raw_average_value_size": 2199, "num_data_blocks": 161, "num_entries": 853, "num_filter_entries": 853, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156107, "oldest_key_time": 1760156107, "file_creation_time": 1760156315, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 9052 microseconds, and 4464 cpu microseconds.
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:35.035209) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 1897876 bytes OK
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:35.035225) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:35.036635) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:35.036648) EVENT_LOG_v1 {"time_micros": 1760156315036644, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:35.036663) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 3231862, prev total WAL file size 3231862, number of live WAL files 2.
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:35.037976) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303032' seq:72057594037927935, type:22 .. '6D6772737461740031323533' seq:0, type:0; will stop at (end)
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(1853KB)], [65(10MB)]
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156315038062, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 12461228, "oldest_snapshot_seqno": -1}
Oct 11 04:18:35 compute-0 nova_compute[259850]: 2025-10-11 04:18:35.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:18:35 compute-0 ceph-mon[74273]: pgmap v1647: 305 pgs: 305 active+clean; 352 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 36 KiB/s wr, 215 op/s
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6465 keys, 10443853 bytes, temperature: kUnknown
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156315111854, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 10443853, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10396015, "index_size": 30576, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16197, "raw_key_size": 161191, "raw_average_key_size": 24, "raw_value_size": 10275406, "raw_average_value_size": 1589, "num_data_blocks": 1239, "num_entries": 6465, "num_filter_entries": 6465, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153731, "oldest_key_time": 0, "file_creation_time": 1760156315, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:35.112357) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 10443853 bytes
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:35.113516) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.7 rd, 141.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 10.1 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(12.1) write-amplify(5.5) OK, records in: 6887, records dropped: 422 output_compression: NoCompression
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:35.113544) EVENT_LOG_v1 {"time_micros": 1760156315113531, "job": 36, "event": "compaction_finished", "compaction_time_micros": 73872, "compaction_time_cpu_micros": 53067, "output_level": 6, "num_output_files": 1, "total_output_size": 10443853, "num_input_records": 6887, "num_output_records": 6465, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156315114234, "job": 36, "event": "table_file_deletion", "file_number": 67}
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156315117600, "job": 36, "event": "table_file_deletion", "file_number": 65}
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:35.037832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:35.117741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:35.117750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:35.117754) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:35.117757) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:18:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:35.117760) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:18:35 compute-0 nova_compute[259850]: 2025-10-11 04:18:35.567 2 DEBUG nova.network.neutron [req-4e9f7358-fe97-4559-bb71-09fcfe1808f3 req-077060fb-bec2-4b5f-9e7e-f41056501e0d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Updated VIF entry in instance network info cache for port 04efb511-1fd7-4507-91a8-508780bc5e8d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:18:35 compute-0 nova_compute[259850]: 2025-10-11 04:18:35.567 2 DEBUG nova.network.neutron [req-4e9f7358-fe97-4559-bb71-09fcfe1808f3 req-077060fb-bec2-4b5f-9e7e-f41056501e0d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Updating instance_info_cache with network_info: [{"id": "04efb511-1fd7-4507-91a8-508780bc5e8d", "address": "fa:16:3e:e9:ef:be", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap04efb511-1f", "ovs_interfaceid": "04efb511-1fd7-4507-91a8-508780bc5e8d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:18:35 compute-0 nova_compute[259850]: 2025-10-11 04:18:35.587 2 DEBUG oslo_concurrency.lockutils [req-4e9f7358-fe97-4559-bb71-09fcfe1808f3 req-077060fb-bec2-4b5f-9e7e-f41056501e0d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-8bfdc99e-9df9-4825-a631-7cd07eff5dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:18:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 352 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 25 KiB/s wr, 192 op/s
Oct 11 04:18:36 compute-0 nova_compute[259850]: 2025-10-11 04:18:36.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:36 compute-0 ovn_controller[152025]: 2025-10-11T04:18:36Z|00056|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.10 does not match offer 10.100.0.8
Oct 11 04:18:36 compute-0 ovn_controller[152025]: 2025-10-11T04:18:36Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:78:cd:05 10.100.0.8
Oct 11 04:18:37 compute-0 ceph-mon[74273]: pgmap v1648: 305 pgs: 305 active+clean; 352 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 25 KiB/s wr, 192 op/s
Oct 11 04:18:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 305 active+clean; 357 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 110 KiB/s wr, 204 op/s
Oct 11 04:18:38 compute-0 nova_compute[259850]: 2025-10-11 04:18:38.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:39 compute-0 nova_compute[259850]: 2025-10-11 04:18:39.054 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:18:39 compute-0 ceph-mon[74273]: pgmap v1649: 305 pgs: 305 active+clean; 357 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 110 KiB/s wr, 204 op/s
Oct 11 04:18:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 366 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 513 KiB/s wr, 224 op/s
Oct 11 04:18:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:18:41 compute-0 ceph-mon[74273]: pgmap v1650: 305 pgs: 305 active+clean; 366 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 513 KiB/s wr, 224 op/s
Oct 11 04:18:41 compute-0 ovn_controller[152025]: 2025-10-11T04:18:41Z|00058|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.10 does not match offer 10.100.0.8
Oct 11 04:18:41 compute-0 ovn_controller[152025]: 2025-10-11T04:18:41Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:78:cd:05 10.100.0.8
Oct 11 04:18:41 compute-0 nova_compute[259850]: 2025-10-11 04:18:41.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 366 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 500 KiB/s wr, 119 op/s
Oct 11 04:18:41 compute-0 ovn_controller[152025]: 2025-10-11T04:18:41Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:78:cd:05 10.100.0.8
Oct 11 04:18:41 compute-0 ovn_controller[152025]: 2025-10-11T04:18:41Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:78:cd:05 10.100.0.8
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:42.114639) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156322115526, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 312, "num_deletes": 251, "total_data_size": 124758, "memory_usage": 130920, "flush_reason": "Manual Compaction"}
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156322120064, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 123841, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33774, "largest_seqno": 34085, "table_properties": {"data_size": 121827, "index_size": 242, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5127, "raw_average_key_size": 18, "raw_value_size": 117879, "raw_average_value_size": 422, "num_data_blocks": 11, "num_entries": 279, "num_filter_entries": 279, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156315, "oldest_key_time": 1760156315, "file_creation_time": 1760156322, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 4753 microseconds, and 1888 cpu microseconds.
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:42.120193) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 123841 bytes OK
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:42.120220) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:42.122188) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:42.122213) EVENT_LOG_v1 {"time_micros": 1760156322122206, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:42.122243) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 122539, prev total WAL file size 122539, number of live WAL files 2.
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:42.123070) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(120KB)], [68(10199KB)]
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156322123127, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10567694, "oldest_snapshot_seqno": -1}
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6235 keys, 8780511 bytes, temperature: kUnknown
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156322181442, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8780511, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8736246, "index_size": 27579, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 157099, "raw_average_key_size": 25, "raw_value_size": 8621693, "raw_average_value_size": 1382, "num_data_blocks": 1102, "num_entries": 6235, "num_filter_entries": 6235, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153731, "oldest_key_time": 0, "file_creation_time": 1760156322, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:42.181725) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8780511 bytes
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:42.183243) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.0 rd, 150.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 10.0 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(156.2) write-amplify(70.9) OK, records in: 6744, records dropped: 509 output_compression: NoCompression
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:42.183272) EVENT_LOG_v1 {"time_micros": 1760156322183259, "job": 38, "event": "compaction_finished", "compaction_time_micros": 58400, "compaction_time_cpu_micros": 41043, "output_level": 6, "num_output_files": 1, "total_output_size": 8780511, "num_input_records": 6744, "num_output_records": 6235, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156322183539, "job": 38, "event": "table_file_deletion", "file_number": 70}
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156322186974, "job": 38, "event": "table_file_deletion", "file_number": 68}
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:42.122978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:42.187043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:42.187049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:42.187051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:42.187053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:18:42 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:18:42.187054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:18:43 compute-0 ceph-mon[74273]: pgmap v1651: 305 pgs: 305 active+clean; 366 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 500 KiB/s wr, 119 op/s
Oct 11 04:18:43 compute-0 nova_compute[259850]: 2025-10-11 04:18:43.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:43 compute-0 ovn_controller[152025]: 2025-10-11T04:18:43Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:e9:ef:be 10.100.0.4
Oct 11 04:18:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 370 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 587 KiB/s wr, 165 op/s
Oct 11 04:18:44 compute-0 sudo[297941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:18:44 compute-0 sudo[297941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:18:44 compute-0 sudo[297941]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:44 compute-0 sudo[297966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:18:44 compute-0 sudo[297966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:18:44 compute-0 sudo[297966]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:44 compute-0 sudo[297991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:18:44 compute-0 sudo[297991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:18:44 compute-0 sudo[297991]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:44 compute-0 sudo[298016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 11 04:18:44 compute-0 sudo[298016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:18:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:18:45 compute-0 sudo[298016]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:45 compute-0 ceph-mon[74273]: pgmap v1652: 305 pgs: 305 active+clean; 370 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 587 KiB/s wr, 165 op/s
Oct 11 04:18:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:18:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:18:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:18:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:18:45 compute-0 sudo[298061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:18:45 compute-0 sudo[298061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:18:45 compute-0 sudo[298061]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:45 compute-0 sudo[298086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:18:45 compute-0 sudo[298086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:18:45 compute-0 sudo[298086]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:45 compute-0 sudo[298111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:18:45 compute-0 sudo[298111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:18:45 compute-0 sudo[298111]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:45 compute-0 sudo[298136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:18:45 compute-0 sudo[298136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:18:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1653: 305 pgs: 305 active+clean; 370 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 587 KiB/s wr, 96 op/s
Oct 11 04:18:46 compute-0 sudo[298136]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:18:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:18:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 04:18:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 04:18:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:18:46 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:18:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:18:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:18:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:18:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:18:46 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 3d4e05cd-7732-4a6e-bd62-f5a61219c132 does not exist
Oct 11 04:18:46 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 9cab6678-0264-4027-9236-83c6a9066be7 does not exist
Oct 11 04:18:46 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 0f5c17b0-6e7e-41f6-b6d6-3d21d5222ead does not exist
Oct 11 04:18:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:18:46 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:18:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:18:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:18:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:18:46 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:18:46 compute-0 sudo[298192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:18:46 compute-0 sudo[298192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:18:46 compute-0 sudo[298192]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:46 compute-0 sudo[298217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:18:46 compute-0 sudo[298217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:18:46 compute-0 sudo[298217]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:46 compute-0 sudo[298254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:18:46 compute-0 sudo[298254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:18:46 compute-0 sudo[298254]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:46 compute-0 podman[298241]: 2025-10-11 04:18:46.57704766 +0000 UTC m=+0.106483876 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd)
Oct 11 04:18:46 compute-0 podman[298242]: 2025-10-11 04:18:46.593233178 +0000 UTC m=+0.118100614 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 11 04:18:46 compute-0 sudo[298306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:18:46 compute-0 sudo[298306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:18:46 compute-0 nova_compute[259850]: 2025-10-11 04:18:46.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:47 compute-0 podman[298372]: 2025-10-11 04:18:47.09518786 +0000 UTC m=+0.058973741 container create 2875c1a5164444956309b16db651b7c29d909b621533e1c0ceddc9e710510bf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 04:18:47 compute-0 systemd[1]: Started libpod-conmon-2875c1a5164444956309b16db651b7c29d909b621533e1c0ceddc9e710510bf1.scope.
Oct 11 04:18:47 compute-0 podman[298372]: 2025-10-11 04:18:47.067673091 +0000 UTC m=+0.031459022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:18:47 compute-0 ceph-mon[74273]: pgmap v1653: 305 pgs: 305 active+clean; 370 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 587 KiB/s wr, 96 op/s
Oct 11 04:18:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 04:18:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:18:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:18:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:18:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:18:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:18:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:18:47 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:18:47 compute-0 podman[298372]: 2025-10-11 04:18:47.216812823 +0000 UTC m=+0.180598754 container init 2875c1a5164444956309b16db651b7c29d909b621533e1c0ceddc9e710510bf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_tu, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:18:47 compute-0 podman[298372]: 2025-10-11 04:18:47.229504643 +0000 UTC m=+0.193290484 container start 2875c1a5164444956309b16db651b7c29d909b621533e1c0ceddc9e710510bf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_tu, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 04:18:47 compute-0 podman[298372]: 2025-10-11 04:18:47.232471207 +0000 UTC m=+0.196257128 container attach 2875c1a5164444956309b16db651b7c29d909b621533e1c0ceddc9e710510bf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_tu, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:18:47 compute-0 lucid_tu[298389]: 167 167
Oct 11 04:18:47 compute-0 systemd[1]: libpod-2875c1a5164444956309b16db651b7c29d909b621533e1c0ceddc9e710510bf1.scope: Deactivated successfully.
Oct 11 04:18:47 compute-0 conmon[298389]: conmon 2875c1a5164444956309 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2875c1a5164444956309b16db651b7c29d909b621533e1c0ceddc9e710510bf1.scope/container/memory.events
Oct 11 04:18:47 compute-0 podman[298372]: 2025-10-11 04:18:47.24211237 +0000 UTC m=+0.205898261 container died 2875c1a5164444956309b16db651b7c29d909b621533e1c0ceddc9e710510bf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_tu, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:18:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-befdf66c3f182de54471d05513782837d370ee8757085a1b8e12b3f498f123f3-merged.mount: Deactivated successfully.
Oct 11 04:18:47 compute-0 podman[298372]: 2025-10-11 04:18:47.285825797 +0000 UTC m=+0.249611658 container remove 2875c1a5164444956309b16db651b7c29d909b621533e1c0ceddc9e710510bf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_tu, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:18:47 compute-0 systemd[1]: libpod-conmon-2875c1a5164444956309b16db651b7c29d909b621533e1c0ceddc9e710510bf1.scope: Deactivated successfully.
Oct 11 04:18:47 compute-0 podman[298412]: 2025-10-11 04:18:47.503295484 +0000 UTC m=+0.058521067 container create 1d21e89656c89c925cb3ef6cc220c6b603766b24aec6b061d940eabd680a1a2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 04:18:47 compute-0 systemd[1]: Started libpod-conmon-1d21e89656c89c925cb3ef6cc220c6b603766b24aec6b061d940eabd680a1a2c.scope.
Oct 11 04:18:47 compute-0 podman[298412]: 2025-10-11 04:18:47.481538588 +0000 UTC m=+0.036764161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:18:47 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c743c5bef053e027a8d191d2c6faf2b2c36a26a8075f08e010415c7474503ed6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c743c5bef053e027a8d191d2c6faf2b2c36a26a8075f08e010415c7474503ed6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c743c5bef053e027a8d191d2c6faf2b2c36a26a8075f08e010415c7474503ed6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c743c5bef053e027a8d191d2c6faf2b2c36a26a8075f08e010415c7474503ed6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c743c5bef053e027a8d191d2c6faf2b2c36a26a8075f08e010415c7474503ed6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:18:47 compute-0 podman[298412]: 2025-10-11 04:18:47.631869465 +0000 UTC m=+0.187095068 container init 1d21e89656c89c925cb3ef6cc220c6b603766b24aec6b061d940eabd680a1a2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 04:18:47 compute-0 podman[298412]: 2025-10-11 04:18:47.645939673 +0000 UTC m=+0.201165216 container start 1d21e89656c89c925cb3ef6cc220c6b603766b24aec6b061d940eabd680a1a2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sanderson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:18:47 compute-0 podman[298412]: 2025-10-11 04:18:47.667072441 +0000 UTC m=+0.222298024 container attach 1d21e89656c89c925cb3ef6cc220c6b603766b24aec6b061d940eabd680a1a2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 04:18:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 305 active+clean; 370 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 594 KiB/s wr, 97 op/s
Oct 11 04:18:47 compute-0 ovn_controller[152025]: 2025-10-11T04:18:47Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:e9:ef:be 10.100.0.4
Oct 11 04:18:48 compute-0 nova_compute[259850]: 2025-10-11 04:18:48.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:48 compute-0 ovn_controller[152025]: 2025-10-11T04:18:48Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e9:ef:be 10.100.0.4
Oct 11 04:18:48 compute-0 ovn_controller[152025]: 2025-10-11T04:18:48Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e9:ef:be 10.100.0.4
Oct 11 04:18:48 compute-0 mystifying_sanderson[298428]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:18:48 compute-0 mystifying_sanderson[298428]: --> relative data size: 1.0
Oct 11 04:18:48 compute-0 mystifying_sanderson[298428]: --> All data devices are unavailable
Oct 11 04:18:48 compute-0 systemd[1]: libpod-1d21e89656c89c925cb3ef6cc220c6b603766b24aec6b061d940eabd680a1a2c.scope: Deactivated successfully.
Oct 11 04:18:48 compute-0 podman[298412]: 2025-10-11 04:18:48.882991127 +0000 UTC m=+1.438216710 container died 1d21e89656c89c925cb3ef6cc220c6b603766b24aec6b061d940eabd680a1a2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sanderson, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 04:18:48 compute-0 systemd[1]: libpod-1d21e89656c89c925cb3ef6cc220c6b603766b24aec6b061d940eabd680a1a2c.scope: Consumed 1.168s CPU time.
Oct 11 04:18:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-c743c5bef053e027a8d191d2c6faf2b2c36a26a8075f08e010415c7474503ed6-merged.mount: Deactivated successfully.
Oct 11 04:18:48 compute-0 podman[298412]: 2025-10-11 04:18:48.952267828 +0000 UTC m=+1.507493411 container remove 1d21e89656c89c925cb3ef6cc220c6b603766b24aec6b061d940eabd680a1a2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:18:48 compute-0 systemd[1]: libpod-conmon-1d21e89656c89c925cb3ef6cc220c6b603766b24aec6b061d940eabd680a1a2c.scope: Deactivated successfully.
Oct 11 04:18:49 compute-0 sudo[298306]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:49 compute-0 sudo[298469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:18:49 compute-0 sudo[298469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:18:49 compute-0 sudo[298469]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:49 compute-0 sudo[298494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:18:49 compute-0 sudo[298494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:18:49 compute-0 sudo[298494]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:49 compute-0 ceph-mon[74273]: pgmap v1654: 305 pgs: 305 active+clean; 370 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 594 KiB/s wr, 97 op/s
Oct 11 04:18:49 compute-0 sudo[298519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:18:49 compute-0 sudo[298519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:18:49 compute-0 sudo[298519]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:49 compute-0 sudo[298544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 04:18:49 compute-0 sudo[298544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:18:49 compute-0 podman[298611]: 2025-10-11 04:18:49.774918159 +0000 UTC m=+0.071301939 container create fd12be808df6de4b181e5a30e7db32c74d295b30b480adfb52997e8e22424b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:18:49 compute-0 systemd[1]: Started libpod-conmon-fd12be808df6de4b181e5a30e7db32c74d295b30b480adfb52997e8e22424b86.scope.
Oct 11 04:18:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 370 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 520 KiB/s wr, 85 op/s
Oct 11 04:18:49 compute-0 podman[298611]: 2025-10-11 04:18:49.744016504 +0000 UTC m=+0.040400334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:18:49 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:18:49 compute-0 podman[298611]: 2025-10-11 04:18:49.889384369 +0000 UTC m=+0.185768209 container init fd12be808df6de4b181e5a30e7db32c74d295b30b480adfb52997e8e22424b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:18:49 compute-0 podman[298611]: 2025-10-11 04:18:49.900450793 +0000 UTC m=+0.196834573 container start fd12be808df6de4b181e5a30e7db32c74d295b30b480adfb52997e8e22424b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bell, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:18:49 compute-0 podman[298611]: 2025-10-11 04:18:49.904476677 +0000 UTC m=+0.200860507 container attach fd12be808df6de4b181e5a30e7db32c74d295b30b480adfb52997e8e22424b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 04:18:49 compute-0 thirsty_bell[298627]: 167 167
Oct 11 04:18:49 compute-0 systemd[1]: libpod-fd12be808df6de4b181e5a30e7db32c74d295b30b480adfb52997e8e22424b86.scope: Deactivated successfully.
Oct 11 04:18:49 compute-0 conmon[298627]: conmon fd12be808df6de4b181e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fd12be808df6de4b181e5a30e7db32c74d295b30b480adfb52997e8e22424b86.scope/container/memory.events
Oct 11 04:18:49 compute-0 podman[298611]: 2025-10-11 04:18:49.910306192 +0000 UTC m=+0.206689972 container died fd12be808df6de4b181e5a30e7db32c74d295b30b480adfb52997e8e22424b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:18:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-b221c5bf398976c4a7ad11eb806002f0e4ab67db9a18f7cbc741c86c1da571dc-merged.mount: Deactivated successfully.
Oct 11 04:18:49 compute-0 podman[298611]: 2025-10-11 04:18:49.969919219 +0000 UTC m=+0.266302999 container remove fd12be808df6de4b181e5a30e7db32c74d295b30b480adfb52997e8e22424b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:18:49 compute-0 systemd[1]: libpod-conmon-fd12be808df6de4b181e5a30e7db32c74d295b30b480adfb52997e8e22424b86.scope: Deactivated successfully.
Oct 11 04:18:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:18:50 compute-0 podman[298650]: 2025-10-11 04:18:50.242488857 +0000 UTC m=+0.071463435 container create b93d8ee1798a1ef37ad06a00284875a1b0fbc0be4f9338ba033f33ffcb679036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:18:50 compute-0 systemd[1]: Started libpod-conmon-b93d8ee1798a1ef37ad06a00284875a1b0fbc0be4f9338ba033f33ffcb679036.scope.
Oct 11 04:18:50 compute-0 podman[298650]: 2025-10-11 04:18:50.215217084 +0000 UTC m=+0.044191722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:18:50 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:18:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d7fa7d420bbecc2dcca81ee90cc5a3b13c011af42362419707a127a176fcf1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:18:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d7fa7d420bbecc2dcca81ee90cc5a3b13c011af42362419707a127a176fcf1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:18:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d7fa7d420bbecc2dcca81ee90cc5a3b13c011af42362419707a127a176fcf1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:18:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d7fa7d420bbecc2dcca81ee90cc5a3b13c011af42362419707a127a176fcf1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:18:50 compute-0 podman[298650]: 2025-10-11 04:18:50.353434258 +0000 UTC m=+0.182408846 container init b93d8ee1798a1ef37ad06a00284875a1b0fbc0be4f9338ba033f33ffcb679036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:18:50 compute-0 podman[298650]: 2025-10-11 04:18:50.368051082 +0000 UTC m=+0.197025670 container start b93d8ee1798a1ef37ad06a00284875a1b0fbc0be4f9338ba033f33ffcb679036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 04:18:50 compute-0 podman[298650]: 2025-10-11 04:18:50.373765353 +0000 UTC m=+0.202739911 container attach b93d8ee1798a1ef37ad06a00284875a1b0fbc0be4f9338ba033f33ffcb679036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bhabha, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 04:18:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:18:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3571645406' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:18:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:18:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3571645406' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:18:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:18:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:18:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:18:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:18:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:18:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:18:51 compute-0 eager_bhabha[298666]: {
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:     "0": [
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:         {
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "devices": [
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "/dev/loop3"
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             ],
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "lv_name": "ceph_lv0",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "lv_size": "21470642176",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "name": "ceph_lv0",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "tags": {
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.cluster_name": "ceph",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.crush_device_class": "",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.encrypted": "0",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.osd_id": "0",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.type": "block",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.vdo": "0"
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             },
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "type": "block",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "vg_name": "ceph_vg0"
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:         }
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:     ],
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:     "1": [
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:         {
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "devices": [
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "/dev/loop4"
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             ],
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "lv_name": "ceph_lv1",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "lv_size": "21470642176",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "name": "ceph_lv1",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "tags": {
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.cluster_name": "ceph",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.crush_device_class": "",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.encrypted": "0",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.osd_id": "1",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.type": "block",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.vdo": "0"
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             },
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "type": "block",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "vg_name": "ceph_vg1"
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:         }
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:     ],
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:     "2": [
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:         {
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "devices": [
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "/dev/loop5"
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             ],
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "lv_name": "ceph_lv2",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "lv_size": "21470642176",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "name": "ceph_lv2",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "tags": {
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.cluster_name": "ceph",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.crush_device_class": "",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.encrypted": "0",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.osd_id": "2",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.type": "block",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:                 "ceph.vdo": "0"
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             },
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "type": "block",
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:             "vg_name": "ceph_vg2"
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:         }
Oct 11 04:18:51 compute-0 eager_bhabha[298666]:     ]
Oct 11 04:18:51 compute-0 eager_bhabha[298666]: }
Oct 11 04:18:51 compute-0 systemd[1]: libpod-b93d8ee1798a1ef37ad06a00284875a1b0fbc0be4f9338ba033f33ffcb679036.scope: Deactivated successfully.
Oct 11 04:18:51 compute-0 podman[298650]: 2025-10-11 04:18:51.131597619 +0000 UTC m=+0.960572207 container died b93d8ee1798a1ef37ad06a00284875a1b0fbc0be4f9338ba033f33ffcb679036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 04:18:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3d7fa7d420bbecc2dcca81ee90cc5a3b13c011af42362419707a127a176fcf1-merged.mount: Deactivated successfully.
Oct 11 04:18:51 compute-0 podman[298650]: 2025-10-11 04:18:51.201110438 +0000 UTC m=+1.030084986 container remove b93d8ee1798a1ef37ad06a00284875a1b0fbc0be4f9338ba033f33ffcb679036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bhabha, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 04:18:51 compute-0 ceph-mon[74273]: pgmap v1655: 305 pgs: 305 active+clean; 370 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 520 KiB/s wr, 85 op/s
Oct 11 04:18:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3571645406' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:18:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3571645406' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:18:51 compute-0 systemd[1]: libpod-conmon-b93d8ee1798a1ef37ad06a00284875a1b0fbc0be4f9338ba033f33ffcb679036.scope: Deactivated successfully.
Oct 11 04:18:51 compute-0 sudo[298544]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:51 compute-0 sudo[298689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:18:51 compute-0 sudo[298689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:18:51 compute-0 sudo[298689]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:51 compute-0 sudo[298714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:18:51 compute-0 sudo[298714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:18:51 compute-0 sudo[298714]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:51 compute-0 sudo[298739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:18:51 compute-0 sudo[298739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:18:51 compute-0 sudo[298739]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:51 compute-0 sudo[298764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 04:18:51 compute-0 sudo[298764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:18:51 compute-0 nova_compute[259850]: 2025-10-11 04:18:51.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 305 active+clean; 370 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 670 KiB/s rd, 105 KiB/s wr, 47 op/s
Oct 11 04:18:52 compute-0 podman[298829]: 2025-10-11 04:18:52.117264756 +0000 UTC m=+0.073500642 container create 6043df08200f3e14a5c09fce184fdf99774c2933873dbefb370289f2882e0028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 04:18:52 compute-0 systemd[1]: Started libpod-conmon-6043df08200f3e14a5c09fce184fdf99774c2933873dbefb370289f2882e0028.scope.
Oct 11 04:18:52 compute-0 podman[298829]: 2025-10-11 04:18:52.085787275 +0000 UTC m=+0.042023231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:18:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:18:52 compute-0 podman[298829]: 2025-10-11 04:18:52.199642958 +0000 UTC m=+0.155878914 container init 6043df08200f3e14a5c09fce184fdf99774c2933873dbefb370289f2882e0028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 04:18:52 compute-0 podman[298829]: 2025-10-11 04:18:52.20817877 +0000 UTC m=+0.164414646 container start 6043df08200f3e14a5c09fce184fdf99774c2933873dbefb370289f2882e0028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:18:52 compute-0 podman[298829]: 2025-10-11 04:18:52.211321459 +0000 UTC m=+0.167557345 container attach 6043df08200f3e14a5c09fce184fdf99774c2933873dbefb370289f2882e0028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 04:18:52 compute-0 mystifying_sinoussi[298846]: 167 167
Oct 11 04:18:52 compute-0 systemd[1]: libpod-6043df08200f3e14a5c09fce184fdf99774c2933873dbefb370289f2882e0028.scope: Deactivated successfully.
Oct 11 04:18:52 compute-0 podman[298829]: 2025-10-11 04:18:52.213657795 +0000 UTC m=+0.169893711 container died 6043df08200f3e14a5c09fce184fdf99774c2933873dbefb370289f2882e0028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sinoussi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:18:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ad583bfa0ff4110e699bab3756940b0eda23ac14a9d31c95aef822d28780431-merged.mount: Deactivated successfully.
Oct 11 04:18:52 compute-0 podman[298829]: 2025-10-11 04:18:52.25903935 +0000 UTC m=+0.215275226 container remove 6043df08200f3e14a5c09fce184fdf99774c2933873dbefb370289f2882e0028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:18:52 compute-0 systemd[1]: libpod-conmon-6043df08200f3e14a5c09fce184fdf99774c2933873dbefb370289f2882e0028.scope: Deactivated successfully.
Oct 11 04:18:52 compute-0 podman[298870]: 2025-10-11 04:18:52.468522151 +0000 UTC m=+0.041271950 container create f70036fdcb6936d0a5cf4ad057a4e6cdc71cc0a818647656414dd69075e0db53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_khorana, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 04:18:52 compute-0 systemd[1]: Started libpod-conmon-f70036fdcb6936d0a5cf4ad057a4e6cdc71cc0a818647656414dd69075e0db53.scope.
Oct 11 04:18:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70d95aefbffc3a476b0a873c58eea52d0bbc775095acda1576f6f2243fdf824c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70d95aefbffc3a476b0a873c58eea52d0bbc775095acda1576f6f2243fdf824c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70d95aefbffc3a476b0a873c58eea52d0bbc775095acda1576f6f2243fdf824c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70d95aefbffc3a476b0a873c58eea52d0bbc775095acda1576f6f2243fdf824c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:18:52 compute-0 podman[298870]: 2025-10-11 04:18:52.454088702 +0000 UTC m=+0.026838521 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:18:52 compute-0 podman[298870]: 2025-10-11 04:18:52.554392552 +0000 UTC m=+0.127142401 container init f70036fdcb6936d0a5cf4ad057a4e6cdc71cc0a818647656414dd69075e0db53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_khorana, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:18:52 compute-0 podman[298870]: 2025-10-11 04:18:52.563075458 +0000 UTC m=+0.135825267 container start f70036fdcb6936d0a5cf4ad057a4e6cdc71cc0a818647656414dd69075e0db53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 04:18:52 compute-0 podman[298870]: 2025-10-11 04:18:52.566465524 +0000 UTC m=+0.139215333 container attach f70036fdcb6936d0a5cf4ad057a4e6cdc71cc0a818647656414dd69075e0db53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:18:53 compute-0 ceph-mon[74273]: pgmap v1656: 305 pgs: 305 active+clean; 370 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 670 KiB/s rd, 105 KiB/s wr, 47 op/s
Oct 11 04:18:53 compute-0 nova_compute[259850]: 2025-10-11 04:18:53.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:53 compute-0 objective_khorana[298887]: {
Oct 11 04:18:53 compute-0 objective_khorana[298887]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 04:18:53 compute-0 objective_khorana[298887]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:18:53 compute-0 objective_khorana[298887]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:18:53 compute-0 objective_khorana[298887]:         "osd_id": 1,
Oct 11 04:18:53 compute-0 objective_khorana[298887]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:18:53 compute-0 objective_khorana[298887]:         "type": "bluestore"
Oct 11 04:18:53 compute-0 objective_khorana[298887]:     },
Oct 11 04:18:53 compute-0 objective_khorana[298887]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 04:18:53 compute-0 objective_khorana[298887]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:18:53 compute-0 objective_khorana[298887]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:18:53 compute-0 objective_khorana[298887]:         "osd_id": 2,
Oct 11 04:18:53 compute-0 objective_khorana[298887]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:18:53 compute-0 objective_khorana[298887]:         "type": "bluestore"
Oct 11 04:18:53 compute-0 objective_khorana[298887]:     },
Oct 11 04:18:53 compute-0 objective_khorana[298887]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 04:18:53 compute-0 objective_khorana[298887]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:18:53 compute-0 objective_khorana[298887]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:18:53 compute-0 objective_khorana[298887]:         "osd_id": 0,
Oct 11 04:18:53 compute-0 objective_khorana[298887]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:18:53 compute-0 objective_khorana[298887]:         "type": "bluestore"
Oct 11 04:18:53 compute-0 objective_khorana[298887]:     }
Oct 11 04:18:53 compute-0 objective_khorana[298887]: }
Oct 11 04:18:53 compute-0 systemd[1]: libpod-f70036fdcb6936d0a5cf4ad057a4e6cdc71cc0a818647656414dd69075e0db53.scope: Deactivated successfully.
Oct 11 04:18:53 compute-0 podman[298870]: 2025-10-11 04:18:53.654533729 +0000 UTC m=+1.227283588 container died f70036fdcb6936d0a5cf4ad057a4e6cdc71cc0a818647656414dd69075e0db53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:18:53 compute-0 systemd[1]: libpod-f70036fdcb6936d0a5cf4ad057a4e6cdc71cc0a818647656414dd69075e0db53.scope: Consumed 1.083s CPU time.
Oct 11 04:18:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-70d95aefbffc3a476b0a873c58eea52d0bbc775095acda1576f6f2243fdf824c-merged.mount: Deactivated successfully.
Oct 11 04:18:53 compute-0 podman[298870]: 2025-10-11 04:18:53.7379186 +0000 UTC m=+1.310668419 container remove f70036fdcb6936d0a5cf4ad057a4e6cdc71cc0a818647656414dd69075e0db53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_khorana, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:18:53 compute-0 systemd[1]: libpod-conmon-f70036fdcb6936d0a5cf4ad057a4e6cdc71cc0a818647656414dd69075e0db53.scope: Deactivated successfully.
Oct 11 04:18:53 compute-0 sudo[298764]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:18:53 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:18:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:18:53 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:18:53 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev f240242d-103a-461e-ae6a-85a516a8cf76 does not exist
Oct 11 04:18:53 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 1e136213-8642-4512-a23c-b2f66699eb75 does not exist
Oct 11 04:18:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 305 active+clean; 370 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 670 KiB/s rd, 105 KiB/s wr, 47 op/s
Oct 11 04:18:53 compute-0 sudo[298933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:18:53 compute-0 sudo[298933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:18:53 compute-0 sudo[298933]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:53 compute-0 sudo[298958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:18:53 compute-0 sudo[298958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:18:53 compute-0 sudo[298958]: pam_unix(sudo:session): session closed for user root
Oct 11 04:18:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:18:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:18:54 compute-0 ceph-mon[74273]: pgmap v1657: 305 pgs: 305 active+clean; 370 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 670 KiB/s rd, 105 KiB/s wr, 47 op/s
Oct 11 04:18:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:18:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 305 active+clean; 370 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 18 KiB/s wr, 1 op/s
Oct 11 04:18:56 compute-0 nova_compute[259850]: 2025-10-11 04:18:56.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:56 compute-0 ceph-mon[74273]: pgmap v1658: 305 pgs: 305 active+clean; 370 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 18 KiB/s wr, 1 op/s
Oct 11 04:18:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1659: 305 pgs: 305 active+clean; 374 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 63 KiB/s wr, 3 op/s
Oct 11 04:18:58 compute-0 sshd-session[297394]: Connection reset by 223.93.8.66 port 47114 [preauth]
Oct 11 04:18:58 compute-0 nova_compute[259850]: 2025-10-11 04:18:58.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:18:58 compute-0 ceph-mon[74273]: pgmap v1659: 305 pgs: 305 active+clean; 374 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 63 KiB/s wr, 3 op/s
Oct 11 04:18:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 374 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 69 KiB/s wr, 5 op/s
Oct 11 04:19:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:19:00 compute-0 ceph-mon[74273]: pgmap v1660: 305 pgs: 305 active+clean; 374 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 69 KiB/s wr, 5 op/s
Oct 11 04:19:01 compute-0 podman[298983]: 2025-10-11 04:19:01.460825002 +0000 UTC m=+0.154527056 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller)
Oct 11 04:19:01 compute-0 nova_compute[259850]: 2025-10-11 04:19:01.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1661: 305 pgs: 305 active+clean; 374 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 58 KiB/s wr, 5 op/s
Oct 11 04:19:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:02.453 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:61:6f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '92:f1:b6:e4:f1:16'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:19:02 compute-0 nova_compute[259850]: 2025-10-11 04:19:02.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:02.455 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:19:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:02.456 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8a473e03-2208-47ae-afcd-05ad744a5969, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:19:02 compute-0 nova_compute[259850]: 2025-10-11 04:19:02.710 2 DEBUG nova.compute.manager [req-04304bf2-27c7-4105-b6d0-734a1a19b1c0 req-93d68dae-146d-4bb8-abfd-99780d69fd39 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Received event network-changed-ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:19:02 compute-0 nova_compute[259850]: 2025-10-11 04:19:02.711 2 DEBUG nova.compute.manager [req-04304bf2-27c7-4105-b6d0-734a1a19b1c0 req-93d68dae-146d-4bb8-abfd-99780d69fd39 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Refreshing instance network info cache due to event network-changed-ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:19:02 compute-0 nova_compute[259850]: 2025-10-11 04:19:02.711 2 DEBUG oslo_concurrency.lockutils [req-04304bf2-27c7-4105-b6d0-734a1a19b1c0 req-93d68dae-146d-4bb8-abfd-99780d69fd39 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-68b44a2f-a694-4458-9a40-89e194a02624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:19:02 compute-0 nova_compute[259850]: 2025-10-11 04:19:02.711 2 DEBUG oslo_concurrency.lockutils [req-04304bf2-27c7-4105-b6d0-734a1a19b1c0 req-93d68dae-146d-4bb8-abfd-99780d69fd39 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-68b44a2f-a694-4458-9a40-89e194a02624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:19:02 compute-0 nova_compute[259850]: 2025-10-11 04:19:02.711 2 DEBUG nova.network.neutron [req-04304bf2-27c7-4105-b6d0-734a1a19b1c0 req-93d68dae-146d-4bb8-abfd-99780d69fd39 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Refreshing network info cache for port ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:19:02 compute-0 nova_compute[259850]: 2025-10-11 04:19:02.762 2 DEBUG oslo_concurrency.lockutils [None req-2ca4f2b4-2f05-4f06-8ab9-bd4ed7e6ef71 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "68b44a2f-a694-4458-9a40-89e194a02624" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:19:02 compute-0 nova_compute[259850]: 2025-10-11 04:19:02.763 2 DEBUG oslo_concurrency.lockutils [None req-2ca4f2b4-2f05-4f06-8ab9-bd4ed7e6ef71 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "68b44a2f-a694-4458-9a40-89e194a02624" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:19:02 compute-0 nova_compute[259850]: 2025-10-11 04:19:02.763 2 DEBUG oslo_concurrency.lockutils [None req-2ca4f2b4-2f05-4f06-8ab9-bd4ed7e6ef71 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "68b44a2f-a694-4458-9a40-89e194a02624-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:19:02 compute-0 nova_compute[259850]: 2025-10-11 04:19:02.764 2 DEBUG oslo_concurrency.lockutils [None req-2ca4f2b4-2f05-4f06-8ab9-bd4ed7e6ef71 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "68b44a2f-a694-4458-9a40-89e194a02624-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:19:02 compute-0 nova_compute[259850]: 2025-10-11 04:19:02.764 2 DEBUG oslo_concurrency.lockutils [None req-2ca4f2b4-2f05-4f06-8ab9-bd4ed7e6ef71 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "68b44a2f-a694-4458-9a40-89e194a02624-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:19:02 compute-0 nova_compute[259850]: 2025-10-11 04:19:02.766 2 INFO nova.compute.manager [None req-2ca4f2b4-2f05-4f06-8ab9-bd4ed7e6ef71 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Terminating instance
Oct 11 04:19:02 compute-0 nova_compute[259850]: 2025-10-11 04:19:02.768 2 DEBUG nova.compute.manager [None req-2ca4f2b4-2f05-4f06-8ab9-bd4ed7e6ef71 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:19:02 compute-0 kernel: tapae4b6054-7d (unregistering): left promiscuous mode
Oct 11 04:19:02 compute-0 NetworkManager[44920]: <info>  [1760156342.8358] device (tapae4b6054-7d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:19:02 compute-0 nova_compute[259850]: 2025-10-11 04:19:02.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:02 compute-0 ovn_controller[152025]: 2025-10-11T04:19:02Z|00252|binding|INFO|Releasing lport ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3 from this chassis (sb_readonly=0)
Oct 11 04:19:02 compute-0 ovn_controller[152025]: 2025-10-11T04:19:02Z|00253|binding|INFO|Setting lport ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3 down in Southbound
Oct 11 04:19:02 compute-0 ovn_controller[152025]: 2025-10-11T04:19:02Z|00254|binding|INFO|Removing iface tapae4b6054-7d ovn-installed in OVS
Oct 11 04:19:02 compute-0 nova_compute[259850]: 2025-10-11 04:19:02.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:02.858 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:cd:05 10.100.0.8'], port_security=['fa:16:3e:78:cd:05 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '68b44a2f-a694-4458-9a40-89e194a02624', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09ba33ef4bd447699d74946c58839b2d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '802c56f7-efb1-44ec-9107-b20b0a13ea5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27b77226-c1f8-485e-969b-bae9a3bf7ceb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:19:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:02.860 161902 INFO neutron.agent.ovn.metadata.agent [-] Port ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3 in datapath b6cd64a2-af0b-4f57-b84c-cbc9cde5251d unbound from our chassis
Oct 11 04:19:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:02.864 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b6cd64a2-af0b-4f57-b84c-cbc9cde5251d
Oct 11 04:19:02 compute-0 nova_compute[259850]: 2025-10-11 04:19:02.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:02.890 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[6e9bbd67-2c2c-49b8-a5bc-6b21dcbeab36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:19:02 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000019.scope: Deactivated successfully.
Oct 11 04:19:02 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000019.scope: Consumed 14.431s CPU time.
Oct 11 04:19:02 compute-0 systemd-machined[214869]: Machine qemu-25-instance-00000019 terminated.
Oct 11 04:19:02 compute-0 ceph-mon[74273]: pgmap v1661: 305 pgs: 305 active+clean; 374 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 58 KiB/s wr, 5 op/s
Oct 11 04:19:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:02.937 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[1e014fb2-a15c-4138-be51-66cf8e5737df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:19:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:02.942 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[2bae2766-5820-4dff-8916-81ef55fed2f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:19:02 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:02.983 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[01705c07-8e62-4748-9103-8ca78408e227]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:19:03 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:03.003 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[9faa4cbe-16ec-4d47-8316-c757b0927858]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6cd64a2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:9f:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465102, 'reachable_time': 34708, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299023, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:19:03 compute-0 nova_compute[259850]: 2025-10-11 04:19:03.014 2 INFO nova.virt.libvirt.driver [-] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Instance destroyed successfully.
Oct 11 04:19:03 compute-0 nova_compute[259850]: 2025-10-11 04:19:03.015 2 DEBUG nova.objects.instance [None req-2ca4f2b4-2f05-4f06-8ab9-bd4ed7e6ef71 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lazy-loading 'resources' on Instance uuid 68b44a2f-a694-4458-9a40-89e194a02624 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:19:03 compute-0 nova_compute[259850]: 2025-10-11 04:19:03.029 2 DEBUG nova.virt.libvirt.vif [None req-2ca4f2b4-2f05-4f06-8ab9-bd4ed7e6ef71 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:18:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-592058258',display_name='tempest-TestVolumeBootPattern-server-592058258',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-592058258',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPDNAGL8Dkg4WTlPf45cAzyjNlMaZ9CdFtcbPahhttGWfFDtL3wJAU2pqWIpDJ427A+TFzstq4HW+M8hdPFbiZnk9MFQHh3rRb7amRkcTpIWOFEgpDmf92zhQgzfL3p2ZA==',key_name='tempest-TestVolumeBootPattern-2018721323',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:18:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='09ba33ef4bd447699d74946c58839b2d',ramdisk_id='',reservation_id='r-zxt09pvy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-771726270',owner_user_name='tempest-TestVolumeBootPattern-771726270-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:18:25Z,user_data=None,user_id='2a330a845d62440c871f80eda2546881',uuid=68b44a2f-a694-4458-9a40-89e194a02624,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3", "address": "fa:16:3e:78:cd:05", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae4b6054-7d", "ovs_interfaceid": "ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:19:03 compute-0 nova_compute[259850]: 2025-10-11 04:19:03.030 2 DEBUG nova.network.os_vif_util [None req-2ca4f2b4-2f05-4f06-8ab9-bd4ed7e6ef71 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converting VIF {"id": "ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3", "address": "fa:16:3e:78:cd:05", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae4b6054-7d", "ovs_interfaceid": "ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:19:03 compute-0 nova_compute[259850]: 2025-10-11 04:19:03.031 2 DEBUG nova.network.os_vif_util [None req-2ca4f2b4-2f05-4f06-8ab9-bd4ed7e6ef71 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:78:cd:05,bridge_name='br-int',has_traffic_filtering=True,id=ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae4b6054-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:19:03 compute-0 nova_compute[259850]: 2025-10-11 04:19:03.033 2 DEBUG os_vif [None req-2ca4f2b4-2f05-4f06-8ab9-bd4ed7e6ef71 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:78:cd:05,bridge_name='br-int',has_traffic_filtering=True,id=ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae4b6054-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:19:03 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:03.034 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[e7bc1ae1-a6f0-4a7a-b88a-489fda91c129]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb6cd64a2-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465114, 'tstamp': 465114}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299031, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb6cd64a2-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465118, 'tstamp': 465118}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299031, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:19:03 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:03.036 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6cd64a2-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:19:03 compute-0 nova_compute[259850]: 2025-10-11 04:19:03.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:03 compute-0 nova_compute[259850]: 2025-10-11 04:19:03.037 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapae4b6054-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:19:03 compute-0 nova_compute[259850]: 2025-10-11 04:19:03.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:19:03 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:03.042 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6cd64a2-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:19:03 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:03.044 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:19:03 compute-0 nova_compute[259850]: 2025-10-11 04:19:03.044 2 INFO os_vif [None req-2ca4f2b4-2f05-4f06-8ab9-bd4ed7e6ef71 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:78:cd:05,bridge_name='br-int',has_traffic_filtering=True,id=ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae4b6054-7d')
Oct 11 04:19:03 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:03.045 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb6cd64a2-a0, col_values=(('external_ids', {'iface-id': 'c2cbaf15-a50c-40b8-9f65-12b11618e7fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:19:03 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:03.045 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:19:03 compute-0 nova_compute[259850]: 2025-10-11 04:19:03.258 2 INFO nova.virt.libvirt.driver [None req-2ca4f2b4-2f05-4f06-8ab9-bd4ed7e6ef71 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Deleting instance files /var/lib/nova/instances/68b44a2f-a694-4458-9a40-89e194a02624_del
Oct 11 04:19:03 compute-0 nova_compute[259850]: 2025-10-11 04:19:03.263 2 INFO nova.virt.libvirt.driver [None req-2ca4f2b4-2f05-4f06-8ab9-bd4ed7e6ef71 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Deletion of /var/lib/nova/instances/68b44a2f-a694-4458-9a40-89e194a02624_del complete
Oct 11 04:19:03 compute-0 nova_compute[259850]: 2025-10-11 04:19:03.326 2 INFO nova.compute.manager [None req-2ca4f2b4-2f05-4f06-8ab9-bd4ed7e6ef71 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Took 0.56 seconds to destroy the instance on the hypervisor.
Oct 11 04:19:03 compute-0 nova_compute[259850]: 2025-10-11 04:19:03.327 2 DEBUG oslo.service.loopingcall [None req-2ca4f2b4-2f05-4f06-8ab9-bd4ed7e6ef71 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:19:03 compute-0 nova_compute[259850]: 2025-10-11 04:19:03.329 2 DEBUG nova.compute.manager [-] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:19:03 compute-0 nova_compute[259850]: 2025-10-11 04:19:03.329 2 DEBUG nova.network.neutron [-] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:19:03 compute-0 nova_compute[259850]: 2025-10-11 04:19:03.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 374 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 250 KiB/s rd, 61 KiB/s wr, 13 op/s
Oct 11 04:19:04 compute-0 nova_compute[259850]: 2025-10-11 04:19:04.045 2 DEBUG nova.network.neutron [-] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:19:04 compute-0 nova_compute[259850]: 2025-10-11 04:19:04.064 2 INFO nova.compute.manager [-] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Took 0.74 seconds to deallocate network for instance.
Oct 11 04:19:04 compute-0 nova_compute[259850]: 2025-10-11 04:19:04.265 2 INFO nova.compute.manager [None req-2ca4f2b4-2f05-4f06-8ab9-bd4ed7e6ef71 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Took 0.20 seconds to detach 1 volumes for instance.
Oct 11 04:19:04 compute-0 nova_compute[259850]: 2025-10-11 04:19:04.323 2 DEBUG oslo_concurrency.lockutils [None req-2ca4f2b4-2f05-4f06-8ab9-bd4ed7e6ef71 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:19:04 compute-0 nova_compute[259850]: 2025-10-11 04:19:04.324 2 DEBUG oslo_concurrency.lockutils [None req-2ca4f2b4-2f05-4f06-8ab9-bd4ed7e6ef71 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:19:04 compute-0 nova_compute[259850]: 2025-10-11 04:19:04.423 2 DEBUG nova.network.neutron [req-04304bf2-27c7-4105-b6d0-734a1a19b1c0 req-93d68dae-146d-4bb8-abfd-99780d69fd39 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Updated VIF entry in instance network info cache for port ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:19:04 compute-0 nova_compute[259850]: 2025-10-11 04:19:04.424 2 DEBUG nova.network.neutron [req-04304bf2-27c7-4105-b6d0-734a1a19b1c0 req-93d68dae-146d-4bb8-abfd-99780d69fd39 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Updating instance_info_cache with network_info: [{"id": "ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3", "address": "fa:16:3e:78:cd:05", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae4b6054-7d", "ovs_interfaceid": "ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:19:04 compute-0 nova_compute[259850]: 2025-10-11 04:19:04.439 2 DEBUG oslo_concurrency.processutils [None req-2ca4f2b4-2f05-4f06-8ab9-bd4ed7e6ef71 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:19:04 compute-0 nova_compute[259850]: 2025-10-11 04:19:04.478 2 DEBUG oslo_concurrency.lockutils [req-04304bf2-27c7-4105-b6d0-734a1a19b1c0 req-93d68dae-146d-4bb8-abfd-99780d69fd39 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-68b44a2f-a694-4458-9a40-89e194a02624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:19:04 compute-0 nova_compute[259850]: 2025-10-11 04:19:04.827 2 DEBUG nova.compute.manager [req-c2adf082-8f7f-4c4a-9bdb-0a0ca7de5bc0 req-35da3624-7b58-42cf-954b-848a37561a6d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Received event network-vif-unplugged-ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:19:04 compute-0 nova_compute[259850]: 2025-10-11 04:19:04.829 2 DEBUG oslo_concurrency.lockutils [req-c2adf082-8f7f-4c4a-9bdb-0a0ca7de5bc0 req-35da3624-7b58-42cf-954b-848a37561a6d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "68b44a2f-a694-4458-9a40-89e194a02624-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:19:04 compute-0 nova_compute[259850]: 2025-10-11 04:19:04.829 2 DEBUG oslo_concurrency.lockutils [req-c2adf082-8f7f-4c4a-9bdb-0a0ca7de5bc0 req-35da3624-7b58-42cf-954b-848a37561a6d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "68b44a2f-a694-4458-9a40-89e194a02624-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:19:04 compute-0 nova_compute[259850]: 2025-10-11 04:19:04.830 2 DEBUG oslo_concurrency.lockutils [req-c2adf082-8f7f-4c4a-9bdb-0a0ca7de5bc0 req-35da3624-7b58-42cf-954b-848a37561a6d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "68b44a2f-a694-4458-9a40-89e194a02624-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:19:04 compute-0 nova_compute[259850]: 2025-10-11 04:19:04.830 2 DEBUG nova.compute.manager [req-c2adf082-8f7f-4c4a-9bdb-0a0ca7de5bc0 req-35da3624-7b58-42cf-954b-848a37561a6d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] No waiting events found dispatching network-vif-unplugged-ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:19:04 compute-0 nova_compute[259850]: 2025-10-11 04:19:04.831 2 WARNING nova.compute.manager [req-c2adf082-8f7f-4c4a-9bdb-0a0ca7de5bc0 req-35da3624-7b58-42cf-954b-848a37561a6d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Received unexpected event network-vif-unplugged-ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3 for instance with vm_state deleted and task_state None.
Oct 11 04:19:04 compute-0 nova_compute[259850]: 2025-10-11 04:19:04.831 2 DEBUG nova.compute.manager [req-c2adf082-8f7f-4c4a-9bdb-0a0ca7de5bc0 req-35da3624-7b58-42cf-954b-848a37561a6d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Received event network-vif-plugged-ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:19:04 compute-0 nova_compute[259850]: 2025-10-11 04:19:04.832 2 DEBUG oslo_concurrency.lockutils [req-c2adf082-8f7f-4c4a-9bdb-0a0ca7de5bc0 req-35da3624-7b58-42cf-954b-848a37561a6d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "68b44a2f-a694-4458-9a40-89e194a02624-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:19:04 compute-0 nova_compute[259850]: 2025-10-11 04:19:04.832 2 DEBUG oslo_concurrency.lockutils [req-c2adf082-8f7f-4c4a-9bdb-0a0ca7de5bc0 req-35da3624-7b58-42cf-954b-848a37561a6d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "68b44a2f-a694-4458-9a40-89e194a02624-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:19:04 compute-0 nova_compute[259850]: 2025-10-11 04:19:04.833 2 DEBUG oslo_concurrency.lockutils [req-c2adf082-8f7f-4c4a-9bdb-0a0ca7de5bc0 req-35da3624-7b58-42cf-954b-848a37561a6d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "68b44a2f-a694-4458-9a40-89e194a02624-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:19:04 compute-0 nova_compute[259850]: 2025-10-11 04:19:04.833 2 DEBUG nova.compute.manager [req-c2adf082-8f7f-4c4a-9bdb-0a0ca7de5bc0 req-35da3624-7b58-42cf-954b-848a37561a6d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] No waiting events found dispatching network-vif-plugged-ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:19:04 compute-0 nova_compute[259850]: 2025-10-11 04:19:04.834 2 WARNING nova.compute.manager [req-c2adf082-8f7f-4c4a-9bdb-0a0ca7de5bc0 req-35da3624-7b58-42cf-954b-848a37561a6d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Received unexpected event network-vif-plugged-ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3 for instance with vm_state deleted and task_state None.
Oct 11 04:19:04 compute-0 nova_compute[259850]: 2025-10-11 04:19:04.834 2 DEBUG nova.compute.manager [req-c2adf082-8f7f-4c4a-9bdb-0a0ca7de5bc0 req-35da3624-7b58-42cf-954b-848a37561a6d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Received event network-vif-deleted-ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:19:04 compute-0 nova_compute[259850]: 2025-10-11 04:19:04.835 2 INFO nova.compute.manager [req-c2adf082-8f7f-4c4a-9bdb-0a0ca7de5bc0 req-35da3624-7b58-42cf-954b-848a37561a6d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Neutron deleted interface ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3; detaching it from the instance and deleting it from the info cache
Oct 11 04:19:04 compute-0 nova_compute[259850]: 2025-10-11 04:19:04.835 2 DEBUG nova.network.neutron [req-c2adf082-8f7f-4c4a-9bdb-0a0ca7de5bc0 req-35da3624-7b58-42cf-954b-848a37561a6d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:19:04 compute-0 nova_compute[259850]: 2025-10-11 04:19:04.864 2 DEBUG nova.compute.manager [req-c2adf082-8f7f-4c4a-9bdb-0a0ca7de5bc0 req-35da3624-7b58-42cf-954b-848a37561a6d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Detach interface failed, port_id=ae4b6054-7dfb-4b28-a3dc-0a31f2446ea3, reason: Instance 68b44a2f-a694-4458-9a40-89e194a02624 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 11 04:19:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:19:04 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/644697026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:19:04 compute-0 ceph-mon[74273]: pgmap v1662: 305 pgs: 305 active+clean; 374 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 250 KiB/s rd, 61 KiB/s wr, 13 op/s
Oct 11 04:19:04 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/644697026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:19:04 compute-0 nova_compute[259850]: 2025-10-11 04:19:04.937 2 DEBUG oslo_concurrency.processutils [None req-2ca4f2b4-2f05-4f06-8ab9-bd4ed7e6ef71 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:19:04 compute-0 nova_compute[259850]: 2025-10-11 04:19:04.944 2 DEBUG nova.compute.provider_tree [None req-2ca4f2b4-2f05-4f06-8ab9-bd4ed7e6ef71 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:19:04 compute-0 nova_compute[259850]: 2025-10-11 04:19:04.965 2 DEBUG nova.scheduler.client.report [None req-2ca4f2b4-2f05-4f06-8ab9-bd4ed7e6ef71 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:19:05 compute-0 nova_compute[259850]: 2025-10-11 04:19:05.001 2 DEBUG oslo_concurrency.lockutils [None req-2ca4f2b4-2f05-4f06-8ab9-bd4ed7e6ef71 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:19:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:19:05 compute-0 nova_compute[259850]: 2025-10-11 04:19:05.042 2 INFO nova.scheduler.client.report [None req-2ca4f2b4-2f05-4f06-8ab9-bd4ed7e6ef71 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Deleted allocations for instance 68b44a2f-a694-4458-9a40-89e194a02624
Oct 11 04:19:05 compute-0 nova_compute[259850]: 2025-10-11 04:19:05.124 2 DEBUG oslo_concurrency.lockutils [None req-2ca4f2b4-2f05-4f06-8ab9-bd4ed7e6ef71 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "68b44a2f-a694-4458-9a40-89e194a02624" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.362s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:19:05 compute-0 podman[299075]: 2025-10-11 04:19:05.396018536 +0000 UTC m=+0.092611813 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:19:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 374 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 250 KiB/s rd, 61 KiB/s wr, 13 op/s
Oct 11 04:19:06 compute-0 nova_compute[259850]: 2025-10-11 04:19:06.773 2 DEBUG oslo_concurrency.lockutils [None req-3f54b595-3d9a-4949-8d93-21f6e8849d93 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "8bfdc99e-9df9-4825-a631-7cd07eff5dfb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:19:06 compute-0 nova_compute[259850]: 2025-10-11 04:19:06.774 2 DEBUG oslo_concurrency.lockutils [None req-3f54b595-3d9a-4949-8d93-21f6e8849d93 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "8bfdc99e-9df9-4825-a631-7cd07eff5dfb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:19:06 compute-0 nova_compute[259850]: 2025-10-11 04:19:06.774 2 DEBUG oslo_concurrency.lockutils [None req-3f54b595-3d9a-4949-8d93-21f6e8849d93 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "8bfdc99e-9df9-4825-a631-7cd07eff5dfb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:19:06 compute-0 nova_compute[259850]: 2025-10-11 04:19:06.774 2 DEBUG oslo_concurrency.lockutils [None req-3f54b595-3d9a-4949-8d93-21f6e8849d93 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "8bfdc99e-9df9-4825-a631-7cd07eff5dfb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:19:06 compute-0 nova_compute[259850]: 2025-10-11 04:19:06.775 2 DEBUG oslo_concurrency.lockutils [None req-3f54b595-3d9a-4949-8d93-21f6e8849d93 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "8bfdc99e-9df9-4825-a631-7cd07eff5dfb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:19:06 compute-0 nova_compute[259850]: 2025-10-11 04:19:06.776 2 INFO nova.compute.manager [None req-3f54b595-3d9a-4949-8d93-21f6e8849d93 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Terminating instance
Oct 11 04:19:06 compute-0 nova_compute[259850]: 2025-10-11 04:19:06.778 2 DEBUG nova.compute.manager [None req-3f54b595-3d9a-4949-8d93-21f6e8849d93 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:19:06 compute-0 kernel: tap04efb511-1f (unregistering): left promiscuous mode
Oct 11 04:19:06 compute-0 NetworkManager[44920]: <info>  [1760156346.8482] device (tap04efb511-1f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:19:06 compute-0 nova_compute[259850]: 2025-10-11 04:19:06.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:06 compute-0 ovn_controller[152025]: 2025-10-11T04:19:06Z|00255|binding|INFO|Releasing lport 04efb511-1fd7-4507-91a8-508780bc5e8d from this chassis (sb_readonly=0)
Oct 11 04:19:06 compute-0 ovn_controller[152025]: 2025-10-11T04:19:06Z|00256|binding|INFO|Setting lport 04efb511-1fd7-4507-91a8-508780bc5e8d down in Southbound
Oct 11 04:19:06 compute-0 ovn_controller[152025]: 2025-10-11T04:19:06Z|00257|binding|INFO|Removing iface tap04efb511-1f ovn-installed in OVS
Oct 11 04:19:06 compute-0 nova_compute[259850]: 2025-10-11 04:19:06.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:06.862 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:ef:be 10.100.0.4'], port_security=['fa:16:3e:e9:ef:be 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '8bfdc99e-9df9-4825-a631-7cd07eff5dfb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfcc78a613a4442d88231798d10634c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8fd56502-e733-457c-89c4-96f24dc7f6d9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.217'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=756f4bd0-4cbc-4611-9397-52eb34ec09ab, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=04efb511-1fd7-4507-91a8-508780bc5e8d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:19:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:06.864 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 04efb511-1fd7-4507-91a8-508780bc5e8d in datapath 1c86b315-3a4b-4db0-8b3c-39658c19ef9c unbound from our chassis
Oct 11 04:19:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:06.865 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1c86b315-3a4b-4db0-8b3c-39658c19ef9c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:19:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:06.866 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[f2edca65-b79f-4113-bc3b-509d177b18c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:19:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:06.867 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c namespace which is not needed anymore
Oct 11 04:19:06 compute-0 nova_compute[259850]: 2025-10-11 04:19:06.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:06 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Oct 11 04:19:06 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000001a.scope: Consumed 15.802s CPU time.
Oct 11 04:19:06 compute-0 systemd-machined[214869]: Machine qemu-26-instance-0000001a terminated.
Oct 11 04:19:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:19:06 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2529864770' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:19:06 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2529864770' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:06 compute-0 ceph-mon[74273]: pgmap v1663: 305 pgs: 305 active+clean; 374 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 250 KiB/s rd, 61 KiB/s wr, 13 op/s
Oct 11 04:19:06 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2529864770' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:06 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2529864770' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:07 compute-0 nova_compute[259850]: 2025-10-11 04:19:07.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:07 compute-0 nova_compute[259850]: 2025-10-11 04:19:07.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:07 compute-0 nova_compute[259850]: 2025-10-11 04:19:07.026 2 INFO nova.virt.libvirt.driver [-] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Instance destroyed successfully.
Oct 11 04:19:07 compute-0 nova_compute[259850]: 2025-10-11 04:19:07.027 2 DEBUG nova.objects.instance [None req-3f54b595-3d9a-4949-8d93-21f6e8849d93 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lazy-loading 'resources' on Instance uuid 8bfdc99e-9df9-4825-a631-7cd07eff5dfb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:19:07 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[297830]: [NOTICE]   (297834) : haproxy version is 2.8.14-c23fe91
Oct 11 04:19:07 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[297830]: [NOTICE]   (297834) : path to executable is /usr/sbin/haproxy
Oct 11 04:19:07 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[297830]: [WARNING]  (297834) : Exiting Master process...
Oct 11 04:19:07 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[297830]: [WARNING]  (297834) : Exiting Master process...
Oct 11 04:19:07 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[297830]: [ALERT]    (297834) : Current worker (297836) exited with code 143 (Terminated)
Oct 11 04:19:07 compute-0 neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c[297830]: [WARNING]  (297834) : All workers exited. Exiting... (0)
Oct 11 04:19:07 compute-0 nova_compute[259850]: 2025-10-11 04:19:07.046 2 DEBUG nova.virt.libvirt.vif [None req-3f54b595-3d9a-4949-8d93-21f6e8849d93 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:18:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-2102536866',display_name='tempest-TransferEncryptedVolumeTest-server-2102536866',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-2102536866',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD3jnhyRBlsX5VUAbGtWGwnjXDJ0mJnyIiUqsAyoyyDd6H6M/5DSgSJwDh4tkaNqmtKzFuE8XyeYbmLUFFbEZUE8j9mB2B0zj5nn/QlG6TOs2XcStAmJ+ejUjSzP7rh2Lg==',key_name='tempest-TransferEncryptedVolumeTest-513808347',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:18:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bfcc78a613a4442d88231798d10634c9',ramdisk_id='',reservation_id='r-rb87z0iv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1941581237',owner_user_name='tempest-TransferEncryptedVolumeTest-1941581237-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:18:31Z,user_data=None,user_id='77d11e860ca1460cab1c20bca4d4c0ea',uuid=8bfdc99e-9df9-4825-a631-7cd07eff5dfb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "04efb511-1fd7-4507-91a8-508780bc5e8d", "address": "fa:16:3e:e9:ef:be", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap04efb511-1f", "ovs_interfaceid": "04efb511-1fd7-4507-91a8-508780bc5e8d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:19:07 compute-0 nova_compute[259850]: 2025-10-11 04:19:07.047 2 DEBUG nova.network.os_vif_util [None req-3f54b595-3d9a-4949-8d93-21f6e8849d93 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Converting VIF {"id": "04efb511-1fd7-4507-91a8-508780bc5e8d", "address": "fa:16:3e:e9:ef:be", "network": {"id": "1c86b315-3a4b-4db0-8b3c-39658c19ef9c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-443747755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc78a613a4442d88231798d10634c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap04efb511-1f", "ovs_interfaceid": "04efb511-1fd7-4507-91a8-508780bc5e8d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:19:07 compute-0 systemd[1]: libpod-2d6bfc1415f2aa707a54c872db60426c4e3ba5967c6a388242f1ac4cc64ed303.scope: Deactivated successfully.
Oct 11 04:19:07 compute-0 nova_compute[259850]: 2025-10-11 04:19:07.048 2 DEBUG nova.network.os_vif_util [None req-3f54b595-3d9a-4949-8d93-21f6e8849d93 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e9:ef:be,bridge_name='br-int',has_traffic_filtering=True,id=04efb511-1fd7-4507-91a8-508780bc5e8d,network=Network(1c86b315-3a4b-4db0-8b3c-39658c19ef9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap04efb511-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:19:07 compute-0 nova_compute[259850]: 2025-10-11 04:19:07.049 2 DEBUG os_vif [None req-3f54b595-3d9a-4949-8d93-21f6e8849d93 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e9:ef:be,bridge_name='br-int',has_traffic_filtering=True,id=04efb511-1fd7-4507-91a8-508780bc5e8d,network=Network(1c86b315-3a4b-4db0-8b3c-39658c19ef9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap04efb511-1f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:19:07 compute-0 nova_compute[259850]: 2025-10-11 04:19:07.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:07 compute-0 podman[299120]: 2025-10-11 04:19:07.053553034 +0000 UTC m=+0.072699789 container died 2d6bfc1415f2aa707a54c872db60426c4e3ba5967c6a388242f1ac4cc64ed303 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 11 04:19:07 compute-0 nova_compute[259850]: 2025-10-11 04:19:07.053 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap04efb511-1f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:19:07 compute-0 nova_compute[259850]: 2025-10-11 04:19:07.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:07 compute-0 nova_compute[259850]: 2025-10-11 04:19:07.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:07 compute-0 nova_compute[259850]: 2025-10-11 04:19:07.064 2 INFO os_vif [None req-3f54b595-3d9a-4949-8d93-21f6e8849d93 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e9:ef:be,bridge_name='br-int',has_traffic_filtering=True,id=04efb511-1fd7-4507-91a8-508780bc5e8d,network=Network(1c86b315-3a4b-4db0-8b3c-39658c19ef9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap04efb511-1f')
Oct 11 04:19:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2d6bfc1415f2aa707a54c872db60426c4e3ba5967c6a388242f1ac4cc64ed303-userdata-shm.mount: Deactivated successfully.
Oct 11 04:19:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-52db8f7b4dc2610fb6451516909c887a612859a6ce3c5a771ad42fe20c355486-merged.mount: Deactivated successfully.
Oct 11 04:19:07 compute-0 podman[299120]: 2025-10-11 04:19:07.100539304 +0000 UTC m=+0.119686079 container cleanup 2d6bfc1415f2aa707a54c872db60426c4e3ba5967c6a388242f1ac4cc64ed303 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 11 04:19:07 compute-0 systemd[1]: libpod-conmon-2d6bfc1415f2aa707a54c872db60426c4e3ba5967c6a388242f1ac4cc64ed303.scope: Deactivated successfully.
Oct 11 04:19:07 compute-0 nova_compute[259850]: 2025-10-11 04:19:07.161 2 DEBUG nova.compute.manager [req-403fb0f8-d999-4005-9472-65d7e4250d38 req-59aefe3e-d9aa-46d5-9ac1-2be0612458b1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Received event network-vif-unplugged-04efb511-1fd7-4507-91a8-508780bc5e8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:19:07 compute-0 nova_compute[259850]: 2025-10-11 04:19:07.161 2 DEBUG oslo_concurrency.lockutils [req-403fb0f8-d999-4005-9472-65d7e4250d38 req-59aefe3e-d9aa-46d5-9ac1-2be0612458b1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "8bfdc99e-9df9-4825-a631-7cd07eff5dfb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:19:07 compute-0 nova_compute[259850]: 2025-10-11 04:19:07.162 2 DEBUG oslo_concurrency.lockutils [req-403fb0f8-d999-4005-9472-65d7e4250d38 req-59aefe3e-d9aa-46d5-9ac1-2be0612458b1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "8bfdc99e-9df9-4825-a631-7cd07eff5dfb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:19:07 compute-0 nova_compute[259850]: 2025-10-11 04:19:07.162 2 DEBUG oslo_concurrency.lockutils [req-403fb0f8-d999-4005-9472-65d7e4250d38 req-59aefe3e-d9aa-46d5-9ac1-2be0612458b1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "8bfdc99e-9df9-4825-a631-7cd07eff5dfb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:19:07 compute-0 nova_compute[259850]: 2025-10-11 04:19:07.162 2 DEBUG nova.compute.manager [req-403fb0f8-d999-4005-9472-65d7e4250d38 req-59aefe3e-d9aa-46d5-9ac1-2be0612458b1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] No waiting events found dispatching network-vif-unplugged-04efb511-1fd7-4507-91a8-508780bc5e8d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:19:07 compute-0 nova_compute[259850]: 2025-10-11 04:19:07.163 2 DEBUG nova.compute.manager [req-403fb0f8-d999-4005-9472-65d7e4250d38 req-59aefe3e-d9aa-46d5-9ac1-2be0612458b1 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Received event network-vif-unplugged-04efb511-1fd7-4507-91a8-508780bc5e8d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 04:19:07 compute-0 podman[299174]: 2025-10-11 04:19:07.225842352 +0000 UTC m=+0.081192990 container remove 2d6bfc1415f2aa707a54c872db60426c4e3ba5967c6a388242f1ac4cc64ed303 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 04:19:07 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:07.237 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[af0e9731-2b95-439b-98a1-8cc699ab0b4d]: (4, ('Sat Oct 11 04:19:06 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c (2d6bfc1415f2aa707a54c872db60426c4e3ba5967c6a388242f1ac4cc64ed303)\n2d6bfc1415f2aa707a54c872db60426c4e3ba5967c6a388242f1ac4cc64ed303\nSat Oct 11 04:19:07 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c (2d6bfc1415f2aa707a54c872db60426c4e3ba5967c6a388242f1ac4cc64ed303)\n2d6bfc1415f2aa707a54c872db60426c4e3ba5967c6a388242f1ac4cc64ed303\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:19:07 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:07.240 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[6547943f-3296-4f2f-a050-0f2cce73633e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:19:07 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:07.241 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c86b315-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:19:07 compute-0 nova_compute[259850]: 2025-10-11 04:19:07.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:07 compute-0 kernel: tap1c86b315-30: left promiscuous mode
Oct 11 04:19:07 compute-0 nova_compute[259850]: 2025-10-11 04:19:07.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:07 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:07.268 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[be2a0264-09eb-42c5-af18-63d2fb6d4541]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:19:07 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:07.291 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a94ee8-9276-4971-88b3-1fc9a32a63da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:19:07 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:07.293 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[b0049030-7d42-4c4b-aec1-675f6b86ea9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:19:07 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:07.308 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[226a81c9-1ab5-4dcf-bc51-61ade98babc1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469280, 'reachable_time': 17781, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299193, 'error': None, 'target': 'ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:19:07 compute-0 systemd[1]: run-netns-ovnmeta\x2d1c86b315\x2d3a4b\x2d4db0\x2d8b3c\x2d39658c19ef9c.mount: Deactivated successfully.
Oct 11 04:19:07 compute-0 nova_compute[259850]: 2025-10-11 04:19:07.310 2 INFO nova.virt.libvirt.driver [None req-3f54b595-3d9a-4949-8d93-21f6e8849d93 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Deleting instance files /var/lib/nova/instances/8bfdc99e-9df9-4825-a631-7cd07eff5dfb_del
Oct 11 04:19:07 compute-0 nova_compute[259850]: 2025-10-11 04:19:07.311 2 INFO nova.virt.libvirt.driver [None req-3f54b595-3d9a-4949-8d93-21f6e8849d93 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Deletion of /var/lib/nova/instances/8bfdc99e-9df9-4825-a631-7cd07eff5dfb_del complete
Oct 11 04:19:07 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:07.313 162015 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1c86b315-3a4b-4db0-8b3c-39658c19ef9c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 04:19:07 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:07.313 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[16a623b0-bfa4-4505-82cd-07c09326caeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:19:07 compute-0 nova_compute[259850]: 2025-10-11 04:19:07.400 2 INFO nova.compute.manager [None req-3f54b595-3d9a-4949-8d93-21f6e8849d93 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Took 0.62 seconds to destroy the instance on the hypervisor.
Oct 11 04:19:07 compute-0 nova_compute[259850]: 2025-10-11 04:19:07.401 2 DEBUG oslo.service.loopingcall [None req-3f54b595-3d9a-4949-8d93-21f6e8849d93 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:19:07 compute-0 nova_compute[259850]: 2025-10-11 04:19:07.401 2 DEBUG nova.compute.manager [-] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:19:07 compute-0 nova_compute[259850]: 2025-10-11 04:19:07.401 2 DEBUG nova.network.neutron [-] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:19:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 365 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 62 KiB/s wr, 28 op/s
Oct 11 04:19:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e390 do_prune osdmap full prune enabled
Oct 11 04:19:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e391 e391: 3 total, 3 up, 3 in
Oct 11 04:19:07 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e391: 3 total, 3 up, 3 in
Oct 11 04:19:08 compute-0 nova_compute[259850]: 2025-10-11 04:19:08.083 2 DEBUG nova.network.neutron [-] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:19:08 compute-0 nova_compute[259850]: 2025-10-11 04:19:08.101 2 INFO nova.compute.manager [-] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Took 0.70 seconds to deallocate network for instance.
Oct 11 04:19:08 compute-0 nova_compute[259850]: 2025-10-11 04:19:08.280 2 INFO nova.compute.manager [None req-3f54b595-3d9a-4949-8d93-21f6e8849d93 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Took 0.18 seconds to detach 1 volumes for instance.
Oct 11 04:19:08 compute-0 nova_compute[259850]: 2025-10-11 04:19:08.352 2 DEBUG oslo_concurrency.lockutils [None req-3f54b595-3d9a-4949-8d93-21f6e8849d93 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:19:08 compute-0 nova_compute[259850]: 2025-10-11 04:19:08.352 2 DEBUG oslo_concurrency.lockutils [None req-3f54b595-3d9a-4949-8d93-21f6e8849d93 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:19:08 compute-0 nova_compute[259850]: 2025-10-11 04:19:08.428 2 DEBUG oslo_concurrency.processutils [None req-3f54b595-3d9a-4949-8d93-21f6e8849d93 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:19:08 compute-0 nova_compute[259850]: 2025-10-11 04:19:08.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:19:08 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3164268270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:19:08 compute-0 nova_compute[259850]: 2025-10-11 04:19:08.938 2 DEBUG oslo_concurrency.processutils [None req-3f54b595-3d9a-4949-8d93-21f6e8849d93 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:19:08 compute-0 nova_compute[259850]: 2025-10-11 04:19:08.949 2 DEBUG nova.compute.provider_tree [None req-3f54b595-3d9a-4949-8d93-21f6e8849d93 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:19:08 compute-0 ceph-mon[74273]: pgmap v1664: 305 pgs: 305 active+clean; 365 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 62 KiB/s wr, 28 op/s
Oct 11 04:19:08 compute-0 ceph-mon[74273]: osdmap e391: 3 total, 3 up, 3 in
Oct 11 04:19:08 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3164268270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:19:08 compute-0 nova_compute[259850]: 2025-10-11 04:19:08.973 2 DEBUG nova.scheduler.client.report [None req-3f54b595-3d9a-4949-8d93-21f6e8849d93 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:19:08 compute-0 nova_compute[259850]: 2025-10-11 04:19:08.994 2 DEBUG oslo_concurrency.lockutils [None req-3f54b595-3d9a-4949-8d93-21f6e8849d93 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.023 2 INFO nova.scheduler.client.report [None req-3f54b595-3d9a-4949-8d93-21f6e8849d93 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Deleted allocations for instance 8bfdc99e-9df9-4825-a631-7cd07eff5dfb
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.104 2 DEBUG oslo_concurrency.lockutils [None req-3f54b595-3d9a-4949-8d93-21f6e8849d93 77d11e860ca1460cab1c20bca4d4c0ea bfcc78a613a4442d88231798d10634c9 - - default default] Lock "8bfdc99e-9df9-4825-a631-7cd07eff5dfb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.239 2 DEBUG nova.compute.manager [req-d97cd8cd-d16a-43a5-8f78-f2b1ad2bf906 req-73dbe227-8cde-4a6e-a502-f1a3150f362d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Received event network-vif-plugged-04efb511-1fd7-4507-91a8-508780bc5e8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.240 2 DEBUG oslo_concurrency.lockutils [req-d97cd8cd-d16a-43a5-8f78-f2b1ad2bf906 req-73dbe227-8cde-4a6e-a502-f1a3150f362d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "8bfdc99e-9df9-4825-a631-7cd07eff5dfb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.240 2 DEBUG oslo_concurrency.lockutils [req-d97cd8cd-d16a-43a5-8f78-f2b1ad2bf906 req-73dbe227-8cde-4a6e-a502-f1a3150f362d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "8bfdc99e-9df9-4825-a631-7cd07eff5dfb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.241 2 DEBUG oslo_concurrency.lockutils [req-d97cd8cd-d16a-43a5-8f78-f2b1ad2bf906 req-73dbe227-8cde-4a6e-a502-f1a3150f362d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "8bfdc99e-9df9-4825-a631-7cd07eff5dfb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.241 2 DEBUG nova.compute.manager [req-d97cd8cd-d16a-43a5-8f78-f2b1ad2bf906 req-73dbe227-8cde-4a6e-a502-f1a3150f362d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] No waiting events found dispatching network-vif-plugged-04efb511-1fd7-4507-91a8-508780bc5e8d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.242 2 WARNING nova.compute.manager [req-d97cd8cd-d16a-43a5-8f78-f2b1ad2bf906 req-73dbe227-8cde-4a6e-a502-f1a3150f362d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Received unexpected event network-vif-plugged-04efb511-1fd7-4507-91a8-508780bc5e8d for instance with vm_state deleted and task_state None.
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.242 2 DEBUG nova.compute.manager [req-d97cd8cd-d16a-43a5-8f78-f2b1ad2bf906 req-73dbe227-8cde-4a6e-a502-f1a3150f362d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Received event network-vif-deleted-04efb511-1fd7-4507-91a8-508780bc5e8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.488 2 DEBUG oslo_concurrency.lockutils [None req-856e9bed-b4d9-4f2a-9523-b8fa8a5fb05e 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "a5deabc3-2396-4c23-81c2-959d49bb6da1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.489 2 DEBUG oslo_concurrency.lockutils [None req-856e9bed-b4d9-4f2a-9523-b8fa8a5fb05e 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "a5deabc3-2396-4c23-81c2-959d49bb6da1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.490 2 DEBUG oslo_concurrency.lockutils [None req-856e9bed-b4d9-4f2a-9523-b8fa8a5fb05e 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "a5deabc3-2396-4c23-81c2-959d49bb6da1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.490 2 DEBUG oslo_concurrency.lockutils [None req-856e9bed-b4d9-4f2a-9523-b8fa8a5fb05e 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "a5deabc3-2396-4c23-81c2-959d49bb6da1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.490 2 DEBUG oslo_concurrency.lockutils [None req-856e9bed-b4d9-4f2a-9523-b8fa8a5fb05e 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "a5deabc3-2396-4c23-81c2-959d49bb6da1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.492 2 INFO nova.compute.manager [None req-856e9bed-b4d9-4f2a-9523-b8fa8a5fb05e 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Terminating instance
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.494 2 DEBUG nova.compute.manager [None req-856e9bed-b4d9-4f2a-9523-b8fa8a5fb05e 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:19:09 compute-0 kernel: tap560c29a9-2a (unregistering): left promiscuous mode
Oct 11 04:19:09 compute-0 NetworkManager[44920]: <info>  [1760156349.5586] device (tap560c29a9-2a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:09 compute-0 ovn_controller[152025]: 2025-10-11T04:19:09Z|00258|binding|INFO|Releasing lport 560c29a9-2a29-42bd-a75a-485874b2cbc8 from this chassis (sb_readonly=0)
Oct 11 04:19:09 compute-0 ovn_controller[152025]: 2025-10-11T04:19:09Z|00259|binding|INFO|Setting lport 560c29a9-2a29-42bd-a75a-485874b2cbc8 down in Southbound
Oct 11 04:19:09 compute-0 ovn_controller[152025]: 2025-10-11T04:19:09Z|00260|binding|INFO|Removing iface tap560c29a9-2a ovn-installed in OVS
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:09.636 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:35:03 10.100.0.10'], port_security=['fa:16:3e:b2:35:03 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a5deabc3-2396-4c23-81c2-959d49bb6da1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09ba33ef4bd447699d74946c58839b2d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '802c56f7-efb1-44ec-9107-b20b0a13ea5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27b77226-c1f8-485e-969b-bae9a3bf7ceb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=560c29a9-2a29-42bd-a75a-485874b2cbc8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:19:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:09.638 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 560c29a9-2a29-42bd-a75a-485874b2cbc8 in datapath b6cd64a2-af0b-4f57-b84c-cbc9cde5251d unbound from our chassis
Oct 11 04:19:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:09.639 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:19:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:09.641 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[fc6c247a-c721-4cc9-8eb9-32556e9d14f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:19:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:09.642 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d namespace which is not needed anymore
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:09 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000018.scope: Deactivated successfully.
Oct 11 04:19:09 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000018.scope: Consumed 16.229s CPU time.
Oct 11 04:19:09 compute-0 systemd-machined[214869]: Machine qemu-24-instance-00000018 terminated.
Oct 11 04:19:09 compute-0 NetworkManager[44920]: <info>  [1760156349.7150] manager: (tap560c29a9-2a): new Tun device (/org/freedesktop/NetworkManager/Devices/132)
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.736 2 INFO nova.virt.libvirt.driver [-] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Instance destroyed successfully.
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.737 2 DEBUG nova.objects.instance [None req-856e9bed-b4d9-4f2a-9523-b8fa8a5fb05e 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lazy-loading 'resources' on Instance uuid a5deabc3-2396-4c23-81c2-959d49bb6da1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.759 2 DEBUG nova.virt.libvirt.vif [None req-856e9bed-b4d9-4f2a-9523-b8fa8a5fb05e 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:17:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1344490819',display_name='tempest-TestVolumeBootPattern-server-1344490819',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1344490819',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPDNAGL8Dkg4WTlPf45cAzyjNlMaZ9CdFtcbPahhttGWfFDtL3wJAU2pqWIpDJ427A+TFzstq4HW+M8hdPFbiZnk9MFQHh3rRb7amRkcTpIWOFEgpDmf92zhQgzfL3p2ZA==',key_name='tempest-TestVolumeBootPattern-2018721323',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:17:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='09ba33ef4bd447699d74946c58839b2d',ramdisk_id='',reservation_id='r-orrljhtm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-771726270',owner_user_name='tempest-TestVolumeBootPattern-771726270-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:17:48Z,user_data=None,user_id='2a330a845d62440c871f80eda2546881',uuid=a5deabc3-2396-4c23-81c2-959d49bb6da1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "560c29a9-2a29-42bd-a75a-485874b2cbc8", "address": "fa:16:3e:b2:35:03", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap560c29a9-2a", "ovs_interfaceid": "560c29a9-2a29-42bd-a75a-485874b2cbc8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.759 2 DEBUG nova.network.os_vif_util [None req-856e9bed-b4d9-4f2a-9523-b8fa8a5fb05e 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converting VIF {"id": "560c29a9-2a29-42bd-a75a-485874b2cbc8", "address": "fa:16:3e:b2:35:03", "network": {"id": "b6cd64a2-af0b-4f57-b84c-cbc9cde5251d", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-958485850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09ba33ef4bd447699d74946c58839b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap560c29a9-2a", "ovs_interfaceid": "560c29a9-2a29-42bd-a75a-485874b2cbc8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.760 2 DEBUG nova.network.os_vif_util [None req-856e9bed-b4d9-4f2a-9523-b8fa8a5fb05e 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b2:35:03,bridge_name='br-int',has_traffic_filtering=True,id=560c29a9-2a29-42bd-a75a-485874b2cbc8,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap560c29a9-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.761 2 DEBUG os_vif [None req-856e9bed-b4d9-4f2a-9523-b8fa8a5fb05e 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b2:35:03,bridge_name='br-int',has_traffic_filtering=True,id=560c29a9-2a29-42bd-a75a-485874b2cbc8,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap560c29a9-2a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.763 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap560c29a9-2a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.769 2 INFO os_vif [None req-856e9bed-b4d9-4f2a-9523-b8fa8a5fb05e 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b2:35:03,bridge_name='br-int',has_traffic_filtering=True,id=560c29a9-2a29-42bd-a75a-485874b2cbc8,network=Network(b6cd64a2-af0b-4f57-b84c-cbc9cde5251d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap560c29a9-2a')
Oct 11 04:19:09 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[297132]: [NOTICE]   (297136) : haproxy version is 2.8.14-c23fe91
Oct 11 04:19:09 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[297132]: [NOTICE]   (297136) : path to executable is /usr/sbin/haproxy
Oct 11 04:19:09 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[297132]: [WARNING]  (297136) : Exiting Master process...
Oct 11 04:19:09 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[297132]: [WARNING]  (297136) : Exiting Master process...
Oct 11 04:19:09 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[297132]: [ALERT]    (297136) : Current worker (297138) exited with code 143 (Terminated)
Oct 11 04:19:09 compute-0 neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d[297132]: [WARNING]  (297136) : All workers exited. Exiting... (0)
Oct 11 04:19:09 compute-0 systemd[1]: libpod-c5ab96a640495b4ce5473c9c41e9eff99602f2f92dedd41740eafa8ad5e88b29.scope: Deactivated successfully.
Oct 11 04:19:09 compute-0 podman[299247]: 2025-10-11 04:19:09.831980827 +0000 UTC m=+0.065493965 container died c5ab96a640495b4ce5473c9c41e9eff99602f2f92dedd41740eafa8ad5e88b29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 04:19:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 352 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 554 KiB/s rd, 6.5 KiB/s wr, 79 op/s
Oct 11 04:19:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c5ab96a640495b4ce5473c9c41e9eff99602f2f92dedd41740eafa8ad5e88b29-userdata-shm.mount: Deactivated successfully.
Oct 11 04:19:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-7746dd53ae5b557815f637423b887d95b31ba92a9b79b687487574b0569cc81d-merged.mount: Deactivated successfully.
Oct 11 04:19:09 compute-0 podman[299247]: 2025-10-11 04:19:09.874409669 +0000 UTC m=+0.107922807 container cleanup c5ab96a640495b4ce5473c9c41e9eff99602f2f92dedd41740eafa8ad5e88b29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:19:09 compute-0 systemd[1]: libpod-conmon-c5ab96a640495b4ce5473c9c41e9eff99602f2f92dedd41740eafa8ad5e88b29.scope: Deactivated successfully.
Oct 11 04:19:09 compute-0 podman[299294]: 2025-10-11 04:19:09.959952451 +0000 UTC m=+0.059970759 container remove c5ab96a640495b4ce5473c9c41e9eff99602f2f92dedd41740eafa8ad5e88b29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:19:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:09.968 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[00420868-ec01-40d0-8e13-e2d22dce59af]: (4, ('Sat Oct 11 04:19:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d (c5ab96a640495b4ce5473c9c41e9eff99602f2f92dedd41740eafa8ad5e88b29)\nc5ab96a640495b4ce5473c9c41e9eff99602f2f92dedd41740eafa8ad5e88b29\nSat Oct 11 04:19:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d (c5ab96a640495b4ce5473c9c41e9eff99602f2f92dedd41740eafa8ad5e88b29)\nc5ab96a640495b4ce5473c9c41e9eff99602f2f92dedd41740eafa8ad5e88b29\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:19:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:09.970 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[598e37cf-29c0-45d0-81f7-22cb946a2b6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:19:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:09.971 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6cd64a2-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:19:09 compute-0 nova_compute[259850]: 2025-10-11 04:19:09.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:09 compute-0 kernel: tapb6cd64a2-a0: left promiscuous mode
Oct 11 04:19:10 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:10.002 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[8bf4ae6d-838f-4ed1-aa52-d54fbb1bc0b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:19:10 compute-0 nova_compute[259850]: 2025-10-11 04:19:10.002 2 INFO nova.virt.libvirt.driver [None req-856e9bed-b4d9-4f2a-9523-b8fa8a5fb05e 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Deleting instance files /var/lib/nova/instances/a5deabc3-2396-4c23-81c2-959d49bb6da1_del
Oct 11 04:19:10 compute-0 nova_compute[259850]: 2025-10-11 04:19:10.003 2 INFO nova.virt.libvirt.driver [None req-856e9bed-b4d9-4f2a-9523-b8fa8a5fb05e 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Deletion of /var/lib/nova/instances/a5deabc3-2396-4c23-81c2-959d49bb6da1_del complete
Oct 11 04:19:10 compute-0 nova_compute[259850]: 2025-10-11 04:19:10.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:10 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:10.025 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[86503daa-5e33-4d5e-873f-de3edcc75079]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:19:10 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:10.027 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[ba5cc323-c390-4aba-a582-585853968c63]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:19:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e391 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:19:10 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:10.046 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[3b0a9ef8-3e73-4ab2-9db1-3fa11fd278f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465093, 'reachable_time': 42902, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299308, 'error': None, 'target': 'ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:19:10 compute-0 systemd[1]: run-netns-ovnmeta\x2db6cd64a2\x2daf0b\x2d4f57\x2db84c\x2dcbc9cde5251d.mount: Deactivated successfully.
Oct 11 04:19:10 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:10.051 162015 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b6cd64a2-af0b-4f57-b84c-cbc9cde5251d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 04:19:10 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:10.052 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[4b023e7f-345d-4cac-afbb-11b7eef8a504]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:19:10 compute-0 nova_compute[259850]: 2025-10-11 04:19:10.059 2 INFO nova.compute.manager [None req-856e9bed-b4d9-4f2a-9523-b8fa8a5fb05e 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Took 0.56 seconds to destroy the instance on the hypervisor.
Oct 11 04:19:10 compute-0 nova_compute[259850]: 2025-10-11 04:19:10.059 2 DEBUG oslo.service.loopingcall [None req-856e9bed-b4d9-4f2a-9523-b8fa8a5fb05e 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:19:10 compute-0 nova_compute[259850]: 2025-10-11 04:19:10.060 2 DEBUG nova.compute.manager [-] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:19:10 compute-0 nova_compute[259850]: 2025-10-11 04:19:10.060 2 DEBUG nova.network.neutron [-] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:19:10 compute-0 ceph-mon[74273]: pgmap v1666: 305 pgs: 305 active+clean; 352 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 554 KiB/s rd, 6.5 KiB/s wr, 79 op/s
Oct 11 04:19:11 compute-0 nova_compute[259850]: 2025-10-11 04:19:11.251 2 DEBUG nova.network.neutron [-] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:19:11 compute-0 nova_compute[259850]: 2025-10-11 04:19:11.283 2 INFO nova.compute.manager [-] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Took 1.22 seconds to deallocate network for instance.
Oct 11 04:19:11 compute-0 nova_compute[259850]: 2025-10-11 04:19:11.324 2 DEBUG nova.compute.manager [req-1065d8d1-4d08-4956-9a52-af8338af3155 req-b73f9bfc-55fe-4a27-9dd2-b0f51dbab503 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Received event network-changed-560c29a9-2a29-42bd-a75a-485874b2cbc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:19:11 compute-0 nova_compute[259850]: 2025-10-11 04:19:11.324 2 DEBUG nova.compute.manager [req-1065d8d1-4d08-4956-9a52-af8338af3155 req-b73f9bfc-55fe-4a27-9dd2-b0f51dbab503 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Refreshing instance network info cache due to event network-changed-560c29a9-2a29-42bd-a75a-485874b2cbc8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:19:11 compute-0 nova_compute[259850]: 2025-10-11 04:19:11.325 2 DEBUG oslo_concurrency.lockutils [req-1065d8d1-4d08-4956-9a52-af8338af3155 req-b73f9bfc-55fe-4a27-9dd2-b0f51dbab503 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-a5deabc3-2396-4c23-81c2-959d49bb6da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:19:11 compute-0 nova_compute[259850]: 2025-10-11 04:19:11.325 2 DEBUG oslo_concurrency.lockutils [req-1065d8d1-4d08-4956-9a52-af8338af3155 req-b73f9bfc-55fe-4a27-9dd2-b0f51dbab503 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-a5deabc3-2396-4c23-81c2-959d49bb6da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:19:11 compute-0 nova_compute[259850]: 2025-10-11 04:19:11.325 2 DEBUG nova.network.neutron [req-1065d8d1-4d08-4956-9a52-af8338af3155 req-b73f9bfc-55fe-4a27-9dd2-b0f51dbab503 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Refreshing network info cache for port 560c29a9-2a29-42bd-a75a-485874b2cbc8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:19:11 compute-0 nova_compute[259850]: 2025-10-11 04:19:11.515 2 DEBUG nova.network.neutron [req-1065d8d1-4d08-4956-9a52-af8338af3155 req-b73f9bfc-55fe-4a27-9dd2-b0f51dbab503 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:19:11 compute-0 nova_compute[259850]: 2025-10-11 04:19:11.530 2 INFO nova.compute.manager [None req-856e9bed-b4d9-4f2a-9523-b8fa8a5fb05e 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Took 0.25 seconds to detach 1 volumes for instance.
Oct 11 04:19:11 compute-0 nova_compute[259850]: 2025-10-11 04:19:11.584 2 DEBUG oslo_concurrency.lockutils [None req-856e9bed-b4d9-4f2a-9523-b8fa8a5fb05e 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:19:11 compute-0 nova_compute[259850]: 2025-10-11 04:19:11.585 2 DEBUG oslo_concurrency.lockutils [None req-856e9bed-b4d9-4f2a-9523-b8fa8a5fb05e 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:19:11 compute-0 nova_compute[259850]: 2025-10-11 04:19:11.626 2 DEBUG oslo_concurrency.processutils [None req-856e9bed-b4d9-4f2a-9523-b8fa8a5fb05e 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:19:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 352 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 554 KiB/s rd, 6.5 KiB/s wr, 79 op/s
Oct 11 04:19:11 compute-0 nova_compute[259850]: 2025-10-11 04:19:11.855 2 DEBUG nova.network.neutron [req-1065d8d1-4d08-4956-9a52-af8338af3155 req-b73f9bfc-55fe-4a27-9dd2-b0f51dbab503 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:19:11 compute-0 nova_compute[259850]: 2025-10-11 04:19:11.874 2 DEBUG oslo_concurrency.lockutils [req-1065d8d1-4d08-4956-9a52-af8338af3155 req-b73f9bfc-55fe-4a27-9dd2-b0f51dbab503 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-a5deabc3-2396-4c23-81c2-959d49bb6da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:19:11 compute-0 nova_compute[259850]: 2025-10-11 04:19:11.875 2 DEBUG nova.compute.manager [req-1065d8d1-4d08-4956-9a52-af8338af3155 req-b73f9bfc-55fe-4a27-9dd2-b0f51dbab503 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Received event network-vif-unplugged-560c29a9-2a29-42bd-a75a-485874b2cbc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:19:11 compute-0 nova_compute[259850]: 2025-10-11 04:19:11.875 2 DEBUG oslo_concurrency.lockutils [req-1065d8d1-4d08-4956-9a52-af8338af3155 req-b73f9bfc-55fe-4a27-9dd2-b0f51dbab503 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "a5deabc3-2396-4c23-81c2-959d49bb6da1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:19:11 compute-0 nova_compute[259850]: 2025-10-11 04:19:11.875 2 DEBUG oslo_concurrency.lockutils [req-1065d8d1-4d08-4956-9a52-af8338af3155 req-b73f9bfc-55fe-4a27-9dd2-b0f51dbab503 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "a5deabc3-2396-4c23-81c2-959d49bb6da1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:19:11 compute-0 nova_compute[259850]: 2025-10-11 04:19:11.876 2 DEBUG oslo_concurrency.lockutils [req-1065d8d1-4d08-4956-9a52-af8338af3155 req-b73f9bfc-55fe-4a27-9dd2-b0f51dbab503 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "a5deabc3-2396-4c23-81c2-959d49bb6da1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:19:11 compute-0 nova_compute[259850]: 2025-10-11 04:19:11.876 2 DEBUG nova.compute.manager [req-1065d8d1-4d08-4956-9a52-af8338af3155 req-b73f9bfc-55fe-4a27-9dd2-b0f51dbab503 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] No waiting events found dispatching network-vif-unplugged-560c29a9-2a29-42bd-a75a-485874b2cbc8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:19:11 compute-0 nova_compute[259850]: 2025-10-11 04:19:11.876 2 DEBUG nova.compute.manager [req-1065d8d1-4d08-4956-9a52-af8338af3155 req-b73f9bfc-55fe-4a27-9dd2-b0f51dbab503 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Received event network-vif-unplugged-560c29a9-2a29-42bd-a75a-485874b2cbc8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 04:19:11 compute-0 nova_compute[259850]: 2025-10-11 04:19:11.876 2 DEBUG nova.compute.manager [req-1065d8d1-4d08-4956-9a52-af8338af3155 req-b73f9bfc-55fe-4a27-9dd2-b0f51dbab503 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Received event network-vif-plugged-560c29a9-2a29-42bd-a75a-485874b2cbc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:19:11 compute-0 nova_compute[259850]: 2025-10-11 04:19:11.877 2 DEBUG oslo_concurrency.lockutils [req-1065d8d1-4d08-4956-9a52-af8338af3155 req-b73f9bfc-55fe-4a27-9dd2-b0f51dbab503 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "a5deabc3-2396-4c23-81c2-959d49bb6da1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:19:11 compute-0 nova_compute[259850]: 2025-10-11 04:19:11.877 2 DEBUG oslo_concurrency.lockutils [req-1065d8d1-4d08-4956-9a52-af8338af3155 req-b73f9bfc-55fe-4a27-9dd2-b0f51dbab503 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "a5deabc3-2396-4c23-81c2-959d49bb6da1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:19:11 compute-0 nova_compute[259850]: 2025-10-11 04:19:11.877 2 DEBUG oslo_concurrency.lockutils [req-1065d8d1-4d08-4956-9a52-af8338af3155 req-b73f9bfc-55fe-4a27-9dd2-b0f51dbab503 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "a5deabc3-2396-4c23-81c2-959d49bb6da1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:19:11 compute-0 nova_compute[259850]: 2025-10-11 04:19:11.877 2 DEBUG nova.compute.manager [req-1065d8d1-4d08-4956-9a52-af8338af3155 req-b73f9bfc-55fe-4a27-9dd2-b0f51dbab503 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] No waiting events found dispatching network-vif-plugged-560c29a9-2a29-42bd-a75a-485874b2cbc8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:19:11 compute-0 nova_compute[259850]: 2025-10-11 04:19:11.878 2 WARNING nova.compute.manager [req-1065d8d1-4d08-4956-9a52-af8338af3155 req-b73f9bfc-55fe-4a27-9dd2-b0f51dbab503 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Received unexpected event network-vif-plugged-560c29a9-2a29-42bd-a75a-485874b2cbc8 for instance with vm_state active and task_state deleting.
Oct 11 04:19:11 compute-0 nova_compute[259850]: 2025-10-11 04:19:11.878 2 DEBUG nova.compute.manager [req-1065d8d1-4d08-4956-9a52-af8338af3155 req-b73f9bfc-55fe-4a27-9dd2-b0f51dbab503 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Received event network-vif-deleted-560c29a9-2a29-42bd-a75a-485874b2cbc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:19:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:19:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2440394384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:19:12 compute-0 nova_compute[259850]: 2025-10-11 04:19:12.068 2 DEBUG oslo_concurrency.processutils [None req-856e9bed-b4d9-4f2a-9523-b8fa8a5fb05e 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:19:12 compute-0 nova_compute[259850]: 2025-10-11 04:19:12.076 2 DEBUG nova.compute.provider_tree [None req-856e9bed-b4d9-4f2a-9523-b8fa8a5fb05e 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:19:12 compute-0 nova_compute[259850]: 2025-10-11 04:19:12.092 2 DEBUG nova.scheduler.client.report [None req-856e9bed-b4d9-4f2a-9523-b8fa8a5fb05e 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:19:12 compute-0 nova_compute[259850]: 2025-10-11 04:19:12.111 2 DEBUG oslo_concurrency.lockutils [None req-856e9bed-b4d9-4f2a-9523-b8fa8a5fb05e 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:19:12 compute-0 nova_compute[259850]: 2025-10-11 04:19:12.131 2 INFO nova.scheduler.client.report [None req-856e9bed-b4d9-4f2a-9523-b8fa8a5fb05e 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Deleted allocations for instance a5deabc3-2396-4c23-81c2-959d49bb6da1
Oct 11 04:19:12 compute-0 nova_compute[259850]: 2025-10-11 04:19:12.195 2 DEBUG oslo_concurrency.lockutils [None req-856e9bed-b4d9-4f2a-9523-b8fa8a5fb05e 2a330a845d62440c871f80eda2546881 09ba33ef4bd447699d74946c58839b2d - - default default] Lock "a5deabc3-2396-4c23-81c2-959d49bb6da1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:19:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:19:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3666424886' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:19:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3666424886' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:12 compute-0 ceph-mon[74273]: pgmap v1667: 305 pgs: 305 active+clean; 352 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 554 KiB/s rd, 6.5 KiB/s wr, 79 op/s
Oct 11 04:19:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2440394384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:19:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3666424886' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3666424886' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:13 compute-0 nova_compute[259850]: 2025-10-11 04:19:13.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 169 MiB data, 534 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 4.4 KiB/s wr, 110 op/s
Oct 11 04:19:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:19:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1138330263' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:19:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1138330263' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:14 compute-0 nova_compute[259850]: 2025-10-11 04:19:14.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:14 compute-0 ceph-mon[74273]: pgmap v1668: 305 pgs: 305 active+clean; 169 MiB data, 534 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 4.4 KiB/s wr, 110 op/s
Oct 11 04:19:14 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1138330263' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:14 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1138330263' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e391 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:19:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e391 do_prune osdmap full prune enabled
Oct 11 04:19:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e392 e392: 3 total, 3 up, 3 in
Oct 11 04:19:15 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e392: 3 total, 3 up, 3 in
Oct 11 04:19:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 169 MiB data, 534 MiB used, 59 GiB / 60 GiB avail; 273 KiB/s rd, 4.7 KiB/s wr, 116 op/s
Oct 11 04:19:16 compute-0 ceph-mon[74273]: osdmap e392: 3 total, 3 up, 3 in
Oct 11 04:19:16 compute-0 nova_compute[259850]: 2025-10-11 04:19:16.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:17 compute-0 ceph-mon[74273]: pgmap v1670: 305 pgs: 305 active+clean; 169 MiB data, 534 MiB used, 59 GiB / 60 GiB avail; 273 KiB/s rd, 4.7 KiB/s wr, 116 op/s
Oct 11 04:19:17 compute-0 nova_compute[259850]: 2025-10-11 04:19:17.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:17 compute-0 podman[299338]: 2025-10-11 04:19:17.389895923 +0000 UTC m=+0.084377750 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 11 04:19:17 compute-0 podman[299337]: 2025-10-11 04:19:17.39088145 +0000 UTC m=+0.089181885 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd)
Oct 11 04:19:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 305 active+clean; 163 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 190 KiB/s rd, 1.9 KiB/s wr, 57 op/s
Oct 11 04:19:18 compute-0 nova_compute[259850]: 2025-10-11 04:19:18.010 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760156343.0093982, 68b44a2f-a694-4458-9a40-89e194a02624 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:19:18 compute-0 nova_compute[259850]: 2025-10-11 04:19:18.011 2 INFO nova.compute.manager [-] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] VM Stopped (Lifecycle Event)
Oct 11 04:19:18 compute-0 nova_compute[259850]: 2025-10-11 04:19:18.032 2 DEBUG nova.compute.manager [None req-64918ac2-0dee-43f1-a2d5-6091350900bb - - - - - -] [instance: 68b44a2f-a694-4458-9a40-89e194a02624] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:19:18 compute-0 nova_compute[259850]: 2025-10-11 04:19:18.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:19 compute-0 ceph-mon[74273]: pgmap v1671: 305 pgs: 305 active+clean; 163 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 190 KiB/s rd, 1.9 KiB/s wr, 57 op/s
Oct 11 04:19:19 compute-0 nova_compute[259850]: 2025-10-11 04:19:19.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.2 KiB/s wr, 61 op/s
Oct 11 04:19:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:19:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:19:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:19:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:19:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:19:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:19:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:19:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:19:20
Oct 11 04:19:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:19:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:19:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'images', 'vms', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'backups']
Oct 11 04:19:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:19:21 compute-0 ceph-mon[74273]: pgmap v1672: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.2 KiB/s wr, 61 op/s
Oct 11 04:19:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:19:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:19:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:19:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:19:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:19:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:19:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:19:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:19:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:19:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:19:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.2 KiB/s wr, 61 op/s
Oct 11 04:19:22 compute-0 nova_compute[259850]: 2025-10-11 04:19:22.022 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760156347.018891, 8bfdc99e-9df9-4825-a631-7cd07eff5dfb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:19:22 compute-0 nova_compute[259850]: 2025-10-11 04:19:22.022 2 INFO nova.compute.manager [-] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] VM Stopped (Lifecycle Event)
Oct 11 04:19:22 compute-0 nova_compute[259850]: 2025-10-11 04:19:22.044 2 DEBUG nova.compute.manager [None req-16eefb44-7d04-4f77-a548-09800c62b27c - - - - - -] [instance: 8bfdc99e-9df9-4825-a631-7cd07eff5dfb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:19:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:22.968 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:19:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:22.968 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:19:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:19:22.968 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:19:23 compute-0 ceph-mon[74273]: pgmap v1673: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.2 KiB/s wr, 61 op/s
Oct 11 04:19:23 compute-0 nova_compute[259850]: 2025-10-11 04:19:23.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 716 B/s wr, 20 op/s
Oct 11 04:19:24 compute-0 nova_compute[259850]: 2025-10-11 04:19:24.732 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760156349.7308457, a5deabc3-2396-4c23-81c2-959d49bb6da1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:19:24 compute-0 nova_compute[259850]: 2025-10-11 04:19:24.733 2 INFO nova.compute.manager [-] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] VM Stopped (Lifecycle Event)
Oct 11 04:19:24 compute-0 nova_compute[259850]: 2025-10-11 04:19:24.755 2 DEBUG nova.compute.manager [None req-2fab4b79-c91c-40d6-b481-e7c1affa583d - - - - - -] [instance: a5deabc3-2396-4c23-81c2-959d49bb6da1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:19:24 compute-0 nova_compute[259850]: 2025-10-11 04:19:24.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:19:25 compute-0 ceph-mon[74273]: pgmap v1674: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 716 B/s wr, 20 op/s
Oct 11 04:19:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 663 B/s wr, 18 op/s
Oct 11 04:19:27 compute-0 ceph-mon[74273]: pgmap v1675: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 663 B/s wr, 18 op/s
Oct 11 04:19:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 938 B/s wr, 17 op/s
Oct 11 04:19:28 compute-0 nova_compute[259850]: 2025-10-11 04:19:28.058 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:19:28 compute-0 nova_compute[259850]: 2025-10-11 04:19:28.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:19:28 compute-0 nova_compute[259850]: 2025-10-11 04:19:28.059 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:19:28 compute-0 nova_compute[259850]: 2025-10-11 04:19:28.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e392 do_prune osdmap full prune enabled
Oct 11 04:19:29 compute-0 ceph-mon[74273]: pgmap v1676: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 938 B/s wr, 17 op/s
Oct 11 04:19:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e393 e393: 3 total, 3 up, 3 in
Oct 11 04:19:29 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e393: 3 total, 3 up, 3 in
Oct 11 04:19:29 compute-0 nova_compute[259850]: 2025-10-11 04:19:29.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 409 B/s wr, 1 op/s
Oct 11 04:19:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:19:30 compute-0 nova_compute[259850]: 2025-10-11 04:19:30.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:19:30 compute-0 nova_compute[259850]: 2025-10-11 04:19:30.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:19:30 compute-0 ceph-mon[74273]: osdmap e393: 3 total, 3 up, 3 in
Oct 11 04:19:30 compute-0 nova_compute[259850]: 2025-10-11 04:19:30.211 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:19:30 compute-0 nova_compute[259850]: 2025-10-11 04:19:30.211 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:19:30 compute-0 nova_compute[259850]: 2025-10-11 04:19:30.212 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:19:30 compute-0 nova_compute[259850]: 2025-10-11 04:19:30.235 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:19:30 compute-0 nova_compute[259850]: 2025-10-11 04:19:30.235 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:19:30 compute-0 nova_compute[259850]: 2025-10-11 04:19:30.236 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:19:30 compute-0 nova_compute[259850]: 2025-10-11 04:19:30.236 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:19:30 compute-0 nova_compute[259850]: 2025-10-11 04:19:30.236 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:19:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:19:30 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1702358348' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:19:30 compute-0 nova_compute[259850]: 2025-10-11 04:19:30.719 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:19:30 compute-0 nova_compute[259850]: 2025-10-11 04:19:30.935 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:19:30 compute-0 nova_compute[259850]: 2025-10-11 04:19:30.937 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4357MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:19:30 compute-0 nova_compute[259850]: 2025-10-11 04:19:30.937 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:19:30 compute-0 nova_compute[259850]: 2025-10-11 04:19:30.938 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:19:31 compute-0 nova_compute[259850]: 2025-10-11 04:19:31.020 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:19:31 compute-0 nova_compute[259850]: 2025-10-11 04:19:31.021 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:19:31 compute-0 nova_compute[259850]: 2025-10-11 04:19:31.039 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Refreshing inventories for resource provider 108a560b-89c0-4926-a2fc-cb749a6f8386 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 04:19:31 compute-0 nova_compute[259850]: 2025-10-11 04:19:31.063 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Updating ProviderTree inventory for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 04:19:31 compute-0 nova_compute[259850]: 2025-10-11 04:19:31.063 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Updating inventory in ProviderTree for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 04:19:31 compute-0 nova_compute[259850]: 2025-10-11 04:19:31.080 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Refreshing aggregate associations for resource provider 108a560b-89c0-4926-a2fc-cb749a6f8386, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 04:19:31 compute-0 nova_compute[259850]: 2025-10-11 04:19:31.103 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Refreshing trait associations for resource provider 108a560b-89c0-4926-a2fc-cb749a6f8386, traits: COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE41,COMPUTE_NODE,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_BMI2,HW_CPU_X86_MMX,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_ABM,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 04:19:31 compute-0 nova_compute[259850]: 2025-10-11 04:19:31.127 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:19:31 compute-0 ceph-mon[74273]: pgmap v1678: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 409 B/s wr, 1 op/s
Oct 11 04:19:31 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1702358348' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:19:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:19:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:19:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:19:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:19:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:19:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:19:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003469509018688546 of space, bias 1.0, pg target 0.10408527056065639 quantized to 32 (current 32)
Oct 11 04:19:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:19:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:19:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:19:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct 11 04:19:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:19:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:19:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:19:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:19:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:19:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:19:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:19:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:19:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:19:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:19:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:19:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:19:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:19:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1227767862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:19:31 compute-0 nova_compute[259850]: 2025-10-11 04:19:31.572 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:19:31 compute-0 nova_compute[259850]: 2025-10-11 04:19:31.580 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:19:31 compute-0 nova_compute[259850]: 2025-10-11 04:19:31.600 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:19:31 compute-0 nova_compute[259850]: 2025-10-11 04:19:31.626 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:19:31 compute-0 nova_compute[259850]: 2025-10-11 04:19:31.627 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:19:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 409 B/s wr, 1 op/s
Oct 11 04:19:32 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1227767862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:19:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:19:32 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3073287781' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:19:32 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3073287781' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:32 compute-0 podman[299420]: 2025-10-11 04:19:32.443258666 +0000 UTC m=+0.151320995 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 11 04:19:33 compute-0 ceph-mon[74273]: pgmap v1679: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 409 B/s wr, 1 op/s
Oct 11 04:19:33 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3073287781' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:33 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3073287781' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:33 compute-0 nova_compute[259850]: 2025-10-11 04:19:33.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:33 compute-0 nova_compute[259850]: 2025-10-11 04:19:33.474 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:19:33 compute-0 nova_compute[259850]: 2025-10-11 04:19:33.475 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:19:33 compute-0 nova_compute[259850]: 2025-10-11 04:19:33.475 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:19:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:19:33 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3192870079' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:19:33 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3192870079' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.1 KiB/s wr, 31 op/s
Oct 11 04:19:34 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3192870079' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:34 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3192870079' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:34 compute-0 nova_compute[259850]: 2025-10-11 04:19:34.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:19:34 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1831772092' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:19:34 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1831772092' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:19:35 compute-0 ceph-mon[74273]: pgmap v1680: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.1 KiB/s wr, 31 op/s
Oct 11 04:19:35 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1831772092' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:35 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1831772092' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.1 KiB/s wr, 31 op/s
Oct 11 04:19:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:19:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3532886420' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:19:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3532886420' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:36 compute-0 podman[299446]: 2025-10-11 04:19:36.38168218 +0000 UTC m=+0.081311453 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Oct 11 04:19:37 compute-0 nova_compute[259850]: 2025-10-11 04:19:37.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:19:37 compute-0 ceph-mon[74273]: pgmap v1681: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.1 KiB/s wr, 31 op/s
Oct 11 04:19:37 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3532886420' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:37 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3532886420' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:19:37 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1569997143' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:19:37 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1569997143' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 2.7 KiB/s wr, 52 op/s
Oct 11 04:19:38 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1569997143' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:38 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1569997143' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:38 compute-0 nova_compute[259850]: 2025-10-11 04:19:38.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e393 do_prune osdmap full prune enabled
Oct 11 04:19:39 compute-0 ceph-mon[74273]: pgmap v1682: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 2.7 KiB/s wr, 52 op/s
Oct 11 04:19:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e394 e394: 3 total, 3 up, 3 in
Oct 11 04:19:39 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e394: 3 total, 3 up, 3 in
Oct 11 04:19:39 compute-0 nova_compute[259850]: 2025-10-11 04:19:39.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 3.9 KiB/s wr, 94 op/s
Oct 11 04:19:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:19:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:19:40 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3050402380' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:19:40 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3050402380' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:40 compute-0 ceph-mon[74273]: osdmap e394: 3 total, 3 up, 3 in
Oct 11 04:19:40 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3050402380' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:40 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3050402380' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:41 compute-0 ceph-mon[74273]: pgmap v1684: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 3.9 KiB/s wr, 94 op/s
Oct 11 04:19:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 3.9 KiB/s wr, 94 op/s
Oct 11 04:19:43 compute-0 ceph-mon[74273]: pgmap v1685: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 3.9 KiB/s wr, 94 op/s
Oct 11 04:19:43 compute-0 nova_compute[259850]: 2025-10-11 04:19:43.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 2.1 KiB/s wr, 91 op/s
Oct 11 04:19:44 compute-0 nova_compute[259850]: 2025-10-11 04:19:44.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:19:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e394 do_prune osdmap full prune enabled
Oct 11 04:19:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e395 e395: 3 total, 3 up, 3 in
Oct 11 04:19:45 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e395: 3 total, 3 up, 3 in
Oct 11 04:19:45 compute-0 ceph-mon[74273]: pgmap v1686: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 2.1 KiB/s wr, 91 op/s
Oct 11 04:19:45 compute-0 ceph-mon[74273]: osdmap e395: 3 total, 3 up, 3 in
Oct 11 04:19:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 2.6 KiB/s wr, 85 op/s
Oct 11 04:19:47 compute-0 ceph-mon[74273]: pgmap v1688: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 2.6 KiB/s wr, 85 op/s
Oct 11 04:19:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.3 KiB/s wr, 36 op/s
Oct 11 04:19:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:19:48 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4117327793' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:19:48 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4117327793' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:48 compute-0 podman[299467]: 2025-10-11 04:19:48.37710243 +0000 UTC m=+0.071161246 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct 11 04:19:48 compute-0 podman[299466]: 2025-10-11 04:19:48.386393353 +0000 UTC m=+0.083713111 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd)
Oct 11 04:19:48 compute-0 nova_compute[259850]: 2025-10-11 04:19:48.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:49 compute-0 ceph-mon[74273]: pgmap v1689: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.3 KiB/s wr, 36 op/s
Oct 11 04:19:49 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4117327793' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:49 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4117327793' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:19:49 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2635215387' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:19:49 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2635215387' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:49 compute-0 nova_compute[259850]: 2025-10-11 04:19:49.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 3.3 KiB/s wr, 35 op/s
Oct 11 04:19:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:19:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2635215387' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2635215387' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:19:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2952664957' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:19:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2952664957' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:19:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:19:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:19:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:19:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:19:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:19:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:19:51 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2031364555' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:19:51 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2031364555' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:51 compute-0 ceph-mon[74273]: pgmap v1690: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 3.3 KiB/s wr, 35 op/s
Oct 11 04:19:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2952664957' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2952664957' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2031364555' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2031364555' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 3.3 KiB/s wr, 35 op/s
Oct 11 04:19:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:19:52 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/235197899' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:19:52 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/235197899' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:53 compute-0 ceph-mon[74273]: pgmap v1691: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 3.3 KiB/s wr, 35 op/s
Oct 11 04:19:53 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/235197899' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:53 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/235197899' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:53 compute-0 nova_compute[259850]: 2025-10-11 04:19:53.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:19:53 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2995808143' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:19:53 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2995808143' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.6 KiB/s wr, 71 op/s
Oct 11 04:19:54 compute-0 sudo[299504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:19:54 compute-0 sudo[299504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:19:54 compute-0 sudo[299504]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:54 compute-0 sudo[299529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:19:54 compute-0 sudo[299529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:19:54 compute-0 sudo[299529]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:54 compute-0 sudo[299554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:19:54 compute-0 sudo[299554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:19:54 compute-0 sudo[299554]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:54 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2995808143' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:54 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2995808143' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:54 compute-0 sudo[299579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:19:54 compute-0 sudo[299579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:19:54 compute-0 sudo[299579]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:54 compute-0 nova_compute[259850]: 2025-10-11 04:19:54.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:19:54 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:19:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:19:54 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:19:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:19:54 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:19:54 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev eb3c247d-c11f-4e99-9a24-2a7253e66350 does not exist
Oct 11 04:19:54 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev ad85b97b-9510-4153-9e1e-10f00801f629 does not exist
Oct 11 04:19:54 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 7292d9e1-d269-4ba1-88f0-b03bf5b9d9b1 does not exist
Oct 11 04:19:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:19:54 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:19:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:19:54 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:19:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:19:54 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:19:54 compute-0 sudo[299635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:19:54 compute-0 sudo[299635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:19:54 compute-0 sudo[299635]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:19:55 compute-0 sudo[299660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:19:55 compute-0 sudo[299660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:19:55 compute-0 sudo[299660]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:19:55 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2115662753' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:19:55 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2115662753' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:55 compute-0 sudo[299685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:19:55 compute-0 sudo[299685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:19:55 compute-0 sudo[299685]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:55 compute-0 sudo[299710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:19:55 compute-0 sudo[299710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:19:55 compute-0 ceph-mon[74273]: pgmap v1692: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.6 KiB/s wr, 71 op/s
Oct 11 04:19:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:19:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:19:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:19:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:19:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:19:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:19:55 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2115662753' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:19:55 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2115662753' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:19:55 compute-0 ovn_controller[152025]: 2025-10-11T04:19:55Z|00261|memory_trim|INFO|Detected inactivity (last active 30012 ms ago): trimming memory
Oct 11 04:19:55 compute-0 podman[299776]: 2025-10-11 04:19:55.681393463 +0000 UTC m=+0.071547737 container create ff749d26a0068e3073bb7b1e80a158ebf2b742c06e45435529f04fb7a493c1d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_maxwell, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 04:19:55 compute-0 systemd[1]: Started libpod-conmon-ff749d26a0068e3073bb7b1e80a158ebf2b742c06e45435529f04fb7a493c1d0.scope.
Oct 11 04:19:55 compute-0 podman[299776]: 2025-10-11 04:19:55.650068206 +0000 UTC m=+0.040222500 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:19:55 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:19:55 compute-0 podman[299776]: 2025-10-11 04:19:55.790347037 +0000 UTC m=+0.180501411 container init ff749d26a0068e3073bb7b1e80a158ebf2b742c06e45435529f04fb7a493c1d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 04:19:55 compute-0 podman[299776]: 2025-10-11 04:19:55.801934475 +0000 UTC m=+0.192088749 container start ff749d26a0068e3073bb7b1e80a158ebf2b742c06e45435529f04fb7a493c1d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_maxwell, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 04:19:55 compute-0 podman[299776]: 2025-10-11 04:19:55.805086855 +0000 UTC m=+0.195241229 container attach ff749d26a0068e3073bb7b1e80a158ebf2b742c06e45435529f04fb7a493c1d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:19:55 compute-0 cranky_maxwell[299792]: 167 167
Oct 11 04:19:55 compute-0 systemd[1]: libpod-ff749d26a0068e3073bb7b1e80a158ebf2b742c06e45435529f04fb7a493c1d0.scope: Deactivated successfully.
Oct 11 04:19:55 compute-0 conmon[299792]: conmon ff749d26a0068e3073bb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff749d26a0068e3073bb7b1e80a158ebf2b742c06e45435529f04fb7a493c1d0.scope/container/memory.events
Oct 11 04:19:55 compute-0 podman[299776]: 2025-10-11 04:19:55.811718872 +0000 UTC m=+0.201873176 container died ff749d26a0068e3073bb7b1e80a158ebf2b742c06e45435529f04fb7a493c1d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:19:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5e114bb0f2fa1c46201c12753b90f4f827b75d86d8d5d6146ea935ac91486a2-merged.mount: Deactivated successfully.
Oct 11 04:19:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 3.3 KiB/s wr, 66 op/s
Oct 11 04:19:55 compute-0 podman[299776]: 2025-10-11 04:19:55.86460775 +0000 UTC m=+0.254762054 container remove ff749d26a0068e3073bb7b1e80a158ebf2b742c06e45435529f04fb7a493c1d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:19:55 compute-0 systemd[1]: libpod-conmon-ff749d26a0068e3073bb7b1e80a158ebf2b742c06e45435529f04fb7a493c1d0.scope: Deactivated successfully.
Oct 11 04:19:56 compute-0 podman[299816]: 2025-10-11 04:19:56.112718634 +0000 UTC m=+0.068010136 container create c33ba432b22e72bd531e67b34ebb5a76f0cb2a5c733490a7be6a3b185b4a7a3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bell, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:19:56 compute-0 systemd[1]: Started libpod-conmon-c33ba432b22e72bd531e67b34ebb5a76f0cb2a5c733490a7be6a3b185b4a7a3d.scope.
Oct 11 04:19:56 compute-0 podman[299816]: 2025-10-11 04:19:56.08607032 +0000 UTC m=+0.041361862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:19:56 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c496889a09f08e5acd2bc8c5af1fdd517c11b98b9fe5b2dd3143ec39f9ef215/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c496889a09f08e5acd2bc8c5af1fdd517c11b98b9fe5b2dd3143ec39f9ef215/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c496889a09f08e5acd2bc8c5af1fdd517c11b98b9fe5b2dd3143ec39f9ef215/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c496889a09f08e5acd2bc8c5af1fdd517c11b98b9fe5b2dd3143ec39f9ef215/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c496889a09f08e5acd2bc8c5af1fdd517c11b98b9fe5b2dd3143ec39f9ef215/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:19:56 compute-0 podman[299816]: 2025-10-11 04:19:56.220639 +0000 UTC m=+0.175930532 container init c33ba432b22e72bd531e67b34ebb5a76f0cb2a5c733490a7be6a3b185b4a7a3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bell, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 04:19:56 compute-0 podman[299816]: 2025-10-11 04:19:56.237066435 +0000 UTC m=+0.192357937 container start c33ba432b22e72bd531e67b34ebb5a76f0cb2a5c733490a7be6a3b185b4a7a3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 04:19:56 compute-0 podman[299816]: 2025-10-11 04:19:56.241868921 +0000 UTC m=+0.197160453 container attach c33ba432b22e72bd531e67b34ebb5a76f0cb2a5c733490a7be6a3b185b4a7a3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:19:57 compute-0 ceph-mon[74273]: pgmap v1693: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 3.3 KiB/s wr, 66 op/s
Oct 11 04:19:57 compute-0 epic_bell[299832]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:19:57 compute-0 epic_bell[299832]: --> relative data size: 1.0
Oct 11 04:19:57 compute-0 epic_bell[299832]: --> All data devices are unavailable
Oct 11 04:19:57 compute-0 systemd[1]: libpod-c33ba432b22e72bd531e67b34ebb5a76f0cb2a5c733490a7be6a3b185b4a7a3d.scope: Deactivated successfully.
Oct 11 04:19:57 compute-0 systemd[1]: libpod-c33ba432b22e72bd531e67b34ebb5a76f0cb2a5c733490a7be6a3b185b4a7a3d.scope: Consumed 1.012s CPU time.
Oct 11 04:19:57 compute-0 podman[299816]: 2025-10-11 04:19:57.351320742 +0000 UTC m=+1.306612234 container died c33ba432b22e72bd531e67b34ebb5a76f0cb2a5c733490a7be6a3b185b4a7a3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:19:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c496889a09f08e5acd2bc8c5af1fdd517c11b98b9fe5b2dd3143ec39f9ef215-merged.mount: Deactivated successfully.
Oct 11 04:19:57 compute-0 podman[299816]: 2025-10-11 04:19:57.42788419 +0000 UTC m=+1.383175662 container remove c33ba432b22e72bd531e67b34ebb5a76f0cb2a5c733490a7be6a3b185b4a7a3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 04:19:57 compute-0 systemd[1]: libpod-conmon-c33ba432b22e72bd531e67b34ebb5a76f0cb2a5c733490a7be6a3b185b4a7a3d.scope: Deactivated successfully.
Oct 11 04:19:57 compute-0 sudo[299710]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:57 compute-0 sudo[299874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:19:57 compute-0 sudo[299874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:19:57 compute-0 sudo[299874]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:57 compute-0 sudo[299899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:19:57 compute-0 sudo[299899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:19:57 compute-0 sudo[299899]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:57 compute-0 sudo[299924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:19:57 compute-0 sudo[299924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:19:57 compute-0 sudo[299924]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:57 compute-0 sudo[299949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 04:19:57 compute-0 sudo[299949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:19:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.0 KiB/s wr, 63 op/s
Oct 11 04:19:58 compute-0 podman[300015]: 2025-10-11 04:19:58.311125496 +0000 UTC m=+0.078559815 container create 32fd030f9343b13ca5d252b6db77d2ea7eff8b99cf7d00ca5b4580aeba8dc07d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_tesla, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 04:19:58 compute-0 systemd[1]: Started libpod-conmon-32fd030f9343b13ca5d252b6db77d2ea7eff8b99cf7d00ca5b4580aeba8dc07d.scope.
Oct 11 04:19:58 compute-0 podman[300015]: 2025-10-11 04:19:58.2812388 +0000 UTC m=+0.048673179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:19:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:19:58 compute-0 podman[300015]: 2025-10-11 04:19:58.410789397 +0000 UTC m=+0.178223936 container init 32fd030f9343b13ca5d252b6db77d2ea7eff8b99cf7d00ca5b4580aeba8dc07d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 04:19:58 compute-0 podman[300015]: 2025-10-11 04:19:58.422650213 +0000 UTC m=+0.190084522 container start 32fd030f9343b13ca5d252b6db77d2ea7eff8b99cf7d00ca5b4580aeba8dc07d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_tesla, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:19:58 compute-0 podman[300015]: 2025-10-11 04:19:58.426342898 +0000 UTC m=+0.193777217 container attach 32fd030f9343b13ca5d252b6db77d2ea7eff8b99cf7d00ca5b4580aeba8dc07d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_tesla, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:19:58 compute-0 friendly_tesla[300032]: 167 167
Oct 11 04:19:58 compute-0 systemd[1]: libpod-32fd030f9343b13ca5d252b6db77d2ea7eff8b99cf7d00ca5b4580aeba8dc07d.scope: Deactivated successfully.
Oct 11 04:19:58 compute-0 podman[300015]: 2025-10-11 04:19:58.431918066 +0000 UTC m=+0.199352385 container died 32fd030f9343b13ca5d252b6db77d2ea7eff8b99cf7d00ca5b4580aeba8dc07d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_tesla, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 04:19:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-72f4596617590ebca79cf200fa9c5170d3d0570922bc8faf49db92e67c3b8512-merged.mount: Deactivated successfully.
Oct 11 04:19:58 compute-0 podman[300015]: 2025-10-11 04:19:58.483361362 +0000 UTC m=+0.250795651 container remove 32fd030f9343b13ca5d252b6db77d2ea7eff8b99cf7d00ca5b4580aeba8dc07d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_tesla, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:19:58 compute-0 systemd[1]: libpod-conmon-32fd030f9343b13ca5d252b6db77d2ea7eff8b99cf7d00ca5b4580aeba8dc07d.scope: Deactivated successfully.
Oct 11 04:19:58 compute-0 nova_compute[259850]: 2025-10-11 04:19:58.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:58 compute-0 podman[300056]: 2025-10-11 04:19:58.743698213 +0000 UTC m=+0.069786617 container create 87c8b689f366eec8521e66dad9aa778bd6d586c7c09d2d4f5e694e68eaa7e981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:19:58 compute-0 systemd[1]: Started libpod-conmon-87c8b689f366eec8521e66dad9aa778bd6d586c7c09d2d4f5e694e68eaa7e981.scope.
Oct 11 04:19:58 compute-0 podman[300056]: 2025-10-11 04:19:58.714784194 +0000 UTC m=+0.040872648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:19:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:19:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8cae7e2ecfc130494a0c8367cb921b86082fa9188a0f06a116f31a3923fb2ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:19:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8cae7e2ecfc130494a0c8367cb921b86082fa9188a0f06a116f31a3923fb2ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:19:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8cae7e2ecfc130494a0c8367cb921b86082fa9188a0f06a116f31a3923fb2ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:19:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8cae7e2ecfc130494a0c8367cb921b86082fa9188a0f06a116f31a3923fb2ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:19:58 compute-0 podman[300056]: 2025-10-11 04:19:58.867701124 +0000 UTC m=+0.193789508 container init 87c8b689f366eec8521e66dad9aa778bd6d586c7c09d2d4f5e694e68eaa7e981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 04:19:58 compute-0 podman[300056]: 2025-10-11 04:19:58.877194303 +0000 UTC m=+0.203282707 container start 87c8b689f366eec8521e66dad9aa778bd6d586c7c09d2d4f5e694e68eaa7e981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 04:19:58 compute-0 podman[300056]: 2025-10-11 04:19:58.881072552 +0000 UTC m=+0.207160916 container attach 87c8b689f366eec8521e66dad9aa778bd6d586c7c09d2d4f5e694e68eaa7e981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:19:59 compute-0 ceph-mon[74273]: pgmap v1694: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.0 KiB/s wr, 63 op/s
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]: {
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:     "0": [
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:         {
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "devices": [
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "/dev/loop3"
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             ],
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "lv_name": "ceph_lv0",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "lv_size": "21470642176",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "name": "ceph_lv0",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "tags": {
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.cluster_name": "ceph",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.crush_device_class": "",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.encrypted": "0",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.osd_id": "0",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.type": "block",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.vdo": "0"
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             },
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "type": "block",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "vg_name": "ceph_vg0"
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:         }
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:     ],
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:     "1": [
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:         {
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "devices": [
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "/dev/loop4"
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             ],
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "lv_name": "ceph_lv1",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "lv_size": "21470642176",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "name": "ceph_lv1",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "tags": {
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.cluster_name": "ceph",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.crush_device_class": "",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.encrypted": "0",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.osd_id": "1",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.type": "block",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.vdo": "0"
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             },
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "type": "block",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "vg_name": "ceph_vg1"
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:         }
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:     ],
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:     "2": [
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:         {
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "devices": [
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "/dev/loop5"
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             ],
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "lv_name": "ceph_lv2",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "lv_size": "21470642176",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "name": "ceph_lv2",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "tags": {
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.cluster_name": "ceph",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.crush_device_class": "",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.encrypted": "0",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.osd_id": "2",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.type": "block",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:                 "ceph.vdo": "0"
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             },
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "type": "block",
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:             "vg_name": "ceph_vg2"
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:         }
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]:     ]
Oct 11 04:19:59 compute-0 romantic_sinoussi[300073]: }
Oct 11 04:19:59 compute-0 systemd[1]: libpod-87c8b689f366eec8521e66dad9aa778bd6d586c7c09d2d4f5e694e68eaa7e981.scope: Deactivated successfully.
Oct 11 04:19:59 compute-0 podman[300056]: 2025-10-11 04:19:59.648112239 +0000 UTC m=+0.974200603 container died 87c8b689f366eec8521e66dad9aa778bd6d586c7c09d2d4f5e694e68eaa7e981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:19:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8cae7e2ecfc130494a0c8367cb921b86082fa9188a0f06a116f31a3923fb2ba-merged.mount: Deactivated successfully.
Oct 11 04:19:59 compute-0 podman[300056]: 2025-10-11 04:19:59.712181653 +0000 UTC m=+1.038270017 container remove 87c8b689f366eec8521e66dad9aa778bd6d586c7c09d2d4f5e694e68eaa7e981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:19:59 compute-0 systemd[1]: libpod-conmon-87c8b689f366eec8521e66dad9aa778bd6d586c7c09d2d4f5e694e68eaa7e981.scope: Deactivated successfully.
Oct 11 04:19:59 compute-0 sudo[299949]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:59 compute-0 sudo[300094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:19:59 compute-0 sudo[300094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:19:59 compute-0 sudo[300094]: pam_unix(sudo:session): session closed for user root
Oct 11 04:19:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.6 KiB/s wr, 82 op/s
Oct 11 04:19:59 compute-0 nova_compute[259850]: 2025-10-11 04:19:59.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:19:59 compute-0 sudo[300119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:19:59 compute-0 sudo[300119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:19:59 compute-0 sudo[300119]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:00 compute-0 sudo[300144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:20:00 compute-0 sudo[300144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:20:00 compute-0 sudo[300144]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:20:00 compute-0 sudo[300169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 04:20:00 compute-0 sudo[300169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:20:00 compute-0 podman[300233]: 2025-10-11 04:20:00.589296586 +0000 UTC m=+0.071361791 container create 68ee37f5c4f2731f10a3d75f52ebfed96126e1dcd92732ef6876478a1d5684ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 04:20:00 compute-0 systemd[1]: Started libpod-conmon-68ee37f5c4f2731f10a3d75f52ebfed96126e1dcd92732ef6876478a1d5684ee.scope.
Oct 11 04:20:00 compute-0 podman[300233]: 2025-10-11 04:20:00.560515931 +0000 UTC m=+0.042581176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:20:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:20:00 compute-0 podman[300233]: 2025-10-11 04:20:00.69608508 +0000 UTC m=+0.178150255 container init 68ee37f5c4f2731f10a3d75f52ebfed96126e1dcd92732ef6876478a1d5684ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lovelace, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:20:00 compute-0 podman[300233]: 2025-10-11 04:20:00.703940242 +0000 UTC m=+0.186005417 container start 68ee37f5c4f2731f10a3d75f52ebfed96126e1dcd92732ef6876478a1d5684ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lovelace, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:20:00 compute-0 podman[300233]: 2025-10-11 04:20:00.707347969 +0000 UTC m=+0.189413224 container attach 68ee37f5c4f2731f10a3d75f52ebfed96126e1dcd92732ef6876478a1d5684ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lovelace, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 04:20:00 compute-0 eager_lovelace[300249]: 167 167
Oct 11 04:20:00 compute-0 systemd[1]: libpod-68ee37f5c4f2731f10a3d75f52ebfed96126e1dcd92732ef6876478a1d5684ee.scope: Deactivated successfully.
Oct 11 04:20:00 compute-0 conmon[300249]: conmon 68ee37f5c4f2731f10a3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-68ee37f5c4f2731f10a3d75f52ebfed96126e1dcd92732ef6876478a1d5684ee.scope/container/memory.events
Oct 11 04:20:00 compute-0 podman[300233]: 2025-10-11 04:20:00.714260934 +0000 UTC m=+0.196326139 container died 68ee37f5c4f2731f10a3d75f52ebfed96126e1dcd92732ef6876478a1d5684ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lovelace, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:20:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-953215bcf3eac461ecdd57756c546572cfa61db1bd5c7d8d3946cabcf6cce827-merged.mount: Deactivated successfully.
Oct 11 04:20:00 compute-0 podman[300233]: 2025-10-11 04:20:00.763863229 +0000 UTC m=+0.245928414 container remove 68ee37f5c4f2731f10a3d75f52ebfed96126e1dcd92732ef6876478a1d5684ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 04:20:00 compute-0 systemd[1]: libpod-conmon-68ee37f5c4f2731f10a3d75f52ebfed96126e1dcd92732ef6876478a1d5684ee.scope: Deactivated successfully.
Oct 11 04:20:00 compute-0 podman[300273]: 2025-10-11 04:20:00.979043821 +0000 UTC m=+0.056308695 container create e6664d4b114ac5d6f66446c1658a76527d14030332f30c4df5a40eb3f09583c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mayer, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:20:01 compute-0 podman[300273]: 2025-10-11 04:20:00.950088831 +0000 UTC m=+0.027353805 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:20:01 compute-0 systemd[1]: Started libpod-conmon-e6664d4b114ac5d6f66446c1658a76527d14030332f30c4df5a40eb3f09583c7.scope.
Oct 11 04:20:01 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:20:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c69e24fa6be1b68d1fbad743aedfccae7865f5eef3afd6598a46ffe14bd5f61f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:20:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c69e24fa6be1b68d1fbad743aedfccae7865f5eef3afd6598a46ffe14bd5f61f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:20:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c69e24fa6be1b68d1fbad743aedfccae7865f5eef3afd6598a46ffe14bd5f61f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:20:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c69e24fa6be1b68d1fbad743aedfccae7865f5eef3afd6598a46ffe14bd5f61f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:20:01 compute-0 podman[300273]: 2025-10-11 04:20:01.114896028 +0000 UTC m=+0.192160942 container init e6664d4b114ac5d6f66446c1658a76527d14030332f30c4df5a40eb3f09583c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 04:20:01 compute-0 podman[300273]: 2025-10-11 04:20:01.134033649 +0000 UTC m=+0.211298543 container start e6664d4b114ac5d6f66446c1658a76527d14030332f30c4df5a40eb3f09583c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mayer, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:20:01 compute-0 podman[300273]: 2025-10-11 04:20:01.13829099 +0000 UTC m=+0.215555884 container attach e6664d4b114ac5d6f66446c1658a76527d14030332f30c4df5a40eb3f09583c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mayer, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 04:20:01 compute-0 ceph-mon[74273]: pgmap v1695: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.6 KiB/s wr, 82 op/s
Oct 11 04:20:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 1.5 KiB/s wr, 79 op/s
Oct 11 04:20:02 compute-0 practical_mayer[300289]: {
Oct 11 04:20:02 compute-0 practical_mayer[300289]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 04:20:02 compute-0 practical_mayer[300289]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:20:02 compute-0 practical_mayer[300289]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:20:02 compute-0 practical_mayer[300289]:         "osd_id": 1,
Oct 11 04:20:02 compute-0 practical_mayer[300289]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:20:02 compute-0 practical_mayer[300289]:         "type": "bluestore"
Oct 11 04:20:02 compute-0 practical_mayer[300289]:     },
Oct 11 04:20:02 compute-0 practical_mayer[300289]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 04:20:02 compute-0 practical_mayer[300289]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:20:02 compute-0 practical_mayer[300289]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:20:02 compute-0 practical_mayer[300289]:         "osd_id": 2,
Oct 11 04:20:02 compute-0 practical_mayer[300289]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:20:02 compute-0 practical_mayer[300289]:         "type": "bluestore"
Oct 11 04:20:02 compute-0 practical_mayer[300289]:     },
Oct 11 04:20:02 compute-0 practical_mayer[300289]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 04:20:02 compute-0 practical_mayer[300289]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:20:02 compute-0 practical_mayer[300289]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:20:02 compute-0 practical_mayer[300289]:         "osd_id": 0,
Oct 11 04:20:02 compute-0 practical_mayer[300289]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:20:02 compute-0 practical_mayer[300289]:         "type": "bluestore"
Oct 11 04:20:02 compute-0 practical_mayer[300289]:     }
Oct 11 04:20:02 compute-0 practical_mayer[300289]: }
Oct 11 04:20:02 compute-0 systemd[1]: libpod-e6664d4b114ac5d6f66446c1658a76527d14030332f30c4df5a40eb3f09583c7.scope: Deactivated successfully.
Oct 11 04:20:02 compute-0 podman[300273]: 2025-10-11 04:20:02.236957035 +0000 UTC m=+1.314221939 container died e6664d4b114ac5d6f66446c1658a76527d14030332f30c4df5a40eb3f09583c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 04:20:02 compute-0 systemd[1]: libpod-e6664d4b114ac5d6f66446c1658a76527d14030332f30c4df5a40eb3f09583c7.scope: Consumed 1.108s CPU time.
Oct 11 04:20:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-c69e24fa6be1b68d1fbad743aedfccae7865f5eef3afd6598a46ffe14bd5f61f-merged.mount: Deactivated successfully.
Oct 11 04:20:02 compute-0 podman[300273]: 2025-10-11 04:20:02.30462318 +0000 UTC m=+1.381888054 container remove e6664d4b114ac5d6f66446c1658a76527d14030332f30c4df5a40eb3f09583c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 04:20:02 compute-0 systemd[1]: libpod-conmon-e6664d4b114ac5d6f66446c1658a76527d14030332f30c4df5a40eb3f09583c7.scope: Deactivated successfully.
Oct 11 04:20:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e395 do_prune osdmap full prune enabled
Oct 11 04:20:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e396 e396: 3 total, 3 up, 3 in
Oct 11 04:20:02 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e396: 3 total, 3 up, 3 in
Oct 11 04:20:02 compute-0 sudo[300169]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:20:02 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:20:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:20:02 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:20:02 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 7567afb2-85d9-4f1f-9fa5-ff57269a3917 does not exist
Oct 11 04:20:02 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 9e422fd0-0b48-46a4-a243-b822924def71 does not exist
Oct 11 04:20:02 compute-0 sudo[300334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:20:02 compute-0 sudo[300334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:20:02 compute-0 sudo[300334]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:02 compute-0 sudo[300365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:20:02 compute-0 sudo[300365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:20:02 compute-0 sudo[300365]: pam_unix(sudo:session): session closed for user root
Oct 11 04:20:02 compute-0 podman[300358]: 2025-10-11 04:20:02.656024169 +0000 UTC m=+0.150495581 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller)
Oct 11 04:20:03 compute-0 ceph-mon[74273]: pgmap v1696: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 1.5 KiB/s wr, 79 op/s
Oct 11 04:20:03 compute-0 ceph-mon[74273]: osdmap e396: 3 total, 3 up, 3 in
Oct 11 04:20:03 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:20:03 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:20:03 compute-0 nova_compute[259850]: 2025-10-11 04:20:03.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:20:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.4 KiB/s wr, 41 op/s
Oct 11 04:20:04 compute-0 ceph-mon[74273]: pgmap v1698: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.4 KiB/s wr, 41 op/s
Oct 11 04:20:04 compute-0 nova_compute[259850]: 2025-10-11 04:20:04.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:20:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:20:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e396 do_prune osdmap full prune enabled
Oct 11 04:20:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e397 e397: 3 total, 3 up, 3 in
Oct 11 04:20:05 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e397: 3 total, 3 up, 3 in
Oct 11 04:20:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.7 KiB/s wr, 45 op/s
Oct 11 04:20:06 compute-0 ceph-mon[74273]: osdmap e397: 3 total, 3 up, 3 in
Oct 11 04:20:06 compute-0 ceph-mon[74273]: pgmap v1700: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.7 KiB/s wr, 45 op/s
Oct 11 04:20:07 compute-0 podman[300410]: 2025-10-11 04:20:07.389294949 +0000 UTC m=+0.088043934 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 11 04:20:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e397 do_prune osdmap full prune enabled
Oct 11 04:20:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e398 e398: 3 total, 3 up, 3 in
Oct 11 04:20:07 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e398: 3 total, 3 up, 3 in
Oct 11 04:20:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.8 KiB/s wr, 66 op/s
Oct 11 04:20:08 compute-0 nova_compute[259850]: 2025-10-11 04:20:08.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:20:08 compute-0 ceph-mon[74273]: osdmap e398: 3 total, 3 up, 3 in
Oct 11 04:20:08 compute-0 ceph-mon[74273]: pgmap v1702: 305 pgs: 305 active+clean; 88 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.8 KiB/s wr, 66 op/s
Oct 11 04:20:09 compute-0 nova_compute[259850]: 2025-10-11 04:20:09.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:20:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 88 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 6.2 KiB/s wr, 122 op/s
Oct 11 04:20:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:20:10 compute-0 ceph-mon[74273]: pgmap v1703: 305 pgs: 305 active+clean; 88 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 6.2 KiB/s wr, 122 op/s
Oct 11 04:20:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 88 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 4.9 KiB/s wr, 103 op/s
Oct 11 04:20:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:20:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1335788973' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:20:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:20:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1335788973' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:20:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1335788973' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:20:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1335788973' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:20:13 compute-0 ceph-mon[74273]: pgmap v1704: 305 pgs: 305 active+clean; 88 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 4.9 KiB/s wr, 103 op/s
Oct 11 04:20:13 compute-0 nova_compute[259850]: 2025-10-11 04:20:13.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:20:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 305 active+clean; 88 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 141 KiB/s rd, 5.8 KiB/s wr, 177 op/s
Oct 11 04:20:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e398 do_prune osdmap full prune enabled
Oct 11 04:20:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e399 e399: 3 total, 3 up, 3 in
Oct 11 04:20:14 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e399: 3 total, 3 up, 3 in
Oct 11 04:20:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:20:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2007397057' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:20:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:20:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2007397057' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:20:14 compute-0 nova_compute[259850]: 2025-10-11 04:20:14.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:20:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:20:15 compute-0 ceph-mon[74273]: pgmap v1705: 305 pgs: 305 active+clean; 88 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 141 KiB/s rd, 5.8 KiB/s wr, 177 op/s
Oct 11 04:20:15 compute-0 ceph-mon[74273]: osdmap e399: 3 total, 3 up, 3 in
Oct 11 04:20:15 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2007397057' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:20:15 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2007397057' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:20:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 88 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 146 KiB/s rd, 6.0 KiB/s wr, 183 op/s
Oct 11 04:20:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e399 do_prune osdmap full prune enabled
Oct 11 04:20:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e400 e400: 3 total, 3 up, 3 in
Oct 11 04:20:16 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e400: 3 total, 3 up, 3 in
Oct 11 04:20:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:20:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/348934147' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:20:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:20:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/348934147' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:20:17 compute-0 ceph-mon[74273]: pgmap v1707: 305 pgs: 305 active+clean; 88 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 146 KiB/s rd, 6.0 KiB/s wr, 183 op/s
Oct 11 04:20:17 compute-0 ceph-mon[74273]: osdmap e400: 3 total, 3 up, 3 in
Oct 11 04:20:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/348934147' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:20:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/348934147' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:20:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 88 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 1.9 KiB/s wr, 133 op/s
Oct 11 04:20:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:20:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1706012383' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:20:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:20:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1706012383' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:20:18 compute-0 nova_compute[259850]: 2025-10-11 04:20:18.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:20:19 compute-0 ceph-mon[74273]: pgmap v1709: 305 pgs: 305 active+clean; 88 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 1.9 KiB/s wr, 133 op/s
Oct 11 04:20:19 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1706012383' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:20:19 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1706012383' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:20:19 compute-0 podman[300430]: 2025-10-11 04:20:19.39512219 +0000 UTC m=+0.096549905 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd)
Oct 11 04:20:19 compute-0 podman[300431]: 2025-10-11 04:20:19.413779238 +0000 UTC m=+0.108436151 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 04:20:19 compute-0 nova_compute[259850]: 2025-10-11 04:20:19.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:20:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 88 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 156 KiB/s rd, 3.9 KiB/s wr, 197 op/s
Oct 11 04:20:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:20:20.056810) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156420056910, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1244, "num_deletes": 258, "total_data_size": 1724800, "memory_usage": 1754288, "flush_reason": "Manual Compaction"}
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156420069520, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1684298, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34086, "largest_seqno": 35329, "table_properties": {"data_size": 1678294, "index_size": 3271, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13437, "raw_average_key_size": 20, "raw_value_size": 1665894, "raw_average_value_size": 2505, "num_data_blocks": 145, "num_entries": 665, "num_filter_entries": 665, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156323, "oldest_key_time": 1760156323, "file_creation_time": 1760156420, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 12763 microseconds, and 8478 cpu microseconds.
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:20:20.069582) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1684298 bytes OK
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:20:20.069610) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:20:20.071394) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:20:20.071417) EVENT_LOG_v1 {"time_micros": 1760156420071409, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:20:20.071439) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1718978, prev total WAL file size 1718978, number of live WAL files 2.
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:20:20.072544) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303033' seq:72057594037927935, type:22 .. '6C6F676D0031323534' seq:0, type:0; will stop at (end)
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1644KB)], [71(8574KB)]
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156420072582, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 10464809, "oldest_snapshot_seqno": -1}
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6370 keys, 10311219 bytes, temperature: kUnknown
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156420121060, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 10311219, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10263994, "index_size": 30219, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15941, "raw_key_size": 161130, "raw_average_key_size": 25, "raw_value_size": 10145000, "raw_average_value_size": 1592, "num_data_blocks": 1211, "num_entries": 6370, "num_filter_entries": 6370, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153731, "oldest_key_time": 0, "file_creation_time": 1760156420, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:20:20.121464) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 10311219 bytes
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:20:20.123023) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 215.3 rd, 212.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 8.4 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(12.3) write-amplify(6.1) OK, records in: 6900, records dropped: 530 output_compression: NoCompression
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:20:20.123054) EVENT_LOG_v1 {"time_micros": 1760156420123040, "job": 40, "event": "compaction_finished", "compaction_time_micros": 48615, "compaction_time_cpu_micros": 28041, "output_level": 6, "num_output_files": 1, "total_output_size": 10311219, "num_input_records": 6900, "num_output_records": 6370, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156420123987, "job": 40, "event": "table_file_deletion", "file_number": 73}
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156420127515, "job": 40, "event": "table_file_deletion", "file_number": 71}
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:20:20.072421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:20:20.127601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:20:20.127608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:20:20.127611) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:20:20.127614) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:20:20 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:20:20.127618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:20:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:20:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:20:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:20:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:20:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:20:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:20:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:20:20
Oct 11 04:20:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:20:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:20:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'vms', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'images']
Oct 11 04:20:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:20:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:20:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:20:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:20:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:20:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:20:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:20:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:20:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:20:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:20:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:20:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e400 do_prune osdmap full prune enabled
Oct 11 04:20:21 compute-0 ceph-mon[74273]: pgmap v1710: 305 pgs: 305 active+clean; 88 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 156 KiB/s rd, 3.9 KiB/s wr, 197 op/s
Oct 11 04:20:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e401 e401: 3 total, 3 up, 3 in
Oct 11 04:20:21 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e401: 3 total, 3 up, 3 in
Oct 11 04:20:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 88 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 2.7 KiB/s wr, 115 op/s
Oct 11 04:20:22 compute-0 ceph-mon[74273]: osdmap e401: 3 total, 3 up, 3 in
Oct 11 04:20:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:20:22.969 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:20:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:20:22.969 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:20:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:20:22.970 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:20:23 compute-0 ceph-mon[74273]: pgmap v1712: 305 pgs: 305 active+clean; 88 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 2.7 KiB/s wr, 115 op/s
Oct 11 04:20:23 compute-0 nova_compute[259850]: 2025-10-11 04:20:23.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:20:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 305 active+clean; 88 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 5.0 KiB/s wr, 142 op/s
Oct 11 04:20:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e401 do_prune osdmap full prune enabled
Oct 11 04:20:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e402 e402: 3 total, 3 up, 3 in
Oct 11 04:20:24 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e402: 3 total, 3 up, 3 in
Oct 11 04:20:24 compute-0 nova_compute[259850]: 2025-10-11 04:20:24.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:20:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:20:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e402 do_prune osdmap full prune enabled
Oct 11 04:20:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e403 e403: 3 total, 3 up, 3 in
Oct 11 04:20:25 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e403: 3 total, 3 up, 3 in
Oct 11 04:20:25 compute-0 ceph-mon[74273]: pgmap v1713: 305 pgs: 305 active+clean; 88 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 5.0 KiB/s wr, 142 op/s
Oct 11 04:20:25 compute-0 ceph-mon[74273]: osdmap e402: 3 total, 3 up, 3 in
Oct 11 04:20:25 compute-0 ceph-mon[74273]: osdmap e403: 3 total, 3 up, 3 in
Oct 11 04:20:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 88 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 3.2 KiB/s wr, 39 op/s
Oct 11 04:20:27 compute-0 ceph-mon[74273]: pgmap v1716: 305 pgs: 305 active+clean; 88 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 3.2 KiB/s wr, 39 op/s
Oct 11 04:20:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1717: 305 pgs: 305 active+clean; 88 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.9 KiB/s wr, 69 op/s
Oct 11 04:20:28 compute-0 nova_compute[259850]: 2025-10-11 04:20:28.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:20:29 compute-0 nova_compute[259850]: 2025-10-11 04:20:29.058 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:20:29 compute-0 nova_compute[259850]: 2025-10-11 04:20:29.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:20:29 compute-0 nova_compute[259850]: 2025-10-11 04:20:29.059 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:20:29 compute-0 ceph-mon[74273]: pgmap v1717: 305 pgs: 305 active+clean; 88 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.9 KiB/s wr, 69 op/s
Oct 11 04:20:29 compute-0 nova_compute[259850]: 2025-10-11 04:20:29.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:20:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 88 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 4.6 KiB/s wr, 82 op/s
Oct 11 04:20:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:20:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e403 do_prune osdmap full prune enabled
Oct 11 04:20:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e404 e404: 3 total, 3 up, 3 in
Oct 11 04:20:30 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e404: 3 total, 3 up, 3 in
Oct 11 04:20:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:20:30 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1438204750' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:20:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:20:30 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1438204750' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:20:31 compute-0 nova_compute[259850]: 2025-10-11 04:20:31.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:20:31 compute-0 nova_compute[259850]: 2025-10-11 04:20:31.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:20:31 compute-0 nova_compute[259850]: 2025-10-11 04:20:31.091 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:20:31 compute-0 nova_compute[259850]: 2025-10-11 04:20:31.092 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:20:31 compute-0 nova_compute[259850]: 2025-10-11 04:20:31.092 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:20:31 compute-0 nova_compute[259850]: 2025-10-11 04:20:31.093 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:20:31 compute-0 nova_compute[259850]: 2025-10-11 04:20:31.093 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:20:31 compute-0 ceph-mon[74273]: pgmap v1718: 305 pgs: 305 active+clean; 88 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 4.6 KiB/s wr, 82 op/s
Oct 11 04:20:31 compute-0 ceph-mon[74273]: osdmap e404: 3 total, 3 up, 3 in
Oct 11 04:20:31 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1438204750' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:20:31 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1438204750' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:20:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:20:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:20:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:20:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:20:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:20:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:20:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003475232182392394 of space, bias 1.0, pg target 0.10425696547177182 quantized to 32 (current 32)
Oct 11 04:20:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:20:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:20:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:20:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct 11 04:20:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:20:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:20:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:20:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:20:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:20:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:20:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:20:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:20:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:20:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:20:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:20:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:20:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:20:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2564990775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:20:31 compute-0 nova_compute[259850]: 2025-10-11 04:20:31.521 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:20:31 compute-0 nova_compute[259850]: 2025-10-11 04:20:31.747 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:20:31 compute-0 nova_compute[259850]: 2025-10-11 04:20:31.748 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4374MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:20:31 compute-0 nova_compute[259850]: 2025-10-11 04:20:31.748 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:20:31 compute-0 nova_compute[259850]: 2025-10-11 04:20:31.748 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:20:31 compute-0 nova_compute[259850]: 2025-10-11 04:20:31.822 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:20:31 compute-0 nova_compute[259850]: 2025-10-11 04:20:31.824 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:20:31 compute-0 nova_compute[259850]: 2025-10-11 04:20:31.851 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:20:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 88 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 2.3 KiB/s wr, 54 op/s
Oct 11 04:20:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e404 do_prune osdmap full prune enabled
Oct 11 04:20:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e405 e405: 3 total, 3 up, 3 in
Oct 11 04:20:32 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2564990775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:20:32 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e405: 3 total, 3 up, 3 in
Oct 11 04:20:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:20:32 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2251197143' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:20:32 compute-0 nova_compute[259850]: 2025-10-11 04:20:32.319 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:20:32 compute-0 nova_compute[259850]: 2025-10-11 04:20:32.329 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:20:32 compute-0 nova_compute[259850]: 2025-10-11 04:20:32.355 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:20:32 compute-0 nova_compute[259850]: 2025-10-11 04:20:32.361 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:20:32 compute-0 nova_compute[259850]: 2025-10-11 04:20:32.362 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:20:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:20:32 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1829034128' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:20:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:20:32 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1829034128' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:20:33 compute-0 ceph-mon[74273]: pgmap v1720: 305 pgs: 305 active+clean; 88 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 2.3 KiB/s wr, 54 op/s
Oct 11 04:20:33 compute-0 ceph-mon[74273]: osdmap e405: 3 total, 3 up, 3 in
Oct 11 04:20:33 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2251197143' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:20:33 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1829034128' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:20:33 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1829034128' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:20:33 compute-0 nova_compute[259850]: 2025-10-11 04:20:33.359 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:20:33 compute-0 nova_compute[259850]: 2025-10-11 04:20:33.359 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:20:33 compute-0 nova_compute[259850]: 2025-10-11 04:20:33.360 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:20:33 compute-0 nova_compute[259850]: 2025-10-11 04:20:33.360 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:20:33 compute-0 nova_compute[259850]: 2025-10-11 04:20:33.379 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:20:33 compute-0 nova_compute[259850]: 2025-10-11 04:20:33.379 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:20:33 compute-0 nova_compute[259850]: 2025-10-11 04:20:33.380 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:20:33 compute-0 podman[300512]: 2025-10-11 04:20:33.471586174 +0000 UTC m=+0.174701008 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:20:33 compute-0 nova_compute[259850]: 2025-10-11 04:20:33.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:20:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:20:33 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3942962760' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:20:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:20:33 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3942962760' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:20:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 139 KiB/s rd, 4.6 KiB/s wr, 177 op/s
Oct 11 04:20:34 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3942962760' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:20:34 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3942962760' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:20:34 compute-0 nova_compute[259850]: 2025-10-11 04:20:34.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:20:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:20:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e405 do_prune osdmap full prune enabled
Oct 11 04:20:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e406 e406: 3 total, 3 up, 3 in
Oct 11 04:20:35 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e406: 3 total, 3 up, 3 in
Oct 11 04:20:35 compute-0 ceph-mon[74273]: pgmap v1722: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 139 KiB/s rd, 4.6 KiB/s wr, 177 op/s
Oct 11 04:20:35 compute-0 ceph-mon[74273]: osdmap e406: 3 total, 3 up, 3 in
Oct 11 04:20:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 131 KiB/s rd, 3.2 KiB/s wr, 167 op/s
Oct 11 04:20:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e406 do_prune osdmap full prune enabled
Oct 11 04:20:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e407 e407: 3 total, 3 up, 3 in
Oct 11 04:20:36 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e407: 3 total, 3 up, 3 in
Oct 11 04:20:37 compute-0 nova_compute[259850]: 2025-10-11 04:20:37.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:20:37 compute-0 ceph-mon[74273]: pgmap v1724: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 131 KiB/s rd, 3.2 KiB/s wr, 167 op/s
Oct 11 04:20:37 compute-0 ceph-mon[74273]: osdmap e407: 3 total, 3 up, 3 in
Oct 11 04:20:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 153 KiB/s rd, 4.8 KiB/s wr, 195 op/s
Oct 11 04:20:38 compute-0 podman[300538]: 2025-10-11 04:20:38.379400123 +0000 UTC m=+0.082145217 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Oct 11 04:20:38 compute-0 nova_compute[259850]: 2025-10-11 04:20:38.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:20:39 compute-0 nova_compute[259850]: 2025-10-11 04:20:39.055 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:20:39 compute-0 ceph-mon[74273]: pgmap v1726: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 153 KiB/s rd, 4.8 KiB/s wr, 195 op/s
Oct 11 04:20:39 compute-0 nova_compute[259850]: 2025-10-11 04:20:39.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:20:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 140 KiB/s rd, 5.4 KiB/s wr, 181 op/s
Oct 11 04:20:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:20:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e407 do_prune osdmap full prune enabled
Oct 11 04:20:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e408 e408: 3 total, 3 up, 3 in
Oct 11 04:20:40 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e408: 3 total, 3 up, 3 in
Oct 11 04:20:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:20:40 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3801701089' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:20:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:20:40 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3801701089' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:20:41 compute-0 ceph-mon[74273]: pgmap v1727: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 140 KiB/s rd, 5.4 KiB/s wr, 181 op/s
Oct 11 04:20:41 compute-0 ceph-mon[74273]: osdmap e408: 3 total, 3 up, 3 in
Oct 11 04:20:41 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3801701089' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:20:41 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3801701089' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:20:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.2 KiB/s wr, 56 op/s
Oct 11 04:20:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e408 do_prune osdmap full prune enabled
Oct 11 04:20:43 compute-0 ceph-mon[74273]: pgmap v1729: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.2 KiB/s wr, 56 op/s
Oct 11 04:20:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e409 e409: 3 total, 3 up, 3 in
Oct 11 04:20:43 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e409: 3 total, 3 up, 3 in
Oct 11 04:20:43 compute-0 nova_compute[259850]: 2025-10-11 04:20:43.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:20:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 4.9 KiB/s wr, 119 op/s
Oct 11 04:20:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:20:44 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2153513242' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:20:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:20:44 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2153513242' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:20:44 compute-0 ceph-mon[74273]: osdmap e409: 3 total, 3 up, 3 in
Oct 11 04:20:44 compute-0 ceph-mon[74273]: pgmap v1731: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 4.9 KiB/s wr, 119 op/s
Oct 11 04:20:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2153513242' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:20:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2153513242' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:20:44 compute-0 nova_compute[259850]: 2025-10-11 04:20:44.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:20:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:20:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e409 do_prune osdmap full prune enabled
Oct 11 04:20:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e410 e410: 3 total, 3 up, 3 in
Oct 11 04:20:45 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e410: 3 total, 3 up, 3 in
Oct 11 04:20:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:20:45 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3947345926' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:20:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:20:45 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3947345926' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:20:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 2.5 KiB/s wr, 88 op/s
Oct 11 04:20:46 compute-0 ceph-mon[74273]: osdmap e410: 3 total, 3 up, 3 in
Oct 11 04:20:46 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3947345926' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:20:46 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3947345926' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:20:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:20:46 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3651477484' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:20:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:20:46 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3651477484' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:20:47 compute-0 ceph-mon[74273]: pgmap v1733: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 2.5 KiB/s wr, 88 op/s
Oct 11 04:20:47 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3651477484' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:20:47 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3651477484' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:20:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 4.2 KiB/s wr, 133 op/s
Oct 11 04:20:48 compute-0 nova_compute[259850]: 2025-10-11 04:20:48.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:20:49 compute-0 ceph-mon[74273]: pgmap v1734: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 4.2 KiB/s wr, 133 op/s
Oct 11 04:20:49 compute-0 nova_compute[259850]: 2025-10-11 04:20:49.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:20:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 5.0 KiB/s wr, 150 op/s
Oct 11 04:20:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:20:50 compute-0 podman[300558]: 2025-10-11 04:20:50.369899158 +0000 UTC m=+0.077512315 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid)
Oct 11 04:20:50 compute-0 podman[300557]: 2025-10-11 04:20:50.390051719 +0000 UTC m=+0.095722291 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, io.buildah.version=1.41.3)
Oct 11 04:20:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:20:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/246943880' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:20:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:20:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/246943880' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:20:50 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:20:50.693 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:61:6f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '92:f1:b6:e4:f1:16'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:20:50 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:20:50.694 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:20:50 compute-0 nova_compute[259850]: 2025-10-11 04:20:50.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:20:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:20:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:20:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:20:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:20:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:20:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:20:51 compute-0 ceph-mon[74273]: pgmap v1735: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 5.0 KiB/s wr, 150 op/s
Oct 11 04:20:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/246943880' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:20:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/246943880' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:20:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.9 KiB/s wr, 78 op/s
Oct 11 04:20:53 compute-0 ceph-mon[74273]: pgmap v1736: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.9 KiB/s wr, 78 op/s
Oct 11 04:20:53 compute-0 nova_compute[259850]: 2025-10-11 04:20:53.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:20:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 2.5 KiB/s wr, 67 op/s
Oct 11 04:20:54 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:20:54.697 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8a473e03-2208-47ae-afcd-05ad744a5969, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:20:54 compute-0 nova_compute[259850]: 2025-10-11 04:20:54.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:20:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:20:55 compute-0 ceph-mon[74273]: pgmap v1737: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 2.5 KiB/s wr, 67 op/s
Oct 11 04:20:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 2.3 KiB/s wr, 62 op/s
Oct 11 04:20:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:20:56 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/538303810' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:20:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e410 do_prune osdmap full prune enabled
Oct 11 04:20:57 compute-0 ceph-mon[74273]: pgmap v1738: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 2.3 KiB/s wr, 62 op/s
Oct 11 04:20:57 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/538303810' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:20:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e411 e411: 3 total, 3 up, 3 in
Oct 11 04:20:57 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e411: 3 total, 3 up, 3 in
Oct 11 04:20:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.1 KiB/s wr, 21 op/s
Oct 11 04:20:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e411 do_prune osdmap full prune enabled
Oct 11 04:20:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e412 e412: 3 total, 3 up, 3 in
Oct 11 04:20:58 compute-0 ceph-mon[74273]: osdmap e411: 3 total, 3 up, 3 in
Oct 11 04:20:58 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e412: 3 total, 3 up, 3 in
Oct 11 04:20:58 compute-0 nova_compute[259850]: 2025-10-11 04:20:58.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:20:59 compute-0 ceph-mon[74273]: pgmap v1740: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.1 KiB/s wr, 21 op/s
Oct 11 04:20:59 compute-0 ceph-mon[74273]: osdmap e412: 3 total, 3 up, 3 in
Oct 11 04:20:59 compute-0 nova_compute[259850]: 2025-10-11 04:20:59.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:20:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1742: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.4 KiB/s wr, 17 op/s
Oct 11 04:21:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e412 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:21:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e412 do_prune osdmap full prune enabled
Oct 11 04:21:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e413 e413: 3 total, 3 up, 3 in
Oct 11 04:21:00 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e413: 3 total, 3 up, 3 in
Oct 11 04:21:01 compute-0 ceph-mon[74273]: pgmap v1742: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.4 KiB/s wr, 17 op/s
Oct 11 04:21:01 compute-0 ceph-mon[74273]: osdmap e413: 3 total, 3 up, 3 in
Oct 11 04:21:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 KiB/s wr, 23 op/s
Oct 11 04:21:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e413 do_prune osdmap full prune enabled
Oct 11 04:21:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e414 e414: 3 total, 3 up, 3 in
Oct 11 04:21:02 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e414: 3 total, 3 up, 3 in
Oct 11 04:21:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:21:02 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2981869406' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:21:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:21:02 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2981869406' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:21:02 compute-0 sudo[300596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:21:02 compute-0 sudo[300596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:21:02 compute-0 sudo[300596]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:02 compute-0 sudo[300621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:21:02 compute-0 sudo[300621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:21:02 compute-0 sudo[300621]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:02 compute-0 sudo[300646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:21:02 compute-0 sudo[300646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:21:02 compute-0 sudo[300646]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:02 compute-0 sudo[300671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 04:21:02 compute-0 sudo[300671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:21:03 compute-0 ceph-mon[74273]: pgmap v1744: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 KiB/s wr, 23 op/s
Oct 11 04:21:03 compute-0 ceph-mon[74273]: osdmap e414: 3 total, 3 up, 3 in
Oct 11 04:21:03 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2981869406' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:21:03 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2981869406' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:21:03 compute-0 nova_compute[259850]: 2025-10-11 04:21:03.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:21:03 compute-0 podman[300769]: 2025-10-11 04:21:03.646135925 +0000 UTC m=+0.095631278 container exec 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 04:21:03 compute-0 podman[300769]: 2025-10-11 04:21:03.736955607 +0000 UTC m=+0.186451000 container exec_died 24261ba7295af5a6a49cb537d1551fd7fd4de28fdeebff7ecec5d89143ebddf9 (image=quay.io/ceph/ceph:v18, name=ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:21:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1746: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 4.5 KiB/s wr, 101 op/s
Oct 11 04:21:03 compute-0 podman[300804]: 2025-10-11 04:21:03.973498474 +0000 UTC m=+0.155290628 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 04:21:04 compute-0 sudo[300671]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:21:04 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:21:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:21:04 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:21:04 compute-0 sudo[300954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:21:04 compute-0 sudo[300954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:21:04 compute-0 sudo[300954]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:04 compute-0 sudo[300979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:21:04 compute-0 sudo[300979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:21:04 compute-0 sudo[300979]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:04 compute-0 sudo[301004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:21:04 compute-0 sudo[301004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:21:04 compute-0 nova_compute[259850]: 2025-10-11 04:21:04.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:21:04 compute-0 sudo[301004]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:04 compute-0 sudo[301029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:21:04 compute-0 sudo[301029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:21:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:21:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e414 do_prune osdmap full prune enabled
Oct 11 04:21:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e415 e415: 3 total, 3 up, 3 in
Oct 11 04:21:05 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e415: 3 total, 3 up, 3 in
Oct 11 04:21:05 compute-0 ceph-mon[74273]: pgmap v1746: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 4.5 KiB/s wr, 101 op/s
Oct 11 04:21:05 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:21:05 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:21:05 compute-0 ceph-mon[74273]: osdmap e415: 3 total, 3 up, 3 in
Oct 11 04:21:05 compute-0 sudo[301029]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:21:05 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:21:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:21:05 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:21:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:21:05 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:21:05 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev dabcd450-a9a9-47a7-b99b-b2a15d193d8a does not exist
Oct 11 04:21:05 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev a87c4bb4-7ee7-4569-aeff-28aa5f6dfcd2 does not exist
Oct 11 04:21:05 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 68e24bc2-dc61-4b53-af12-494bd80353b3 does not exist
Oct 11 04:21:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:21:05 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:21:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:21:05 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:21:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:21:05 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:21:05 compute-0 sudo[301085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:21:05 compute-0 sudo[301085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:21:05 compute-0 sudo[301085]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:05 compute-0 sudo[301110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:21:05 compute-0 sudo[301110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:21:05 compute-0 sudo[301110]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1748: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 3.2 KiB/s wr, 81 op/s
Oct 11 04:21:05 compute-0 sudo[301135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:21:05 compute-0 sudo[301135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:21:05 compute-0 sudo[301135]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:06 compute-0 sudo[301160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:21:06 compute-0 sudo[301160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:21:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:21:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:21:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:21:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:21:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:21:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:21:06 compute-0 podman[301226]: 2025-10-11 04:21:06.376849488 +0000 UTC m=+0.042498925 container create 980d57f927f90a939ad5645a44a5d0cb55d226c204b33bd4874752699ec8f8e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bhaskara, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 04:21:06 compute-0 systemd[1]: Started libpod-conmon-980d57f927f90a939ad5645a44a5d0cb55d226c204b33bd4874752699ec8f8e9.scope.
Oct 11 04:21:06 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:21:06 compute-0 podman[301226]: 2025-10-11 04:21:06.35892065 +0000 UTC m=+0.024570067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:21:06 compute-0 podman[301226]: 2025-10-11 04:21:06.463558833 +0000 UTC m=+0.129208300 container init 980d57f927f90a939ad5645a44a5d0cb55d226c204b33bd4874752699ec8f8e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Oct 11 04:21:06 compute-0 podman[301226]: 2025-10-11 04:21:06.476329174 +0000 UTC m=+0.141978611 container start 980d57f927f90a939ad5645a44a5d0cb55d226c204b33bd4874752699ec8f8e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 04:21:06 compute-0 podman[301226]: 2025-10-11 04:21:06.479670799 +0000 UTC m=+0.145320236 container attach 980d57f927f90a939ad5645a44a5d0cb55d226c204b33bd4874752699ec8f8e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 04:21:06 compute-0 relaxed_bhaskara[301243]: 167 167
Oct 11 04:21:06 compute-0 systemd[1]: libpod-980d57f927f90a939ad5645a44a5d0cb55d226c204b33bd4874752699ec8f8e9.scope: Deactivated successfully.
Oct 11 04:21:06 compute-0 podman[301226]: 2025-10-11 04:21:06.483474316 +0000 UTC m=+0.149123823 container died 980d57f927f90a939ad5645a44a5d0cb55d226c204b33bd4874752699ec8f8e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 04:21:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-f189a21739e41e3bcde9654b3c1a8b01c2ca49612d4bad9211e45fa829afb3eb-merged.mount: Deactivated successfully.
Oct 11 04:21:06 compute-0 podman[301226]: 2025-10-11 04:21:06.517613573 +0000 UTC m=+0.183262980 container remove 980d57f927f90a939ad5645a44a5d0cb55d226c204b33bd4874752699ec8f8e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bhaskara, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 04:21:06 compute-0 systemd[1]: libpod-conmon-980d57f927f90a939ad5645a44a5d0cb55d226c204b33bd4874752699ec8f8e9.scope: Deactivated successfully.
Oct 11 04:21:06 compute-0 podman[301267]: 2025-10-11 04:21:06.72977861 +0000 UTC m=+0.065142175 container create 674307fbc700911bebb507ad2543a2dd58f873f48971af60d1e0f54ddff3fbae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_benz, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 04:21:06 compute-0 systemd[1]: Started libpod-conmon-674307fbc700911bebb507ad2543a2dd58f873f48971af60d1e0f54ddff3fbae.scope.
Oct 11 04:21:06 compute-0 podman[301267]: 2025-10-11 04:21:06.707977583 +0000 UTC m=+0.043341148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:21:06 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:21:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0458ebdd1ad247923c179dd88e449ea63d47d58999aea6b0ac64cf35f200cc0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:21:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0458ebdd1ad247923c179dd88e449ea63d47d58999aea6b0ac64cf35f200cc0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:21:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0458ebdd1ad247923c179dd88e449ea63d47d58999aea6b0ac64cf35f200cc0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:21:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0458ebdd1ad247923c179dd88e449ea63d47d58999aea6b0ac64cf35f200cc0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:21:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0458ebdd1ad247923c179dd88e449ea63d47d58999aea6b0ac64cf35f200cc0a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:21:06 compute-0 podman[301267]: 2025-10-11 04:21:06.825342745 +0000 UTC m=+0.160706350 container init 674307fbc700911bebb507ad2543a2dd58f873f48971af60d1e0f54ddff3fbae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_benz, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:21:06 compute-0 podman[301267]: 2025-10-11 04:21:06.838196679 +0000 UTC m=+0.173560244 container start 674307fbc700911bebb507ad2543a2dd58f873f48971af60d1e0f54ddff3fbae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:21:06 compute-0 podman[301267]: 2025-10-11 04:21:06.842605664 +0000 UTC m=+0.177969279 container attach 674307fbc700911bebb507ad2543a2dd58f873f48971af60d1e0f54ddff3fbae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_benz, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:21:07 compute-0 ceph-mon[74273]: pgmap v1748: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 3.2 KiB/s wr, 81 op/s
Oct 11 04:21:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 2.5 KiB/s wr, 62 op/s
Oct 11 04:21:08 compute-0 relaxed_benz[301283]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:21:08 compute-0 relaxed_benz[301283]: --> relative data size: 1.0
Oct 11 04:21:08 compute-0 relaxed_benz[301283]: --> All data devices are unavailable
Oct 11 04:21:08 compute-0 systemd[1]: libpod-674307fbc700911bebb507ad2543a2dd58f873f48971af60d1e0f54ddff3fbae.scope: Deactivated successfully.
Oct 11 04:21:08 compute-0 podman[301267]: 2025-10-11 04:21:08.039836091 +0000 UTC m=+1.375199616 container died 674307fbc700911bebb507ad2543a2dd58f873f48971af60d1e0f54ddff3fbae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_benz, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:21:08 compute-0 systemd[1]: libpod-674307fbc700911bebb507ad2543a2dd58f873f48971af60d1e0f54ddff3fbae.scope: Consumed 1.148s CPU time.
Oct 11 04:21:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-0458ebdd1ad247923c179dd88e449ea63d47d58999aea6b0ac64cf35f200cc0a-merged.mount: Deactivated successfully.
Oct 11 04:21:08 compute-0 podman[301267]: 2025-10-11 04:21:08.115982487 +0000 UTC m=+1.451346022 container remove 674307fbc700911bebb507ad2543a2dd58f873f48971af60d1e0f54ddff3fbae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 04:21:08 compute-0 systemd[1]: libpod-conmon-674307fbc700911bebb507ad2543a2dd58f873f48971af60d1e0f54ddff3fbae.scope: Deactivated successfully.
Oct 11 04:21:08 compute-0 sudo[301160]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:08 compute-0 sudo[301324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:21:08 compute-0 sudo[301324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:21:08 compute-0 sudo[301324]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:08 compute-0 sudo[301349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:21:08 compute-0 sudo[301349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:21:08 compute-0 sudo[301349]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:08 compute-0 sudo[301374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:21:08 compute-0 sudo[301374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:21:08 compute-0 sudo[301374]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:08 compute-0 nova_compute[259850]: 2025-10-11 04:21:08.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:21:08 compute-0 sudo[301405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 04:21:08 compute-0 sudo[301405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:21:08 compute-0 podman[301398]: 2025-10-11 04:21:08.592672643 +0000 UTC m=+0.119599817 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 11 04:21:08 compute-0 podman[301485]: 2025-10-11 04:21:08.992856613 +0000 UTC m=+0.053061643 container create 409ff1f6a6152c0f63453cd2595aff95c19bcaadf3ab4e433db1d39fa004827c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:21:09 compute-0 systemd[1]: Started libpod-conmon-409ff1f6a6152c0f63453cd2595aff95c19bcaadf3ab4e433db1d39fa004827c.scope.
Oct 11 04:21:09 compute-0 podman[301485]: 2025-10-11 04:21:08.965044236 +0000 UTC m=+0.025249316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:21:09 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:21:09 compute-0 podman[301485]: 2025-10-11 04:21:09.095627793 +0000 UTC m=+0.155832873 container init 409ff1f6a6152c0f63453cd2595aff95c19bcaadf3ab4e433db1d39fa004827c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:21:09 compute-0 podman[301485]: 2025-10-11 04:21:09.107079577 +0000 UTC m=+0.167284607 container start 409ff1f6a6152c0f63453cd2595aff95c19bcaadf3ab4e433db1d39fa004827c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tu, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:21:09 compute-0 podman[301485]: 2025-10-11 04:21:09.110561996 +0000 UTC m=+0.170767096 container attach 409ff1f6a6152c0f63453cd2595aff95c19bcaadf3ab4e433db1d39fa004827c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tu, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:21:09 compute-0 inspiring_tu[301501]: 167 167
Oct 11 04:21:09 compute-0 systemd[1]: libpod-409ff1f6a6152c0f63453cd2595aff95c19bcaadf3ab4e433db1d39fa004827c.scope: Deactivated successfully.
Oct 11 04:21:09 compute-0 conmon[301501]: conmon 409ff1f6a6152c0f6345 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-409ff1f6a6152c0f63453cd2595aff95c19bcaadf3ab4e433db1d39fa004827c.scope/container/memory.events
Oct 11 04:21:09 compute-0 podman[301485]: 2025-10-11 04:21:09.115801974 +0000 UTC m=+0.176007014 container died 409ff1f6a6152c0f63453cd2595aff95c19bcaadf3ab4e433db1d39fa004827c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tu, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:21:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-42380a640f166ebe897c4fdf24886a949e696b2bc4e7acb9dc951ed8117be496-merged.mount: Deactivated successfully.
Oct 11 04:21:09 compute-0 podman[301485]: 2025-10-11 04:21:09.158426721 +0000 UTC m=+0.218631751 container remove 409ff1f6a6152c0f63453cd2595aff95c19bcaadf3ab4e433db1d39fa004827c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tu, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:21:09 compute-0 systemd[1]: libpod-conmon-409ff1f6a6152c0f63453cd2595aff95c19bcaadf3ab4e433db1d39fa004827c.scope: Deactivated successfully.
Oct 11 04:21:09 compute-0 ceph-mon[74273]: pgmap v1749: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 2.5 KiB/s wr, 62 op/s
Oct 11 04:21:09 compute-0 podman[301525]: 2025-10-11 04:21:09.414686095 +0000 UTC m=+0.074684625 container create 412fd5862a863ffb02e90e2dd9e9baedb5c4bc9ba12faf93166be73973f20b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_spence, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct 11 04:21:09 compute-0 systemd[1]: Started libpod-conmon-412fd5862a863ffb02e90e2dd9e9baedb5c4bc9ba12faf93166be73973f20b44.scope.
Oct 11 04:21:09 compute-0 podman[301525]: 2025-10-11 04:21:09.386439006 +0000 UTC m=+0.046437576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:21:09 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:21:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84c6124cdcc316eba14b2a2220cfd0c068287f47133651b0e64a7711c2eacdb8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:21:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84c6124cdcc316eba14b2a2220cfd0c068287f47133651b0e64a7711c2eacdb8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:21:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84c6124cdcc316eba14b2a2220cfd0c068287f47133651b0e64a7711c2eacdb8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:21:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84c6124cdcc316eba14b2a2220cfd0c068287f47133651b0e64a7711c2eacdb8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:21:09 compute-0 podman[301525]: 2025-10-11 04:21:09.516837307 +0000 UTC m=+0.176835857 container init 412fd5862a863ffb02e90e2dd9e9baedb5c4bc9ba12faf93166be73973f20b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_spence, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 04:21:09 compute-0 podman[301525]: 2025-10-11 04:21:09.533464498 +0000 UTC m=+0.193463018 container start 412fd5862a863ffb02e90e2dd9e9baedb5c4bc9ba12faf93166be73973f20b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 04:21:09 compute-0 podman[301525]: 2025-10-11 04:21:09.537445771 +0000 UTC m=+0.197444351 container attach 412fd5862a863ffb02e90e2dd9e9baedb5c4bc9ba12faf93166be73973f20b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_spence, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 04:21:09 compute-0 nova_compute[259850]: 2025-10-11 04:21:09.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:21:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.4 KiB/s wr, 60 op/s
Oct 11 04:21:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:21:10 compute-0 laughing_spence[301541]: {
Oct 11 04:21:10 compute-0 laughing_spence[301541]:     "0": [
Oct 11 04:21:10 compute-0 laughing_spence[301541]:         {
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "devices": [
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "/dev/loop3"
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             ],
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "lv_name": "ceph_lv0",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "lv_size": "21470642176",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "name": "ceph_lv0",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "tags": {
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.cluster_name": "ceph",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.crush_device_class": "",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.encrypted": "0",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.osd_id": "0",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.type": "block",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.vdo": "0"
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             },
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "type": "block",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "vg_name": "ceph_vg0"
Oct 11 04:21:10 compute-0 laughing_spence[301541]:         }
Oct 11 04:21:10 compute-0 laughing_spence[301541]:     ],
Oct 11 04:21:10 compute-0 laughing_spence[301541]:     "1": [
Oct 11 04:21:10 compute-0 laughing_spence[301541]:         {
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "devices": [
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "/dev/loop4"
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             ],
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "lv_name": "ceph_lv1",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "lv_size": "21470642176",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "name": "ceph_lv1",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "tags": {
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.cluster_name": "ceph",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.crush_device_class": "",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.encrypted": "0",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.osd_id": "1",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.type": "block",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.vdo": "0"
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             },
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "type": "block",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "vg_name": "ceph_vg1"
Oct 11 04:21:10 compute-0 laughing_spence[301541]:         }
Oct 11 04:21:10 compute-0 laughing_spence[301541]:     ],
Oct 11 04:21:10 compute-0 laughing_spence[301541]:     "2": [
Oct 11 04:21:10 compute-0 laughing_spence[301541]:         {
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "devices": [
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "/dev/loop5"
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             ],
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "lv_name": "ceph_lv2",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "lv_size": "21470642176",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "name": "ceph_lv2",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "tags": {
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.cluster_name": "ceph",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.crush_device_class": "",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.encrypted": "0",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.osd_id": "2",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.type": "block",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:                 "ceph.vdo": "0"
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             },
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "type": "block",
Oct 11 04:21:10 compute-0 laughing_spence[301541]:             "vg_name": "ceph_vg2"
Oct 11 04:21:10 compute-0 laughing_spence[301541]:         }
Oct 11 04:21:10 compute-0 laughing_spence[301541]:     ]
Oct 11 04:21:10 compute-0 laughing_spence[301541]: }
Oct 11 04:21:10 compute-0 systemd[1]: libpod-412fd5862a863ffb02e90e2dd9e9baedb5c4bc9ba12faf93166be73973f20b44.scope: Deactivated successfully.
Oct 11 04:21:10 compute-0 podman[301525]: 2025-10-11 04:21:10.35613767 +0000 UTC m=+1.016136170 container died 412fd5862a863ffb02e90e2dd9e9baedb5c4bc9ba12faf93166be73973f20b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_spence, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 04:21:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-84c6124cdcc316eba14b2a2220cfd0c068287f47133651b0e64a7711c2eacdb8-merged.mount: Deactivated successfully.
Oct 11 04:21:10 compute-0 podman[301525]: 2025-10-11 04:21:10.424769203 +0000 UTC m=+1.084767723 container remove 412fd5862a863ffb02e90e2dd9e9baedb5c4bc9ba12faf93166be73973f20b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_spence, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 04:21:10 compute-0 systemd[1]: libpod-conmon-412fd5862a863ffb02e90e2dd9e9baedb5c4bc9ba12faf93166be73973f20b44.scope: Deactivated successfully.
Oct 11 04:21:10 compute-0 sudo[301405]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:10 compute-0 sudo[301564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:21:10 compute-0 sudo[301564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:21:10 compute-0 sudo[301564]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:10 compute-0 sudo[301589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:21:10 compute-0 sudo[301589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:21:10 compute-0 sudo[301589]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:10 compute-0 sudo[301614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:21:10 compute-0 sudo[301614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:21:10 compute-0 sudo[301614]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:10 compute-0 sudo[301639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 04:21:10 compute-0 sudo[301639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:21:11 compute-0 ceph-mon[74273]: pgmap v1750: 305 pgs: 305 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.4 KiB/s wr, 60 op/s
Oct 11 04:21:11 compute-0 podman[301704]: 2025-10-11 04:21:11.343770062 +0000 UTC m=+0.066573585 container create e226511d208b297ff539550a4814b6884f671d8ae2609a118330fa153c3bf4bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kapitsa, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:21:11 compute-0 systemd[1]: Started libpod-conmon-e226511d208b297ff539550a4814b6884f671d8ae2609a118330fa153c3bf4bb.scope.
Oct 11 04:21:11 compute-0 podman[301704]: 2025-10-11 04:21:11.317044786 +0000 UTC m=+0.039848359 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:21:11 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:21:11 compute-0 podman[301704]: 2025-10-11 04:21:11.44754 +0000 UTC m=+0.170343573 container init e226511d208b297ff539550a4814b6884f671d8ae2609a118330fa153c3bf4bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:21:11 compute-0 podman[301704]: 2025-10-11 04:21:11.460466756 +0000 UTC m=+0.183270279 container start e226511d208b297ff539550a4814b6884f671d8ae2609a118330fa153c3bf4bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kapitsa, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 04:21:11 compute-0 podman[301704]: 2025-10-11 04:21:11.464631134 +0000 UTC m=+0.187434707 container attach e226511d208b297ff539550a4814b6884f671d8ae2609a118330fa153c3bf4bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 04:21:11 compute-0 peaceful_kapitsa[301720]: 167 167
Oct 11 04:21:11 compute-0 systemd[1]: libpod-e226511d208b297ff539550a4814b6884f671d8ae2609a118330fa153c3bf4bb.scope: Deactivated successfully.
Oct 11 04:21:11 compute-0 podman[301704]: 2025-10-11 04:21:11.469295016 +0000 UTC m=+0.192098539 container died e226511d208b297ff539550a4814b6884f671d8ae2609a118330fa153c3bf4bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kapitsa, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:21:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3941a8678af7839af615014ea7991eedcec983e86da9788a279fc0338270073-merged.mount: Deactivated successfully.
Oct 11 04:21:11 compute-0 podman[301704]: 2025-10-11 04:21:11.5248928 +0000 UTC m=+0.247696323 container remove e226511d208b297ff539550a4814b6884f671d8ae2609a118330fa153c3bf4bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:21:11 compute-0 systemd[1]: libpod-conmon-e226511d208b297ff539550a4814b6884f671d8ae2609a118330fa153c3bf4bb.scope: Deactivated successfully.
Oct 11 04:21:11 compute-0 podman[301744]: 2025-10-11 04:21:11.78765437 +0000 UTC m=+0.069340324 container create 8942433a8294e73dfd85b3ad29d4acd71162b70028ea22b5ab5fd5756857c3f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cori, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:21:11 compute-0 systemd[1]: Started libpod-conmon-8942433a8294e73dfd85b3ad29d4acd71162b70028ea22b5ab5fd5756857c3f9.scope.
Oct 11 04:21:11 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:21:11 compute-0 podman[301744]: 2025-10-11 04:21:11.761560151 +0000 UTC m=+0.043246155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0490013917e5bd3827557f14c0173ec7a7f6f44cd2a647abbef41a0a14ed6c9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0490013917e5bd3827557f14c0173ec7a7f6f44cd2a647abbef41a0a14ed6c9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0490013917e5bd3827557f14c0173ec7a7f6f44cd2a647abbef41a0a14ed6c9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0490013917e5bd3827557f14c0173ec7a7f6f44cd2a647abbef41a0a14ed6c9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:21:11 compute-0 podman[301744]: 2025-10-11 04:21:11.874035625 +0000 UTC m=+0.155721599 container init 8942433a8294e73dfd85b3ad29d4acd71162b70028ea22b5ab5fd5756857c3f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:21:11 compute-0 podman[301744]: 2025-10-11 04:21:11.881055574 +0000 UTC m=+0.162741498 container start 8942433a8294e73dfd85b3ad29d4acd71162b70028ea22b5ab5fd5756857c3f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 04:21:11 compute-0 podman[301744]: 2025-10-11 04:21:11.883958226 +0000 UTC m=+0.165644190 container attach 8942433a8294e73dfd85b3ad29d4acd71162b70028ea22b5ab5fd5756857c3f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cori, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 04:21:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 305 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.0 KiB/s wr, 50 op/s
Oct 11 04:21:12 compute-0 naughty_cori[301760]: {
Oct 11 04:21:12 compute-0 naughty_cori[301760]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 04:21:12 compute-0 naughty_cori[301760]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:21:12 compute-0 naughty_cori[301760]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:21:12 compute-0 naughty_cori[301760]:         "osd_id": 1,
Oct 11 04:21:12 compute-0 naughty_cori[301760]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:21:12 compute-0 naughty_cori[301760]:         "type": "bluestore"
Oct 11 04:21:12 compute-0 naughty_cori[301760]:     },
Oct 11 04:21:12 compute-0 naughty_cori[301760]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 04:21:12 compute-0 naughty_cori[301760]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:21:12 compute-0 naughty_cori[301760]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:21:12 compute-0 naughty_cori[301760]:         "osd_id": 2,
Oct 11 04:21:12 compute-0 naughty_cori[301760]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:21:12 compute-0 naughty_cori[301760]:         "type": "bluestore"
Oct 11 04:21:12 compute-0 naughty_cori[301760]:     },
Oct 11 04:21:12 compute-0 naughty_cori[301760]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 04:21:12 compute-0 naughty_cori[301760]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:21:12 compute-0 naughty_cori[301760]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:21:12 compute-0 naughty_cori[301760]:         "osd_id": 0,
Oct 11 04:21:12 compute-0 naughty_cori[301760]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:21:12 compute-0 naughty_cori[301760]:         "type": "bluestore"
Oct 11 04:21:12 compute-0 naughty_cori[301760]:     }
Oct 11 04:21:12 compute-0 naughty_cori[301760]: }
Oct 11 04:21:12 compute-0 systemd[1]: libpod-8942433a8294e73dfd85b3ad29d4acd71162b70028ea22b5ab5fd5756857c3f9.scope: Deactivated successfully.
Oct 11 04:21:12 compute-0 podman[301744]: 2025-10-11 04:21:12.997824392 +0000 UTC m=+1.279510356 container died 8942433a8294e73dfd85b3ad29d4acd71162b70028ea22b5ab5fd5756857c3f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cori, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:21:12 compute-0 systemd[1]: libpod-8942433a8294e73dfd85b3ad29d4acd71162b70028ea22b5ab5fd5756857c3f9.scope: Consumed 1.119s CPU time.
Oct 11 04:21:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-0490013917e5bd3827557f14c0173ec7a7f6f44cd2a647abbef41a0a14ed6c9a-merged.mount: Deactivated successfully.
Oct 11 04:21:13 compute-0 podman[301744]: 2025-10-11 04:21:13.092196444 +0000 UTC m=+1.373882408 container remove 8942433a8294e73dfd85b3ad29d4acd71162b70028ea22b5ab5fd5756857c3f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cori, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:21:13 compute-0 systemd[1]: libpod-conmon-8942433a8294e73dfd85b3ad29d4acd71162b70028ea22b5ab5fd5756857c3f9.scope: Deactivated successfully.
Oct 11 04:21:13 compute-0 sudo[301639]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:21:13 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:21:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:21:13 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:21:13 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev cec30430-aecc-498b-a144-bfe5aec81796 does not exist
Oct 11 04:21:13 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 510d1b64-8b5e-4ea5-aac2-315fd34c6bf5 does not exist
Oct 11 04:21:13 compute-0 ceph-mon[74273]: pgmap v1751: 305 pgs: 305 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.0 KiB/s wr, 50 op/s
Oct 11 04:21:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:21:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:21:13 compute-0 sudo[301804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:21:13 compute-0 sudo[301804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:21:13 compute-0 sudo[301804]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:13 compute-0 sudo[301829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:21:13 compute-0 sudo[301829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:21:13 compute-0 sudo[301829]: pam_unix(sudo:session): session closed for user root
Oct 11 04:21:13 compute-0 nova_compute[259850]: 2025-10-11 04:21:13.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:21:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1752: 305 pgs: 305 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:14 compute-0 nova_compute[259850]: 2025-10-11 04:21:14.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:21:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:21:15 compute-0 ceph-mon[74273]: pgmap v1752: 305 pgs: 305 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 305 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:17 compute-0 ceph-mon[74273]: pgmap v1753: 305 pgs: 305 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:18 compute-0 nova_compute[259850]: 2025-10-11 04:21:18.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:21:19 compute-0 ceph-mon[74273]: pgmap v1754: 305 pgs: 305 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:19 compute-0 nova_compute[259850]: 2025-10-11 04:21:19.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:21:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 305 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:21:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:21:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:21:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:21:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:21:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:21:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:21:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:21:20
Oct 11 04:21:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:21:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:21:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['backups', 'images', 'vms', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', '.rgw.root']
Oct 11 04:21:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:21:20 compute-0 podman[301854]: 2025-10-11 04:21:20.926956663 +0000 UTC m=+0.096911735 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 04:21:20 compute-0 podman[301855]: 2025-10-11 04:21:20.929145335 +0000 UTC m=+0.098837199 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 04:21:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:21:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:21:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:21:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:21:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:21:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:21:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:21:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:21:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:21:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:21:21 compute-0 ceph-mon[74273]: pgmap v1755: 305 pgs: 305 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:21 compute-0 ceph-mgr[74563]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3360631616
Oct 11 04:21:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 305 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:21:22.971 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:21:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:21:22.971 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:21:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:21:22.971 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:21:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e415 do_prune osdmap full prune enabled
Oct 11 04:21:23 compute-0 ceph-mon[74273]: pgmap v1756: 305 pgs: 305 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e416 e416: 3 total, 3 up, 3 in
Oct 11 04:21:23 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e416: 3 total, 3 up, 3 in
Oct 11 04:21:23 compute-0 nova_compute[259850]: 2025-10-11 04:21:23.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:21:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1758: 305 pgs: 305 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 409 B/s wr, 1 op/s
Oct 11 04:21:24 compute-0 ceph-mon[74273]: osdmap e416: 3 total, 3 up, 3 in
Oct 11 04:21:24 compute-0 nova_compute[259850]: 2025-10-11 04:21:24.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:21:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:21:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e416 do_prune osdmap full prune enabled
Oct 11 04:21:25 compute-0 ceph-mon[74273]: pgmap v1758: 305 pgs: 305 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 409 B/s wr, 1 op/s
Oct 11 04:21:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e417 e417: 3 total, 3 up, 3 in
Oct 11 04:21:25 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e417: 3 total, 3 up, 3 in
Oct 11 04:21:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1760: 305 pgs: 305 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 511 B/s wr, 1 op/s
Oct 11 04:21:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:21:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/533024228' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:21:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:21:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/533024228' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:21:26 compute-0 ceph-mon[74273]: osdmap e417: 3 total, 3 up, 3 in
Oct 11 04:21:26 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/533024228' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:21:26 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/533024228' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:21:27 compute-0 ceph-mon[74273]: pgmap v1760: 305 pgs: 305 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 511 B/s wr, 1 op/s
Oct 11 04:21:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 895 B/s wr, 6 op/s
Oct 11 04:21:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e417 do_prune osdmap full prune enabled
Oct 11 04:21:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e418 e418: 3 total, 3 up, 3 in
Oct 11 04:21:28 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e418: 3 total, 3 up, 3 in
Oct 11 04:21:28 compute-0 nova_compute[259850]: 2025-10-11 04:21:28.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:21:29 compute-0 nova_compute[259850]: 2025-10-11 04:21:29.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:21:29 compute-0 nova_compute[259850]: 2025-10-11 04:21:29.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:21:29 compute-0 nova_compute[259850]: 2025-10-11 04:21:29.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:21:29 compute-0 ceph-mon[74273]: pgmap v1761: 305 pgs: 305 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 895 B/s wr, 6 op/s
Oct 11 04:21:29 compute-0 ceph-mon[74273]: osdmap e418: 3 total, 3 up, 3 in
Oct 11 04:21:29 compute-0 nova_compute[259850]: 2025-10-11 04:21:29.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:21:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.6 KiB/s wr, 54 op/s
Oct 11 04:21:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:21:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e418 do_prune osdmap full prune enabled
Oct 11 04:21:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e419 e419: 3 total, 3 up, 3 in
Oct 11 04:21:30 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e419: 3 total, 3 up, 3 in
Oct 11 04:21:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:21:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1417048537' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:21:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:21:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1417048537' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:21:31 compute-0 ceph-mon[74273]: pgmap v1763: 305 pgs: 305 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.6 KiB/s wr, 54 op/s
Oct 11 04:21:31 compute-0 ceph-mon[74273]: osdmap e419: 3 total, 3 up, 3 in
Oct 11 04:21:31 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1417048537' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:21:31 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1417048537' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:21:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:21:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:21:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:21:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:21:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:21:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:21:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00034733244611577784 of space, bias 1.0, pg target 0.10419973383473335 quantized to 32 (current 32)
Oct 11 04:21:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:21:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:21:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:21:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct 11 04:21:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:21:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:21:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:21:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:21:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:21:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:21:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:21:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:21:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:21:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:21:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:21:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:21:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 305 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.6 KiB/s wr, 55 op/s
Oct 11 04:21:32 compute-0 nova_compute[259850]: 2025-10-11 04:21:32.055 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:21:32 compute-0 nova_compute[259850]: 2025-10-11 04:21:32.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:21:32 compute-0 nova_compute[259850]: 2025-10-11 04:21:32.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:21:32 compute-0 nova_compute[259850]: 2025-10-11 04:21:32.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:21:32 compute-0 nova_compute[259850]: 2025-10-11 04:21:32.118 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:21:32 compute-0 nova_compute[259850]: 2025-10-11 04:21:32.119 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:21:32 compute-0 nova_compute[259850]: 2025-10-11 04:21:32.119 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:21:32 compute-0 nova_compute[259850]: 2025-10-11 04:21:32.119 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:21:32 compute-0 nova_compute[259850]: 2025-10-11 04:21:32.120 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:21:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:21:32 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1015633024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:21:32 compute-0 nova_compute[259850]: 2025-10-11 04:21:32.575 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:21:32 compute-0 nova_compute[259850]: 2025-10-11 04:21:32.771 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:21:32 compute-0 nova_compute[259850]: 2025-10-11 04:21:32.773 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4327MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:21:32 compute-0 nova_compute[259850]: 2025-10-11 04:21:32.774 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:21:32 compute-0 nova_compute[259850]: 2025-10-11 04:21:32.774 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:21:32 compute-0 nova_compute[259850]: 2025-10-11 04:21:32.938 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:21:32 compute-0 nova_compute[259850]: 2025-10-11 04:21:32.939 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:21:32 compute-0 nova_compute[259850]: 2025-10-11 04:21:32.990 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:21:33 compute-0 ceph-mon[74273]: pgmap v1765: 305 pgs: 305 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.6 KiB/s wr, 55 op/s
Oct 11 04:21:33 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1015633024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:21:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:21:33 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1390646240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:21:33 compute-0 nova_compute[259850]: 2025-10-11 04:21:33.462 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:21:33 compute-0 nova_compute[259850]: 2025-10-11 04:21:33.467 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:21:33 compute-0 nova_compute[259850]: 2025-10-11 04:21:33.483 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:21:33 compute-0 nova_compute[259850]: 2025-10-11 04:21:33.486 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:21:33 compute-0 nova_compute[259850]: 2025-10-11 04:21:33.486 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:21:33 compute-0 nova_compute[259850]: 2025-10-11 04:21:33.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:21:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 88 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 4.2 KiB/s wr, 90 op/s
Oct 11 04:21:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e419 do_prune osdmap full prune enabled
Oct 11 04:21:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e420 e420: 3 total, 3 up, 3 in
Oct 11 04:21:34 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e420: 3 total, 3 up, 3 in
Oct 11 04:21:34 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1390646240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:21:34 compute-0 podman[301937]: 2025-10-11 04:21:34.440087148 +0000 UTC m=+0.138449089 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:21:34 compute-0 nova_compute[259850]: 2025-10-11 04:21:34.488 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:21:34 compute-0 nova_compute[259850]: 2025-10-11 04:21:34.489 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:21:34 compute-0 nova_compute[259850]: 2025-10-11 04:21:34.489 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:21:34 compute-0 nova_compute[259850]: 2025-10-11 04:21:34.506 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:21:34 compute-0 nova_compute[259850]: 2025-10-11 04:21:34.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:21:35 compute-0 nova_compute[259850]: 2025-10-11 04:21:35.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:21:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:21:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e420 do_prune osdmap full prune enabled
Oct 11 04:21:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e421 e421: 3 total, 3 up, 3 in
Oct 11 04:21:35 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e421: 3 total, 3 up, 3 in
Oct 11 04:21:35 compute-0 ceph-mon[74273]: pgmap v1766: 305 pgs: 305 active+clean; 88 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 4.2 KiB/s wr, 90 op/s
Oct 11 04:21:35 compute-0 ceph-mon[74273]: osdmap e420: 3 total, 3 up, 3 in
Oct 11 04:21:35 compute-0 ceph-mon[74273]: osdmap e421: 3 total, 3 up, 3 in
Oct 11 04:21:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1769: 305 pgs: 305 active+clean; 88 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.8 KiB/s wr, 60 op/s
Oct 11 04:21:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:21:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1315702147' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:21:36 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:21:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:21:36 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:21:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1315702147' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:21:36 compute-0 ceph-mon[74273]: pgmap v1769: 305 pgs: 305 active+clean; 88 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.8 KiB/s wr, 60 op/s
Oct 11 04:21:36 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1315702147' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:21:36 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1315702147' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:21:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 88 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 3.8 KiB/s wr, 89 op/s
Oct 11 04:21:38 compute-0 nova_compute[259850]: 2025-10-11 04:21:38.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:21:38 compute-0 nova_compute[259850]: 2025-10-11 04:21:38.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 11 04:21:38 compute-0 nova_compute[259850]: 2025-10-11 04:21:38.079 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 11 04:21:38 compute-0 nova_compute[259850]: 2025-10-11 04:21:38.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:21:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e421 do_prune osdmap full prune enabled
Oct 11 04:21:38 compute-0 ceph-mon[74273]: pgmap v1770: 305 pgs: 305 active+clean; 88 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 3.8 KiB/s wr, 89 op/s
Oct 11 04:21:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e422 e422: 3 total, 3 up, 3 in
Oct 11 04:21:38 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e422: 3 total, 3 up, 3 in
Oct 11 04:21:39 compute-0 nova_compute[259850]: 2025-10-11 04:21:39.078 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:21:39 compute-0 podman[301964]: 2025-10-11 04:21:39.385603896 +0000 UTC m=+0.084899125 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 04:21:39 compute-0 nova_compute[259850]: 2025-10-11 04:21:39.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:21:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 88 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.8 KiB/s wr, 60 op/s
Oct 11 04:21:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e422 do_prune osdmap full prune enabled
Oct 11 04:21:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e423 e423: 3 total, 3 up, 3 in
Oct 11 04:21:39 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e423: 3 total, 3 up, 3 in
Oct 11 04:21:39 compute-0 ceph-mon[74273]: osdmap e422: 3 total, 3 up, 3 in
Oct 11 04:21:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:21:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e423 do_prune osdmap full prune enabled
Oct 11 04:21:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e424 e424: 3 total, 3 up, 3 in
Oct 11 04:21:40 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e424: 3 total, 3 up, 3 in
Oct 11 04:21:40 compute-0 ceph-mon[74273]: pgmap v1772: 305 pgs: 305 active+clean; 88 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.8 KiB/s wr, 60 op/s
Oct 11 04:21:40 compute-0 ceph-mon[74273]: osdmap e423: 3 total, 3 up, 3 in
Oct 11 04:21:40 compute-0 ceph-mon[74273]: osdmap e424: 3 total, 3 up, 3 in
Oct 11 04:21:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:21:41 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/689148065' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:21:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:21:41 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/689148065' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:21:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 88 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.8 KiB/s wr, 60 op/s
Oct 11 04:21:41 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/689148065' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:21:41 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/689148065' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:21:42 compute-0 nova_compute[259850]: 2025-10-11 04:21:42.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:21:43 compute-0 ceph-mon[74273]: pgmap v1775: 305 pgs: 305 active+clean; 88 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.8 KiB/s wr, 60 op/s
Oct 11 04:21:43 compute-0 nova_compute[259850]: 2025-10-11 04:21:43.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:21:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 305 active+clean; 88 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 3.7 KiB/s wr, 69 op/s
Oct 11 04:21:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e424 do_prune osdmap full prune enabled
Oct 11 04:21:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e425 e425: 3 total, 3 up, 3 in
Oct 11 04:21:44 compute-0 nova_compute[259850]: 2025-10-11 04:21:44.078 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:21:44 compute-0 nova_compute[259850]: 2025-10-11 04:21:44.078 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 11 04:21:44 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e425: 3 total, 3 up, 3 in
Oct 11 04:21:44 compute-0 nova_compute[259850]: 2025-10-11 04:21:44.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:21:45 compute-0 ceph-mon[74273]: pgmap v1776: 305 pgs: 305 active+clean; 88 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 3.7 KiB/s wr, 69 op/s
Oct 11 04:21:45 compute-0 ceph-mon[74273]: osdmap e425: 3 total, 3 up, 3 in
Oct 11 04:21:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:21:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e425 do_prune osdmap full prune enabled
Oct 11 04:21:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e426 e426: 3 total, 3 up, 3 in
Oct 11 04:21:45 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e426: 3 total, 3 up, 3 in
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:21:45.095553) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156505095587, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1305, "num_deletes": 260, "total_data_size": 1761603, "memory_usage": 1798416, "flush_reason": "Manual Compaction"}
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156505106918, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 1728719, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35330, "largest_seqno": 36634, "table_properties": {"data_size": 1722208, "index_size": 3714, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14159, "raw_average_key_size": 20, "raw_value_size": 1709125, "raw_average_value_size": 2528, "num_data_blocks": 163, "num_entries": 676, "num_filter_entries": 676, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156420, "oldest_key_time": 1760156420, "file_creation_time": 1760156505, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 11424 microseconds, and 5267 cpu microseconds.
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:21:45.106973) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 1728719 bytes OK
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:21:45.106996) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:21:45.108446) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:21:45.108472) EVENT_LOG_v1 {"time_micros": 1760156505108463, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:21:45.108495) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 1755520, prev total WAL file size 1755520, number of live WAL files 2.
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:21:45.109589) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(1688KB)], [74(10069KB)]
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156505109678, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 12039938, "oldest_snapshot_seqno": -1}
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6516 keys, 10276252 bytes, temperature: kUnknown
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156505177197, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 10276252, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10227800, "index_size": 31062, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 165190, "raw_average_key_size": 25, "raw_value_size": 10105913, "raw_average_value_size": 1550, "num_data_blocks": 1239, "num_entries": 6516, "num_filter_entries": 6516, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153731, "oldest_key_time": 0, "file_creation_time": 1760156505, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:21:45.177575) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 10276252 bytes
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:21:45.178978) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.0 rd, 152.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 9.8 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(12.9) write-amplify(5.9) OK, records in: 7046, records dropped: 530 output_compression: NoCompression
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:21:45.178999) EVENT_LOG_v1 {"time_micros": 1760156505178989, "job": 42, "event": "compaction_finished", "compaction_time_micros": 67622, "compaction_time_cpu_micros": 44363, "output_level": 6, "num_output_files": 1, "total_output_size": 10276252, "num_input_records": 7046, "num_output_records": 6516, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156505179585, "job": 42, "event": "table_file_deletion", "file_number": 76}
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156505183328, "job": 42, "event": "table_file_deletion", "file_number": 74}
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:21:45.109440) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:21:45.183463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:21:45.183474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:21:45.183477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:21:45.183481) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:21:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:21:45.183484) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:21:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 88 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.9 KiB/s wr, 61 op/s
Oct 11 04:21:46 compute-0 ceph-mon[74273]: osdmap e426: 3 total, 3 up, 3 in
Oct 11 04:21:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e426 do_prune osdmap full prune enabled
Oct 11 04:21:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e427 e427: 3 total, 3 up, 3 in
Oct 11 04:21:46 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e427: 3 total, 3 up, 3 in
Oct 11 04:21:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:21:46 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2890765894' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:21:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:21:46 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2890765894' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:21:47 compute-0 ceph-mon[74273]: pgmap v1779: 305 pgs: 305 active+clean; 88 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.9 KiB/s wr, 61 op/s
Oct 11 04:21:47 compute-0 ceph-mon[74273]: osdmap e427: 3 total, 3 up, 3 in
Oct 11 04:21:47 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2890765894' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:21:47 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2890765894' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:21:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 88 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 4.5 KiB/s wr, 115 op/s
Oct 11 04:21:48 compute-0 nova_compute[259850]: 2025-10-11 04:21:48.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:21:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e427 do_prune osdmap full prune enabled
Oct 11 04:21:49 compute-0 ceph-mon[74273]: pgmap v1781: 305 pgs: 305 active+clean; 88 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 4.5 KiB/s wr, 115 op/s
Oct 11 04:21:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e428 e428: 3 total, 3 up, 3 in
Oct 11 04:21:49 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e428: 3 total, 3 up, 3 in
Oct 11 04:21:49 compute-0 nova_compute[259850]: 2025-10-11 04:21:49.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:21:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 88 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 2.9 KiB/s wr, 62 op/s
Oct 11 04:21:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:21:50 compute-0 ceph-mon[74273]: osdmap e428: 3 total, 3 up, 3 in
Oct 11 04:21:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:21:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1545487671' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:21:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:21:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1545487671' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:21:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:21:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:21:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:21:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:21:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:21:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:21:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e428 do_prune osdmap full prune enabled
Oct 11 04:21:51 compute-0 ceph-mon[74273]: pgmap v1783: 305 pgs: 305 active+clean; 88 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 2.9 KiB/s wr, 62 op/s
Oct 11 04:21:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1545487671' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:21:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1545487671' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:21:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e429 e429: 3 total, 3 up, 3 in
Oct 11 04:21:51 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e429: 3 total, 3 up, 3 in
Oct 11 04:21:51 compute-0 podman[301985]: 2025-10-11 04:21:51.386027035 +0000 UTC m=+0.081619152 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 04:21:51 compute-0 podman[301984]: 2025-10-11 04:21:51.395249706 +0000 UTC m=+0.094940519 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, tcib_managed=true, config_id=multipathd)
Oct 11 04:21:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:21:51 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/580998961' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:21:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:21:51 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/580998961' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:21:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 88 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.8 KiB/s wr, 60 op/s
Oct 11 04:21:52 compute-0 ceph-mon[74273]: osdmap e429: 3 total, 3 up, 3 in
Oct 11 04:21:52 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/580998961' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:21:52 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/580998961' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:21:53 compute-0 ceph-mon[74273]: pgmap v1785: 305 pgs: 305 active+clean; 88 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.8 KiB/s wr, 60 op/s
Oct 11 04:21:53 compute-0 nova_compute[259850]: 2025-10-11 04:21:53.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:21:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 305 active+clean; 88 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 3.8 KiB/s wr, 91 op/s
Oct 11 04:21:54 compute-0 nova_compute[259850]: 2025-10-11 04:21:54.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:21:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:21:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e429 do_prune osdmap full prune enabled
Oct 11 04:21:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e430 e430: 3 total, 3 up, 3 in
Oct 11 04:21:55 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e430: 3 total, 3 up, 3 in
Oct 11 04:21:55 compute-0 ceph-mon[74273]: pgmap v1786: 305 pgs: 305 active+clean; 88 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 3.8 KiB/s wr, 91 op/s
Oct 11 04:21:55 compute-0 ceph-mon[74273]: osdmap e430: 3 total, 3 up, 3 in
Oct 11 04:21:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 305 active+clean; 88 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 1.9 KiB/s wr, 51 op/s
Oct 11 04:21:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e430 do_prune osdmap full prune enabled
Oct 11 04:21:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e431 e431: 3 total, 3 up, 3 in
Oct 11 04:21:56 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e431: 3 total, 3 up, 3 in
Oct 11 04:21:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:21:57 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1587795123' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:21:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:21:57 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1587795123' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:21:57 compute-0 ceph-mon[74273]: pgmap v1788: 305 pgs: 305 active+clean; 88 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 1.9 KiB/s wr, 51 op/s
Oct 11 04:21:57 compute-0 ceph-mon[74273]: osdmap e431: 3 total, 3 up, 3 in
Oct 11 04:21:57 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1587795123' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:21:57 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1587795123' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:21:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 88 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 3.6 KiB/s wr, 95 op/s
Oct 11 04:21:58 compute-0 nova_compute[259850]: 2025-10-11 04:21:58.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:21:59 compute-0 ceph-mon[74273]: pgmap v1790: 305 pgs: 305 active+clean; 88 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 3.6 KiB/s wr, 95 op/s
Oct 11 04:21:59 compute-0 nova_compute[259850]: 2025-10-11 04:21:59.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:21:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 3.7 KiB/s wr, 89 op/s
Oct 11 04:22:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:22:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e431 do_prune osdmap full prune enabled
Oct 11 04:22:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e432 e432: 3 total, 3 up, 3 in
Oct 11 04:22:00 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e432: 3 total, 3 up, 3 in
Oct 11 04:22:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e432 do_prune osdmap full prune enabled
Oct 11 04:22:01 compute-0 ceph-mon[74273]: pgmap v1791: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 3.7 KiB/s wr, 89 op/s
Oct 11 04:22:01 compute-0 ceph-mon[74273]: osdmap e432: 3 total, 3 up, 3 in
Oct 11 04:22:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e433 e433: 3 total, 3 up, 3 in
Oct 11 04:22:01 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e433: 3 total, 3 up, 3 in
Oct 11 04:22:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.8 KiB/s wr, 60 op/s
Oct 11 04:22:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:22:02 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2583237047' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:22:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:22:02 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2583237047' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:22:02 compute-0 ceph-mon[74273]: osdmap e433: 3 total, 3 up, 3 in
Oct 11 04:22:02 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2583237047' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:22:02 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2583237047' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:22:03 compute-0 ceph-mon[74273]: pgmap v1794: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.8 KiB/s wr, 60 op/s
Oct 11 04:22:03 compute-0 nova_compute[259850]: 2025-10-11 04:22:03.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 4.4 KiB/s wr, 93 op/s
Oct 11 04:22:04 compute-0 nova_compute[259850]: 2025-10-11 04:22:04.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:22:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e433 do_prune osdmap full prune enabled
Oct 11 04:22:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e434 e434: 3 total, 3 up, 3 in
Oct 11 04:22:05 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e434: 3 total, 3 up, 3 in
Oct 11 04:22:05 compute-0 ceph-mon[74273]: pgmap v1795: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 4.4 KiB/s wr, 93 op/s
Oct 11 04:22:05 compute-0 ceph-mon[74273]: osdmap e434: 3 total, 3 up, 3 in
Oct 11 04:22:05 compute-0 podman[302026]: 2025-10-11 04:22:05.449482834 +0000 UTC m=+0.153015813 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Oct 11 04:22:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.8 KiB/s wr, 60 op/s
Oct 11 04:22:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e434 do_prune osdmap full prune enabled
Oct 11 04:22:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e435 e435: 3 total, 3 up, 3 in
Oct 11 04:22:06 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e435: 3 total, 3 up, 3 in
Oct 11 04:22:07 compute-0 ceph-mon[74273]: pgmap v1797: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.8 KiB/s wr, 60 op/s
Oct 11 04:22:07 compute-0 ceph-mon[74273]: osdmap e435: 3 total, 3 up, 3 in
Oct 11 04:22:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:22:07 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/90049358' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:22:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:22:07 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/90049358' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:22:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.7 KiB/s wr, 57 op/s
Oct 11 04:22:08 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/90049358' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:22:08 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/90049358' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:22:08 compute-0 nova_compute[259850]: 2025-10-11 04:22:08.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:09 compute-0 ceph-mon[74273]: pgmap v1799: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.7 KiB/s wr, 57 op/s
Oct 11 04:22:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:09.706 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:61:6f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '92:f1:b6:e4:f1:16'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:22:09 compute-0 nova_compute[259850]: 2025-10-11 04:22:09.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:09 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:09.708 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:22:09 compute-0 nova_compute[259850]: 2025-10-11 04:22:09.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 4.2 KiB/s wr, 90 op/s
Oct 11 04:22:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:22:10 compute-0 podman[302052]: 2025-10-11 04:22:10.362607894 +0000 UTC m=+0.067685818 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 11 04:22:11 compute-0 ceph-mon[74273]: pgmap v1800: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 4.2 KiB/s wr, 90 op/s
Oct 11 04:22:11 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:11.711 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8a473e03-2208-47ae-afcd-05ad744a5969, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:22:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.1 KiB/s wr, 45 op/s
Oct 11 04:22:13 compute-0 ceph-mon[74273]: pgmap v1801: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.1 KiB/s wr, 45 op/s
Oct 11 04:22:13 compute-0 sudo[302071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:22:13 compute-0 sudo[302071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:22:13 compute-0 sudo[302071]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:13 compute-0 nova_compute[259850]: 2025-10-11 04:22:13.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:13 compute-0 sudo[302096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:22:13 compute-0 sudo[302096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:22:13 compute-0 sudo[302096]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:13 compute-0 sudo[302121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:22:13 compute-0 sudo[302121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:22:13 compute-0 sudo[302121]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:13 compute-0 sudo[302146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:22:13 compute-0 sudo[302146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:22:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.9 KiB/s wr, 41 op/s
Oct 11 04:22:14 compute-0 sudo[302146]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:22:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:22:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:22:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:22:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:22:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:22:14 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev b6f58155-1d62-440a-a475-ccf3ed544d67 does not exist
Oct 11 04:22:14 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 8e42cb3d-8956-4e4d-ac4b-c7ca58822c60 does not exist
Oct 11 04:22:14 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 03590922-d873-4742-a800-85c23dcec27c does not exist
Oct 11 04:22:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:22:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:22:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:22:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:22:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:22:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:22:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:22:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:22:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:22:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:22:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:22:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:22:14 compute-0 sudo[302203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:22:14 compute-0 sudo[302203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:22:14 compute-0 sudo[302203]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:14 compute-0 sudo[302228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:22:14 compute-0 sudo[302228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:22:14 compute-0 sudo[302228]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:14 compute-0 sudo[302253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:22:14 compute-0 sudo[302253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:22:14 compute-0 sudo[302253]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:14 compute-0 sudo[302278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:22:14 compute-0 sudo[302278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:22:14 compute-0 nova_compute[259850]: 2025-10-11 04:22:14.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:15 compute-0 podman[302345]: 2025-10-11 04:22:15.01207047 +0000 UTC m=+0.052067015 container create b156e52daed9a70d0e2bb6236c13620101b27f52eea04905f999dda109062ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:22:15 compute-0 systemd[1]: Started libpod-conmon-b156e52daed9a70d0e2bb6236c13620101b27f52eea04905f999dda109062ed6.scope.
Oct 11 04:22:15 compute-0 podman[302345]: 2025-10-11 04:22:14.993968007 +0000 UTC m=+0.033964572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:22:15 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:22:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:22:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e435 do_prune osdmap full prune enabled
Oct 11 04:22:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e436 e436: 3 total, 3 up, 3 in
Oct 11 04:22:15 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e436: 3 total, 3 up, 3 in
Oct 11 04:22:15 compute-0 podman[302345]: 2025-10-11 04:22:15.108274204 +0000 UTC m=+0.148271039 container init b156e52daed9a70d0e2bb6236c13620101b27f52eea04905f999dda109062ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_herschel, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:22:15 compute-0 podman[302345]: 2025-10-11 04:22:15.117102854 +0000 UTC m=+0.157099399 container start b156e52daed9a70d0e2bb6236c13620101b27f52eea04905f999dda109062ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:22:15 compute-0 podman[302345]: 2025-10-11 04:22:15.120250813 +0000 UTC m=+0.160247398 container attach b156e52daed9a70d0e2bb6236c13620101b27f52eea04905f999dda109062ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_herschel, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 04:22:15 compute-0 zen_herschel[302362]: 167 167
Oct 11 04:22:15 compute-0 systemd[1]: libpod-b156e52daed9a70d0e2bb6236c13620101b27f52eea04905f999dda109062ed6.scope: Deactivated successfully.
Oct 11 04:22:15 compute-0 podman[302345]: 2025-10-11 04:22:15.123462694 +0000 UTC m=+0.163459249 container died b156e52daed9a70d0e2bb6236c13620101b27f52eea04905f999dda109062ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 04:22:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6854c05ff6dc4eb3bd199cc2d60ee5f30b5a3a3f9895045300f104d0fcff90c4-merged.mount: Deactivated successfully.
Oct 11 04:22:15 compute-0 podman[302345]: 2025-10-11 04:22:15.168210571 +0000 UTC m=+0.208207166 container remove b156e52daed9a70d0e2bb6236c13620101b27f52eea04905f999dda109062ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:22:15 compute-0 systemd[1]: libpod-conmon-b156e52daed9a70d0e2bb6236c13620101b27f52eea04905f999dda109062ed6.scope: Deactivated successfully.
Oct 11 04:22:15 compute-0 ceph-mon[74273]: pgmap v1802: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.9 KiB/s wr, 41 op/s
Oct 11 04:22:15 compute-0 ceph-mon[74273]: osdmap e436: 3 total, 3 up, 3 in
Oct 11 04:22:15 compute-0 podman[302387]: 2025-10-11 04:22:15.360413522 +0000 UTC m=+0.064390224 container create a32b8d2b1a4597c85ae50ca05493d44e94b381a1ae373f1e12d7f8e87a6fb082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:22:15 compute-0 systemd[1]: Started libpod-conmon-a32b8d2b1a4597c85ae50ca05493d44e94b381a1ae373f1e12d7f8e87a6fb082.scope.
Oct 11 04:22:15 compute-0 podman[302387]: 2025-10-11 04:22:15.33138775 +0000 UTC m=+0.035364512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:22:15 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:22:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0015228b306dac87d5691d6733f54972e5e2f33c5e57b85d3d10b61b9a5d694/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:22:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0015228b306dac87d5691d6733f54972e5e2f33c5e57b85d3d10b61b9a5d694/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:22:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0015228b306dac87d5691d6733f54972e5e2f33c5e57b85d3d10b61b9a5d694/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:22:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0015228b306dac87d5691d6733f54972e5e2f33c5e57b85d3d10b61b9a5d694/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:22:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0015228b306dac87d5691d6733f54972e5e2f33c5e57b85d3d10b61b9a5d694/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:22:15 compute-0 podman[302387]: 2025-10-11 04:22:15.482955272 +0000 UTC m=+0.186931994 container init a32b8d2b1a4597c85ae50ca05493d44e94b381a1ae373f1e12d7f8e87a6fb082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_fermat, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 04:22:15 compute-0 podman[302387]: 2025-10-11 04:22:15.497575456 +0000 UTC m=+0.201552158 container start a32b8d2b1a4597c85ae50ca05493d44e94b381a1ae373f1e12d7f8e87a6fb082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_fermat, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:22:15 compute-0 podman[302387]: 2025-10-11 04:22:15.501284951 +0000 UTC m=+0.205261653 container attach a32b8d2b1a4597c85ae50ca05493d44e94b381a1ae373f1e12d7f8e87a6fb082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_fermat, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 04:22:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 KiB/s wr, 37 op/s
Oct 11 04:22:16 compute-0 elastic_fermat[302404]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:22:16 compute-0 elastic_fermat[302404]: --> relative data size: 1.0
Oct 11 04:22:16 compute-0 elastic_fermat[302404]: --> All data devices are unavailable
Oct 11 04:22:16 compute-0 systemd[1]: libpod-a32b8d2b1a4597c85ae50ca05493d44e94b381a1ae373f1e12d7f8e87a6fb082.scope: Deactivated successfully.
Oct 11 04:22:16 compute-0 podman[302387]: 2025-10-11 04:22:16.633383933 +0000 UTC m=+1.337360645 container died a32b8d2b1a4597c85ae50ca05493d44e94b381a1ae373f1e12d7f8e87a6fb082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_fermat, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:22:16 compute-0 systemd[1]: libpod-a32b8d2b1a4597c85ae50ca05493d44e94b381a1ae373f1e12d7f8e87a6fb082.scope: Consumed 1.100s CPU time.
Oct 11 04:22:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0015228b306dac87d5691d6733f54972e5e2f33c5e57b85d3d10b61b9a5d694-merged.mount: Deactivated successfully.
Oct 11 04:22:16 compute-0 podman[302387]: 2025-10-11 04:22:16.708907861 +0000 UTC m=+1.412884563 container remove a32b8d2b1a4597c85ae50ca05493d44e94b381a1ae373f1e12d7f8e87a6fb082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:22:16 compute-0 systemd[1]: libpod-conmon-a32b8d2b1a4597c85ae50ca05493d44e94b381a1ae373f1e12d7f8e87a6fb082.scope: Deactivated successfully.
Oct 11 04:22:16 compute-0 sudo[302278]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:16 compute-0 sudo[302447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:22:16 compute-0 sudo[302447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:22:16 compute-0 sudo[302447]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:16 compute-0 sudo[302472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:22:16 compute-0 sudo[302472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:22:16 compute-0 sudo[302472]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:17 compute-0 sudo[302497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:22:17 compute-0 sudo[302497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:22:17 compute-0 sudo[302497]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:17 compute-0 sudo[302522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 04:22:17 compute-0 sudo[302522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:22:17 compute-0 ceph-mon[74273]: pgmap v1804: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 KiB/s wr, 37 op/s
Oct 11 04:22:17 compute-0 podman[302589]: 2025-10-11 04:22:17.456185918 +0000 UTC m=+0.048729031 container create f17619bfdc3be920c45eddf45e88ffe53940cb504fd33c45750ed04b07d9acc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 04:22:17 compute-0 systemd[1]: Started libpod-conmon-f17619bfdc3be920c45eddf45e88ffe53940cb504fd33c45750ed04b07d9acc7.scope.
Oct 11 04:22:17 compute-0 podman[302589]: 2025-10-11 04:22:17.431478358 +0000 UTC m=+0.024021531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:22:17 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:22:17 compute-0 podman[302589]: 2025-10-11 04:22:17.557659031 +0000 UTC m=+0.150202204 container init f17619bfdc3be920c45eddf45e88ffe53940cb504fd33c45750ed04b07d9acc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cartwright, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:22:17 compute-0 podman[302589]: 2025-10-11 04:22:17.572290695 +0000 UTC m=+0.164833818 container start f17619bfdc3be920c45eddf45e88ffe53940cb504fd33c45750ed04b07d9acc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 04:22:17 compute-0 podman[302589]: 2025-10-11 04:22:17.575822625 +0000 UTC m=+0.168365748 container attach f17619bfdc3be920c45eddf45e88ffe53940cb504fd33c45750ed04b07d9acc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 04:22:17 compute-0 upbeat_cartwright[302605]: 167 167
Oct 11 04:22:17 compute-0 systemd[1]: libpod-f17619bfdc3be920c45eddf45e88ffe53940cb504fd33c45750ed04b07d9acc7.scope: Deactivated successfully.
Oct 11 04:22:17 compute-0 podman[302589]: 2025-10-11 04:22:17.580085746 +0000 UTC m=+0.172628859 container died f17619bfdc3be920c45eddf45e88ffe53940cb504fd33c45750ed04b07d9acc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cartwright, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 04:22:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7940bf8cf3d67279332ccce3b0bf4d8efcd8473ec168aba59f58da0497adcdf-merged.mount: Deactivated successfully.
Oct 11 04:22:17 compute-0 podman[302589]: 2025-10-11 04:22:17.626451648 +0000 UTC m=+0.218994771 container remove f17619bfdc3be920c45eddf45e88ffe53940cb504fd33c45750ed04b07d9acc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cartwright, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:22:17 compute-0 systemd[1]: libpod-conmon-f17619bfdc3be920c45eddf45e88ffe53940cb504fd33c45750ed04b07d9acc7.scope: Deactivated successfully.
Oct 11 04:22:17 compute-0 podman[302629]: 2025-10-11 04:22:17.868184692 +0000 UTC m=+0.066101782 container create 6f509b748f049df89ec1265d8a5a04b5de4e0004cd142d733e507919fe9afb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gagarin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:22:17 compute-0 systemd[1]: Started libpod-conmon-6f509b748f049df89ec1265d8a5a04b5de4e0004cd142d733e507919fe9afb45.scope.
Oct 11 04:22:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.6 KiB/s wr, 34 op/s
Oct 11 04:22:17 compute-0 podman[302629]: 2025-10-11 04:22:17.848524596 +0000 UTC m=+0.046441706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:22:17 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44b9602a127a335ab5d98850d3c221bdd4a43126da71673f5202238dd7d84715/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44b9602a127a335ab5d98850d3c221bdd4a43126da71673f5202238dd7d84715/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44b9602a127a335ab5d98850d3c221bdd4a43126da71673f5202238dd7d84715/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44b9602a127a335ab5d98850d3c221bdd4a43126da71673f5202238dd7d84715/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:22:17 compute-0 podman[302629]: 2025-10-11 04:22:17.960039863 +0000 UTC m=+0.157956983 container init 6f509b748f049df89ec1265d8a5a04b5de4e0004cd142d733e507919fe9afb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 04:22:17 compute-0 podman[302629]: 2025-10-11 04:22:17.974715149 +0000 UTC m=+0.172632279 container start 6f509b748f049df89ec1265d8a5a04b5de4e0004cd142d733e507919fe9afb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:22:17 compute-0 podman[302629]: 2025-10-11 04:22:17.979494894 +0000 UTC m=+0.177412034 container attach 6f509b748f049df89ec1265d8a5a04b5de4e0004cd142d733e507919fe9afb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:22:18 compute-0 nova_compute[259850]: 2025-10-11 04:22:18.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]: {
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:     "0": [
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:         {
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "devices": [
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "/dev/loop3"
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             ],
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "lv_name": "ceph_lv0",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "lv_size": "21470642176",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "name": "ceph_lv0",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "tags": {
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.cluster_name": "ceph",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.crush_device_class": "",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.encrypted": "0",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.osd_id": "0",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.type": "block",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.vdo": "0"
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             },
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "type": "block",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "vg_name": "ceph_vg0"
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:         }
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:     ],
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:     "1": [
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:         {
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "devices": [
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "/dev/loop4"
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             ],
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "lv_name": "ceph_lv1",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "lv_size": "21470642176",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "name": "ceph_lv1",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "tags": {
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.cluster_name": "ceph",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.crush_device_class": "",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.encrypted": "0",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.osd_id": "1",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.type": "block",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.vdo": "0"
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             },
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "type": "block",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "vg_name": "ceph_vg1"
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:         }
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:     ],
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:     "2": [
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:         {
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "devices": [
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "/dev/loop5"
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             ],
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "lv_name": "ceph_lv2",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "lv_size": "21470642176",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "name": "ceph_lv2",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "tags": {
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.cluster_name": "ceph",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.crush_device_class": "",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.encrypted": "0",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.osd_id": "2",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.type": "block",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:                 "ceph.vdo": "0"
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             },
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "type": "block",
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:             "vg_name": "ceph_vg2"
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:         }
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]:     ]
Oct 11 04:22:18 compute-0 friendly_gagarin[302646]: }
Oct 11 04:22:18 compute-0 systemd[1]: libpod-6f509b748f049df89ec1265d8a5a04b5de4e0004cd142d733e507919fe9afb45.scope: Deactivated successfully.
Oct 11 04:22:18 compute-0 podman[302629]: 2025-10-11 04:22:18.718876048 +0000 UTC m=+0.916793148 container died 6f509b748f049df89ec1265d8a5a04b5de4e0004cd142d733e507919fe9afb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:22:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-44b9602a127a335ab5d98850d3c221bdd4a43126da71673f5202238dd7d84715-merged.mount: Deactivated successfully.
Oct 11 04:22:18 compute-0 podman[302629]: 2025-10-11 04:22:18.789178828 +0000 UTC m=+0.987095948 container remove 6f509b748f049df89ec1265d8a5a04b5de4e0004cd142d733e507919fe9afb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gagarin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:22:18 compute-0 systemd[1]: libpod-conmon-6f509b748f049df89ec1265d8a5a04b5de4e0004cd142d733e507919fe9afb45.scope: Deactivated successfully.
Oct 11 04:22:18 compute-0 sudo[302522]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:18 compute-0 sudo[302666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:22:18 compute-0 sudo[302666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:22:18 compute-0 sudo[302666]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:18 compute-0 sudo[302691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:22:18 compute-0 sudo[302691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:22:18 compute-0 sudo[302691]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:19 compute-0 sudo[302716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:22:19 compute-0 sudo[302716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:22:19 compute-0 sudo[302716]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:19 compute-0 sudo[302741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 04:22:19 compute-0 sudo[302741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:22:19 compute-0 ceph-mon[74273]: pgmap v1805: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.6 KiB/s wr, 34 op/s
Oct 11 04:22:19 compute-0 podman[302809]: 2025-10-11 04:22:19.624957791 +0000 UTC m=+0.070186928 container create df5395bcd848a91db8fb6e268c7562268c17f5191c3f846489de8475c2bdc188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 04:22:19 compute-0 systemd[1]: Started libpod-conmon-df5395bcd848a91db8fb6e268c7562268c17f5191c3f846489de8475c2bdc188.scope.
Oct 11 04:22:19 compute-0 podman[302809]: 2025-10-11 04:22:19.595234729 +0000 UTC m=+0.040463926 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:22:19 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:22:19 compute-0 podman[302809]: 2025-10-11 04:22:19.728349218 +0000 UTC m=+0.173578385 container init df5395bcd848a91db8fb6e268c7562268c17f5191c3f846489de8475c2bdc188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Oct 11 04:22:19 compute-0 podman[302809]: 2025-10-11 04:22:19.739229816 +0000 UTC m=+0.184458963 container start df5395bcd848a91db8fb6e268c7562268c17f5191c3f846489de8475c2bdc188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:22:19 compute-0 podman[302809]: 2025-10-11 04:22:19.743560289 +0000 UTC m=+0.188789486 container attach df5395bcd848a91db8fb6e268c7562268c17f5191c3f846489de8475c2bdc188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mcnulty, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:22:19 compute-0 fervent_mcnulty[302825]: 167 167
Oct 11 04:22:19 compute-0 systemd[1]: libpod-df5395bcd848a91db8fb6e268c7562268c17f5191c3f846489de8475c2bdc188.scope: Deactivated successfully.
Oct 11 04:22:19 compute-0 conmon[302825]: conmon df5395bcd848a91db8fb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-df5395bcd848a91db8fb6e268c7562268c17f5191c3f846489de8475c2bdc188.scope/container/memory.events
Oct 11 04:22:19 compute-0 podman[302809]: 2025-10-11 04:22:19.750333021 +0000 UTC m=+0.195562158 container died df5395bcd848a91db8fb6e268c7562268c17f5191c3f846489de8475c2bdc188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 04:22:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f1c9e2cec6100ec8999cdbb0606bf426f2bb096369d52768145495679e06af9-merged.mount: Deactivated successfully.
Oct 11 04:22:19 compute-0 podman[302809]: 2025-10-11 04:22:19.8043679 +0000 UTC m=+0.249597017 container remove df5395bcd848a91db8fb6e268c7562268c17f5191c3f846489de8475c2bdc188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 04:22:19 compute-0 systemd[1]: libpod-conmon-df5395bcd848a91db8fb6e268c7562268c17f5191c3f846489de8475c2bdc188.scope: Deactivated successfully.
Oct 11 04:22:19 compute-0 nova_compute[259850]: 2025-10-11 04:22:19.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:20 compute-0 podman[302848]: 2025-10-11 04:22:20.053612997 +0000 UTC m=+0.057322204 container create 0ca591b5a54b089bfc5b53ff4cedff765935b023e913f160bb2143456e2fd51a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_proskuriakova, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 04:22:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:22:20 compute-0 systemd[1]: Started libpod-conmon-0ca591b5a54b089bfc5b53ff4cedff765935b023e913f160bb2143456e2fd51a.scope.
Oct 11 04:22:20 compute-0 podman[302848]: 2025-10-11 04:22:20.034738713 +0000 UTC m=+0.038447840 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:22:20 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:22:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d35bd4251f4b92381455bb0562cdfb8b7b335b27035e3a620f291262b8b8664a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:22:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d35bd4251f4b92381455bb0562cdfb8b7b335b27035e3a620f291262b8b8664a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:22:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d35bd4251f4b92381455bb0562cdfb8b7b335b27035e3a620f291262b8b8664a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:22:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d35bd4251f4b92381455bb0562cdfb8b7b335b27035e3a620f291262b8b8664a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:22:20 compute-0 podman[302848]: 2025-10-11 04:22:20.169985942 +0000 UTC m=+0.173695129 container init 0ca591b5a54b089bfc5b53ff4cedff765935b023e913f160bb2143456e2fd51a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:22:20 compute-0 podman[302848]: 2025-10-11 04:22:20.17909163 +0000 UTC m=+0.182800777 container start 0ca591b5a54b089bfc5b53ff4cedff765935b023e913f160bb2143456e2fd51a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_proskuriakova, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:22:20 compute-0 podman[302848]: 2025-10-11 04:22:20.183060062 +0000 UTC m=+0.186769209 container attach 0ca591b5a54b089bfc5b53ff4cedff765935b023e913f160bb2143456e2fd51a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_proskuriakova, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:22:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:22:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:22:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:22:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:22:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:22:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:22:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:22:20
Oct 11 04:22:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:22:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:22:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'backups', 'vms']
Oct 11 04:22:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:22:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:22:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:22:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:22:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:22:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:22:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:22:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:22:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:22:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:22:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:22:21 compute-0 trusting_proskuriakova[302865]: {
Oct 11 04:22:21 compute-0 trusting_proskuriakova[302865]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 04:22:21 compute-0 trusting_proskuriakova[302865]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:22:21 compute-0 trusting_proskuriakova[302865]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:22:21 compute-0 trusting_proskuriakova[302865]:         "osd_id": 1,
Oct 11 04:22:21 compute-0 trusting_proskuriakova[302865]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:22:21 compute-0 trusting_proskuriakova[302865]:         "type": "bluestore"
Oct 11 04:22:21 compute-0 trusting_proskuriakova[302865]:     },
Oct 11 04:22:21 compute-0 trusting_proskuriakova[302865]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 04:22:21 compute-0 trusting_proskuriakova[302865]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:22:21 compute-0 trusting_proskuriakova[302865]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:22:21 compute-0 trusting_proskuriakova[302865]:         "osd_id": 2,
Oct 11 04:22:21 compute-0 trusting_proskuriakova[302865]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:22:21 compute-0 trusting_proskuriakova[302865]:         "type": "bluestore"
Oct 11 04:22:21 compute-0 trusting_proskuriakova[302865]:     },
Oct 11 04:22:21 compute-0 trusting_proskuriakova[302865]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 04:22:21 compute-0 trusting_proskuriakova[302865]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:22:21 compute-0 trusting_proskuriakova[302865]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:22:21 compute-0 trusting_proskuriakova[302865]:         "osd_id": 0,
Oct 11 04:22:21 compute-0 trusting_proskuriakova[302865]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:22:21 compute-0 trusting_proskuriakova[302865]:         "type": "bluestore"
Oct 11 04:22:21 compute-0 trusting_proskuriakova[302865]:     }
Oct 11 04:22:21 compute-0 trusting_proskuriakova[302865]: }
Oct 11 04:22:21 compute-0 systemd[1]: libpod-0ca591b5a54b089bfc5b53ff4cedff765935b023e913f160bb2143456e2fd51a.scope: Deactivated successfully.
Oct 11 04:22:21 compute-0 systemd[1]: libpod-0ca591b5a54b089bfc5b53ff4cedff765935b023e913f160bb2143456e2fd51a.scope: Consumed 1.161s CPU time.
Oct 11 04:22:21 compute-0 ceph-mon[74273]: pgmap v1806: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:21 compute-0 podman[302899]: 2025-10-11 04:22:21.384346293 +0000 UTC m=+0.031720819 container died 0ca591b5a54b089bfc5b53ff4cedff765935b023e913f160bb2143456e2fd51a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_proskuriakova, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:22:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d35bd4251f4b92381455bb0562cdfb8b7b335b27035e3a620f291262b8b8664a-merged.mount: Deactivated successfully.
Oct 11 04:22:21 compute-0 podman[302899]: 2025-10-11 04:22:21.43724549 +0000 UTC m=+0.084619986 container remove 0ca591b5a54b089bfc5b53ff4cedff765935b023e913f160bb2143456e2fd51a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_proskuriakova, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 04:22:21 compute-0 systemd[1]: libpod-conmon-0ca591b5a54b089bfc5b53ff4cedff765935b023e913f160bb2143456e2fd51a.scope: Deactivated successfully.
Oct 11 04:22:21 compute-0 sudo[302741]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:22:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:22:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:22:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:22:21 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 0a5a7896-16ac-4e14-b46e-709a54c2079e does not exist
Oct 11 04:22:21 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 4b3aaa81-73c1-4bd0-8d9f-e32018ad68e8 does not exist
Oct 11 04:22:21 compute-0 podman[302914]: 2025-10-11 04:22:21.535367928 +0000 UTC m=+0.094837536 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:22:21 compute-0 podman[302915]: 2025-10-11 04:22:21.535331887 +0000 UTC m=+0.094533617 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 04:22:21 compute-0 sudo[302951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:22:21 compute-0 sudo[302951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:22:21 compute-0 sudo[302951]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:21 compute-0 sudo[302976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:22:21 compute-0 sudo[302976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:22:21 compute-0 sudo[302976]: pam_unix(sudo:session): session closed for user root
Oct 11 04:22:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:22:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:22:22 compute-0 ceph-mon[74273]: pgmap v1807: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:22.973 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:22:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:22.974 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:22:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:22.974 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:22:23 compute-0 nova_compute[259850]: 2025-10-11 04:22:23.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1808: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:24 compute-0 nova_compute[259850]: 2025-10-11 04:22:24.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:24 compute-0 ceph-mon[74273]: pgmap v1808: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:22:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:27 compute-0 ceph-mon[74273]: pgmap v1809: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:28 compute-0 nova_compute[259850]: 2025-10-11 04:22:28.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:29 compute-0 ceph-mon[74273]: pgmap v1810: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:29 compute-0 nova_compute[259850]: 2025-10-11 04:22:29.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:22:30 compute-0 nova_compute[259850]: 2025-10-11 04:22:30.108 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:22:30 compute-0 nova_compute[259850]: 2025-10-11 04:22:30.109 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:22:31 compute-0 ceph-mon[74273]: pgmap v1811: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:31 compute-0 nova_compute[259850]: 2025-10-11 04:22:31.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:22:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:22:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:22:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:22:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:22:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:22:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:22:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00034720526470013676 of space, bias 1.0, pg target 0.10416157941004103 quantized to 32 (current 32)
Oct 11 04:22:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:22:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:22:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:22:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct 11 04:22:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:22:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:22:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:22:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:22:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:22:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:22:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:22:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:22:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:22:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:22:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:22:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:22:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:32 compute-0 nova_compute[259850]: 2025-10-11 04:22:32.054 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:22:33 compute-0 ceph-mon[74273]: pgmap v1812: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:33 compute-0 nova_compute[259850]: 2025-10-11 04:22:33.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:22:33 compute-0 nova_compute[259850]: 2025-10-11 04:22:33.059 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:22:33 compute-0 nova_compute[259850]: 2025-10-11 04:22:33.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:22:33 compute-0 nova_compute[259850]: 2025-10-11 04:22:33.077 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:22:33 compute-0 nova_compute[259850]: 2025-10-11 04:22:33.078 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:22:33 compute-0 nova_compute[259850]: 2025-10-11 04:22:33.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:34 compute-0 nova_compute[259850]: 2025-10-11 04:22:34.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:22:34 compute-0 nova_compute[259850]: 2025-10-11 04:22:34.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:22:34 compute-0 nova_compute[259850]: 2025-10-11 04:22:34.086 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:22:34 compute-0 nova_compute[259850]: 2025-10-11 04:22:34.087 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:22:34 compute-0 nova_compute[259850]: 2025-10-11 04:22:34.087 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:22:34 compute-0 nova_compute[259850]: 2025-10-11 04:22:34.088 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:22:34 compute-0 nova_compute[259850]: 2025-10-11 04:22:34.088 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:22:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:22:34 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2048832082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:22:34 compute-0 nova_compute[259850]: 2025-10-11 04:22:34.518 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:22:34 compute-0 nova_compute[259850]: 2025-10-11 04:22:34.768 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:22:34 compute-0 nova_compute[259850]: 2025-10-11 04:22:34.770 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4338MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:22:34 compute-0 nova_compute[259850]: 2025-10-11 04:22:34.771 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:22:34 compute-0 nova_compute[259850]: 2025-10-11 04:22:34.771 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:22:34 compute-0 nova_compute[259850]: 2025-10-11 04:22:34.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:35 compute-0 nova_compute[259850]: 2025-10-11 04:22:35.031 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:22:35 compute-0 nova_compute[259850]: 2025-10-11 04:22:35.032 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:22:35 compute-0 ceph-mon[74273]: pgmap v1813: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:35 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2048832082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:22:35 compute-0 nova_compute[259850]: 2025-10-11 04:22:35.099 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:22:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:22:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:22:35 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3028897921' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:22:35 compute-0 nova_compute[259850]: 2025-10-11 04:22:35.571 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:22:35 compute-0 nova_compute[259850]: 2025-10-11 04:22:35.580 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:22:35 compute-0 nova_compute[259850]: 2025-10-11 04:22:35.600 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:22:35 compute-0 nova_compute[259850]: 2025-10-11 04:22:35.603 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:22:35 compute-0 nova_compute[259850]: 2025-10-11 04:22:35.604 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:22:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:36 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3028897921' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:22:36 compute-0 podman[303045]: 2025-10-11 04:22:36.439377944 +0000 UTC m=+0.139711966 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:22:36 compute-0 nova_compute[259850]: 2025-10-11 04:22:36.605 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:22:37 compute-0 ceph-mon[74273]: pgmap v1814: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:38 compute-0 nova_compute[259850]: 2025-10-11 04:22:38.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:39 compute-0 ceph-mon[74273]: pgmap v1815: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:39 compute-0 nova_compute[259850]: 2025-10-11 04:22:39.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:40 compute-0 nova_compute[259850]: 2025-10-11 04:22:40.055 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:22:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:22:41 compute-0 nova_compute[259850]: 2025-10-11 04:22:41.058 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:22:41 compute-0 ceph-mon[74273]: pgmap v1816: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:41 compute-0 podman[303071]: 2025-10-11 04:22:41.362108288 +0000 UTC m=+0.064453926 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:22:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:43 compute-0 ceph-mon[74273]: pgmap v1817: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:43 compute-0 nova_compute[259850]: 2025-10-11 04:22:43.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:44 compute-0 nova_compute[259850]: 2025-10-11 04:22:44.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:45 compute-0 ceph-mon[74273]: pgmap v1818: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:22:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:47 compute-0 ceph-mon[74273]: pgmap v1819: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:48 compute-0 nova_compute[259850]: 2025-10-11 04:22:48.373 2 DEBUG oslo_concurrency.lockutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Acquiring lock "cd805eb4-703c-4647-bda1-59e3435d8c15" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:22:48 compute-0 nova_compute[259850]: 2025-10-11 04:22:48.373 2 DEBUG oslo_concurrency.lockutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:22:48 compute-0 nova_compute[259850]: 2025-10-11 04:22:48.388 2 DEBUG nova.compute.manager [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:22:48 compute-0 nova_compute[259850]: 2025-10-11 04:22:48.485 2 DEBUG oslo_concurrency.lockutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:22:48 compute-0 nova_compute[259850]: 2025-10-11 04:22:48.486 2 DEBUG oslo_concurrency.lockutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:22:48 compute-0 nova_compute[259850]: 2025-10-11 04:22:48.498 2 DEBUG nova.virt.hardware [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:22:48 compute-0 nova_compute[259850]: 2025-10-11 04:22:48.498 2 INFO nova.compute.claims [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:22:48 compute-0 nova_compute[259850]: 2025-10-11 04:22:48.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:48 compute-0 nova_compute[259850]: 2025-10-11 04:22:48.631 2 DEBUG oslo_concurrency.processutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:22:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:22:49 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2155888870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:22:49 compute-0 nova_compute[259850]: 2025-10-11 04:22:49.104 2 DEBUG oslo_concurrency.processutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:22:49 compute-0 ceph-mon[74273]: pgmap v1820: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:49 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2155888870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:22:49 compute-0 nova_compute[259850]: 2025-10-11 04:22:49.115 2 DEBUG nova.compute.provider_tree [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:22:49 compute-0 nova_compute[259850]: 2025-10-11 04:22:49.132 2 DEBUG nova.scheduler.client.report [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:22:49 compute-0 nova_compute[259850]: 2025-10-11 04:22:49.156 2 DEBUG oslo_concurrency.lockutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:22:49 compute-0 nova_compute[259850]: 2025-10-11 04:22:49.158 2 DEBUG nova.compute.manager [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:22:49 compute-0 nova_compute[259850]: 2025-10-11 04:22:49.207 2 DEBUG nova.compute.manager [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:22:49 compute-0 nova_compute[259850]: 2025-10-11 04:22:49.208 2 DEBUG nova.network.neutron [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:22:49 compute-0 nova_compute[259850]: 2025-10-11 04:22:49.235 2 INFO nova.virt.libvirt.driver [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:22:49 compute-0 nova_compute[259850]: 2025-10-11 04:22:49.257 2 DEBUG nova.compute.manager [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:22:49 compute-0 nova_compute[259850]: 2025-10-11 04:22:49.344 2 DEBUG nova.compute.manager [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:22:49 compute-0 nova_compute[259850]: 2025-10-11 04:22:49.346 2 DEBUG nova.virt.libvirt.driver [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:22:49 compute-0 nova_compute[259850]: 2025-10-11 04:22:49.347 2 INFO nova.virt.libvirt.driver [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Creating image(s)
Oct 11 04:22:49 compute-0 nova_compute[259850]: 2025-10-11 04:22:49.385 2 DEBUG nova.storage.rbd_utils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] rbd image cd805eb4-703c-4647-bda1-59e3435d8c15_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:22:49 compute-0 nova_compute[259850]: 2025-10-11 04:22:49.418 2 DEBUG nova.storage.rbd_utils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] rbd image cd805eb4-703c-4647-bda1-59e3435d8c15_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:22:49 compute-0 nova_compute[259850]: 2025-10-11 04:22:49.444 2 DEBUG nova.storage.rbd_utils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] rbd image cd805eb4-703c-4647-bda1-59e3435d8c15_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:22:49 compute-0 nova_compute[259850]: 2025-10-11 04:22:49.448 2 DEBUG oslo_concurrency.processutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:22:49 compute-0 nova_compute[259850]: 2025-10-11 04:22:49.529 2 DEBUG nova.policy [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '33278e6c76494cbbac3a77443a2127d6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5a777d54362640ae90dbd99f4e0ce865', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:22:49 compute-0 nova_compute[259850]: 2025-10-11 04:22:49.536 2 DEBUG oslo_concurrency.processutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:22:49 compute-0 nova_compute[259850]: 2025-10-11 04:22:49.537 2 DEBUG oslo_concurrency.lockutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Acquiring lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:22:49 compute-0 nova_compute[259850]: 2025-10-11 04:22:49.538 2 DEBUG oslo_concurrency.lockutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:22:49 compute-0 nova_compute[259850]: 2025-10-11 04:22:49.539 2 DEBUG oslo_concurrency.lockutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:22:49 compute-0 nova_compute[259850]: 2025-10-11 04:22:49.572 2 DEBUG nova.storage.rbd_utils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] rbd image cd805eb4-703c-4647-bda1-59e3435d8c15_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:22:49 compute-0 nova_compute[259850]: 2025-10-11 04:22:49.577 2 DEBUG oslo_concurrency.processutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac cd805eb4-703c-4647-bda1-59e3435d8c15_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:22:49 compute-0 nova_compute[259850]: 2025-10-11 04:22:49.899 2 DEBUG oslo_concurrency.processutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac cd805eb4-703c-4647-bda1-59e3435d8c15_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:22:49 compute-0 nova_compute[259850]: 2025-10-11 04:22:49.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:49 compute-0 nova_compute[259850]: 2025-10-11 04:22:49.987 2 DEBUG nova.storage.rbd_utils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] resizing rbd image cd805eb4-703c-4647-bda1-59e3435d8c15_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:22:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:22:50 compute-0 nova_compute[259850]: 2025-10-11 04:22:50.114 2 DEBUG nova.objects.instance [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lazy-loading 'migration_context' on Instance uuid cd805eb4-703c-4647-bda1-59e3435d8c15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:22:50 compute-0 nova_compute[259850]: 2025-10-11 04:22:50.130 2 DEBUG nova.virt.libvirt.driver [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:22:50 compute-0 nova_compute[259850]: 2025-10-11 04:22:50.130 2 DEBUG nova.virt.libvirt.driver [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Ensure instance console log exists: /var/lib/nova/instances/cd805eb4-703c-4647-bda1-59e3435d8c15/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:22:50 compute-0 nova_compute[259850]: 2025-10-11 04:22:50.131 2 DEBUG oslo_concurrency.lockutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:22:50 compute-0 nova_compute[259850]: 2025-10-11 04:22:50.132 2 DEBUG oslo_concurrency.lockutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:22:50 compute-0 nova_compute[259850]: 2025-10-11 04:22:50.132 2 DEBUG oslo_concurrency.lockutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:22:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:22:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/396946027' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:22:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:22:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/396946027' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:22:50 compute-0 nova_compute[259850]: 2025-10-11 04:22:50.695 2 DEBUG nova.network.neutron [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Successfully created port: 9e5f5bdc-671b-4d1a-b567-050dd8925c57 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:22:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:22:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:22:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:22:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:22:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:22:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:22:51 compute-0 ceph-mon[74273]: pgmap v1821: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/396946027' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:22:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/396946027' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:22:51 compute-0 nova_compute[259850]: 2025-10-11 04:22:51.873 2 DEBUG nova.network.neutron [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Successfully updated port: 9e5f5bdc-671b-4d1a-b567-050dd8925c57 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:22:51 compute-0 nova_compute[259850]: 2025-10-11 04:22:51.933 2 DEBUG oslo_concurrency.lockutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Acquiring lock "refresh_cache-cd805eb4-703c-4647-bda1-59e3435d8c15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:22:51 compute-0 nova_compute[259850]: 2025-10-11 04:22:51.934 2 DEBUG oslo_concurrency.lockutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Acquired lock "refresh_cache-cd805eb4-703c-4647-bda1-59e3435d8c15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:22:51 compute-0 nova_compute[259850]: 2025-10-11 04:22:51.934 2 DEBUG nova.network.neutron [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:22:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:51 compute-0 nova_compute[259850]: 2025-10-11 04:22:51.988 2 DEBUG nova.compute.manager [req-8fb5b09d-a293-4d57-857b-b6de225a295c req-4e5ece48-0cd7-4fdb-ab97-c35eb54f4316 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Received event network-changed-9e5f5bdc-671b-4d1a-b567-050dd8925c57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:22:51 compute-0 nova_compute[259850]: 2025-10-11 04:22:51.989 2 DEBUG nova.compute.manager [req-8fb5b09d-a293-4d57-857b-b6de225a295c req-4e5ece48-0cd7-4fdb-ab97-c35eb54f4316 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Refreshing instance network info cache due to event network-changed-9e5f5bdc-671b-4d1a-b567-050dd8925c57. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:22:51 compute-0 nova_compute[259850]: 2025-10-11 04:22:51.989 2 DEBUG oslo_concurrency.lockutils [req-8fb5b09d-a293-4d57-857b-b6de225a295c req-4e5ece48-0cd7-4fdb-ab97-c35eb54f4316 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-cd805eb4-703c-4647-bda1-59e3435d8c15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:22:52 compute-0 nova_compute[259850]: 2025-10-11 04:22:52.075 2 DEBUG nova.network.neutron [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:22:52 compute-0 podman[303279]: 2025-10-11 04:22:52.400237911 +0000 UTC m=+0.087371395 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 11 04:22:52 compute-0 podman[303278]: 2025-10-11 04:22:52.410857312 +0000 UTC m=+0.103716848 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 04:22:53 compute-0 nova_compute[259850]: 2025-10-11 04:22:53.128 2 DEBUG nova.network.neutron [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Updating instance_info_cache with network_info: [{"id": "9e5f5bdc-671b-4d1a-b567-050dd8925c57", "address": "fa:16:3e:06:fe:bd", "network": {"id": "be3c4303-5003-4d44-a9c5-e31dbe7169fc", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-760144367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a777d54362640ae90dbd99f4e0ce865", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e5f5bdc-67", "ovs_interfaceid": "9e5f5bdc-671b-4d1a-b567-050dd8925c57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:22:53 compute-0 ceph-mon[74273]: pgmap v1822: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:53 compute-0 nova_compute[259850]: 2025-10-11 04:22:53.146 2 DEBUG oslo_concurrency.lockutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Releasing lock "refresh_cache-cd805eb4-703c-4647-bda1-59e3435d8c15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:22:53 compute-0 nova_compute[259850]: 2025-10-11 04:22:53.146 2 DEBUG nova.compute.manager [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Instance network_info: |[{"id": "9e5f5bdc-671b-4d1a-b567-050dd8925c57", "address": "fa:16:3e:06:fe:bd", "network": {"id": "be3c4303-5003-4d44-a9c5-e31dbe7169fc", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-760144367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a777d54362640ae90dbd99f4e0ce865", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e5f5bdc-67", "ovs_interfaceid": "9e5f5bdc-671b-4d1a-b567-050dd8925c57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:22:53 compute-0 nova_compute[259850]: 2025-10-11 04:22:53.146 2 DEBUG oslo_concurrency.lockutils [req-8fb5b09d-a293-4d57-857b-b6de225a295c req-4e5ece48-0cd7-4fdb-ab97-c35eb54f4316 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-cd805eb4-703c-4647-bda1-59e3435d8c15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:22:53 compute-0 nova_compute[259850]: 2025-10-11 04:22:53.147 2 DEBUG nova.network.neutron [req-8fb5b09d-a293-4d57-857b-b6de225a295c req-4e5ece48-0cd7-4fdb-ab97-c35eb54f4316 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Refreshing network info cache for port 9e5f5bdc-671b-4d1a-b567-050dd8925c57 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:22:53 compute-0 nova_compute[259850]: 2025-10-11 04:22:53.149 2 DEBUG nova.virt.libvirt.driver [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Start _get_guest_xml network_info=[{"id": "9e5f5bdc-671b-4d1a-b567-050dd8925c57", "address": "fa:16:3e:06:fe:bd", "network": {"id": "be3c4303-5003-4d44-a9c5-e31dbe7169fc", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-760144367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a777d54362640ae90dbd99f4e0ce865", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e5f5bdc-67", "ovs_interfaceid": "9e5f5bdc-671b-4d1a-b567-050dd8925c57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'image_id': '1a107e2f-1a9d-4b6f-861d-e64bee7d56be'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:22:53 compute-0 nova_compute[259850]: 2025-10-11 04:22:53.154 2 WARNING nova.virt.libvirt.driver [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:22:53 compute-0 nova_compute[259850]: 2025-10-11 04:22:53.161 2 DEBUG nova.virt.libvirt.host [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:22:53 compute-0 nova_compute[259850]: 2025-10-11 04:22:53.162 2 DEBUG nova.virt.libvirt.host [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:22:53 compute-0 nova_compute[259850]: 2025-10-11 04:22:53.168 2 DEBUG nova.virt.libvirt.host [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:22:53 compute-0 nova_compute[259850]: 2025-10-11 04:22:53.169 2 DEBUG nova.virt.libvirt.host [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:22:53 compute-0 nova_compute[259850]: 2025-10-11 04:22:53.170 2 DEBUG nova.virt.libvirt.driver [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:22:53 compute-0 nova_compute[259850]: 2025-10-11 04:22:53.170 2 DEBUG nova.virt.hardware [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:22:53 compute-0 nova_compute[259850]: 2025-10-11 04:22:53.171 2 DEBUG nova.virt.hardware [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:22:53 compute-0 nova_compute[259850]: 2025-10-11 04:22:53.171 2 DEBUG nova.virt.hardware [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:22:53 compute-0 nova_compute[259850]: 2025-10-11 04:22:53.172 2 DEBUG nova.virt.hardware [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:22:53 compute-0 nova_compute[259850]: 2025-10-11 04:22:53.172 2 DEBUG nova.virt.hardware [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:22:53 compute-0 nova_compute[259850]: 2025-10-11 04:22:53.173 2 DEBUG nova.virt.hardware [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:22:53 compute-0 nova_compute[259850]: 2025-10-11 04:22:53.173 2 DEBUG nova.virt.hardware [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:22:53 compute-0 nova_compute[259850]: 2025-10-11 04:22:53.174 2 DEBUG nova.virt.hardware [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:22:53 compute-0 nova_compute[259850]: 2025-10-11 04:22:53.174 2 DEBUG nova.virt.hardware [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:22:53 compute-0 nova_compute[259850]: 2025-10-11 04:22:53.174 2 DEBUG nova.virt.hardware [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:22:53 compute-0 nova_compute[259850]: 2025-10-11 04:22:53.175 2 DEBUG nova.virt.hardware [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:22:53 compute-0 nova_compute[259850]: 2025-10-11 04:22:53.180 2 DEBUG oslo_concurrency.processutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:22:53 compute-0 nova_compute[259850]: 2025-10-11 04:22:53.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:22:53 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2124144644' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:22:53 compute-0 nova_compute[259850]: 2025-10-11 04:22:53.647 2 DEBUG oslo_concurrency.processutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:22:53 compute-0 nova_compute[259850]: 2025-10-11 04:22:53.680 2 DEBUG nova.storage.rbd_utils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] rbd image cd805eb4-703c-4647-bda1-59e3435d8c15_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:22:53 compute-0 nova_compute[259850]: 2025-10-11 04:22:53.685 2 DEBUG oslo_concurrency.processutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:22:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 134 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 04:22:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:22:54 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3797222745' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.119 2 DEBUG oslo_concurrency.processutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.121 2 DEBUG nova.virt.libvirt.vif [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:22:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-1290778056',display_name='tempest-SnapshotDataIntegrityTests-server-1290778056',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-1290778056',id=27,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG4DGf/+arJzPlezMFuNYUmz37ccTM3o0sMAVGnA02+UGb+0y+Li2G8x8tYV+3LVZQKX5GcWAfEAeF1ZTWclNaUpF1iwZFukTt8FazO3avvAP/xJ52zMuY5wOn+lOjw9PQ==',key_name='tempest-keypair-1926732821',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a777d54362640ae90dbd99f4e0ce865',ramdisk_id='',reservation_id='r-vgfud55l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SnapshotDataIntegrityTests-640213236',owner_user_name='tempest-SnapshotDataIntegrityTests-640213236-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:22:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='33278e6c76494cbbac3a77443a2127d6',uuid=cd805eb4-703c-4647-bda1-59e3435d8c15,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e5f5bdc-671b-4d1a-b567-050dd8925c57", "address": "fa:16:3e:06:fe:bd", "network": {"id": "be3c4303-5003-4d44-a9c5-e31dbe7169fc", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-760144367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a777d54362640ae90dbd99f4e0ce865", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e5f5bdc-67", "ovs_interfaceid": "9e5f5bdc-671b-4d1a-b567-050dd8925c57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.121 2 DEBUG nova.network.os_vif_util [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Converting VIF {"id": "9e5f5bdc-671b-4d1a-b567-050dd8925c57", "address": "fa:16:3e:06:fe:bd", "network": {"id": "be3c4303-5003-4d44-a9c5-e31dbe7169fc", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-760144367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a777d54362640ae90dbd99f4e0ce865", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e5f5bdc-67", "ovs_interfaceid": "9e5f5bdc-671b-4d1a-b567-050dd8925c57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.122 2 DEBUG nova.network.os_vif_util [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:fe:bd,bridge_name='br-int',has_traffic_filtering=True,id=9e5f5bdc-671b-4d1a-b567-050dd8925c57,network=Network(be3c4303-5003-4d44-a9c5-e31dbe7169fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e5f5bdc-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.123 2 DEBUG nova.objects.instance [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lazy-loading 'pci_devices' on Instance uuid cd805eb4-703c-4647-bda1-59e3435d8c15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.138 2 DEBUG nova.virt.libvirt.driver [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:22:54 compute-0 nova_compute[259850]:   <uuid>cd805eb4-703c-4647-bda1-59e3435d8c15</uuid>
Oct 11 04:22:54 compute-0 nova_compute[259850]:   <name>instance-0000001b</name>
Oct 11 04:22:54 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:22:54 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:22:54 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <nova:name>tempest-SnapshotDataIntegrityTests-server-1290778056</nova:name>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:22:53</nova:creationTime>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:22:54 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:22:54 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:22:54 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:22:54 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:22:54 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:22:54 compute-0 nova_compute[259850]:         <nova:user uuid="33278e6c76494cbbac3a77443a2127d6">tempest-SnapshotDataIntegrityTests-640213236-project-member</nova:user>
Oct 11 04:22:54 compute-0 nova_compute[259850]:         <nova:project uuid="5a777d54362640ae90dbd99f4e0ce865">tempest-SnapshotDataIntegrityTests-640213236</nova:project>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <nova:root type="image" uuid="1a107e2f-1a9d-4b6f-861d-e64bee7d56be"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <nova:ports>
Oct 11 04:22:54 compute-0 nova_compute[259850]:         <nova:port uuid="9e5f5bdc-671b-4d1a-b567-050dd8925c57">
Oct 11 04:22:54 compute-0 nova_compute[259850]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:         </nova:port>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       </nova:ports>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:22:54 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:22:54 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <system>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <entry name="serial">cd805eb4-703c-4647-bda1-59e3435d8c15</entry>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <entry name="uuid">cd805eb4-703c-4647-bda1-59e3435d8c15</entry>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     </system>
Oct 11 04:22:54 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:22:54 compute-0 nova_compute[259850]:   <os>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:   </os>
Oct 11 04:22:54 compute-0 nova_compute[259850]:   <features>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:   </features>
Oct 11 04:22:54 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:22:54 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:22:54 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/cd805eb4-703c-4647-bda1-59e3435d8c15_disk">
Oct 11 04:22:54 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       </source>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:22:54 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/cd805eb4-703c-4647-bda1-59e3435d8c15_disk.config">
Oct 11 04:22:54 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       </source>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:22:54 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <interface type="ethernet">
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <mac address="fa:16:3e:06:fe:bd"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <mtu size="1442"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <target dev="tap9e5f5bdc-67"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     </interface>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/cd805eb4-703c-4647-bda1-59e3435d8c15/console.log" append="off"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <video>
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     </video>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:22:54 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:22:54 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:22:54 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:22:54 compute-0 nova_compute[259850]: </domain>
Oct 11 04:22:54 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:22:54 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2124144644' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:22:54 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3797222745' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.140 2 DEBUG nova.compute.manager [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Preparing to wait for external event network-vif-plugged-9e5f5bdc-671b-4d1a-b567-050dd8925c57 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.140 2 DEBUG oslo_concurrency.lockutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Acquiring lock "cd805eb4-703c-4647-bda1-59e3435d8c15-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.141 2 DEBUG oslo_concurrency.lockutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.141 2 DEBUG oslo_concurrency.lockutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.142 2 DEBUG nova.virt.libvirt.vif [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:22:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-1290778056',display_name='tempest-SnapshotDataIntegrityTests-server-1290778056',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-1290778056',id=27,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG4DGf/+arJzPlezMFuNYUmz37ccTM3o0sMAVGnA02+UGb+0y+Li2G8x8tYV+3LVZQKX5GcWAfEAeF1ZTWclNaUpF1iwZFukTt8FazO3avvAP/xJ52zMuY5wOn+lOjw9PQ==',key_name='tempest-keypair-1926732821',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a777d54362640ae90dbd99f4e0ce865',ramdisk_id='',reservation_id='r-vgfud55l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SnapshotDataIntegrityTests-640213236',owner_user_name='tempest-SnapshotDataIntegrityTests-640213236-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:22:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='33278e6c76494cbbac3a77443a2127d6',uuid=cd805eb4-703c-4647-bda1-59e3435d8c15,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e5f5bdc-671b-4d1a-b567-050dd8925c57", "address": "fa:16:3e:06:fe:bd", "network": {"id": "be3c4303-5003-4d44-a9c5-e31dbe7169fc", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-760144367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a777d54362640ae90dbd99f4e0ce865", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e5f5bdc-67", "ovs_interfaceid": "9e5f5bdc-671b-4d1a-b567-050dd8925c57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.142 2 DEBUG nova.network.os_vif_util [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Converting VIF {"id": "9e5f5bdc-671b-4d1a-b567-050dd8925c57", "address": "fa:16:3e:06:fe:bd", "network": {"id": "be3c4303-5003-4d44-a9c5-e31dbe7169fc", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-760144367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a777d54362640ae90dbd99f4e0ce865", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e5f5bdc-67", "ovs_interfaceid": "9e5f5bdc-671b-4d1a-b567-050dd8925c57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.143 2 DEBUG nova.network.os_vif_util [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:fe:bd,bridge_name='br-int',has_traffic_filtering=True,id=9e5f5bdc-671b-4d1a-b567-050dd8925c57,network=Network(be3c4303-5003-4d44-a9c5-e31dbe7169fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e5f5bdc-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.143 2 DEBUG os_vif [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:fe:bd,bridge_name='br-int',has_traffic_filtering=True,id=9e5f5bdc-671b-4d1a-b567-050dd8925c57,network=Network(be3c4303-5003-4d44-a9c5-e31dbe7169fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e5f5bdc-67') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.144 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.145 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.149 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e5f5bdc-67, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.149 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9e5f5bdc-67, col_values=(('external_ids', {'iface-id': '9e5f5bdc-671b-4d1a-b567-050dd8925c57', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:06:fe:bd', 'vm-uuid': 'cd805eb4-703c-4647-bda1-59e3435d8c15'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:54 compute-0 NetworkManager[44920]: <info>  [1760156574.1523] manager: (tap9e5f5bdc-67): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/133)
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.158 2 INFO os_vif [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:fe:bd,bridge_name='br-int',has_traffic_filtering=True,id=9e5f5bdc-671b-4d1a-b567-050dd8925c57,network=Network(be3c4303-5003-4d44-a9c5-e31dbe7169fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e5f5bdc-67')
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.199 2 DEBUG nova.virt.libvirt.driver [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.200 2 DEBUG nova.virt.libvirt.driver [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.200 2 DEBUG nova.virt.libvirt.driver [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] No VIF found with MAC fa:16:3e:06:fe:bd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.201 2 INFO nova.virt.libvirt.driver [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Using config drive
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.221 2 DEBUG nova.storage.rbd_utils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] rbd image cd805eb4-703c-4647-bda1-59e3435d8c15_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.534 2 DEBUG nova.network.neutron [req-8fb5b09d-a293-4d57-857b-b6de225a295c req-4e5ece48-0cd7-4fdb-ab97-c35eb54f4316 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Updated VIF entry in instance network info cache for port 9e5f5bdc-671b-4d1a-b567-050dd8925c57. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.535 2 DEBUG nova.network.neutron [req-8fb5b09d-a293-4d57-857b-b6de225a295c req-4e5ece48-0cd7-4fdb-ab97-c35eb54f4316 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Updating instance_info_cache with network_info: [{"id": "9e5f5bdc-671b-4d1a-b567-050dd8925c57", "address": "fa:16:3e:06:fe:bd", "network": {"id": "be3c4303-5003-4d44-a9c5-e31dbe7169fc", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-760144367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a777d54362640ae90dbd99f4e0ce865", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e5f5bdc-67", "ovs_interfaceid": "9e5f5bdc-671b-4d1a-b567-050dd8925c57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.549 2 DEBUG oslo_concurrency.lockutils [req-8fb5b09d-a293-4d57-857b-b6de225a295c req-4e5ece48-0cd7-4fdb-ab97-c35eb54f4316 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-cd805eb4-703c-4647-bda1-59e3435d8c15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.646 2 INFO nova.virt.libvirt.driver [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Creating config drive at /var/lib/nova/instances/cd805eb4-703c-4647-bda1-59e3435d8c15/disk.config
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.651 2 DEBUG oslo_concurrency.processutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cd805eb4-703c-4647-bda1-59e3435d8c15/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptjr772ee execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.777 2 DEBUG oslo_concurrency.processutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cd805eb4-703c-4647-bda1-59e3435d8c15/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptjr772ee" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.807 2 DEBUG nova.storage.rbd_utils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] rbd image cd805eb4-703c-4647-bda1-59e3435d8c15_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.813 2 DEBUG oslo_concurrency.processutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cd805eb4-703c-4647-bda1-59e3435d8c15/disk.config cd805eb4-703c-4647-bda1-59e3435d8c15_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.956 2 DEBUG oslo_concurrency.processutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cd805eb4-703c-4647-bda1-59e3435d8c15/disk.config cd805eb4-703c-4647-bda1-59e3435d8c15_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:22:54 compute-0 nova_compute[259850]: 2025-10-11 04:22:54.958 2 INFO nova.virt.libvirt.driver [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Deleting local config drive /var/lib/nova/instances/cd805eb4-703c-4647-bda1-59e3435d8c15/disk.config because it was imported into RBD.
Oct 11 04:22:55 compute-0 NetworkManager[44920]: <info>  [1760156575.0319] manager: (tap9e5f5bdc-67): new Tun device (/org/freedesktop/NetworkManager/Devices/134)
Oct 11 04:22:55 compute-0 kernel: tap9e5f5bdc-67: entered promiscuous mode
Oct 11 04:22:55 compute-0 ovn_controller[152025]: 2025-10-11T04:22:55Z|00262|binding|INFO|Claiming lport 9e5f5bdc-671b-4d1a-b567-050dd8925c57 for this chassis.
Oct 11 04:22:55 compute-0 ovn_controller[152025]: 2025-10-11T04:22:55Z|00263|binding|INFO|9e5f5bdc-671b-4d1a-b567-050dd8925c57: Claiming fa:16:3e:06:fe:bd 10.100.0.8
Oct 11 04:22:55 compute-0 nova_compute[259850]: 2025-10-11 04:22:55.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:55 compute-0 nova_compute[259850]: 2025-10-11 04:22:55.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:55 compute-0 nova_compute[259850]: 2025-10-11 04:22:55.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:55 compute-0 nova_compute[259850]: 2025-10-11 04:22:55.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:55.064 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:fe:bd 10.100.0.8'], port_security=['fa:16:3e:06:fe:bd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'cd805eb4-703c-4647-bda1-59e3435d8c15', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-be3c4303-5003-4d44-a9c5-e31dbe7169fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a777d54362640ae90dbd99f4e0ce865', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bd6e7ca9-0308-4e68-bc42-966df1f6185a f840dff6-e5d9-49f8-a626-819e7f43b785', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b95af32-6805-48be-814d-5ce721b1d9c1, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=9e5f5bdc-671b-4d1a-b567-050dd8925c57) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:55.066 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 9e5f5bdc-671b-4d1a-b567-050dd8925c57 in datapath be3c4303-5003-4d44-a9c5-e31dbe7169fc bound to our chassis
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:55.068 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network be3c4303-5003-4d44-a9c5-e31dbe7169fc
Oct 11 04:22:55 compute-0 systemd-machined[214869]: New machine qemu-27-instance-0000001b.
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:55.080 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[5a24f62a-5568-4c38-8f58-2e348080c73c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:55.081 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbe3c4303-51 in ovnmeta-be3c4303-5003-4d44-a9c5-e31dbe7169fc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:55.083 267637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbe3c4303-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:55.083 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[abe3fb4f-7a38-4e2a-865c-d953de7f34cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:55.085 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[62ac0b07-56af-4e78-9854-e58449203d47]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:55.099 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[b6305ca9-ac66-4a1f-bba9-f646304cfebe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:22:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:22:55 compute-0 systemd[1]: Started Virtual Machine qemu-27-instance-0000001b.
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:55.123 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[99fa8fc0-777a-473c-a7ac-6a16d1ee41df]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:22:55 compute-0 ovn_controller[152025]: 2025-10-11T04:22:55Z|00264|binding|INFO|Setting lport 9e5f5bdc-671b-4d1a-b567-050dd8925c57 ovn-installed in OVS
Oct 11 04:22:55 compute-0 ovn_controller[152025]: 2025-10-11T04:22:55Z|00265|binding|INFO|Setting lport 9e5f5bdc-671b-4d1a-b567-050dd8925c57 up in Southbound
Oct 11 04:22:55 compute-0 nova_compute[259850]: 2025-10-11 04:22:55.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:55 compute-0 ceph-mon[74273]: pgmap v1823: 305 pgs: 305 active+clean; 134 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 04:22:55 compute-0 systemd-udevd[303456]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:55.160 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[16589189-e9d3-42ae-9f77-caa4b7c2523c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:22:55 compute-0 NetworkManager[44920]: <info>  [1760156575.1686] manager: (tapbe3c4303-50): new Veth device (/org/freedesktop/NetworkManager/Devices/135)
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:55.167 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[2c37c29a-1222-4a1c-a82d-6ad17363b883]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:22:55 compute-0 NetworkManager[44920]: <info>  [1760156575.1698] device (tap9e5f5bdc-67): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:22:55 compute-0 NetworkManager[44920]: <info>  [1760156575.1708] device (tap9e5f5bdc-67): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:22:55 compute-0 systemd-udevd[303458]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:55.210 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[d58fd19e-4550-4a5a-8d6d-4b9d2eb0fac9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:55.214 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[8948f0ab-5888-4708-a764-a1cd0ada2a1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:22:55 compute-0 NetworkManager[44920]: <info>  [1760156575.2439] device (tapbe3c4303-50): carrier: link connected
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:55.253 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[5adefe22-527a-46fc-b537-81632b734b3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:55.272 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[e0d1c1c5-1c3b-47ae-8f45-cb0c26310c40]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbe3c4303-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:44:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496028, 'reachable_time': 41889, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303484, 'error': None, 'target': 'ovnmeta-be3c4303-5003-4d44-a9c5-e31dbe7169fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:55.296 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[d8a3e1cb-b458-424c-b9bb-73792649a892]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea1:44ec'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 496028, 'tstamp': 496028}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303485, 'error': None, 'target': 'ovnmeta-be3c4303-5003-4d44-a9c5-e31dbe7169fc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:55.315 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[fc845466-a793-4625-8d4f-9df2b4c79b1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbe3c4303-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:44:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496028, 'reachable_time': 41889, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303486, 'error': None, 'target': 'ovnmeta-be3c4303-5003-4d44-a9c5-e31dbe7169fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:22:55 compute-0 nova_compute[259850]: 2025-10-11 04:22:55.347 2 DEBUG nova.compute.manager [req-d3650dc9-802b-4220-884a-9f114510e90f req-1110ecb5-8bd2-4f3a-b9dc-1b5a85f19fe2 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Received event network-vif-plugged-9e5f5bdc-671b-4d1a-b567-050dd8925c57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:22:55 compute-0 nova_compute[259850]: 2025-10-11 04:22:55.348 2 DEBUG oslo_concurrency.lockutils [req-d3650dc9-802b-4220-884a-9f114510e90f req-1110ecb5-8bd2-4f3a-b9dc-1b5a85f19fe2 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "cd805eb4-703c-4647-bda1-59e3435d8c15-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:22:55 compute-0 nova_compute[259850]: 2025-10-11 04:22:55.348 2 DEBUG oslo_concurrency.lockutils [req-d3650dc9-802b-4220-884a-9f114510e90f req-1110ecb5-8bd2-4f3a-b9dc-1b5a85f19fe2 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:22:55 compute-0 nova_compute[259850]: 2025-10-11 04:22:55.349 2 DEBUG oslo_concurrency.lockutils [req-d3650dc9-802b-4220-884a-9f114510e90f req-1110ecb5-8bd2-4f3a-b9dc-1b5a85f19fe2 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:22:55 compute-0 nova_compute[259850]: 2025-10-11 04:22:55.350 2 DEBUG nova.compute.manager [req-d3650dc9-802b-4220-884a-9f114510e90f req-1110ecb5-8bd2-4f3a-b9dc-1b5a85f19fe2 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Processing event network-vif-plugged-9e5f5bdc-671b-4d1a-b567-050dd8925c57 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:55.367 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[e803bc72-9a61-41b4-ac6f-48d7a4cf82e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:55.443 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[e14df58f-c8ec-42c2-b7e9-1e61bd44591a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:55.445 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe3c4303-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:55.446 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:55.446 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe3c4303-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:22:55 compute-0 nova_compute[259850]: 2025-10-11 04:22:55.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:55 compute-0 kernel: tapbe3c4303-50: entered promiscuous mode
Oct 11 04:22:55 compute-0 NetworkManager[44920]: <info>  [1760156575.4501] manager: (tapbe3c4303-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/136)
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:55.455 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbe3c4303-50, col_values=(('external_ids', {'iface-id': '0b1ebd2f-e627-497e-a934-88b4e0f9842c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:22:55 compute-0 ovn_controller[152025]: 2025-10-11T04:22:55Z|00266|binding|INFO|Releasing lport 0b1ebd2f-e627-497e-a934-88b4e0f9842c from this chassis (sb_readonly=0)
Oct 11 04:22:55 compute-0 nova_compute[259850]: 2025-10-11 04:22:55.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:55 compute-0 nova_compute[259850]: 2025-10-11 04:22:55.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:55.485 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/be3c4303-5003-4d44-a9c5-e31dbe7169fc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/be3c4303-5003-4d44-a9c5-e31dbe7169fc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:55.486 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[644bc84f-ebd8-440c-a193-ecc134f76afc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:55.487 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: global
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]:     log         /dev/log local0 debug
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]:     log-tag     haproxy-metadata-proxy-be3c4303-5003-4d44-a9c5-e31dbe7169fc
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]:     user        root
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]:     group       root
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]:     maxconn     1024
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]:     pidfile     /var/lib/neutron/external/pids/be3c4303-5003-4d44-a9c5-e31dbe7169fc.pid.haproxy
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]:     daemon
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: defaults
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]:     log global
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]:     mode http
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]:     option httplog
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]:     option dontlognull
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]:     option http-server-close
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]:     option forwardfor
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]:     retries                 3
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]:     timeout http-request    30s
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]:     timeout connect         30s
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]:     timeout client          32s
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]:     timeout server          32s
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]:     timeout http-keep-alive 30s
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: listen listener
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]:     bind 169.254.169.254:80
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]:     http-request add-header X-OVN-Network-ID be3c4303-5003-4d44-a9c5-e31dbe7169fc
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 04:22:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:22:55.489 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-be3c4303-5003-4d44-a9c5-e31dbe7169fc', 'env', 'PROCESS_TAG=haproxy-be3c4303-5003-4d44-a9c5-e31dbe7169fc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/be3c4303-5003-4d44-a9c5-e31dbe7169fc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 04:22:55 compute-0 podman[303560]: 2025-10-11 04:22:55.911475312 +0000 UTC m=+0.053733103 container create d9524bcaf2ed45cb888223a94cea0d0e4c0e29a7e13fe1f28007de84a0340f43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be3c4303-5003-4d44-a9c5-e31dbe7169fc, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:22:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 305 active+clean; 134 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 04:22:55 compute-0 systemd[1]: Started libpod-conmon-d9524bcaf2ed45cb888223a94cea0d0e4c0e29a7e13fe1f28007de84a0340f43.scope.
Oct 11 04:22:55 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:22:55 compute-0 podman[303560]: 2025-10-11 04:22:55.885096145 +0000 UTC m=+0.027353936 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 04:22:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2028f5856026d0e7e0fc745723aa16c85db73a34881aa0b256b32e8563cf93e1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:22:55 compute-0 podman[303560]: 2025-10-11 04:22:55.996723695 +0000 UTC m=+0.138981486 container init d9524bcaf2ed45cb888223a94cea0d0e4c0e29a7e13fe1f28007de84a0340f43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be3c4303-5003-4d44-a9c5-e31dbe7169fc, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:22:56 compute-0 podman[303560]: 2025-10-11 04:22:56.001575663 +0000 UTC m=+0.143833444 container start d9524bcaf2ed45cb888223a94cea0d0e4c0e29a7e13fe1f28007de84a0340f43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be3c4303-5003-4d44-a9c5-e31dbe7169fc, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:22:56 compute-0 neutron-haproxy-ovnmeta-be3c4303-5003-4d44-a9c5-e31dbe7169fc[303576]: [NOTICE]   (303580) : New worker (303582) forked
Oct 11 04:22:56 compute-0 neutron-haproxy-ovnmeta-be3c4303-5003-4d44-a9c5-e31dbe7169fc[303576]: [NOTICE]   (303580) : Loading success.
Oct 11 04:22:56 compute-0 nova_compute[259850]: 2025-10-11 04:22:56.181 2 DEBUG nova.compute.manager [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:22:56 compute-0 nova_compute[259850]: 2025-10-11 04:22:56.182 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156576.1817813, cd805eb4-703c-4647-bda1-59e3435d8c15 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:22:56 compute-0 nova_compute[259850]: 2025-10-11 04:22:56.182 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] VM Started (Lifecycle Event)
Oct 11 04:22:56 compute-0 nova_compute[259850]: 2025-10-11 04:22:56.186 2 DEBUG nova.virt.libvirt.driver [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:22:56 compute-0 nova_compute[259850]: 2025-10-11 04:22:56.190 2 INFO nova.virt.libvirt.driver [-] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Instance spawned successfully.
Oct 11 04:22:56 compute-0 nova_compute[259850]: 2025-10-11 04:22:56.190 2 DEBUG nova.virt.libvirt.driver [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:22:56 compute-0 nova_compute[259850]: 2025-10-11 04:22:56.203 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:22:56 compute-0 nova_compute[259850]: 2025-10-11 04:22:56.208 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:22:56 compute-0 nova_compute[259850]: 2025-10-11 04:22:56.213 2 DEBUG nova.virt.libvirt.driver [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:22:56 compute-0 nova_compute[259850]: 2025-10-11 04:22:56.213 2 DEBUG nova.virt.libvirt.driver [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:22:56 compute-0 nova_compute[259850]: 2025-10-11 04:22:56.213 2 DEBUG nova.virt.libvirt.driver [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:22:56 compute-0 nova_compute[259850]: 2025-10-11 04:22:56.214 2 DEBUG nova.virt.libvirt.driver [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:22:56 compute-0 nova_compute[259850]: 2025-10-11 04:22:56.214 2 DEBUG nova.virt.libvirt.driver [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:22:56 compute-0 nova_compute[259850]: 2025-10-11 04:22:56.215 2 DEBUG nova.virt.libvirt.driver [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:22:56 compute-0 nova_compute[259850]: 2025-10-11 04:22:56.237 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:22:56 compute-0 nova_compute[259850]: 2025-10-11 04:22:56.238 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156576.1822567, cd805eb4-703c-4647-bda1-59e3435d8c15 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:22:56 compute-0 nova_compute[259850]: 2025-10-11 04:22:56.238 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] VM Paused (Lifecycle Event)
Oct 11 04:22:56 compute-0 nova_compute[259850]: 2025-10-11 04:22:56.265 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:22:56 compute-0 nova_compute[259850]: 2025-10-11 04:22:56.268 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156576.1854165, cd805eb4-703c-4647-bda1-59e3435d8c15 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:22:56 compute-0 nova_compute[259850]: 2025-10-11 04:22:56.268 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] VM Resumed (Lifecycle Event)
Oct 11 04:22:56 compute-0 nova_compute[259850]: 2025-10-11 04:22:56.276 2 INFO nova.compute.manager [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Took 6.93 seconds to spawn the instance on the hypervisor.
Oct 11 04:22:56 compute-0 nova_compute[259850]: 2025-10-11 04:22:56.276 2 DEBUG nova.compute.manager [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:22:56 compute-0 nova_compute[259850]: 2025-10-11 04:22:56.286 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:22:56 compute-0 nova_compute[259850]: 2025-10-11 04:22:56.289 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:22:56 compute-0 nova_compute[259850]: 2025-10-11 04:22:56.311 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:22:56 compute-0 nova_compute[259850]: 2025-10-11 04:22:56.350 2 INFO nova.compute.manager [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Took 7.91 seconds to build instance.
Oct 11 04:22:56 compute-0 nova_compute[259850]: 2025-10-11 04:22:56.369 2 DEBUG oslo_concurrency.lockutils [None req-d4612f08-4753-4a24-8e21-adda82f1151f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:22:57 compute-0 ceph-mon[74273]: pgmap v1824: 305 pgs: 305 active+clean; 134 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 04:22:57 compute-0 nova_compute[259850]: 2025-10-11 04:22:57.432 2 DEBUG nova.compute.manager [req-19c1d9fa-41c2-490e-9351-fc4b0efc69f2 req-390b6fe1-ed1b-4ed2-9f72-a5b9f2d6c76f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Received event network-vif-plugged-9e5f5bdc-671b-4d1a-b567-050dd8925c57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:22:57 compute-0 nova_compute[259850]: 2025-10-11 04:22:57.433 2 DEBUG oslo_concurrency.lockutils [req-19c1d9fa-41c2-490e-9351-fc4b0efc69f2 req-390b6fe1-ed1b-4ed2-9f72-a5b9f2d6c76f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "cd805eb4-703c-4647-bda1-59e3435d8c15-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:22:57 compute-0 nova_compute[259850]: 2025-10-11 04:22:57.433 2 DEBUG oslo_concurrency.lockutils [req-19c1d9fa-41c2-490e-9351-fc4b0efc69f2 req-390b6fe1-ed1b-4ed2-9f72-a5b9f2d6c76f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:22:57 compute-0 nova_compute[259850]: 2025-10-11 04:22:57.433 2 DEBUG oslo_concurrency.lockutils [req-19c1d9fa-41c2-490e-9351-fc4b0efc69f2 req-390b6fe1-ed1b-4ed2-9f72-a5b9f2d6c76f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:22:57 compute-0 nova_compute[259850]: 2025-10-11 04:22:57.434 2 DEBUG nova.compute.manager [req-19c1d9fa-41c2-490e-9351-fc4b0efc69f2 req-390b6fe1-ed1b-4ed2-9f72-a5b9f2d6c76f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] No waiting events found dispatching network-vif-plugged-9e5f5bdc-671b-4d1a-b567-050dd8925c57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:22:57 compute-0 nova_compute[259850]: 2025-10-11 04:22:57.434 2 WARNING nova.compute.manager [req-19c1d9fa-41c2-490e-9351-fc4b0efc69f2 req-390b6fe1-ed1b-4ed2-9f72-a5b9f2d6c76f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Received unexpected event network-vif-plugged-9e5f5bdc-671b-4d1a-b567-050dd8925c57 for instance with vm_state active and task_state None.
Oct 11 04:22:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 305 active+clean; 134 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Oct 11 04:22:58 compute-0 nova_compute[259850]: 2025-10-11 04:22:58.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:58 compute-0 nova_compute[259850]: 2025-10-11 04:22:58.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:58 compute-0 NetworkManager[44920]: <info>  [1760156578.9138] manager: (patch-br-int-to-provnet-86cd831a-6a58-4ba8-a51c-57fa1a3acacc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/137)
Oct 11 04:22:58 compute-0 NetworkManager[44920]: <info>  [1760156578.9149] manager: (patch-provnet-86cd831a-6a58-4ba8-a51c-57fa1a3acacc-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/138)
Oct 11 04:22:59 compute-0 nova_compute[259850]: 2025-10-11 04:22:59.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:59 compute-0 ovn_controller[152025]: 2025-10-11T04:22:59Z|00267|binding|INFO|Releasing lport 0b1ebd2f-e627-497e-a934-88b4e0f9842c from this chassis (sb_readonly=0)
Oct 11 04:22:59 compute-0 nova_compute[259850]: 2025-10-11 04:22:59.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:59 compute-0 nova_compute[259850]: 2025-10-11 04:22:59.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:22:59 compute-0 ceph-mon[74273]: pgmap v1825: 305 pgs: 305 active+clean; 134 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Oct 11 04:22:59 compute-0 nova_compute[259850]: 2025-10-11 04:22:59.781 2 DEBUG nova.compute.manager [req-a24748ea-b404-4ad1-adad-b01a9908c952 req-223a47b6-7222-4fc1-8bfb-9789942f9939 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Received event network-changed-9e5f5bdc-671b-4d1a-b567-050dd8925c57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:22:59 compute-0 nova_compute[259850]: 2025-10-11 04:22:59.782 2 DEBUG nova.compute.manager [req-a24748ea-b404-4ad1-adad-b01a9908c952 req-223a47b6-7222-4fc1-8bfb-9789942f9939 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Refreshing instance network info cache due to event network-changed-9e5f5bdc-671b-4d1a-b567-050dd8925c57. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:22:59 compute-0 nova_compute[259850]: 2025-10-11 04:22:59.782 2 DEBUG oslo_concurrency.lockutils [req-a24748ea-b404-4ad1-adad-b01a9908c952 req-223a47b6-7222-4fc1-8bfb-9789942f9939 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-cd805eb4-703c-4647-bda1-59e3435d8c15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:22:59 compute-0 nova_compute[259850]: 2025-10-11 04:22:59.783 2 DEBUG oslo_concurrency.lockutils [req-a24748ea-b404-4ad1-adad-b01a9908c952 req-223a47b6-7222-4fc1-8bfb-9789942f9939 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-cd805eb4-703c-4647-bda1-59e3435d8c15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:22:59 compute-0 nova_compute[259850]: 2025-10-11 04:22:59.783 2 DEBUG nova.network.neutron [req-a24748ea-b404-4ad1-adad-b01a9908c952 req-223a47b6-7222-4fc1-8bfb-9789942f9939 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Refreshing network info cache for port 9e5f5bdc-671b-4d1a-b567-050dd8925c57 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:22:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 134 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 04:23:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:23:01 compute-0 ceph-mon[74273]: pgmap v1826: 305 pgs: 305 active+clean; 134 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 04:23:01 compute-0 nova_compute[259850]: 2025-10-11 04:23:01.433 2 DEBUG nova.network.neutron [req-a24748ea-b404-4ad1-adad-b01a9908c952 req-223a47b6-7222-4fc1-8bfb-9789942f9939 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Updated VIF entry in instance network info cache for port 9e5f5bdc-671b-4d1a-b567-050dd8925c57. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:23:01 compute-0 nova_compute[259850]: 2025-10-11 04:23:01.435 2 DEBUG nova.network.neutron [req-a24748ea-b404-4ad1-adad-b01a9908c952 req-223a47b6-7222-4fc1-8bfb-9789942f9939 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Updating instance_info_cache with network_info: [{"id": "9e5f5bdc-671b-4d1a-b567-050dd8925c57", "address": "fa:16:3e:06:fe:bd", "network": {"id": "be3c4303-5003-4d44-a9c5-e31dbe7169fc", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-760144367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a777d54362640ae90dbd99f4e0ce865", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e5f5bdc-67", "ovs_interfaceid": "9e5f5bdc-671b-4d1a-b567-050dd8925c57", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:23:01 compute-0 nova_compute[259850]: 2025-10-11 04:23:01.457 2 DEBUG oslo_concurrency.lockutils [req-a24748ea-b404-4ad1-adad-b01a9908c952 req-223a47b6-7222-4fc1-8bfb-9789942f9939 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-cd805eb4-703c-4647-bda1-59e3435d8c15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:23:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 134 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 04:23:03 compute-0 ceph-mon[74273]: pgmap v1827: 305 pgs: 305 active+clean; 134 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 04:23:03 compute-0 nova_compute[259850]: 2025-10-11 04:23:03.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:23:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 134 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 04:23:04 compute-0 nova_compute[259850]: 2025-10-11 04:23:04.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:23:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:23:05 compute-0 ceph-mon[74273]: pgmap v1828: 305 pgs: 305 active+clean; 134 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 04:23:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 134 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 11 04:23:07 compute-0 ceph-mon[74273]: pgmap v1829: 305 pgs: 305 active+clean; 134 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 11 04:23:07 compute-0 podman[303593]: 2025-10-11 04:23:07.446551444 +0000 UTC m=+0.146527158 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 11 04:23:07 compute-0 ovn_controller[152025]: 2025-10-11T04:23:07Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:06:fe:bd 10.100.0.8
Oct 11 04:23:07 compute-0 ovn_controller[152025]: 2025-10-11T04:23:07Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:06:fe:bd 10.100.0.8
Oct 11 04:23:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 156 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.7 MiB/s wr, 134 op/s
Oct 11 04:23:08 compute-0 nova_compute[259850]: 2025-10-11 04:23:08.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:23:09 compute-0 nova_compute[259850]: 2025-10-11 04:23:09.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:23:09 compute-0 ceph-mon[74273]: pgmap v1830: 305 pgs: 305 active+clean; 156 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.7 MiB/s wr, 134 op/s
Oct 11 04:23:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 167 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 714 KiB/s rd, 2.1 MiB/s wr, 79 op/s
Oct 11 04:23:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:23:11 compute-0 ceph-mon[74273]: pgmap v1831: 305 pgs: 305 active+clean; 167 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 714 KiB/s rd, 2.1 MiB/s wr, 79 op/s
Oct 11 04:23:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 167 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 04:23:12 compute-0 podman[303621]: 2025-10-11 04:23:12.404686672 +0000 UTC m=+0.106350892 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Oct 11 04:23:13 compute-0 ceph-mon[74273]: pgmap v1832: 305 pgs: 305 active+clean; 167 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 04:23:13 compute-0 nova_compute[259850]: 2025-10-11 04:23:13.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:23:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1833: 305 pgs: 305 active+clean; 167 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 04:23:14 compute-0 nova_compute[259850]: 2025-10-11 04:23:14.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:23:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:23:15 compute-0 ceph-mon[74273]: pgmap v1833: 305 pgs: 305 active+clean; 167 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 04:23:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 167 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 04:23:17 compute-0 ceph-mon[74273]: pgmap v1834: 305 pgs: 305 active+clean; 167 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 04:23:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1835: 305 pgs: 305 active+clean; 167 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 11 04:23:18 compute-0 nova_compute[259850]: 2025-10-11 04:23:18.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:23:19 compute-0 nova_compute[259850]: 2025-10-11 04:23:19.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:23:19 compute-0 nova_compute[259850]: 2025-10-11 04:23:19.168 2 DEBUG oslo_concurrency.lockutils [None req-8efb7ae5-6ce0-434a-a1e9-34c6758cbe4f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Acquiring lock "cd805eb4-703c-4647-bda1-59e3435d8c15" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:23:19 compute-0 nova_compute[259850]: 2025-10-11 04:23:19.169 2 DEBUG oslo_concurrency.lockutils [None req-8efb7ae5-6ce0-434a-a1e9-34c6758cbe4f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:23:19 compute-0 nova_compute[259850]: 2025-10-11 04:23:19.187 2 DEBUG nova.objects.instance [None req-8efb7ae5-6ce0-434a-a1e9-34c6758cbe4f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lazy-loading 'flavor' on Instance uuid cd805eb4-703c-4647-bda1-59e3435d8c15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:23:19 compute-0 nova_compute[259850]: 2025-10-11 04:23:19.226 2 DEBUG oslo_concurrency.lockutils [None req-8efb7ae5-6ce0-434a-a1e9-34c6758cbe4f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:23:19 compute-0 ceph-mon[74273]: pgmap v1835: 305 pgs: 305 active+clean; 167 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 11 04:23:19 compute-0 nova_compute[259850]: 2025-10-11 04:23:19.424 2 DEBUG oslo_concurrency.lockutils [None req-8efb7ae5-6ce0-434a-a1e9-34c6758cbe4f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Acquiring lock "cd805eb4-703c-4647-bda1-59e3435d8c15" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:23:19 compute-0 nova_compute[259850]: 2025-10-11 04:23:19.424 2 DEBUG oslo_concurrency.lockutils [None req-8efb7ae5-6ce0-434a-a1e9-34c6758cbe4f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:23:19 compute-0 nova_compute[259850]: 2025-10-11 04:23:19.425 2 INFO nova.compute.manager [None req-8efb7ae5-6ce0-434a-a1e9-34c6758cbe4f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Attaching volume a9bd8846-de75-4d33-844f-cdf270772026 to /dev/vdb
Oct 11 04:23:19 compute-0 nova_compute[259850]: 2025-10-11 04:23:19.558 2 DEBUG os_brick.utils [None req-8efb7ae5-6ce0-434a-a1e9-34c6758cbe4f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:23:19 compute-0 nova_compute[259850]: 2025-10-11 04:23:19.560 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:23:19 compute-0 nova_compute[259850]: 2025-10-11 04:23:19.579 675 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:23:19 compute-0 nova_compute[259850]: 2025-10-11 04:23:19.579 675 DEBUG oslo.privsep.daemon [-] privsep: reply[8098d514-5221-4bd8-b457-9fb22def71c1]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:23:19 compute-0 nova_compute[259850]: 2025-10-11 04:23:19.581 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:23:19 compute-0 nova_compute[259850]: 2025-10-11 04:23:19.594 675 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:23:19 compute-0 nova_compute[259850]: 2025-10-11 04:23:19.595 675 DEBUG oslo.privsep.daemon [-] privsep: reply[cbc8610e-58bc-4078-adb6-1f2ce261cf28]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e727c2bd432c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:23:19 compute-0 nova_compute[259850]: 2025-10-11 04:23:19.597 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:23:19 compute-0 nova_compute[259850]: 2025-10-11 04:23:19.612 675 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:23:19 compute-0 nova_compute[259850]: 2025-10-11 04:23:19.612 675 DEBUG oslo.privsep.daemon [-] privsep: reply[beba69bb-0bba-4edf-ae31-e16c6d7cbc80]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:23:19 compute-0 nova_compute[259850]: 2025-10-11 04:23:19.614 675 DEBUG oslo.privsep.daemon [-] privsep: reply[60175322-4ed1-4151-8cb8-03b9bcc51a48]: (4, 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:23:19 compute-0 nova_compute[259850]: 2025-10-11 04:23:19.615 2 DEBUG oslo_concurrency.processutils [None req-8efb7ae5-6ce0-434a-a1e9-34c6758cbe4f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:23:19 compute-0 nova_compute[259850]: 2025-10-11 04:23:19.653 2 DEBUG oslo_concurrency.processutils [None req-8efb7ae5-6ce0-434a-a1e9-34c6758cbe4f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] CMD "nvme version" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:23:19 compute-0 nova_compute[259850]: 2025-10-11 04:23:19.658 2 DEBUG os_brick.initiator.connectors.lightos [None req-8efb7ae5-6ce0-434a-a1e9-34c6758cbe4f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:23:19 compute-0 nova_compute[259850]: 2025-10-11 04:23:19.658 2 DEBUG os_brick.initiator.connectors.lightos [None req-8efb7ae5-6ce0-434a-a1e9-34c6758cbe4f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:23:19 compute-0 nova_compute[259850]: 2025-10-11 04:23:19.659 2 DEBUG os_brick.initiator.connectors.lightos [None req-8efb7ae5-6ce0-434a-a1e9-34c6758cbe4f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:23:19 compute-0 nova_compute[259850]: 2025-10-11 04:23:19.660 2 DEBUG os_brick.utils [None req-8efb7ae5-6ce0-434a-a1e9-34c6758cbe4f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] <== get_connector_properties: return (100ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e727c2bd432c', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:23:19 compute-0 nova_compute[259850]: 2025-10-11 04:23:19.661 2 DEBUG nova.virt.block_device [None req-8efb7ae5-6ce0-434a-a1e9-34c6758cbe4f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Updating existing volume attachment record: 5483c98b-efc8-429a-ac38-4563009c5e61 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:23:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 305 active+clean; 167 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 422 KiB/s wr, 5 op/s
Oct 11 04:23:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:23:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:23:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/703957724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:23:20 compute-0 nova_compute[259850]: 2025-10-11 04:23:20.575 2 DEBUG nova.objects.instance [None req-8efb7ae5-6ce0-434a-a1e9-34c6758cbe4f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lazy-loading 'flavor' on Instance uuid cd805eb4-703c-4647-bda1-59e3435d8c15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:23:20 compute-0 nova_compute[259850]: 2025-10-11 04:23:20.602 2 DEBUG nova.virt.libvirt.driver [None req-8efb7ae5-6ce0-434a-a1e9-34c6758cbe4f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Attempting to attach volume a9bd8846-de75-4d33-844f-cdf270772026 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 11 04:23:20 compute-0 nova_compute[259850]: 2025-10-11 04:23:20.606 2 DEBUG nova.virt.libvirt.guest [None req-8efb7ae5-6ce0-434a-a1e9-34c6758cbe4f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] attach device xml: <disk type="network" device="disk">
Oct 11 04:23:20 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:23:20 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-a9bd8846-de75-4d33-844f-cdf270772026">
Oct 11 04:23:20 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:23:20 compute-0 nova_compute[259850]:   </source>
Oct 11 04:23:20 compute-0 nova_compute[259850]:   <auth username="openstack">
Oct 11 04:23:20 compute-0 nova_compute[259850]:     <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:23:20 compute-0 nova_compute[259850]:   </auth>
Oct 11 04:23:20 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:23:20 compute-0 nova_compute[259850]:   <serial>a9bd8846-de75-4d33-844f-cdf270772026</serial>
Oct 11 04:23:20 compute-0 nova_compute[259850]: </disk>
Oct 11 04:23:20 compute-0 nova_compute[259850]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 11 04:23:20 compute-0 nova_compute[259850]: 2025-10-11 04:23:20.718 2 DEBUG nova.virt.libvirt.driver [None req-8efb7ae5-6ce0-434a-a1e9-34c6758cbe4f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:23:20 compute-0 nova_compute[259850]: 2025-10-11 04:23:20.718 2 DEBUG nova.virt.libvirt.driver [None req-8efb7ae5-6ce0-434a-a1e9-34c6758cbe4f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:23:20 compute-0 nova_compute[259850]: 2025-10-11 04:23:20.719 2 DEBUG nova.virt.libvirt.driver [None req-8efb7ae5-6ce0-434a-a1e9-34c6758cbe4f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:23:20 compute-0 nova_compute[259850]: 2025-10-11 04:23:20.719 2 DEBUG nova.virt.libvirt.driver [None req-8efb7ae5-6ce0-434a-a1e9-34c6758cbe4f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] No VIF found with MAC fa:16:3e:06:fe:bd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:23:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:23:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:23:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:23:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:23:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:23:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:23:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:23:20
Oct 11 04:23:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:23:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:23:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'volumes', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', '.rgw.root', '.mgr', 'images', 'default.rgw.control']
Oct 11 04:23:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:23:20 compute-0 nova_compute[259850]: 2025-10-11 04:23:20.997 2 DEBUG oslo_concurrency.lockutils [None req-8efb7ae5-6ce0-434a-a1e9-34c6758cbe4f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:23:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:23:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:23:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:23:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:23:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:23:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:23:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:23:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:23:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:23:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:23:21 compute-0 ceph-mon[74273]: pgmap v1836: 305 pgs: 305 active+clean; 167 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 422 KiB/s wr, 5 op/s
Oct 11 04:23:21 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/703957724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:23:21 compute-0 sudo[303667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:23:21 compute-0 sudo[303667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:23:21 compute-0 sudo[303667]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:21 compute-0 sudo[303692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:23:21 compute-0 sudo[303692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:23:21 compute-0 sudo[303692]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:21 compute-0 sudo[303717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:23:21 compute-0 sudo[303717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:23:21 compute-0 sudo[303717]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 305 active+clean; 167 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 13 KiB/s wr, 1 op/s
Oct 11 04:23:21 compute-0 sudo[303742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:23:21 compute-0 sudo[303742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:23:22 compute-0 sudo[303742]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:23:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:23:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:23:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:23:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:23:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:23:22 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev c41550ed-0fc3-47b5-836d-7a1d98d173ce does not exist
Oct 11 04:23:22 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 76578540-ff61-4916-9588-71fddb7857a3 does not exist
Oct 11 04:23:22 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev afc8e549-6589-410e-8769-a30783456594 does not exist
Oct 11 04:23:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:23:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:23:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:23:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:23:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:23:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:23:22 compute-0 sudo[303798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:23:22 compute-0 sudo[303798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:23:22 compute-0 sudo[303798]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:22 compute-0 sudo[303838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:23:22 compute-0 sudo[303838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:23:22 compute-0 sudo[303838]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:22 compute-0 podman[303822]: 2025-10-11 04:23:22.85008493 +0000 UTC m=+0.091056229 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 11 04:23:22 compute-0 podman[303823]: 2025-10-11 04:23:22.883077574 +0000 UTC m=+0.109975055 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:23:22 compute-0 sudo[303885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:23:22 compute-0 sudo[303885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:23:22 compute-0 sudo[303885]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:23:22.974 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:23:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:23:22.975 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:23:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:23:22.975 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:23:22 compute-0 sudo[303915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:23:22 compute-0 sudo[303915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:23:23 compute-0 ceph-mon[74273]: pgmap v1837: 305 pgs: 305 active+clean; 167 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 13 KiB/s wr, 1 op/s
Oct 11 04:23:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:23:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:23:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:23:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:23:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:23:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:23:23 compute-0 podman[303979]: 2025-10-11 04:23:23.437543782 +0000 UTC m=+0.066681789 container create 72c2f064f1c580d5341d85fd06a731eb1e0651a5fb8d5842db9544110883914d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 04:23:23 compute-0 podman[303979]: 2025-10-11 04:23:23.411295339 +0000 UTC m=+0.040433406 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:23:23 compute-0 systemd[1]: Started libpod-conmon-72c2f064f1c580d5341d85fd06a731eb1e0651a5fb8d5842db9544110883914d.scope.
Oct 11 04:23:23 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:23:23 compute-0 podman[303979]: 2025-10-11 04:23:23.5843855 +0000 UTC m=+0.213523557 container init 72c2f064f1c580d5341d85fd06a731eb1e0651a5fb8d5842db9544110883914d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:23:23 compute-0 nova_compute[259850]: 2025-10-11 04:23:23.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:23:23 compute-0 podman[303979]: 2025-10-11 04:23:23.597207413 +0000 UTC m=+0.226345410 container start 72c2f064f1c580d5341d85fd06a731eb1e0651a5fb8d5842db9544110883914d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:23:23 compute-0 podman[303979]: 2025-10-11 04:23:23.602366529 +0000 UTC m=+0.231504586 container attach 72c2f064f1c580d5341d85fd06a731eb1e0651a5fb8d5842db9544110883914d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:23:23 compute-0 pedantic_mccarthy[303995]: 167 167
Oct 11 04:23:23 compute-0 systemd[1]: libpod-72c2f064f1c580d5341d85fd06a731eb1e0651a5fb8d5842db9544110883914d.scope: Deactivated successfully.
Oct 11 04:23:23 compute-0 podman[303979]: 2025-10-11 04:23:23.606709232 +0000 UTC m=+0.235847229 container died 72c2f064f1c580d5341d85fd06a731eb1e0651a5fb8d5842db9544110883914d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 04:23:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc31991c3d637b1d5c459279442112971ea77df12e996bc2bf597a19e291f388-merged.mount: Deactivated successfully.
Oct 11 04:23:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e436 do_prune osdmap full prune enabled
Oct 11 04:23:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e437 e437: 3 total, 3 up, 3 in
Oct 11 04:23:23 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e437: 3 total, 3 up, 3 in
Oct 11 04:23:23 compute-0 podman[303979]: 2025-10-11 04:23:23.660352101 +0000 UTC m=+0.289490098 container remove 72c2f064f1c580d5341d85fd06a731eb1e0651a5fb8d5842db9544110883914d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 04:23:23 compute-0 systemd[1]: libpod-conmon-72c2f064f1c580d5341d85fd06a731eb1e0651a5fb8d5842db9544110883914d.scope: Deactivated successfully.
Oct 11 04:23:23 compute-0 podman[304020]: 2025-10-11 04:23:23.93585077 +0000 UTC m=+0.079989375 container create 521e9597de655d610b0285da4f21f7ffc23da84a26ea8623c4cec642ccb6ef5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_carver, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:23:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1839: 305 pgs: 305 active+clean; 169 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 94 KiB/s wr, 14 op/s
Oct 11 04:23:23 compute-0 podman[304020]: 2025-10-11 04:23:23.881200423 +0000 UTC m=+0.025339078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:23:23 compute-0 systemd[1]: Started libpod-conmon-521e9597de655d610b0285da4f21f7ffc23da84a26ea8623c4cec642ccb6ef5e.scope.
Oct 11 04:23:24 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:23:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0038f991f3d8b26876ea7f18ea5b293145d2c685b1407795d1f863caff5369ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:23:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0038f991f3d8b26876ea7f18ea5b293145d2c685b1407795d1f863caff5369ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:23:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0038f991f3d8b26876ea7f18ea5b293145d2c685b1407795d1f863caff5369ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:23:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0038f991f3d8b26876ea7f18ea5b293145d2c685b1407795d1f863caff5369ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:23:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0038f991f3d8b26876ea7f18ea5b293145d2c685b1407795d1f863caff5369ca/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:23:24 compute-0 podman[304020]: 2025-10-11 04:23:24.044253669 +0000 UTC m=+0.188392244 container init 521e9597de655d610b0285da4f21f7ffc23da84a26ea8623c4cec642ccb6ef5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_carver, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:23:24 compute-0 podman[304020]: 2025-10-11 04:23:24.050688862 +0000 UTC m=+0.194827437 container start 521e9597de655d610b0285da4f21f7ffc23da84a26ea8623c4cec642ccb6ef5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_carver, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:23:24 compute-0 podman[304020]: 2025-10-11 04:23:24.054244892 +0000 UTC m=+0.198383477 container attach 521e9597de655d610b0285da4f21f7ffc23da84a26ea8623c4cec642ccb6ef5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_carver, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 04:23:24 compute-0 nova_compute[259850]: 2025-10-11 04:23:24.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:23:24 compute-0 ceph-mon[74273]: osdmap e437: 3 total, 3 up, 3 in
Oct 11 04:23:24 compute-0 ceph-mon[74273]: pgmap v1839: 305 pgs: 305 active+clean; 169 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 94 KiB/s wr, 14 op/s
Oct 11 04:23:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:23:25 compute-0 friendly_carver[304037]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:23:25 compute-0 friendly_carver[304037]: --> relative data size: 1.0
Oct 11 04:23:25 compute-0 friendly_carver[304037]: --> All data devices are unavailable
Oct 11 04:23:25 compute-0 systemd[1]: libpod-521e9597de655d610b0285da4f21f7ffc23da84a26ea8623c4cec642ccb6ef5e.scope: Deactivated successfully.
Oct 11 04:23:25 compute-0 podman[304020]: 2025-10-11 04:23:25.169334464 +0000 UTC m=+1.313473109 container died 521e9597de655d610b0285da4f21f7ffc23da84a26ea8623c4cec642ccb6ef5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_carver, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 04:23:25 compute-0 systemd[1]: libpod-521e9597de655d610b0285da4f21f7ffc23da84a26ea8623c4cec642ccb6ef5e.scope: Consumed 1.052s CPU time.
Oct 11 04:23:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-0038f991f3d8b26876ea7f18ea5b293145d2c685b1407795d1f863caff5369ca-merged.mount: Deactivated successfully.
Oct 11 04:23:25 compute-0 podman[304020]: 2025-10-11 04:23:25.265323231 +0000 UTC m=+1.409461826 container remove 521e9597de655d610b0285da4f21f7ffc23da84a26ea8623c4cec642ccb6ef5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:23:25 compute-0 systemd[1]: libpod-conmon-521e9597de655d610b0285da4f21f7ffc23da84a26ea8623c4cec642ccb6ef5e.scope: Deactivated successfully.
Oct 11 04:23:25 compute-0 sudo[303915]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:25 compute-0 sudo[304080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:23:25 compute-0 sudo[304080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:23:25 compute-0 sudo[304080]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:25 compute-0 sudo[304105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:23:25 compute-0 sudo[304105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:23:25 compute-0 sudo[304105]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:25 compute-0 sudo[304130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:23:25 compute-0 sudo[304130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:23:25 compute-0 sudo[304130]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:25 compute-0 sudo[304155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 04:23:25 compute-0 sudo[304155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:23:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1840: 305 pgs: 305 active+clean; 169 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 94 KiB/s wr, 14 op/s
Oct 11 04:23:26 compute-0 podman[304221]: 2025-10-11 04:23:26.030675339 +0000 UTC m=+0.040769325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:23:26 compute-0 podman[304221]: 2025-10-11 04:23:26.148704631 +0000 UTC m=+0.158798567 container create 61b397f3aa535eeca478647efa4031d9d7765cc7d2d8c370f8fbf0a9cd2c3da0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:23:26 compute-0 systemd[1]: Started libpod-conmon-61b397f3aa535eeca478647efa4031d9d7765cc7d2d8c370f8fbf0a9cd2c3da0.scope.
Oct 11 04:23:26 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:23:26 compute-0 podman[304221]: 2025-10-11 04:23:26.348137187 +0000 UTC m=+0.358231153 container init 61b397f3aa535eeca478647efa4031d9d7765cc7d2d8c370f8fbf0a9cd2c3da0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:23:26 compute-0 podman[304221]: 2025-10-11 04:23:26.360464176 +0000 UTC m=+0.370558102 container start 61b397f3aa535eeca478647efa4031d9d7765cc7d2d8c370f8fbf0a9cd2c3da0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:23:26 compute-0 busy_bouman[304237]: 167 167
Oct 11 04:23:26 compute-0 systemd[1]: libpod-61b397f3aa535eeca478647efa4031d9d7765cc7d2d8c370f8fbf0a9cd2c3da0.scope: Deactivated successfully.
Oct 11 04:23:26 compute-0 podman[304221]: 2025-10-11 04:23:26.42100454 +0000 UTC m=+0.431098526 container attach 61b397f3aa535eeca478647efa4031d9d7765cc7d2d8c370f8fbf0a9cd2c3da0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bouman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 04:23:26 compute-0 podman[304221]: 2025-10-11 04:23:26.421642338 +0000 UTC m=+0.431736264 container died 61b397f3aa535eeca478647efa4031d9d7765cc7d2d8c370f8fbf0a9cd2c3da0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bouman, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 04:23:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c9483c16c71ffd38530fcd36ec2eb06ffd5b15af73240cdf88b09da4f2f2cb9-merged.mount: Deactivated successfully.
Oct 11 04:23:26 compute-0 podman[304221]: 2025-10-11 04:23:26.527322711 +0000 UTC m=+0.537416617 container remove 61b397f3aa535eeca478647efa4031d9d7765cc7d2d8c370f8fbf0a9cd2c3da0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bouman, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 04:23:26 compute-0 systemd[1]: libpod-conmon-61b397f3aa535eeca478647efa4031d9d7765cc7d2d8c370f8fbf0a9cd2c3da0.scope: Deactivated successfully.
Oct 11 04:23:26 compute-0 podman[304263]: 2025-10-11 04:23:26.774603982 +0000 UTC m=+0.067689778 container create 5884eed749ce881e0bed1f8b07d321f7e5c63aa5f94b55172aeb78703a981ba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 04:23:26 compute-0 systemd[1]: Started libpod-conmon-5884eed749ce881e0bed1f8b07d321f7e5c63aa5f94b55172aeb78703a981ba8.scope.
Oct 11 04:23:26 compute-0 podman[304263]: 2025-10-11 04:23:26.748435161 +0000 UTC m=+0.041520967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:23:26 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:23:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db74ecc8ad6bb9c66ba872117aa175b55e3870acac302744e3f99e2be4ed4211/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:23:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db74ecc8ad6bb9c66ba872117aa175b55e3870acac302744e3f99e2be4ed4211/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:23:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db74ecc8ad6bb9c66ba872117aa175b55e3870acac302744e3f99e2be4ed4211/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:23:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db74ecc8ad6bb9c66ba872117aa175b55e3870acac302744e3f99e2be4ed4211/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:23:26 compute-0 podman[304263]: 2025-10-11 04:23:26.87133941 +0000 UTC m=+0.164425216 container init 5884eed749ce881e0bed1f8b07d321f7e5c63aa5f94b55172aeb78703a981ba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kirch, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 04:23:26 compute-0 podman[304263]: 2025-10-11 04:23:26.882605659 +0000 UTC m=+0.175691445 container start 5884eed749ce881e0bed1f8b07d321f7e5c63aa5f94b55172aeb78703a981ba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 04:23:26 compute-0 podman[304263]: 2025-10-11 04:23:26.885606864 +0000 UTC m=+0.178692670 container attach 5884eed749ce881e0bed1f8b07d321f7e5c63aa5f94b55172aeb78703a981ba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kirch, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:23:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e437 do_prune osdmap full prune enabled
Oct 11 04:23:27 compute-0 ceph-mon[74273]: pgmap v1840: 305 pgs: 305 active+clean; 169 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 94 KiB/s wr, 14 op/s
Oct 11 04:23:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e438 e438: 3 total, 3 up, 3 in
Oct 11 04:23:27 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e438: 3 total, 3 up, 3 in
Oct 11 04:23:27 compute-0 awesome_kirch[304279]: {
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:     "0": [
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:         {
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "devices": [
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "/dev/loop3"
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             ],
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "lv_name": "ceph_lv0",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "lv_size": "21470642176",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "name": "ceph_lv0",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "tags": {
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.cluster_name": "ceph",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.crush_device_class": "",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.encrypted": "0",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.osd_id": "0",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.type": "block",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.vdo": "0"
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             },
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "type": "block",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "vg_name": "ceph_vg0"
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:         }
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:     ],
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:     "1": [
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:         {
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "devices": [
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "/dev/loop4"
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             ],
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "lv_name": "ceph_lv1",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "lv_size": "21470642176",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "name": "ceph_lv1",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "tags": {
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.cluster_name": "ceph",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.crush_device_class": "",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.encrypted": "0",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.osd_id": "1",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.type": "block",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.vdo": "0"
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             },
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "type": "block",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "vg_name": "ceph_vg1"
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:         }
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:     ],
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:     "2": [
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:         {
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "devices": [
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "/dev/loop5"
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             ],
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "lv_name": "ceph_lv2",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "lv_size": "21470642176",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "name": "ceph_lv2",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "tags": {
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.cluster_name": "ceph",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.crush_device_class": "",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.encrypted": "0",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.osd_id": "2",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.type": "block",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:                 "ceph.vdo": "0"
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             },
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "type": "block",
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:             "vg_name": "ceph_vg2"
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:         }
Oct 11 04:23:27 compute-0 awesome_kirch[304279]:     ]
Oct 11 04:23:27 compute-0 awesome_kirch[304279]: }
Oct 11 04:23:27 compute-0 systemd[1]: libpod-5884eed749ce881e0bed1f8b07d321f7e5c63aa5f94b55172aeb78703a981ba8.scope: Deactivated successfully.
Oct 11 04:23:27 compute-0 podman[304263]: 2025-10-11 04:23:27.707582017 +0000 UTC m=+1.000667843 container died 5884eed749ce881e0bed1f8b07d321f7e5c63aa5f94b55172aeb78703a981ba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 04:23:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-db74ecc8ad6bb9c66ba872117aa175b55e3870acac302744e3f99e2be4ed4211-merged.mount: Deactivated successfully.
Oct 11 04:23:27 compute-0 podman[304263]: 2025-10-11 04:23:27.792057268 +0000 UTC m=+1.085143084 container remove 5884eed749ce881e0bed1f8b07d321f7e5c63aa5f94b55172aeb78703a981ba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 04:23:27 compute-0 systemd[1]: libpod-conmon-5884eed749ce881e0bed1f8b07d321f7e5c63aa5f94b55172aeb78703a981ba8.scope: Deactivated successfully.
Oct 11 04:23:27 compute-0 sudo[304155]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:27 compute-0 sudo[304302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:23:27 compute-0 sudo[304302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:23:27 compute-0 sudo[304302]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 305 active+clean; 169 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 129 KiB/s wr, 46 op/s
Oct 11 04:23:27 compute-0 sudo[304327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:23:27 compute-0 sudo[304327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:23:27 compute-0 sudo[304327]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:28 compute-0 sudo[304352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:23:28 compute-0 sudo[304352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:23:28 compute-0 sudo[304352]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:28 compute-0 ceph-mon[74273]: osdmap e438: 3 total, 3 up, 3 in
Oct 11 04:23:28 compute-0 sudo[304377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 04:23:28 compute-0 sudo[304377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:23:28 compute-0 podman[304443]: 2025-10-11 04:23:28.514112381 +0000 UTC m=+0.047427603 container create e8285380bd08447c76cb53029dbbcdcf36078e4eb0bebe286c48f522c47d05ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:23:28 compute-0 systemd[1]: Started libpod-conmon-e8285380bd08447c76cb53029dbbcdcf36078e4eb0bebe286c48f522c47d05ea.scope.
Oct 11 04:23:28 compute-0 podman[304443]: 2025-10-11 04:23:28.494731953 +0000 UTC m=+0.028047185 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:23:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:23:28 compute-0 nova_compute[259850]: 2025-10-11 04:23:28.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:23:28 compute-0 podman[304443]: 2025-10-11 04:23:28.608252697 +0000 UTC m=+0.141567929 container init e8285380bd08447c76cb53029dbbcdcf36078e4eb0bebe286c48f522c47d05ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:23:28 compute-0 podman[304443]: 2025-10-11 04:23:28.614236516 +0000 UTC m=+0.147551708 container start e8285380bd08447c76cb53029dbbcdcf36078e4eb0bebe286c48f522c47d05ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_noether, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 04:23:28 compute-0 podman[304443]: 2025-10-11 04:23:28.61754402 +0000 UTC m=+0.150859262 container attach e8285380bd08447c76cb53029dbbcdcf36078e4eb0bebe286c48f522c47d05ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_noether, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:23:28 compute-0 fervent_noether[304459]: 167 167
Oct 11 04:23:28 compute-0 systemd[1]: libpod-e8285380bd08447c76cb53029dbbcdcf36078e4eb0bebe286c48f522c47d05ea.scope: Deactivated successfully.
Oct 11 04:23:28 compute-0 podman[304443]: 2025-10-11 04:23:28.623341864 +0000 UTC m=+0.156657096 container died e8285380bd08447c76cb53029dbbcdcf36078e4eb0bebe286c48f522c47d05ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 04:23:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a79f9930f3acf0bd8105d381f981a1c51fe679657f31cf67ab2ac377404dde3d-merged.mount: Deactivated successfully.
Oct 11 04:23:28 compute-0 podman[304443]: 2025-10-11 04:23:28.676094348 +0000 UTC m=+0.209409580 container remove e8285380bd08447c76cb53029dbbcdcf36078e4eb0bebe286c48f522c47d05ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_noether, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:23:28 compute-0 systemd[1]: libpod-conmon-e8285380bd08447c76cb53029dbbcdcf36078e4eb0bebe286c48f522c47d05ea.scope: Deactivated successfully.
Oct 11 04:23:28 compute-0 ovn_controller[152025]: 2025-10-11T04:23:28Z|00268|memory_trim|INFO|Detected inactivity (last active 30015 ms ago): trimming memory
Oct 11 04:23:28 compute-0 podman[304484]: 2025-10-11 04:23:28.934871483 +0000 UTC m=+0.070893417 container create 8c6a68e0b724f60d93ff3ac9a3fc87993f997270e5bd71aea14e5050515ae81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:23:28 compute-0 systemd[1]: Started libpod-conmon-8c6a68e0b724f60d93ff3ac9a3fc87993f997270e5bd71aea14e5050515ae81b.scope.
Oct 11 04:23:28 compute-0 podman[304484]: 2025-10-11 04:23:28.907104367 +0000 UTC m=+0.043126351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:23:29 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:23:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cead4957848a814eecfae99c34237b1605e9eca46cb78518c9ec4f5d58966f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:23:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cead4957848a814eecfae99c34237b1605e9eca46cb78518c9ec4f5d58966f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:23:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cead4957848a814eecfae99c34237b1605e9eca46cb78518c9ec4f5d58966f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:23:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cead4957848a814eecfae99c34237b1605e9eca46cb78518c9ec4f5d58966f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:23:29 compute-0 podman[304484]: 2025-10-11 04:23:29.051827324 +0000 UTC m=+0.187849228 container init 8c6a68e0b724f60d93ff3ac9a3fc87993f997270e5bd71aea14e5050515ae81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mestorf, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:23:29 compute-0 podman[304484]: 2025-10-11 04:23:29.063175996 +0000 UTC m=+0.199197890 container start 8c6a68e0b724f60d93ff3ac9a3fc87993f997270e5bd71aea14e5050515ae81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mestorf, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:23:29 compute-0 podman[304484]: 2025-10-11 04:23:29.067353774 +0000 UTC m=+0.203375688 container attach 8c6a68e0b724f60d93ff3ac9a3fc87993f997270e5bd71aea14e5050515ae81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mestorf, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 04:23:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e438 do_prune osdmap full prune enabled
Oct 11 04:23:29 compute-0 ceph-mon[74273]: pgmap v1842: 305 pgs: 305 active+clean; 169 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 129 KiB/s wr, 46 op/s
Oct 11 04:23:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e439 e439: 3 total, 3 up, 3 in
Oct 11 04:23:29 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e439: 3 total, 3 up, 3 in
Oct 11 04:23:29 compute-0 nova_compute[259850]: 2025-10-11 04:23:29.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:23:29 compute-0 funny_mestorf[304500]: {
Oct 11 04:23:29 compute-0 funny_mestorf[304500]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 04:23:29 compute-0 funny_mestorf[304500]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:23:29 compute-0 funny_mestorf[304500]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:23:29 compute-0 funny_mestorf[304500]:         "osd_id": 1,
Oct 11 04:23:29 compute-0 funny_mestorf[304500]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:23:29 compute-0 funny_mestorf[304500]:         "type": "bluestore"
Oct 11 04:23:29 compute-0 funny_mestorf[304500]:     },
Oct 11 04:23:29 compute-0 funny_mestorf[304500]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 04:23:29 compute-0 funny_mestorf[304500]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:23:29 compute-0 funny_mestorf[304500]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:23:29 compute-0 funny_mestorf[304500]:         "osd_id": 2,
Oct 11 04:23:29 compute-0 funny_mestorf[304500]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:23:29 compute-0 funny_mestorf[304500]:         "type": "bluestore"
Oct 11 04:23:29 compute-0 funny_mestorf[304500]:     },
Oct 11 04:23:29 compute-0 funny_mestorf[304500]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 04:23:29 compute-0 funny_mestorf[304500]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:23:29 compute-0 funny_mestorf[304500]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:23:29 compute-0 funny_mestorf[304500]:         "osd_id": 0,
Oct 11 04:23:29 compute-0 funny_mestorf[304500]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:23:29 compute-0 funny_mestorf[304500]:         "type": "bluestore"
Oct 11 04:23:29 compute-0 funny_mestorf[304500]:     }
Oct 11 04:23:29 compute-0 funny_mestorf[304500]: }
Oct 11 04:23:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 305 active+clean; 169 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 45 KiB/s wr, 40 op/s
Oct 11 04:23:29 compute-0 systemd[1]: libpod-8c6a68e0b724f60d93ff3ac9a3fc87993f997270e5bd71aea14e5050515ae81b.scope: Deactivated successfully.
Oct 11 04:23:29 compute-0 podman[304484]: 2025-10-11 04:23:29.987789324 +0000 UTC m=+1.123811218 container died 8c6a68e0b724f60d93ff3ac9a3fc87993f997270e5bd71aea14e5050515ae81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mestorf, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:23:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cead4957848a814eecfae99c34237b1605e9eca46cb78518c9ec4f5d58966f0-merged.mount: Deactivated successfully.
Oct 11 04:23:30 compute-0 podman[304484]: 2025-10-11 04:23:30.048854503 +0000 UTC m=+1.184876417 container remove 8c6a68e0b724f60d93ff3ac9a3fc87993f997270e5bd71aea14e5050515ae81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 04:23:30 compute-0 systemd[1]: libpod-conmon-8c6a68e0b724f60d93ff3ac9a3fc87993f997270e5bd71aea14e5050515ae81b.scope: Deactivated successfully.
Oct 11 04:23:30 compute-0 sudo[304377]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:23:30 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:23:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:23:30 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:23:30 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev c51b83a6-1113-4df0-88ea-a2f9dd28fd5e does not exist
Oct 11 04:23:30 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 9c2a6a2f-7b2d-46b5-be9a-215a122b9491 does not exist
Oct 11 04:23:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:23:30 compute-0 ceph-mon[74273]: osdmap e439: 3 total, 3 up, 3 in
Oct 11 04:23:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:23:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:23:30 compute-0 sudo[304550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:23:30 compute-0 nova_compute[259850]: 2025-10-11 04:23:30.171 2 DEBUG oslo_concurrency.lockutils [None req-4d318347-eddd-4610-b840-43905d7dce04 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Acquiring lock "cd805eb4-703c-4647-bda1-59e3435d8c15" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:23:30 compute-0 nova_compute[259850]: 2025-10-11 04:23:30.172 2 DEBUG oslo_concurrency.lockutils [None req-4d318347-eddd-4610-b840-43905d7dce04 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:23:30 compute-0 sudo[304550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:23:30 compute-0 sudo[304550]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:30 compute-0 nova_compute[259850]: 2025-10-11 04:23:30.187 2 INFO nova.compute.manager [None req-4d318347-eddd-4610-b840-43905d7dce04 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Detaching volume a9bd8846-de75-4d33-844f-cdf270772026
Oct 11 04:23:30 compute-0 sudo[304575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:23:30 compute-0 sudo[304575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:23:30 compute-0 sudo[304575]: pam_unix(sudo:session): session closed for user root
Oct 11 04:23:30 compute-0 nova_compute[259850]: 2025-10-11 04:23:30.296 2 INFO nova.virt.block_device [None req-4d318347-eddd-4610-b840-43905d7dce04 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Attempting to driver detach volume a9bd8846-de75-4d33-844f-cdf270772026 from mountpoint /dev/vdb
Oct 11 04:23:30 compute-0 nova_compute[259850]: 2025-10-11 04:23:30.306 2 DEBUG nova.virt.libvirt.driver [None req-4d318347-eddd-4610-b840-43905d7dce04 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Attempting to detach device vdb from instance cd805eb4-703c-4647-bda1-59e3435d8c15 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 11 04:23:30 compute-0 nova_compute[259850]: 2025-10-11 04:23:30.306 2 DEBUG nova.virt.libvirt.guest [None req-4d318347-eddd-4610-b840-43905d7dce04 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 04:23:30 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:23:30 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-a9bd8846-de75-4d33-844f-cdf270772026">
Oct 11 04:23:30 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:23:30 compute-0 nova_compute[259850]:   </source>
Oct 11 04:23:30 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:23:30 compute-0 nova_compute[259850]:   <serial>a9bd8846-de75-4d33-844f-cdf270772026</serial>
Oct 11 04:23:30 compute-0 nova_compute[259850]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 04:23:30 compute-0 nova_compute[259850]: </disk>
Oct 11 04:23:30 compute-0 nova_compute[259850]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:23:30 compute-0 nova_compute[259850]: 2025-10-11 04:23:30.315 2 INFO nova.virt.libvirt.driver [None req-4d318347-eddd-4610-b840-43905d7dce04 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Successfully detached device vdb from instance cd805eb4-703c-4647-bda1-59e3435d8c15 from the persistent domain config.
Oct 11 04:23:30 compute-0 nova_compute[259850]: 2025-10-11 04:23:30.315 2 DEBUG nova.virt.libvirt.driver [None req-4d318347-eddd-4610-b840-43905d7dce04 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance cd805eb4-703c-4647-bda1-59e3435d8c15 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 11 04:23:30 compute-0 nova_compute[259850]: 2025-10-11 04:23:30.316 2 DEBUG nova.virt.libvirt.guest [None req-4d318347-eddd-4610-b840-43905d7dce04 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 04:23:30 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:23:30 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-a9bd8846-de75-4d33-844f-cdf270772026">
Oct 11 04:23:30 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:23:30 compute-0 nova_compute[259850]:   </source>
Oct 11 04:23:30 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:23:30 compute-0 nova_compute[259850]:   <serial>a9bd8846-de75-4d33-844f-cdf270772026</serial>
Oct 11 04:23:30 compute-0 nova_compute[259850]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 04:23:30 compute-0 nova_compute[259850]: </disk>
Oct 11 04:23:30 compute-0 nova_compute[259850]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:23:30 compute-0 nova_compute[259850]: 2025-10-11 04:23:30.446 2 DEBUG nova.virt.libvirt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Received event <DeviceRemovedEvent: 1760156610.445859, cd805eb4-703c-4647-bda1-59e3435d8c15 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 11 04:23:30 compute-0 nova_compute[259850]: 2025-10-11 04:23:30.450 2 DEBUG nova.virt.libvirt.driver [None req-4d318347-eddd-4610-b840-43905d7dce04 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance cd805eb4-703c-4647-bda1-59e3435d8c15 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 11 04:23:30 compute-0 nova_compute[259850]: 2025-10-11 04:23:30.453 2 INFO nova.virt.libvirt.driver [None req-4d318347-eddd-4610-b840-43905d7dce04 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Successfully detached device vdb from instance cd805eb4-703c-4647-bda1-59e3435d8c15 from the live domain config.
Oct 11 04:23:30 compute-0 nova_compute[259850]: 2025-10-11 04:23:30.573 2 DEBUG nova.objects.instance [None req-4d318347-eddd-4610-b840-43905d7dce04 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lazy-loading 'flavor' on Instance uuid cd805eb4-703c-4647-bda1-59e3435d8c15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:23:30 compute-0 nova_compute[259850]: 2025-10-11 04:23:30.615 2 DEBUG oslo_concurrency.lockutils [None req-4d318347-eddd-4610-b840-43905d7dce04 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:23:31 compute-0 ceph-mon[74273]: pgmap v1844: 305 pgs: 305 active+clean; 169 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 45 KiB/s wr, 40 op/s
Oct 11 04:23:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:23:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:23:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:23:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:23:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007608628190727356 of space, bias 1.0, pg target 0.22825884572182067 quantized to 32 (current 32)
Oct 11 04:23:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:23:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00036183112749886044 of space, bias 1.0, pg target 0.10854933824965814 quantized to 32 (current 32)
Oct 11 04:23:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:23:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:23:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:23:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct 11 04:23:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:23:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:23:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:23:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:23:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:23:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:23:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:23:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:23:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:23:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:23:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:23:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:23:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 305 active+clean; 169 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 36 KiB/s wr, 31 op/s
Oct 11 04:23:32 compute-0 nova_compute[259850]: 2025-10-11 04:23:32.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:23:32 compute-0 nova_compute[259850]: 2025-10-11 04:23:32.059 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:23:33 compute-0 nova_compute[259850]: 2025-10-11 04:23:33.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:23:33 compute-0 ceph-mon[74273]: pgmap v1845: 305 pgs: 305 active+clean; 169 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 36 KiB/s wr, 31 op/s
Oct 11 04:23:33 compute-0 nova_compute[259850]: 2025-10-11 04:23:33.167 2 DEBUG oslo_concurrency.lockutils [None req-a90c9fcd-6d54-4465-99b5-2f3323f7fdd1 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Acquiring lock "cd805eb4-703c-4647-bda1-59e3435d8c15" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:23:33 compute-0 nova_compute[259850]: 2025-10-11 04:23:33.167 2 DEBUG oslo_concurrency.lockutils [None req-a90c9fcd-6d54-4465-99b5-2f3323f7fdd1 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:23:33 compute-0 nova_compute[259850]: 2025-10-11 04:23:33.183 2 DEBUG nova.objects.instance [None req-a90c9fcd-6d54-4465-99b5-2f3323f7fdd1 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lazy-loading 'flavor' on Instance uuid cd805eb4-703c-4647-bda1-59e3435d8c15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:23:33 compute-0 nova_compute[259850]: 2025-10-11 04:23:33.215 2 DEBUG oslo_concurrency.lockutils [None req-a90c9fcd-6d54-4465-99b5-2f3323f7fdd1 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:23:33 compute-0 nova_compute[259850]: 2025-10-11 04:23:33.375 2 DEBUG oslo_concurrency.lockutils [None req-a90c9fcd-6d54-4465-99b5-2f3323f7fdd1 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Acquiring lock "cd805eb4-703c-4647-bda1-59e3435d8c15" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:23:33 compute-0 nova_compute[259850]: 2025-10-11 04:23:33.376 2 DEBUG oslo_concurrency.lockutils [None req-a90c9fcd-6d54-4465-99b5-2f3323f7fdd1 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:23:33 compute-0 nova_compute[259850]: 2025-10-11 04:23:33.376 2 INFO nova.compute.manager [None req-a90c9fcd-6d54-4465-99b5-2f3323f7fdd1 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Attaching volume 6a9407da-1836-48aa-ba41-dc99bcdacc0e to /dev/vdb
Oct 11 04:23:33 compute-0 nova_compute[259850]: 2025-10-11 04:23:33.536 2 DEBUG os_brick.utils [None req-a90c9fcd-6d54-4465-99b5-2f3323f7fdd1 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:23:33 compute-0 nova_compute[259850]: 2025-10-11 04:23:33.538 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:23:33 compute-0 nova_compute[259850]: 2025-10-11 04:23:33.559 675 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:23:33 compute-0 nova_compute[259850]: 2025-10-11 04:23:33.560 675 DEBUG oslo.privsep.daemon [-] privsep: reply[5f6e9ed9-7964-4890-ae5d-7ed92c9857e8]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:23:33 compute-0 nova_compute[259850]: 2025-10-11 04:23:33.561 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:23:33 compute-0 nova_compute[259850]: 2025-10-11 04:23:33.574 675 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:23:33 compute-0 nova_compute[259850]: 2025-10-11 04:23:33.575 675 DEBUG oslo.privsep.daemon [-] privsep: reply[31f82fca-4d5f-47df-bf67-cceec28f809a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e727c2bd432c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:23:33 compute-0 nova_compute[259850]: 2025-10-11 04:23:33.577 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:23:33 compute-0 nova_compute[259850]: 2025-10-11 04:23:33.591 675 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:23:33 compute-0 nova_compute[259850]: 2025-10-11 04:23:33.592 675 DEBUG oslo.privsep.daemon [-] privsep: reply[4d2b7774-c5ed-4684-a550-7ba06163a97d]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:23:33 compute-0 nova_compute[259850]: 2025-10-11 04:23:33.593 675 DEBUG oslo.privsep.daemon [-] privsep: reply[6e30b76f-b8c0-4f0f-a91e-e40857773bb4]: (4, 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:23:33 compute-0 nova_compute[259850]: 2025-10-11 04:23:33.594 2 DEBUG oslo_concurrency.processutils [None req-a90c9fcd-6d54-4465-99b5-2f3323f7fdd1 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:23:33 compute-0 nova_compute[259850]: 2025-10-11 04:23:33.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:23:33 compute-0 nova_compute[259850]: 2025-10-11 04:23:33.634 2 DEBUG oslo_concurrency.processutils [None req-a90c9fcd-6d54-4465-99b5-2f3323f7fdd1 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] CMD "nvme version" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:23:33 compute-0 nova_compute[259850]: 2025-10-11 04:23:33.637 2 DEBUG os_brick.initiator.connectors.lightos [None req-a90c9fcd-6d54-4465-99b5-2f3323f7fdd1 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:23:33 compute-0 nova_compute[259850]: 2025-10-11 04:23:33.638 2 DEBUG os_brick.initiator.connectors.lightos [None req-a90c9fcd-6d54-4465-99b5-2f3323f7fdd1 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:23:33 compute-0 nova_compute[259850]: 2025-10-11 04:23:33.638 2 DEBUG os_brick.initiator.connectors.lightos [None req-a90c9fcd-6d54-4465-99b5-2f3323f7fdd1 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:23:33 compute-0 nova_compute[259850]: 2025-10-11 04:23:33.639 2 DEBUG os_brick.utils [None req-a90c9fcd-6d54-4465-99b5-2f3323f7fdd1 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] <== get_connector_properties: return (101ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e727c2bd432c', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:23:33 compute-0 nova_compute[259850]: 2025-10-11 04:23:33.640 2 DEBUG nova.virt.block_device [None req-a90c9fcd-6d54-4465-99b5-2f3323f7fdd1 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Updating existing volume attachment record: 0ce62130-aad4-4077-92d1-964f595f1a35 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:23:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 305 active+clean; 170 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 61 KiB/s wr, 75 op/s
Oct 11 04:23:34 compute-0 nova_compute[259850]: 2025-10-11 04:23:34.055 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:23:34 compute-0 nova_compute[259850]: 2025-10-11 04:23:34.058 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:23:34 compute-0 nova_compute[259850]: 2025-10-11 04:23:34.058 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:23:34 compute-0 nova_compute[259850]: 2025-10-11 04:23:34.059 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:23:34 compute-0 nova_compute[259850]: 2025-10-11 04:23:34.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:23:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:23:34 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/806836049' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:23:34 compute-0 nova_compute[259850]: 2025-10-11 04:23:34.305 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "refresh_cache-cd805eb4-703c-4647-bda1-59e3435d8c15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:23:34 compute-0 nova_compute[259850]: 2025-10-11 04:23:34.305 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquired lock "refresh_cache-cd805eb4-703c-4647-bda1-59e3435d8c15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:23:34 compute-0 nova_compute[259850]: 2025-10-11 04:23:34.306 2 DEBUG nova.network.neutron [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 04:23:34 compute-0 nova_compute[259850]: 2025-10-11 04:23:34.306 2 DEBUG nova.objects.instance [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lazy-loading 'info_cache' on Instance uuid cd805eb4-703c-4647-bda1-59e3435d8c15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:23:34 compute-0 nova_compute[259850]: 2025-10-11 04:23:34.342 2 DEBUG nova.objects.instance [None req-a90c9fcd-6d54-4465-99b5-2f3323f7fdd1 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lazy-loading 'flavor' on Instance uuid cd805eb4-703c-4647-bda1-59e3435d8c15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:23:34 compute-0 nova_compute[259850]: 2025-10-11 04:23:34.374 2 DEBUG nova.virt.libvirt.driver [None req-a90c9fcd-6d54-4465-99b5-2f3323f7fdd1 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Attempting to attach volume 6a9407da-1836-48aa-ba41-dc99bcdacc0e with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 11 04:23:34 compute-0 nova_compute[259850]: 2025-10-11 04:23:34.378 2 DEBUG nova.virt.libvirt.guest [None req-a90c9fcd-6d54-4465-99b5-2f3323f7fdd1 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] attach device xml: <disk type="network" device="disk">
Oct 11 04:23:34 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:23:34 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-6a9407da-1836-48aa-ba41-dc99bcdacc0e">
Oct 11 04:23:34 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:23:34 compute-0 nova_compute[259850]:   </source>
Oct 11 04:23:34 compute-0 nova_compute[259850]:   <auth username="openstack">
Oct 11 04:23:34 compute-0 nova_compute[259850]:     <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:23:34 compute-0 nova_compute[259850]:   </auth>
Oct 11 04:23:34 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:23:34 compute-0 nova_compute[259850]:   <serial>6a9407da-1836-48aa-ba41-dc99bcdacc0e</serial>
Oct 11 04:23:34 compute-0 nova_compute[259850]: </disk>
Oct 11 04:23:34 compute-0 nova_compute[259850]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 11 04:23:34 compute-0 nova_compute[259850]: 2025-10-11 04:23:34.510 2 DEBUG nova.virt.libvirt.driver [None req-a90c9fcd-6d54-4465-99b5-2f3323f7fdd1 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:23:34 compute-0 nova_compute[259850]: 2025-10-11 04:23:34.511 2 DEBUG nova.virt.libvirt.driver [None req-a90c9fcd-6d54-4465-99b5-2f3323f7fdd1 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:23:34 compute-0 nova_compute[259850]: 2025-10-11 04:23:34.511 2 DEBUG nova.virt.libvirt.driver [None req-a90c9fcd-6d54-4465-99b5-2f3323f7fdd1 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:23:34 compute-0 nova_compute[259850]: 2025-10-11 04:23:34.511 2 DEBUG nova.virt.libvirt.driver [None req-a90c9fcd-6d54-4465-99b5-2f3323f7fdd1 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] No VIF found with MAC fa:16:3e:06:fe:bd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:23:34 compute-0 nova_compute[259850]: 2025-10-11 04:23:34.706 2 DEBUG oslo_concurrency.lockutils [None req-a90c9fcd-6d54-4465-99b5-2f3323f7fdd1 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.330s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:23:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:23:35 compute-0 ceph-mon[74273]: pgmap v1846: 305 pgs: 305 active+clean; 170 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 61 KiB/s wr, 75 op/s
Oct 11 04:23:35 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/806836049' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:23:35 compute-0 nova_compute[259850]: 2025-10-11 04:23:35.560 2 DEBUG nova.network.neutron [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Updating instance_info_cache with network_info: [{"id": "9e5f5bdc-671b-4d1a-b567-050dd8925c57", "address": "fa:16:3e:06:fe:bd", "network": {"id": "be3c4303-5003-4d44-a9c5-e31dbe7169fc", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-760144367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a777d54362640ae90dbd99f4e0ce865", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e5f5bdc-67", "ovs_interfaceid": "9e5f5bdc-671b-4d1a-b567-050dd8925c57", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:23:35 compute-0 nova_compute[259850]: 2025-10-11 04:23:35.583 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Releasing lock "refresh_cache-cd805eb4-703c-4647-bda1-59e3435d8c15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:23:35 compute-0 nova_compute[259850]: 2025-10-11 04:23:35.584 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 04:23:35 compute-0 nova_compute[259850]: 2025-10-11 04:23:35.585 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:23:35 compute-0 nova_compute[259850]: 2025-10-11 04:23:35.585 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:23:35 compute-0 nova_compute[259850]: 2025-10-11 04:23:35.586 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:23:35 compute-0 nova_compute[259850]: 2025-10-11 04:23:35.621 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:23:35 compute-0 nova_compute[259850]: 2025-10-11 04:23:35.622 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:23:35 compute-0 nova_compute[259850]: 2025-10-11 04:23:35.622 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:23:35 compute-0 nova_compute[259850]: 2025-10-11 04:23:35.623 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:23:35 compute-0 nova_compute[259850]: 2025-10-11 04:23:35.624 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:23:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 305 active+clean; 170 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 55 KiB/s wr, 68 op/s
Oct 11 04:23:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:23:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3280220505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:23:36 compute-0 nova_compute[259850]: 2025-10-11 04:23:36.071 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:23:36 compute-0 nova_compute[259850]: 2025-10-11 04:23:36.156 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:23:36 compute-0 nova_compute[259850]: 2025-10-11 04:23:36.157 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:23:36 compute-0 nova_compute[259850]: 2025-10-11 04:23:36.157 2 DEBUG nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:23:36 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3280220505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:23:36 compute-0 nova_compute[259850]: 2025-10-11 04:23:36.400 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:23:36 compute-0 nova_compute[259850]: 2025-10-11 04:23:36.401 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4110MB free_disk=59.94263458251953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:23:36 compute-0 nova_compute[259850]: 2025-10-11 04:23:36.401 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:23:36 compute-0 nova_compute[259850]: 2025-10-11 04:23:36.402 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:23:36 compute-0 nova_compute[259850]: 2025-10-11 04:23:36.463 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Instance cd805eb4-703c-4647-bda1-59e3435d8c15 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 04:23:36 compute-0 nova_compute[259850]: 2025-10-11 04:23:36.464 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:23:36 compute-0 nova_compute[259850]: 2025-10-11 04:23:36.464 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:23:36 compute-0 nova_compute[259850]: 2025-10-11 04:23:36.494 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:23:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:23:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2003768032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:23:36 compute-0 nova_compute[259850]: 2025-10-11 04:23:36.931 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:23:36 compute-0 nova_compute[259850]: 2025-10-11 04:23:36.940 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:23:36 compute-0 nova_compute[259850]: 2025-10-11 04:23:36.958 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:23:36 compute-0 nova_compute[259850]: 2025-10-11 04:23:36.989 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:23:36 compute-0 nova_compute[259850]: 2025-10-11 04:23:36.989 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:23:37 compute-0 ceph-mon[74273]: pgmap v1847: 305 pgs: 305 active+clean; 170 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 55 KiB/s wr, 68 op/s
Oct 11 04:23:37 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2003768032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:23:37 compute-0 nova_compute[259850]: 2025-10-11 04:23:37.462 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:23:37 compute-0 nova_compute[259850]: 2025-10-11 04:23:37.467 2 DEBUG oslo_concurrency.lockutils [None req-33901e36-51cc-4699-8bd8-3abb0676763f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Acquiring lock "cd805eb4-703c-4647-bda1-59e3435d8c15" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:23:37 compute-0 nova_compute[259850]: 2025-10-11 04:23:37.468 2 DEBUG oslo_concurrency.lockutils [None req-33901e36-51cc-4699-8bd8-3abb0676763f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:23:37 compute-0 nova_compute[259850]: 2025-10-11 04:23:37.484 2 INFO nova.compute.manager [None req-33901e36-51cc-4699-8bd8-3abb0676763f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Detaching volume 6a9407da-1836-48aa-ba41-dc99bcdacc0e
Oct 11 04:23:37 compute-0 nova_compute[259850]: 2025-10-11 04:23:37.590 2 INFO nova.virt.block_device [None req-33901e36-51cc-4699-8bd8-3abb0676763f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Attempting to driver detach volume 6a9407da-1836-48aa-ba41-dc99bcdacc0e from mountpoint /dev/vdb
Oct 11 04:23:37 compute-0 nova_compute[259850]: 2025-10-11 04:23:37.603 2 DEBUG nova.virt.libvirt.driver [None req-33901e36-51cc-4699-8bd8-3abb0676763f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Attempting to detach device vdb from instance cd805eb4-703c-4647-bda1-59e3435d8c15 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 11 04:23:37 compute-0 nova_compute[259850]: 2025-10-11 04:23:37.604 2 DEBUG nova.virt.libvirt.guest [None req-33901e36-51cc-4699-8bd8-3abb0676763f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 04:23:37 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:23:37 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-6a9407da-1836-48aa-ba41-dc99bcdacc0e">
Oct 11 04:23:37 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:23:37 compute-0 nova_compute[259850]:   </source>
Oct 11 04:23:37 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:23:37 compute-0 nova_compute[259850]:   <serial>6a9407da-1836-48aa-ba41-dc99bcdacc0e</serial>
Oct 11 04:23:37 compute-0 nova_compute[259850]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 04:23:37 compute-0 nova_compute[259850]: </disk>
Oct 11 04:23:37 compute-0 nova_compute[259850]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:23:37 compute-0 nova_compute[259850]: 2025-10-11 04:23:37.614 2 INFO nova.virt.libvirt.driver [None req-33901e36-51cc-4699-8bd8-3abb0676763f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Successfully detached device vdb from instance cd805eb4-703c-4647-bda1-59e3435d8c15 from the persistent domain config.
Oct 11 04:23:37 compute-0 nova_compute[259850]: 2025-10-11 04:23:37.615 2 DEBUG nova.virt.libvirt.driver [None req-33901e36-51cc-4699-8bd8-3abb0676763f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance cd805eb4-703c-4647-bda1-59e3435d8c15 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 11 04:23:37 compute-0 nova_compute[259850]: 2025-10-11 04:23:37.615 2 DEBUG nova.virt.libvirt.guest [None req-33901e36-51cc-4699-8bd8-3abb0676763f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 04:23:37 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:23:37 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-6a9407da-1836-48aa-ba41-dc99bcdacc0e">
Oct 11 04:23:37 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:23:37 compute-0 nova_compute[259850]:   </source>
Oct 11 04:23:37 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:23:37 compute-0 nova_compute[259850]:   <serial>6a9407da-1836-48aa-ba41-dc99bcdacc0e</serial>
Oct 11 04:23:37 compute-0 nova_compute[259850]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 04:23:37 compute-0 nova_compute[259850]: </disk>
Oct 11 04:23:37 compute-0 nova_compute[259850]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:23:37 compute-0 nova_compute[259850]: 2025-10-11 04:23:37.747 2 DEBUG nova.virt.libvirt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Received event <DeviceRemovedEvent: 1760156617.7468417, cd805eb4-703c-4647-bda1-59e3435d8c15 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 11 04:23:37 compute-0 nova_compute[259850]: 2025-10-11 04:23:37.750 2 DEBUG nova.virt.libvirt.driver [None req-33901e36-51cc-4699-8bd8-3abb0676763f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance cd805eb4-703c-4647-bda1-59e3435d8c15 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 11 04:23:37 compute-0 nova_compute[259850]: 2025-10-11 04:23:37.754 2 INFO nova.virt.libvirt.driver [None req-33901e36-51cc-4699-8bd8-3abb0676763f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Successfully detached device vdb from instance cd805eb4-703c-4647-bda1-59e3435d8c15 from the live domain config.
Oct 11 04:23:37 compute-0 nova_compute[259850]: 2025-10-11 04:23:37.914 2 DEBUG nova.objects.instance [None req-33901e36-51cc-4699-8bd8-3abb0676763f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lazy-loading 'flavor' on Instance uuid cd805eb4-703c-4647-bda1-59e3435d8c15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:23:37 compute-0 nova_compute[259850]: 2025-10-11 04:23:37.946 2 DEBUG oslo_concurrency.lockutils [None req-33901e36-51cc-4699-8bd8-3abb0676763f 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.478s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:23:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 305 active+clean; 170 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 37 KiB/s wr, 49 op/s
Oct 11 04:23:38 compute-0 podman[304676]: 2025-10-11 04:23:38.46278355 +0000 UTC m=+0.172700801 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:23:38 compute-0 nova_compute[259850]: 2025-10-11 04:23:38.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:23:39 compute-0 nova_compute[259850]: 2025-10-11 04:23:39.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:23:39 compute-0 ceph-mon[74273]: pgmap v1848: 305 pgs: 305 active+clean; 170 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 37 KiB/s wr, 49 op/s
Oct 11 04:23:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 305 active+clean; 170 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 83 KiB/s wr, 47 op/s
Oct 11 04:23:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:23:40 compute-0 nova_compute[259850]: 2025-10-11 04:23:40.662 2 DEBUG oslo_concurrency.lockutils [None req-e81acdda-cb02-4212-821f-6d9e4ea29ce3 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Acquiring lock "cd805eb4-703c-4647-bda1-59e3435d8c15" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:23:40 compute-0 nova_compute[259850]: 2025-10-11 04:23:40.663 2 DEBUG oslo_concurrency.lockutils [None req-e81acdda-cb02-4212-821f-6d9e4ea29ce3 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:23:40 compute-0 nova_compute[259850]: 2025-10-11 04:23:40.683 2 DEBUG nova.objects.instance [None req-e81acdda-cb02-4212-821f-6d9e4ea29ce3 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lazy-loading 'flavor' on Instance uuid cd805eb4-703c-4647-bda1-59e3435d8c15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:23:40 compute-0 nova_compute[259850]: 2025-10-11 04:23:40.733 2 DEBUG oslo_concurrency.lockutils [None req-e81acdda-cb02-4212-821f-6d9e4ea29ce3 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:23:40 compute-0 nova_compute[259850]: 2025-10-11 04:23:40.901 2 DEBUG oslo_concurrency.lockutils [None req-e81acdda-cb02-4212-821f-6d9e4ea29ce3 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Acquiring lock "cd805eb4-703c-4647-bda1-59e3435d8c15" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:23:40 compute-0 nova_compute[259850]: 2025-10-11 04:23:40.902 2 DEBUG oslo_concurrency.lockutils [None req-e81acdda-cb02-4212-821f-6d9e4ea29ce3 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:23:40 compute-0 nova_compute[259850]: 2025-10-11 04:23:40.903 2 INFO nova.compute.manager [None req-e81acdda-cb02-4212-821f-6d9e4ea29ce3 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Attaching volume b47b675f-a67f-4e61-989e-e65fd2083377 to /dev/vdb
Oct 11 04:23:41 compute-0 nova_compute[259850]: 2025-10-11 04:23:41.032 2 DEBUG os_brick.utils [None req-e81acdda-cb02-4212-821f-6d9e4ea29ce3 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:23:41 compute-0 nova_compute[259850]: 2025-10-11 04:23:41.034 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:23:41 compute-0 nova_compute[259850]: 2025-10-11 04:23:41.052 675 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:23:41 compute-0 nova_compute[259850]: 2025-10-11 04:23:41.053 675 DEBUG oslo.privsep.daemon [-] privsep: reply[9a3adf17-8dc4-44d8-9550-fdfcec4c2412]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:23:41 compute-0 nova_compute[259850]: 2025-10-11 04:23:41.055 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:23:41 compute-0 nova_compute[259850]: 2025-10-11 04:23:41.068 675 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:23:41 compute-0 nova_compute[259850]: 2025-10-11 04:23:41.069 675 DEBUG oslo.privsep.daemon [-] privsep: reply[213c7ec7-8881-4ecc-af7b-2289d59d190f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e727c2bd432c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:23:41 compute-0 nova_compute[259850]: 2025-10-11 04:23:41.071 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:23:41 compute-0 nova_compute[259850]: 2025-10-11 04:23:41.085 675 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:23:41 compute-0 nova_compute[259850]: 2025-10-11 04:23:41.085 675 DEBUG oslo.privsep.daemon [-] privsep: reply[8ce30e58-54d5-487e-8006-9be2f53453fd]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:23:41 compute-0 nova_compute[259850]: 2025-10-11 04:23:41.087 675 DEBUG oslo.privsep.daemon [-] privsep: reply[491cd129-e8f7-49c1-8026-b86a16260fc6]: (4, 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:23:41 compute-0 nova_compute[259850]: 2025-10-11 04:23:41.088 2 DEBUG oslo_concurrency.processutils [None req-e81acdda-cb02-4212-821f-6d9e4ea29ce3 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:23:41 compute-0 nova_compute[259850]: 2025-10-11 04:23:41.125 2 DEBUG oslo_concurrency.processutils [None req-e81acdda-cb02-4212-821f-6d9e4ea29ce3 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] CMD "nvme version" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:23:41 compute-0 nova_compute[259850]: 2025-10-11 04:23:41.130 2 DEBUG os_brick.initiator.connectors.lightos [None req-e81acdda-cb02-4212-821f-6d9e4ea29ce3 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:23:41 compute-0 nova_compute[259850]: 2025-10-11 04:23:41.131 2 DEBUG os_brick.initiator.connectors.lightos [None req-e81acdda-cb02-4212-821f-6d9e4ea29ce3 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:23:41 compute-0 nova_compute[259850]: 2025-10-11 04:23:41.131 2 DEBUG os_brick.initiator.connectors.lightos [None req-e81acdda-cb02-4212-821f-6d9e4ea29ce3 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:23:41 compute-0 nova_compute[259850]: 2025-10-11 04:23:41.132 2 DEBUG os_brick.utils [None req-e81acdda-cb02-4212-821f-6d9e4ea29ce3 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] <== get_connector_properties: return (98ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e727c2bd432c', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:23:41 compute-0 nova_compute[259850]: 2025-10-11 04:23:41.133 2 DEBUG nova.virt.block_device [None req-e81acdda-cb02-4212-821f-6d9e4ea29ce3 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Updating existing volume attachment record: 377f2ecd-5312-4202-9c65-76831bb0a11f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:23:41 compute-0 ceph-mon[74273]: pgmap v1849: 305 pgs: 305 active+clean; 170 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 83 KiB/s wr, 47 op/s
Oct 11 04:23:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:23:41 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1387463156' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:23:41 compute-0 nova_compute[259850]: 2025-10-11 04:23:41.787 2 DEBUG nova.objects.instance [None req-e81acdda-cb02-4212-821f-6d9e4ea29ce3 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lazy-loading 'flavor' on Instance uuid cd805eb4-703c-4647-bda1-59e3435d8c15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:23:41 compute-0 nova_compute[259850]: 2025-10-11 04:23:41.814 2 DEBUG nova.virt.libvirt.driver [None req-e81acdda-cb02-4212-821f-6d9e4ea29ce3 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Attempting to attach volume b47b675f-a67f-4e61-989e-e65fd2083377 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 11 04:23:41 compute-0 nova_compute[259850]: 2025-10-11 04:23:41.818 2 DEBUG nova.virt.libvirt.guest [None req-e81acdda-cb02-4212-821f-6d9e4ea29ce3 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] attach device xml: <disk type="network" device="disk">
Oct 11 04:23:41 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:23:41 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-b47b675f-a67f-4e61-989e-e65fd2083377">
Oct 11 04:23:41 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:23:41 compute-0 nova_compute[259850]:   </source>
Oct 11 04:23:41 compute-0 nova_compute[259850]:   <auth username="openstack">
Oct 11 04:23:41 compute-0 nova_compute[259850]:     <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:23:41 compute-0 nova_compute[259850]:   </auth>
Oct 11 04:23:41 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:23:41 compute-0 nova_compute[259850]:   <serial>b47b675f-a67f-4e61-989e-e65fd2083377</serial>
Oct 11 04:23:41 compute-0 nova_compute[259850]: </disk>
Oct 11 04:23:41 compute-0 nova_compute[259850]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 11 04:23:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 170 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 75 KiB/s wr, 43 op/s
Oct 11 04:23:41 compute-0 nova_compute[259850]: 2025-10-11 04:23:41.977 2 DEBUG nova.virt.libvirt.driver [None req-e81acdda-cb02-4212-821f-6d9e4ea29ce3 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:23:41 compute-0 nova_compute[259850]: 2025-10-11 04:23:41.978 2 DEBUG nova.virt.libvirt.driver [None req-e81acdda-cb02-4212-821f-6d9e4ea29ce3 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:23:41 compute-0 nova_compute[259850]: 2025-10-11 04:23:41.978 2 DEBUG nova.virt.libvirt.driver [None req-e81acdda-cb02-4212-821f-6d9e4ea29ce3 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:23:41 compute-0 nova_compute[259850]: 2025-10-11 04:23:41.979 2 DEBUG nova.virt.libvirt.driver [None req-e81acdda-cb02-4212-821f-6d9e4ea29ce3 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] No VIF found with MAC fa:16:3e:06:fe:bd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:23:42 compute-0 nova_compute[259850]: 2025-10-11 04:23:42.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:23:42 compute-0 nova_compute[259850]: 2025-10-11 04:23:42.160 2 DEBUG oslo_concurrency.lockutils [None req-e81acdda-cb02-4212-821f-6d9e4ea29ce3 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.258s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:23:42 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1387463156' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:23:43 compute-0 ceph-mon[74273]: pgmap v1850: 305 pgs: 305 active+clean; 170 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 75 KiB/s wr, 43 op/s
Oct 11 04:23:43 compute-0 podman[304730]: 2025-10-11 04:23:43.37447004 +0000 UTC m=+0.080762147 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 11 04:23:43 compute-0 nova_compute[259850]: 2025-10-11 04:23:43.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:23:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 170 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 80 KiB/s wr, 64 op/s
Oct 11 04:23:44 compute-0 nova_compute[259850]: 2025-10-11 04:23:44.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:23:44 compute-0 nova_compute[259850]: 2025-10-11 04:23:44.918 2 DEBUG oslo_concurrency.lockutils [None req-e6de16e7-c3ac-4ebc-9833-b8df6f509a96 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Acquiring lock "cd805eb4-703c-4647-bda1-59e3435d8c15" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:23:44 compute-0 nova_compute[259850]: 2025-10-11 04:23:44.918 2 DEBUG oslo_concurrency.lockutils [None req-e6de16e7-c3ac-4ebc-9833-b8df6f509a96 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:23:44 compute-0 nova_compute[259850]: 2025-10-11 04:23:44.935 2 INFO nova.compute.manager [None req-e6de16e7-c3ac-4ebc-9833-b8df6f509a96 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Detaching volume b47b675f-a67f-4e61-989e-e65fd2083377
Oct 11 04:23:45 compute-0 nova_compute[259850]: 2025-10-11 04:23:45.064 2 INFO nova.virt.block_device [None req-e6de16e7-c3ac-4ebc-9833-b8df6f509a96 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Attempting to driver detach volume b47b675f-a67f-4e61-989e-e65fd2083377 from mountpoint /dev/vdb
Oct 11 04:23:45 compute-0 nova_compute[259850]: 2025-10-11 04:23:45.073 2 DEBUG nova.virt.libvirt.driver [None req-e6de16e7-c3ac-4ebc-9833-b8df6f509a96 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Attempting to detach device vdb from instance cd805eb4-703c-4647-bda1-59e3435d8c15 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 11 04:23:45 compute-0 nova_compute[259850]: 2025-10-11 04:23:45.074 2 DEBUG nova.virt.libvirt.guest [None req-e6de16e7-c3ac-4ebc-9833-b8df6f509a96 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 04:23:45 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:23:45 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-b47b675f-a67f-4e61-989e-e65fd2083377">
Oct 11 04:23:45 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:23:45 compute-0 nova_compute[259850]:   </source>
Oct 11 04:23:45 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:23:45 compute-0 nova_compute[259850]:   <serial>b47b675f-a67f-4e61-989e-e65fd2083377</serial>
Oct 11 04:23:45 compute-0 nova_compute[259850]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 04:23:45 compute-0 nova_compute[259850]: </disk>
Oct 11 04:23:45 compute-0 nova_compute[259850]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:23:45 compute-0 nova_compute[259850]: 2025-10-11 04:23:45.084 2 INFO nova.virt.libvirt.driver [None req-e6de16e7-c3ac-4ebc-9833-b8df6f509a96 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Successfully detached device vdb from instance cd805eb4-703c-4647-bda1-59e3435d8c15 from the persistent domain config.
Oct 11 04:23:45 compute-0 nova_compute[259850]: 2025-10-11 04:23:45.085 2 DEBUG nova.virt.libvirt.driver [None req-e6de16e7-c3ac-4ebc-9833-b8df6f509a96 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance cd805eb4-703c-4647-bda1-59e3435d8c15 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 11 04:23:45 compute-0 nova_compute[259850]: 2025-10-11 04:23:45.086 2 DEBUG nova.virt.libvirt.guest [None req-e6de16e7-c3ac-4ebc-9833-b8df6f509a96 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 04:23:45 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:23:45 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-b47b675f-a67f-4e61-989e-e65fd2083377">
Oct 11 04:23:45 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:23:45 compute-0 nova_compute[259850]:   </source>
Oct 11 04:23:45 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:23:45 compute-0 nova_compute[259850]:   <serial>b47b675f-a67f-4e61-989e-e65fd2083377</serial>
Oct 11 04:23:45 compute-0 nova_compute[259850]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 04:23:45 compute-0 nova_compute[259850]: </disk>
Oct 11 04:23:45 compute-0 nova_compute[259850]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:23:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:23:45 compute-0 nova_compute[259850]: 2025-10-11 04:23:45.208 2 DEBUG nova.virt.libvirt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Received event <DeviceRemovedEvent: 1760156625.2077975, cd805eb4-703c-4647-bda1-59e3435d8c15 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 11 04:23:45 compute-0 nova_compute[259850]: 2025-10-11 04:23:45.210 2 DEBUG nova.virt.libvirt.driver [None req-e6de16e7-c3ac-4ebc-9833-b8df6f509a96 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance cd805eb4-703c-4647-bda1-59e3435d8c15 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 11 04:23:45 compute-0 nova_compute[259850]: 2025-10-11 04:23:45.212 2 INFO nova.virt.libvirt.driver [None req-e6de16e7-c3ac-4ebc-9833-b8df6f509a96 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Successfully detached device vdb from instance cd805eb4-703c-4647-bda1-59e3435d8c15 from the live domain config.
Oct 11 04:23:45 compute-0 ceph-mon[74273]: pgmap v1851: 305 pgs: 305 active+clean; 170 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 80 KiB/s wr, 64 op/s
Oct 11 04:23:45 compute-0 nova_compute[259850]: 2025-10-11 04:23:45.363 2 DEBUG nova.objects.instance [None req-e6de16e7-c3ac-4ebc-9833-b8df6f509a96 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lazy-loading 'flavor' on Instance uuid cd805eb4-703c-4647-bda1-59e3435d8c15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:23:45 compute-0 nova_compute[259850]: 2025-10-11 04:23:45.408 2 DEBUG oslo_concurrency.lockutils [None req-e6de16e7-c3ac-4ebc-9833-b8df6f509a96 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.490s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:23:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 170 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 63 KiB/s wr, 35 op/s
Oct 11 04:23:47 compute-0 ceph-mon[74273]: pgmap v1852: 305 pgs: 305 active+clean; 170 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 63 KiB/s wr, 35 op/s
Oct 11 04:23:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 171 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 153 KiB/s rd, 120 KiB/s wr, 53 op/s
Oct 11 04:23:48 compute-0 nova_compute[259850]: 2025-10-11 04:23:48.085 2 DEBUG oslo_concurrency.lockutils [None req-5abb6e80-447a-488d-a705-ca560500746a 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Acquiring lock "cd805eb4-703c-4647-bda1-59e3435d8c15" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:23:48 compute-0 nova_compute[259850]: 2025-10-11 04:23:48.085 2 DEBUG oslo_concurrency.lockutils [None req-5abb6e80-447a-488d-a705-ca560500746a 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:23:48 compute-0 nova_compute[259850]: 2025-10-11 04:23:48.107 2 DEBUG nova.objects.instance [None req-5abb6e80-447a-488d-a705-ca560500746a 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lazy-loading 'flavor' on Instance uuid cd805eb4-703c-4647-bda1-59e3435d8c15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:23:48 compute-0 nova_compute[259850]: 2025-10-11 04:23:48.154 2 DEBUG oslo_concurrency.lockutils [None req-5abb6e80-447a-488d-a705-ca560500746a 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:23:48 compute-0 nova_compute[259850]: 2025-10-11 04:23:48.339 2 DEBUG oslo_concurrency.lockutils [None req-5abb6e80-447a-488d-a705-ca560500746a 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Acquiring lock "cd805eb4-703c-4647-bda1-59e3435d8c15" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:23:48 compute-0 nova_compute[259850]: 2025-10-11 04:23:48.340 2 DEBUG oslo_concurrency.lockutils [None req-5abb6e80-447a-488d-a705-ca560500746a 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:23:48 compute-0 nova_compute[259850]: 2025-10-11 04:23:48.340 2 INFO nova.compute.manager [None req-5abb6e80-447a-488d-a705-ca560500746a 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Attaching volume 0e575e2c-ee5d-4e1d-922a-496d43f07e30 to /dev/vdb
Oct 11 04:23:48 compute-0 nova_compute[259850]: 2025-10-11 04:23:48.460 2 DEBUG os_brick.utils [None req-5abb6e80-447a-488d-a705-ca560500746a 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:23:48 compute-0 nova_compute[259850]: 2025-10-11 04:23:48.461 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:23:48 compute-0 nova_compute[259850]: 2025-10-11 04:23:48.473 675 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:23:48 compute-0 nova_compute[259850]: 2025-10-11 04:23:48.474 675 DEBUG oslo.privsep.daemon [-] privsep: reply[da68b944-8620-4da7-8eaf-b7daa1baf0a9]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:23:48 compute-0 nova_compute[259850]: 2025-10-11 04:23:48.475 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:23:48 compute-0 nova_compute[259850]: 2025-10-11 04:23:48.485 675 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:23:48 compute-0 nova_compute[259850]: 2025-10-11 04:23:48.485 675 DEBUG oslo.privsep.daemon [-] privsep: reply[bfd8c081-a7a8-4004-ada8-4270a4effc00]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e727c2bd432c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:23:48 compute-0 nova_compute[259850]: 2025-10-11 04:23:48.487 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:23:48 compute-0 nova_compute[259850]: 2025-10-11 04:23:48.495 675 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:23:48 compute-0 nova_compute[259850]: 2025-10-11 04:23:48.496 675 DEBUG oslo.privsep.daemon [-] privsep: reply[dc75a7f1-b0d4-4aa3-a45d-92fc7fa6ffe4]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:23:48 compute-0 nova_compute[259850]: 2025-10-11 04:23:48.497 675 DEBUG oslo.privsep.daemon [-] privsep: reply[cc2b30b4-6d87-49e0-89e6-9f4f74863320]: (4, 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:23:48 compute-0 nova_compute[259850]: 2025-10-11 04:23:48.498 2 DEBUG oslo_concurrency.processutils [None req-5abb6e80-447a-488d-a705-ca560500746a 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:23:48 compute-0 nova_compute[259850]: 2025-10-11 04:23:48.525 2 DEBUG oslo_concurrency.processutils [None req-5abb6e80-447a-488d-a705-ca560500746a 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:23:48 compute-0 nova_compute[259850]: 2025-10-11 04:23:48.529 2 DEBUG os_brick.initiator.connectors.lightos [None req-5abb6e80-447a-488d-a705-ca560500746a 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:23:48 compute-0 nova_compute[259850]: 2025-10-11 04:23:48.529 2 DEBUG os_brick.initiator.connectors.lightos [None req-5abb6e80-447a-488d-a705-ca560500746a 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:23:48 compute-0 nova_compute[259850]: 2025-10-11 04:23:48.530 2 DEBUG os_brick.initiator.connectors.lightos [None req-5abb6e80-447a-488d-a705-ca560500746a 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:23:48 compute-0 nova_compute[259850]: 2025-10-11 04:23:48.531 2 DEBUG os_brick.utils [None req-5abb6e80-447a-488d-a705-ca560500746a 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] <== get_connector_properties: return (69ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e727c2bd432c', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:23:48 compute-0 nova_compute[259850]: 2025-10-11 04:23:48.531 2 DEBUG nova.virt.block_device [None req-5abb6e80-447a-488d-a705-ca560500746a 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Updating existing volume attachment record: 6aadcf04-529e-4e4e-9b72-d8184b0c7b67 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:23:48 compute-0 nova_compute[259850]: 2025-10-11 04:23:48.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:23:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:23:49 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1348892670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:23:49 compute-0 nova_compute[259850]: 2025-10-11 04:23:49.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:23:49 compute-0 nova_compute[259850]: 2025-10-11 04:23:49.214 2 DEBUG nova.objects.instance [None req-5abb6e80-447a-488d-a705-ca560500746a 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lazy-loading 'flavor' on Instance uuid cd805eb4-703c-4647-bda1-59e3435d8c15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:23:49 compute-0 nova_compute[259850]: 2025-10-11 04:23:49.236 2 DEBUG nova.virt.libvirt.driver [None req-5abb6e80-447a-488d-a705-ca560500746a 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Attempting to attach volume 0e575e2c-ee5d-4e1d-922a-496d43f07e30 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 11 04:23:49 compute-0 nova_compute[259850]: 2025-10-11 04:23:49.238 2 DEBUG nova.virt.libvirt.guest [None req-5abb6e80-447a-488d-a705-ca560500746a 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] attach device xml: <disk type="network" device="disk">
Oct 11 04:23:49 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:23:49 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-0e575e2c-ee5d-4e1d-922a-496d43f07e30">
Oct 11 04:23:49 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:23:49 compute-0 nova_compute[259850]:   </source>
Oct 11 04:23:49 compute-0 nova_compute[259850]:   <auth username="openstack">
Oct 11 04:23:49 compute-0 nova_compute[259850]:     <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:23:49 compute-0 nova_compute[259850]:   </auth>
Oct 11 04:23:49 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:23:49 compute-0 nova_compute[259850]:   <serial>0e575e2c-ee5d-4e1d-922a-496d43f07e30</serial>
Oct 11 04:23:49 compute-0 nova_compute[259850]: </disk>
Oct 11 04:23:49 compute-0 nova_compute[259850]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 11 04:23:49 compute-0 ceph-mon[74273]: pgmap v1853: 305 pgs: 305 active+clean; 171 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 153 KiB/s rd, 120 KiB/s wr, 53 op/s
Oct 11 04:23:49 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1348892670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:23:49 compute-0 nova_compute[259850]: 2025-10-11 04:23:49.368 2 DEBUG nova.virt.libvirt.driver [None req-5abb6e80-447a-488d-a705-ca560500746a 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:23:49 compute-0 nova_compute[259850]: 2025-10-11 04:23:49.369 2 DEBUG nova.virt.libvirt.driver [None req-5abb6e80-447a-488d-a705-ca560500746a 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:23:49 compute-0 nova_compute[259850]: 2025-10-11 04:23:49.369 2 DEBUG nova.virt.libvirt.driver [None req-5abb6e80-447a-488d-a705-ca560500746a 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:23:49 compute-0 nova_compute[259850]: 2025-10-11 04:23:49.369 2 DEBUG nova.virt.libvirt.driver [None req-5abb6e80-447a-488d-a705-ca560500746a 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] No VIF found with MAC fa:16:3e:06:fe:bd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:23:49 compute-0 nova_compute[259850]: 2025-10-11 04:23:49.532 2 DEBUG oslo_concurrency.lockutils [None req-5abb6e80-447a-488d-a705-ca560500746a 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:23:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 171 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 111 KiB/s wr, 44 op/s
Oct 11 04:23:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:23:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:23:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3862922295' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:23:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:23:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3862922295' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:23:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:23:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:23:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:23:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:23:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:23:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:23:51 compute-0 ceph-mon[74273]: pgmap v1854: 305 pgs: 305 active+clean; 171 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 111 KiB/s wr, 44 op/s
Oct 11 04:23:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3862922295' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:23:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3862922295' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:23:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 305 active+clean; 171 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 63 KiB/s wr, 41 op/s
Oct 11 04:23:52 compute-0 nova_compute[259850]: 2025-10-11 04:23:52.302 2 DEBUG oslo_concurrency.lockutils [None req-d2ea16d7-cfc9-45a5-b12d-645948747be4 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Acquiring lock "cd805eb4-703c-4647-bda1-59e3435d8c15" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:23:52 compute-0 nova_compute[259850]: 2025-10-11 04:23:52.303 2 DEBUG oslo_concurrency.lockutils [None req-d2ea16d7-cfc9-45a5-b12d-645948747be4 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:23:52 compute-0 nova_compute[259850]: 2025-10-11 04:23:52.318 2 INFO nova.compute.manager [None req-d2ea16d7-cfc9-45a5-b12d-645948747be4 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Detaching volume 0e575e2c-ee5d-4e1d-922a-496d43f07e30
Oct 11 04:23:52 compute-0 nova_compute[259850]: 2025-10-11 04:23:52.444 2 INFO nova.virt.block_device [None req-d2ea16d7-cfc9-45a5-b12d-645948747be4 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Attempting to driver detach volume 0e575e2c-ee5d-4e1d-922a-496d43f07e30 from mountpoint /dev/vdb
Oct 11 04:23:52 compute-0 nova_compute[259850]: 2025-10-11 04:23:52.459 2 DEBUG nova.virt.libvirt.driver [None req-d2ea16d7-cfc9-45a5-b12d-645948747be4 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Attempting to detach device vdb from instance cd805eb4-703c-4647-bda1-59e3435d8c15 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 11 04:23:52 compute-0 nova_compute[259850]: 2025-10-11 04:23:52.460 2 DEBUG nova.virt.libvirt.guest [None req-d2ea16d7-cfc9-45a5-b12d-645948747be4 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 04:23:52 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:23:52 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-0e575e2c-ee5d-4e1d-922a-496d43f07e30">
Oct 11 04:23:52 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:23:52 compute-0 nova_compute[259850]:   </source>
Oct 11 04:23:52 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:23:52 compute-0 nova_compute[259850]:   <serial>0e575e2c-ee5d-4e1d-922a-496d43f07e30</serial>
Oct 11 04:23:52 compute-0 nova_compute[259850]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 04:23:52 compute-0 nova_compute[259850]: </disk>
Oct 11 04:23:52 compute-0 nova_compute[259850]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:23:52 compute-0 nova_compute[259850]: 2025-10-11 04:23:52.471 2 INFO nova.virt.libvirt.driver [None req-d2ea16d7-cfc9-45a5-b12d-645948747be4 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Successfully detached device vdb from instance cd805eb4-703c-4647-bda1-59e3435d8c15 from the persistent domain config.
Oct 11 04:23:52 compute-0 nova_compute[259850]: 2025-10-11 04:23:52.472 2 DEBUG nova.virt.libvirt.driver [None req-d2ea16d7-cfc9-45a5-b12d-645948747be4 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance cd805eb4-703c-4647-bda1-59e3435d8c15 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 11 04:23:52 compute-0 nova_compute[259850]: 2025-10-11 04:23:52.473 2 DEBUG nova.virt.libvirt.guest [None req-d2ea16d7-cfc9-45a5-b12d-645948747be4 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 04:23:52 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:23:52 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-0e575e2c-ee5d-4e1d-922a-496d43f07e30">
Oct 11 04:23:52 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:23:52 compute-0 nova_compute[259850]:   </source>
Oct 11 04:23:52 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:23:52 compute-0 nova_compute[259850]:   <serial>0e575e2c-ee5d-4e1d-922a-496d43f07e30</serial>
Oct 11 04:23:52 compute-0 nova_compute[259850]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 04:23:52 compute-0 nova_compute[259850]: </disk>
Oct 11 04:23:52 compute-0 nova_compute[259850]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:23:52 compute-0 nova_compute[259850]: 2025-10-11 04:23:52.605 2 DEBUG nova.virt.libvirt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Received event <DeviceRemovedEvent: 1760156632.6044743, cd805eb4-703c-4647-bda1-59e3435d8c15 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 11 04:23:52 compute-0 nova_compute[259850]: 2025-10-11 04:23:52.610 2 DEBUG nova.virt.libvirt.driver [None req-d2ea16d7-cfc9-45a5-b12d-645948747be4 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance cd805eb4-703c-4647-bda1-59e3435d8c15 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 11 04:23:52 compute-0 nova_compute[259850]: 2025-10-11 04:23:52.614 2 INFO nova.virt.libvirt.driver [None req-d2ea16d7-cfc9-45a5-b12d-645948747be4 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Successfully detached device vdb from instance cd805eb4-703c-4647-bda1-59e3435d8c15 from the live domain config.
Oct 11 04:23:52 compute-0 nova_compute[259850]: 2025-10-11 04:23:52.748 2 DEBUG nova.objects.instance [None req-d2ea16d7-cfc9-45a5-b12d-645948747be4 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lazy-loading 'flavor' on Instance uuid cd805eb4-703c-4647-bda1-59e3435d8c15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:23:52 compute-0 nova_compute[259850]: 2025-10-11 04:23:52.784 2 DEBUG oslo_concurrency.lockutils [None req-d2ea16d7-cfc9-45a5-b12d-645948747be4 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.481s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:23:53 compute-0 ceph-mon[74273]: pgmap v1855: 305 pgs: 305 active+clean; 171 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 63 KiB/s wr, 41 op/s
Oct 11 04:23:53 compute-0 podman[304782]: 2025-10-11 04:23:53.382473097 +0000 UTC m=+0.083272069 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 11 04:23:53 compute-0 podman[304781]: 2025-10-11 04:23:53.394261061 +0000 UTC m=+0.093767726 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 11 04:23:53 compute-0 nova_compute[259850]: 2025-10-11 04:23:53.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:23:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 172 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 159 KiB/s rd, 129 KiB/s wr, 57 op/s
Oct 11 04:23:54 compute-0 nova_compute[259850]: 2025-10-11 04:23:54.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:23:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:23:54 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2708813504' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:23:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:23:54 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2708813504' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:23:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:23:55 compute-0 ceph-mon[74273]: pgmap v1856: 305 pgs: 305 active+clean; 172 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 159 KiB/s rd, 129 KiB/s wr, 57 op/s
Oct 11 04:23:55 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2708813504' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:23:55 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2708813504' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:23:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:23:55 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3591186939' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:23:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:23:55 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3591186939' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:23:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1857: 305 pgs: 305 active+clean; 172 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 142 KiB/s rd, 124 KiB/s wr, 35 op/s
Oct 11 04:23:56 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3591186939' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:23:56 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3591186939' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:23:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:23:57 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/813835831' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:23:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:23:57 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/813835831' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:23:57 compute-0 ceph-mon[74273]: pgmap v1857: 305 pgs: 305 active+clean; 172 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 142 KiB/s rd, 124 KiB/s wr, 35 op/s
Oct 11 04:23:57 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/813835831' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:23:57 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/813835831' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:23:57 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 170 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 186 KiB/s rd, 127 KiB/s wr, 91 op/s
Oct 11 04:23:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e439 do_prune osdmap full prune enabled
Oct 11 04:23:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e440 e440: 3 total, 3 up, 3 in
Oct 11 04:23:58 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e440: 3 total, 3 up, 3 in
Oct 11 04:23:58 compute-0 nova_compute[259850]: 2025-10-11 04:23:58.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:23:59 compute-0 nova_compute[259850]: 2025-10-11 04:23:59.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:23:59 compute-0 ceph-mon[74273]: pgmap v1858: 305 pgs: 305 active+clean; 170 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 186 KiB/s rd, 127 KiB/s wr, 91 op/s
Oct 11 04:23:59 compute-0 ceph-mon[74273]: osdmap e440: 3 total, 3 up, 3 in
Oct 11 04:23:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e440 do_prune osdmap full prune enabled
Oct 11 04:23:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e441 e441: 3 total, 3 up, 3 in
Oct 11 04:23:59 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e441: 3 total, 3 up, 3 in
Oct 11 04:23:59 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 170 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 172 KiB/s rd, 105 KiB/s wr, 111 op/s
Oct 11 04:24:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e441 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:24:00 compute-0 ceph-mon[74273]: osdmap e441: 3 total, 3 up, 3 in
Oct 11 04:24:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e441 do_prune osdmap full prune enabled
Oct 11 04:24:01 compute-0 ceph-mon[74273]: pgmap v1861: 305 pgs: 305 active+clean; 170 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 172 KiB/s rd, 105 KiB/s wr, 111 op/s
Oct 11 04:24:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e442 e442: 3 total, 3 up, 3 in
Oct 11 04:24:01 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e442: 3 total, 3 up, 3 in
Oct 11 04:24:01 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 305 active+clean; 170 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 7.5 KiB/s wr, 118 op/s
Oct 11 04:24:02 compute-0 ceph-mon[74273]: osdmap e442: 3 total, 3 up, 3 in
Oct 11 04:24:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:24:03 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2520015218' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:24:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:24:03 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2520015218' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:24:03 compute-0 ceph-mon[74273]: pgmap v1863: 305 pgs: 305 active+clean; 170 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 7.5 KiB/s wr, 118 op/s
Oct 11 04:24:03 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2520015218' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:24:03 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2520015218' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:24:03 compute-0 nova_compute[259850]: 2025-10-11 04:24:03.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:03 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 169 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 4.2 KiB/s wr, 69 op/s
Oct 11 04:24:03 compute-0 nova_compute[259850]: 2025-10-11 04:24:03.980 2 DEBUG oslo_concurrency.lockutils [None req-c9283433-1169-4609-abfd-33a955732a5d 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Acquiring lock "cd805eb4-703c-4647-bda1-59e3435d8c15" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:24:03 compute-0 nova_compute[259850]: 2025-10-11 04:24:03.980 2 DEBUG oslo_concurrency.lockutils [None req-c9283433-1169-4609-abfd-33a955732a5d 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:24:03 compute-0 nova_compute[259850]: 2025-10-11 04:24:03.981 2 DEBUG oslo_concurrency.lockutils [None req-c9283433-1169-4609-abfd-33a955732a5d 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Acquiring lock "cd805eb4-703c-4647-bda1-59e3435d8c15-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:24:03 compute-0 nova_compute[259850]: 2025-10-11 04:24:03.982 2 DEBUG oslo_concurrency.lockutils [None req-c9283433-1169-4609-abfd-33a955732a5d 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:24:03 compute-0 nova_compute[259850]: 2025-10-11 04:24:03.982 2 DEBUG oslo_concurrency.lockutils [None req-c9283433-1169-4609-abfd-33a955732a5d 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:24:03 compute-0 nova_compute[259850]: 2025-10-11 04:24:03.984 2 INFO nova.compute.manager [None req-c9283433-1169-4609-abfd-33a955732a5d 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Terminating instance
Oct 11 04:24:03 compute-0 nova_compute[259850]: 2025-10-11 04:24:03.986 2 DEBUG nova.compute.manager [None req-c9283433-1169-4609-abfd-33a955732a5d 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:24:04 compute-0 kernel: tap9e5f5bdc-67 (unregistering): left promiscuous mode
Oct 11 04:24:04 compute-0 NetworkManager[44920]: <info>  [1760156644.0524] device (tap9e5f5bdc-67): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:24:04 compute-0 ovn_controller[152025]: 2025-10-11T04:24:04Z|00269|binding|INFO|Releasing lport 9e5f5bdc-671b-4d1a-b567-050dd8925c57 from this chassis (sb_readonly=0)
Oct 11 04:24:04 compute-0 ovn_controller[152025]: 2025-10-11T04:24:04Z|00270|binding|INFO|Setting lport 9e5f5bdc-671b-4d1a-b567-050dd8925c57 down in Southbound
Oct 11 04:24:04 compute-0 ovn_controller[152025]: 2025-10-11T04:24:04Z|00271|binding|INFO|Removing iface tap9e5f5bdc-67 ovn-installed in OVS
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:04 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:24:04.089 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:fe:bd 10.100.0.8'], port_security=['fa:16:3e:06:fe:bd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'cd805eb4-703c-4647-bda1-59e3435d8c15', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-be3c4303-5003-4d44-a9c5-e31dbe7169fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a777d54362640ae90dbd99f4e0ce865', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bd6e7ca9-0308-4e68-bc42-966df1f6185a f840dff6-e5d9-49f8-a626-819e7f43b785', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b95af32-6805-48be-814d-5ce721b1d9c1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=9e5f5bdc-671b-4d1a-b567-050dd8925c57) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:24:04 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:24:04.092 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 9e5f5bdc-671b-4d1a-b567-050dd8925c57 in datapath be3c4303-5003-4d44-a9c5-e31dbe7169fc unbound from our chassis
Oct 11 04:24:04 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:24:04.095 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network be3c4303-5003-4d44-a9c5-e31dbe7169fc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:24:04 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:24:04.097 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[e94194ec-3572-4339-aa53-c31400048833]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:24:04 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:24:04.098 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-be3c4303-5003-4d44-a9c5-e31dbe7169fc namespace which is not needed anymore
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:04 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Oct 11 04:24:04 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000001b.scope: Consumed 17.515s CPU time.
Oct 11 04:24:04 compute-0 systemd-machined[214869]: Machine qemu-27-instance-0000001b terminated.
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.241 2 INFO nova.virt.libvirt.driver [-] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Instance destroyed successfully.
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.242 2 DEBUG nova.objects.instance [None req-c9283433-1169-4609-abfd-33a955732a5d 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lazy-loading 'resources' on Instance uuid cd805eb4-703c-4647-bda1-59e3435d8c15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.255 2 DEBUG nova.virt.libvirt.vif [None req-c9283433-1169-4609-abfd-33a955732a5d 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:22:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-1290778056',display_name='tempest-SnapshotDataIntegrityTests-server-1290778056',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-1290778056',id=27,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG4DGf/+arJzPlezMFuNYUmz37ccTM3o0sMAVGnA02+UGb+0y+Li2G8x8tYV+3LVZQKX5GcWAfEAeF1ZTWclNaUpF1iwZFukTt8FazO3avvAP/xJ52zMuY5wOn+lOjw9PQ==',key_name='tempest-keypair-1926732821',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:22:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5a777d54362640ae90dbd99f4e0ce865',ramdisk_id='',reservation_id='r-vgfud55l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SnapshotDataIntegrityTests-640213236',owner_user_name='tempest-SnapshotDataIntegrityTests-640213236-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:22:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='33278e6c76494cbbac3a77443a2127d6',uuid=cd805eb4-703c-4647-bda1-59e3435d8c15,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9e5f5bdc-671b-4d1a-b567-050dd8925c57", "address": "fa:16:3e:06:fe:bd", "network": {"id": "be3c4303-5003-4d44-a9c5-e31dbe7169fc", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-760144367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a777d54362640ae90dbd99f4e0ce865", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e5f5bdc-67", "ovs_interfaceid": "9e5f5bdc-671b-4d1a-b567-050dd8925c57", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.256 2 DEBUG nova.network.os_vif_util [None req-c9283433-1169-4609-abfd-33a955732a5d 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Converting VIF {"id": "9e5f5bdc-671b-4d1a-b567-050dd8925c57", "address": "fa:16:3e:06:fe:bd", "network": {"id": "be3c4303-5003-4d44-a9c5-e31dbe7169fc", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-760144367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a777d54362640ae90dbd99f4e0ce865", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e5f5bdc-67", "ovs_interfaceid": "9e5f5bdc-671b-4d1a-b567-050dd8925c57", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.258 2 DEBUG nova.network.os_vif_util [None req-c9283433-1169-4609-abfd-33a955732a5d 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:06:fe:bd,bridge_name='br-int',has_traffic_filtering=True,id=9e5f5bdc-671b-4d1a-b567-050dd8925c57,network=Network(be3c4303-5003-4d44-a9c5-e31dbe7169fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e5f5bdc-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.259 2 DEBUG os_vif [None req-c9283433-1169-4609-abfd-33a955732a5d 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:06:fe:bd,bridge_name='br-int',has_traffic_filtering=True,id=9e5f5bdc-671b-4d1a-b567-050dd8925c57,network=Network(be3c4303-5003-4d44-a9c5-e31dbe7169fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e5f5bdc-67') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.263 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e5f5bdc-67, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.277 2 INFO os_vif [None req-c9283433-1169-4609-abfd-33a955732a5d 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:06:fe:bd,bridge_name='br-int',has_traffic_filtering=True,id=9e5f5bdc-671b-4d1a-b567-050dd8925c57,network=Network(be3c4303-5003-4d44-a9c5-e31dbe7169fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e5f5bdc-67')
Oct 11 04:24:04 compute-0 neutron-haproxy-ovnmeta-be3c4303-5003-4d44-a9c5-e31dbe7169fc[303576]: [NOTICE]   (303580) : haproxy version is 2.8.14-c23fe91
Oct 11 04:24:04 compute-0 neutron-haproxy-ovnmeta-be3c4303-5003-4d44-a9c5-e31dbe7169fc[303576]: [NOTICE]   (303580) : path to executable is /usr/sbin/haproxy
Oct 11 04:24:04 compute-0 neutron-haproxy-ovnmeta-be3c4303-5003-4d44-a9c5-e31dbe7169fc[303576]: [WARNING]  (303580) : Exiting Master process...
Oct 11 04:24:04 compute-0 neutron-haproxy-ovnmeta-be3c4303-5003-4d44-a9c5-e31dbe7169fc[303576]: [WARNING]  (303580) : Exiting Master process...
Oct 11 04:24:04 compute-0 neutron-haproxy-ovnmeta-be3c4303-5003-4d44-a9c5-e31dbe7169fc[303576]: [ALERT]    (303580) : Current worker (303582) exited with code 143 (Terminated)
Oct 11 04:24:04 compute-0 neutron-haproxy-ovnmeta-be3c4303-5003-4d44-a9c5-e31dbe7169fc[303576]: [WARNING]  (303580) : All workers exited. Exiting... (0)
Oct 11 04:24:04 compute-0 systemd[1]: libpod-d9524bcaf2ed45cb888223a94cea0d0e4c0e29a7e13fe1f28007de84a0340f43.scope: Deactivated successfully.
Oct 11 04:24:04 compute-0 podman[304851]: 2025-10-11 04:24:04.315684237 +0000 UTC m=+0.063127968 container died d9524bcaf2ed45cb888223a94cea0d0e4c0e29a7e13fe1f28007de84a0340f43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be3c4303-5003-4d44-a9c5-e31dbe7169fc, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 04:24:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d9524bcaf2ed45cb888223a94cea0d0e4c0e29a7e13fe1f28007de84a0340f43-userdata-shm.mount: Deactivated successfully.
Oct 11 04:24:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-2028f5856026d0e7e0fc745723aa16c85db73a34881aa0b256b32e8563cf93e1-merged.mount: Deactivated successfully.
Oct 11 04:24:04 compute-0 podman[304851]: 2025-10-11 04:24:04.389374514 +0000 UTC m=+0.136818235 container cleanup d9524bcaf2ed45cb888223a94cea0d0e4c0e29a7e13fe1f28007de84a0340f43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be3c4303-5003-4d44-a9c5-e31dbe7169fc, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:24:04 compute-0 systemd[1]: libpod-conmon-d9524bcaf2ed45cb888223a94cea0d0e4c0e29a7e13fe1f28007de84a0340f43.scope: Deactivated successfully.
Oct 11 04:24:04 compute-0 podman[304900]: 2025-10-11 04:24:04.498589876 +0000 UTC m=+0.066324819 container remove d9524bcaf2ed45cb888223a94cea0d0e4c0e29a7e13fe1f28007de84a0340f43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be3c4303-5003-4d44-a9c5-e31dbe7169fc, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 04:24:04 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:24:04.510 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[b8555e4b-cb29-4ca2-8725-3eb629c6777c]: (4, ('Sat Oct 11 04:24:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-be3c4303-5003-4d44-a9c5-e31dbe7169fc (d9524bcaf2ed45cb888223a94cea0d0e4c0e29a7e13fe1f28007de84a0340f43)\nd9524bcaf2ed45cb888223a94cea0d0e4c0e29a7e13fe1f28007de84a0340f43\nSat Oct 11 04:24:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-be3c4303-5003-4d44-a9c5-e31dbe7169fc (d9524bcaf2ed45cb888223a94cea0d0e4c0e29a7e13fe1f28007de84a0340f43)\nd9524bcaf2ed45cb888223a94cea0d0e4c0e29a7e13fe1f28007de84a0340f43\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:24:04 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:24:04.513 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[0da2a1e7-2195-40b2-99d1-de7e15791a98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:24:04 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:24:04.514 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe3c4303-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:24:04 compute-0 kernel: tapbe3c4303-50: left promiscuous mode
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:04 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:24:04.563 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[3beea730-f711-4a55-b358-9377dbaf1a87]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:04 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:24:04.595 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[7498d9ec-15a0-4e87-9312-c536c9bba982]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:24:04 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:24:04.596 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[24e953ee-27ab-46a6-a99f-f02ca3525ce2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:24:04 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:24:04.618 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[b09a09f4-e59f-4d2c-8e0e-06965798cadf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496019, 'reachable_time': 43027, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304916, 'error': None, 'target': 'ovnmeta-be3c4303-5003-4d44-a9c5-e31dbe7169fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:24:04 compute-0 systemd[1]: run-netns-ovnmeta\x2dbe3c4303\x2d5003\x2d4d44\x2da9c5\x2de31dbe7169fc.mount: Deactivated successfully.
Oct 11 04:24:04 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:24:04.622 162015 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-be3c4303-5003-4d44-a9c5-e31dbe7169fc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 04:24:04 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:24:04.622 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[47e98fdc-9668-44f4-a87c-639cf556f002]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.692 2 DEBUG nova.compute.manager [req-c791e1b3-8c92-44b7-aac9-f9b2d4e7272a req-ebc6e0d9-dd7b-435b-9f3b-8143ac253e78 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Received event network-vif-unplugged-9e5f5bdc-671b-4d1a-b567-050dd8925c57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.694 2 DEBUG oslo_concurrency.lockutils [req-c791e1b3-8c92-44b7-aac9-f9b2d4e7272a req-ebc6e0d9-dd7b-435b-9f3b-8143ac253e78 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "cd805eb4-703c-4647-bda1-59e3435d8c15-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.695 2 DEBUG oslo_concurrency.lockutils [req-c791e1b3-8c92-44b7-aac9-f9b2d4e7272a req-ebc6e0d9-dd7b-435b-9f3b-8143ac253e78 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.695 2 DEBUG oslo_concurrency.lockutils [req-c791e1b3-8c92-44b7-aac9-f9b2d4e7272a req-ebc6e0d9-dd7b-435b-9f3b-8143ac253e78 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.696 2 DEBUG nova.compute.manager [req-c791e1b3-8c92-44b7-aac9-f9b2d4e7272a req-ebc6e0d9-dd7b-435b-9f3b-8143ac253e78 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] No waiting events found dispatching network-vif-unplugged-9e5f5bdc-671b-4d1a-b567-050dd8925c57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.697 2 DEBUG nova.compute.manager [req-c791e1b3-8c92-44b7-aac9-f9b2d4e7272a req-ebc6e0d9-dd7b-435b-9f3b-8143ac253e78 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Received event network-vif-unplugged-9e5f5bdc-671b-4d1a-b567-050dd8925c57 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 04:24:04 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:24:04.768 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:61:6f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '92:f1:b6:e4:f1:16'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:24:04 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:24:04.769 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.770 2 INFO nova.virt.libvirt.driver [None req-c9283433-1169-4609-abfd-33a955732a5d 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Deleting instance files /var/lib/nova/instances/cd805eb4-703c-4647-bda1-59e3435d8c15_del
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.771 2 INFO nova.virt.libvirt.driver [None req-c9283433-1169-4609-abfd-33a955732a5d 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Deletion of /var/lib/nova/instances/cd805eb4-703c-4647-bda1-59e3435d8c15_del complete
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.817 2 INFO nova.compute.manager [None req-c9283433-1169-4609-abfd-33a955732a5d 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Took 0.83 seconds to destroy the instance on the hypervisor.
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.818 2 DEBUG oslo.service.loopingcall [None req-c9283433-1169-4609-abfd-33a955732a5d 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.819 2 DEBUG nova.compute.manager [-] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:24:04 compute-0 nova_compute[259850]: 2025-10-11 04:24:04.819 2 DEBUG nova.network.neutron [-] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:24:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e442 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:24:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e442 do_prune osdmap full prune enabled
Oct 11 04:24:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e443 e443: 3 total, 3 up, 3 in
Oct 11 04:24:05 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e443: 3 total, 3 up, 3 in
Oct 11 04:24:05 compute-0 ceph-mon[74273]: pgmap v1864: 305 pgs: 305 active+clean; 169 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 4.2 KiB/s wr, 69 op/s
Oct 11 04:24:05 compute-0 ceph-mon[74273]: osdmap e443: 3 total, 3 up, 3 in
Oct 11 04:24:05 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 169 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 3.2 KiB/s wr, 56 op/s
Oct 11 04:24:06 compute-0 nova_compute[259850]: 2025-10-11 04:24:06.002 2 DEBUG nova.network.neutron [-] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:24:06 compute-0 nova_compute[259850]: 2025-10-11 04:24:06.026 2 INFO nova.compute.manager [-] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Took 1.21 seconds to deallocate network for instance.
Oct 11 04:24:06 compute-0 nova_compute[259850]: 2025-10-11 04:24:06.083 2 DEBUG oslo_concurrency.lockutils [None req-c9283433-1169-4609-abfd-33a955732a5d 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:24:06 compute-0 nova_compute[259850]: 2025-10-11 04:24:06.084 2 DEBUG oslo_concurrency.lockutils [None req-c9283433-1169-4609-abfd-33a955732a5d 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:24:06 compute-0 nova_compute[259850]: 2025-10-11 04:24:06.090 2 DEBUG nova.compute.manager [req-93c9b3d6-961f-4664-9616-16df72ff88a0 req-bb59dfe5-5130-42e5-aae7-de29402fb2ab f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Received event network-vif-deleted-9e5f5bdc-671b-4d1a-b567-050dd8925c57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:24:06 compute-0 nova_compute[259850]: 2025-10-11 04:24:06.142 2 DEBUG oslo_concurrency.processutils [None req-c9283433-1169-4609-abfd-33a955732a5d 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:24:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:24:06 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1270338064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:24:06 compute-0 nova_compute[259850]: 2025-10-11 04:24:06.634 2 DEBUG oslo_concurrency.processutils [None req-c9283433-1169-4609-abfd-33a955732a5d 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:24:06 compute-0 nova_compute[259850]: 2025-10-11 04:24:06.641 2 DEBUG nova.compute.provider_tree [None req-c9283433-1169-4609-abfd-33a955732a5d 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:24:06 compute-0 nova_compute[259850]: 2025-10-11 04:24:06.656 2 DEBUG nova.scheduler.client.report [None req-c9283433-1169-4609-abfd-33a955732a5d 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:24:06 compute-0 nova_compute[259850]: 2025-10-11 04:24:06.675 2 DEBUG oslo_concurrency.lockutils [None req-c9283433-1169-4609-abfd-33a955732a5d 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:24:06 compute-0 nova_compute[259850]: 2025-10-11 04:24:06.705 2 INFO nova.scheduler.client.report [None req-c9283433-1169-4609-abfd-33a955732a5d 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Deleted allocations for instance cd805eb4-703c-4647-bda1-59e3435d8c15
Oct 11 04:24:06 compute-0 nova_compute[259850]: 2025-10-11 04:24:06.771 2 DEBUG nova.compute.manager [req-67f835c4-7d7d-4cb9-8772-77f2ecc233c9 req-661298c8-f006-4cf2-a3ef-2f15878d9eec f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Received event network-vif-plugged-9e5f5bdc-671b-4d1a-b567-050dd8925c57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:24:06 compute-0 nova_compute[259850]: 2025-10-11 04:24:06.772 2 DEBUG oslo_concurrency.lockutils [req-67f835c4-7d7d-4cb9-8772-77f2ecc233c9 req-661298c8-f006-4cf2-a3ef-2f15878d9eec f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "cd805eb4-703c-4647-bda1-59e3435d8c15-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:24:06 compute-0 nova_compute[259850]: 2025-10-11 04:24:06.773 2 DEBUG oslo_concurrency.lockutils [req-67f835c4-7d7d-4cb9-8772-77f2ecc233c9 req-661298c8-f006-4cf2-a3ef-2f15878d9eec f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:24:06 compute-0 nova_compute[259850]: 2025-10-11 04:24:06.773 2 DEBUG oslo_concurrency.lockutils [req-67f835c4-7d7d-4cb9-8772-77f2ecc233c9 req-661298c8-f006-4cf2-a3ef-2f15878d9eec f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:24:06 compute-0 nova_compute[259850]: 2025-10-11 04:24:06.774 2 DEBUG nova.compute.manager [req-67f835c4-7d7d-4cb9-8772-77f2ecc233c9 req-661298c8-f006-4cf2-a3ef-2f15878d9eec f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] No waiting events found dispatching network-vif-plugged-9e5f5bdc-671b-4d1a-b567-050dd8925c57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:24:06 compute-0 nova_compute[259850]: 2025-10-11 04:24:06.775 2 WARNING nova.compute.manager [req-67f835c4-7d7d-4cb9-8772-77f2ecc233c9 req-661298c8-f006-4cf2-a3ef-2f15878d9eec f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Received unexpected event network-vif-plugged-9e5f5bdc-671b-4d1a-b567-050dd8925c57 for instance with vm_state deleted and task_state None.
Oct 11 04:24:06 compute-0 nova_compute[259850]: 2025-10-11 04:24:06.784 2 DEBUG oslo_concurrency.lockutils [None req-c9283433-1169-4609-abfd-33a955732a5d 33278e6c76494cbbac3a77443a2127d6 5a777d54362640ae90dbd99f4e0ce865 - - default default] Lock "cd805eb4-703c-4647-bda1-59e3435d8c15" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:24:07 compute-0 ceph-mon[74273]: pgmap v1866: 305 pgs: 305 active+clean; 169 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 3.2 KiB/s wr, 56 op/s
Oct 11 04:24:07 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1270338064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:24:07 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:24:07.772 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8a473e03-2208-47ae-afcd-05ad744a5969, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:24:07 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 107 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 5.2 KiB/s wr, 97 op/s
Oct 11 04:24:08 compute-0 nova_compute[259850]: 2025-10-11 04:24:08.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:09 compute-0 nova_compute[259850]: 2025-10-11 04:24:09.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:09 compute-0 ceph-mon[74273]: pgmap v1867: 305 pgs: 305 active+clean; 107 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 5.2 KiB/s wr, 97 op/s
Oct 11 04:24:09 compute-0 podman[304939]: 2025-10-11 04:24:09.428962604 +0000 UTC m=+0.134637543 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 11 04:24:09 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 4.9 KiB/s wr, 103 op/s
Oct 11 04:24:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e443 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:24:11 compute-0 ceph-mon[74273]: pgmap v1868: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 4.9 KiB/s wr, 103 op/s
Oct 11 04:24:11 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 4.2 KiB/s wr, 89 op/s
Oct 11 04:24:13 compute-0 ceph-mon[74273]: pgmap v1869: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 4.2 KiB/s wr, 89 op/s
Oct 11 04:24:13 compute-0 nova_compute[259850]: 2025-10-11 04:24:13.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:13 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 2.1 KiB/s wr, 51 op/s
Oct 11 04:24:14 compute-0 nova_compute[259850]: 2025-10-11 04:24:14.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:14 compute-0 nova_compute[259850]: 2025-10-11 04:24:14.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:14 compute-0 nova_compute[259850]: 2025-10-11 04:24:14.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:14 compute-0 podman[304967]: 2025-10-11 04:24:14.38391238 +0000 UTC m=+0.085702117 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 11 04:24:14 compute-0 ceph-mon[74273]: pgmap v1870: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 2.1 KiB/s wr, 51 op/s
Oct 11 04:24:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e443 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:24:15 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1.9 KiB/s wr, 47 op/s
Oct 11 04:24:17 compute-0 ceph-mon[74273]: pgmap v1871: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1.9 KiB/s wr, 47 op/s
Oct 11 04:24:17 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.7 KiB/s wr, 42 op/s
Oct 11 04:24:18 compute-0 nova_compute[259850]: 2025-10-11 04:24:18.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:19 compute-0 ceph-mon[74273]: pgmap v1872: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.7 KiB/s wr, 42 op/s
Oct 11 04:24:19 compute-0 nova_compute[259850]: 2025-10-11 04:24:19.235 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760156644.2345088, cd805eb4-703c-4647-bda1-59e3435d8c15 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:24:19 compute-0 nova_compute[259850]: 2025-10-11 04:24:19.235 2 INFO nova.compute.manager [-] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] VM Stopped (Lifecycle Event)
Oct 11 04:24:19 compute-0 nova_compute[259850]: 2025-10-11 04:24:19.257 2 DEBUG nova.compute.manager [None req-9c272fc2-3b10-41f6-af15-8d1c0d34cc2d - - - - - -] [instance: cd805eb4-703c-4647-bda1-59e3435d8c15] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:24:19 compute-0 nova_compute[259850]: 2025-10-11 04:24:19.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:19 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 0 B/s wr, 9 op/s
Oct 11 04:24:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e443 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:24:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:24:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:24:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:24:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:24:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:24:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:24:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:24:20
Oct 11 04:24:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:24:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:24:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'volumes', 'images', 'vms', 'backups', 'default.rgw.meta']
Oct 11 04:24:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:24:21 compute-0 ceph-mon[74273]: pgmap v1873: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 0 B/s wr, 9 op/s
Oct 11 04:24:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:24:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:24:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:24:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:24:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:24:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:24:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:24:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:24:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:24:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:24:21 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:24:22.975 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:24:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:24:22.976 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:24:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:24:22.976 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:24:23 compute-0 ceph-mon[74273]: pgmap v1874: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:23 compute-0 nova_compute[259850]: 2025-10-11 04:24:23.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:23 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:24 compute-0 nova_compute[259850]: 2025-10-11 04:24:24.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:24 compute-0 podman[304988]: 2025-10-11 04:24:24.388387794 +0000 UTC m=+0.086478066 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:24:24 compute-0 podman[304987]: 2025-10-11 04:24:24.388395494 +0000 UTC m=+0.091585460 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 11 04:24:25 compute-0 ceph-mon[74273]: pgmap v1875: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e443 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:24:25 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:27 compute-0 ceph-mon[74273]: pgmap v1876: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:27 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:28 compute-0 nova_compute[259850]: 2025-10-11 04:24:28.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:29 compute-0 ceph-mon[74273]: pgmap v1877: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:29 compute-0 nova_compute[259850]: 2025-10-11 04:24:29.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:29 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e443 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:24:30 compute-0 sudo[305027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:24:30 compute-0 sudo[305027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:24:30 compute-0 sudo[305027]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:30 compute-0 sudo[305052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:24:30 compute-0 sudo[305052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:24:30 compute-0 sudo[305052]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:30 compute-0 sudo[305077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:24:30 compute-0 sudo[305077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:24:30 compute-0 sudo[305077]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:30 compute-0 sudo[305102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:24:30 compute-0 sudo[305102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:24:31 compute-0 ceph-mon[74273]: pgmap v1878: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:31 compute-0 sudo[305102]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:24:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:24:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:24:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:24:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:24:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:24:31 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 009833e5-cb04-4f29-95a9-d6a5f25a188e does not exist
Oct 11 04:24:31 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 16faaaea-e9ee-4fad-93fb-b872c3564a20 does not exist
Oct 11 04:24:31 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev cc3f8aba-7ad4-4d94-bdaa-d1a6e7bcecfe does not exist
Oct 11 04:24:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:24:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:24:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:24:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:24:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:24:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:24:31 compute-0 sudo[305159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:24:31 compute-0 sudo[305159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:24:31 compute-0 sudo[305159]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:31 compute-0 sudo[305184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:24:31 compute-0 sudo[305184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:24:31 compute-0 sudo[305184]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:31 compute-0 sudo[305209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:24:31 compute-0 sudo[305209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:24:31 compute-0 sudo[305209]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:24:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:24:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:24:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:24:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:24:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:24:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00034720526470013676 of space, bias 1.0, pg target 0.10416157941004103 quantized to 32 (current 32)
Oct 11 04:24:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:24:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:24:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:24:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct 11 04:24:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:24:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:24:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:24:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:24:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:24:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:24:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:24:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:24:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:24:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:24:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:24:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:24:31 compute-0 sudo[305234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:24:31 compute-0 sudo[305234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:24:31 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:32 compute-0 podman[305300]: 2025-10-11 04:24:32.062504743 +0000 UTC m=+0.070768564 container create 4cd4ad17dad30449c611f3ad2e5042b3aae35dd64f21170a065f158eec23d48d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:24:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:24:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:24:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:24:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:24:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:24:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:24:32 compute-0 systemd[1]: Started libpod-conmon-4cd4ad17dad30449c611f3ad2e5042b3aae35dd64f21170a065f158eec23d48d.scope.
Oct 11 04:24:32 compute-0 podman[305300]: 2025-10-11 04:24:32.035334788 +0000 UTC m=+0.043598699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:24:32 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:24:32 compute-0 podman[305300]: 2025-10-11 04:24:32.174102396 +0000 UTC m=+0.182366287 container init 4cd4ad17dad30449c611f3ad2e5042b3aae35dd64f21170a065f158eec23d48d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:24:32 compute-0 podman[305300]: 2025-10-11 04:24:32.18669961 +0000 UTC m=+0.194963461 container start 4cd4ad17dad30449c611f3ad2e5042b3aae35dd64f21170a065f158eec23d48d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_snyder, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:24:32 compute-0 podman[305300]: 2025-10-11 04:24:32.190925339 +0000 UTC m=+0.199189190 container attach 4cd4ad17dad30449c611f3ad2e5042b3aae35dd64f21170a065f158eec23d48d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_snyder, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Oct 11 04:24:32 compute-0 ecstatic_snyder[305317]: 167 167
Oct 11 04:24:32 compute-0 systemd[1]: libpod-4cd4ad17dad30449c611f3ad2e5042b3aae35dd64f21170a065f158eec23d48d.scope: Deactivated successfully.
Oct 11 04:24:32 compute-0 podman[305300]: 2025-10-11 04:24:32.197244177 +0000 UTC m=+0.205508018 container died 4cd4ad17dad30449c611f3ad2e5042b3aae35dd64f21170a065f158eec23d48d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 04:24:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-528ada13fa964dc4ca250def85b26c0d28a8690789b7de00a6e0e053c4f48617-merged.mount: Deactivated successfully.
Oct 11 04:24:32 compute-0 podman[305300]: 2025-10-11 04:24:32.252140043 +0000 UTC m=+0.260403894 container remove 4cd4ad17dad30449c611f3ad2e5042b3aae35dd64f21170a065f158eec23d48d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_snyder, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:24:32 compute-0 systemd[1]: libpod-conmon-4cd4ad17dad30449c611f3ad2e5042b3aae35dd64f21170a065f158eec23d48d.scope: Deactivated successfully.
Oct 11 04:24:32 compute-0 podman[305340]: 2025-10-11 04:24:32.504448118 +0000 UTC m=+0.076009551 container create 3f9fbc15174f4422bc2499d3997c39891f3ed251e4e97b2af1a3c301d9d66f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_nash, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:24:32 compute-0 systemd[1]: Started libpod-conmon-3f9fbc15174f4422bc2499d3997c39891f3ed251e4e97b2af1a3c301d9d66f77.scope.
Oct 11 04:24:32 compute-0 podman[305340]: 2025-10-11 04:24:32.475590555 +0000 UTC m=+0.047152068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:24:32 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:24:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f1849f995aa637ad530074addcd4fc157316b0e0f0972dde7c00b24966840d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:24:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f1849f995aa637ad530074addcd4fc157316b0e0f0972dde7c00b24966840d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:24:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f1849f995aa637ad530074addcd4fc157316b0e0f0972dde7c00b24966840d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:24:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f1849f995aa637ad530074addcd4fc157316b0e0f0972dde7c00b24966840d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:24:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f1849f995aa637ad530074addcd4fc157316b0e0f0972dde7c00b24966840d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:24:32 compute-0 podman[305340]: 2025-10-11 04:24:32.618314774 +0000 UTC m=+0.189876247 container init 3f9fbc15174f4422bc2499d3997c39891f3ed251e4e97b2af1a3c301d9d66f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_nash, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Oct 11 04:24:32 compute-0 podman[305340]: 2025-10-11 04:24:32.633566863 +0000 UTC m=+0.205128326 container start 3f9fbc15174f4422bc2499d3997c39891f3ed251e4e97b2af1a3c301d9d66f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_nash, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:24:32 compute-0 podman[305340]: 2025-10-11 04:24:32.637496704 +0000 UTC m=+0.209058167 container attach 3f9fbc15174f4422bc2499d3997c39891f3ed251e4e97b2af1a3c301d9d66f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:24:33 compute-0 ceph-mon[74273]: pgmap v1879: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:33 compute-0 nova_compute[259850]: 2025-10-11 04:24:33.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:33 compute-0 musing_nash[305356]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:24:33 compute-0 musing_nash[305356]: --> relative data size: 1.0
Oct 11 04:24:33 compute-0 musing_nash[305356]: --> All data devices are unavailable
Oct 11 04:24:33 compute-0 systemd[1]: libpod-3f9fbc15174f4422bc2499d3997c39891f3ed251e4e97b2af1a3c301d9d66f77.scope: Deactivated successfully.
Oct 11 04:24:33 compute-0 systemd[1]: libpod-3f9fbc15174f4422bc2499d3997c39891f3ed251e4e97b2af1a3c301d9d66f77.scope: Consumed 1.192s CPU time.
Oct 11 04:24:33 compute-0 podman[305340]: 2025-10-11 04:24:33.881673948 +0000 UTC m=+1.453235401 container died 3f9fbc15174f4422bc2499d3997c39891f3ed251e4e97b2af1a3c301d9d66f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_nash, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:24:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-11f1849f995aa637ad530074addcd4fc157316b0e0f0972dde7c00b24966840d-merged.mount: Deactivated successfully.
Oct 11 04:24:33 compute-0 podman[305340]: 2025-10-11 04:24:33.958617555 +0000 UTC m=+1.530178988 container remove 3f9fbc15174f4422bc2499d3997c39891f3ed251e4e97b2af1a3c301d9d66f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_nash, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 04:24:33 compute-0 systemd[1]: libpod-conmon-3f9fbc15174f4422bc2499d3997c39891f3ed251e4e97b2af1a3c301d9d66f77.scope: Deactivated successfully.
Oct 11 04:24:33 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:33 compute-0 sudo[305234]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:34 compute-0 nova_compute[259850]: 2025-10-11 04:24:34.055 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:24:34 compute-0 nova_compute[259850]: 2025-10-11 04:24:34.058 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:24:34 compute-0 nova_compute[259850]: 2025-10-11 04:24:34.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:24:34 compute-0 nova_compute[259850]: 2025-10-11 04:24:34.059 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:24:34 compute-0 nova_compute[259850]: 2025-10-11 04:24:34.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:24:34 compute-0 sudo[305399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:24:34 compute-0 sudo[305399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:24:34 compute-0 nova_compute[259850]: 2025-10-11 04:24:34.081 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:24:34 compute-0 sudo[305399]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:34 compute-0 nova_compute[259850]: 2025-10-11 04:24:34.082 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:24:34 compute-0 nova_compute[259850]: 2025-10-11 04:24:34.082 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:24:34 compute-0 nova_compute[259850]: 2025-10-11 04:24:34.082 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:24:34 compute-0 nova_compute[259850]: 2025-10-11 04:24:34.082 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:24:34 compute-0 sudo[305425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:24:34 compute-0 sudo[305425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:24:34 compute-0 sudo[305425]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:34 compute-0 sudo[305450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:24:34 compute-0 sudo[305450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:24:34 compute-0 sudo[305450]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:34 compute-0 nova_compute[259850]: 2025-10-11 04:24:34.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:34 compute-0 sudo[305494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 04:24:34 compute-0 sudo[305494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:24:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:24:34 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1263266058' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:24:34 compute-0 nova_compute[259850]: 2025-10-11 04:24:34.572 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:24:34 compute-0 nova_compute[259850]: 2025-10-11 04:24:34.835 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:24:34 compute-0 nova_compute[259850]: 2025-10-11 04:24:34.838 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4336MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:24:34 compute-0 nova_compute[259850]: 2025-10-11 04:24:34.839 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:24:34 compute-0 nova_compute[259850]: 2025-10-11 04:24:34.839 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:24:34 compute-0 podman[305564]: 2025-10-11 04:24:34.864904565 +0000 UTC m=+0.052426497 container create 4912a76241411c1fef0fd6d8998a34d67e00776b03d8cd740d4650b842c75232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 04:24:34 compute-0 systemd[1]: Started libpod-conmon-4912a76241411c1fef0fd6d8998a34d67e00776b03d8cd740d4650b842c75232.scope.
Oct 11 04:24:34 compute-0 nova_compute[259850]: 2025-10-11 04:24:34.921 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:24:34 compute-0 nova_compute[259850]: 2025-10-11 04:24:34.922 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:24:34 compute-0 podman[305564]: 2025-10-11 04:24:34.843428771 +0000 UTC m=+0.030950703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:24:34 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:24:34 compute-0 nova_compute[259850]: 2025-10-11 04:24:34.953 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Refreshing inventories for resource provider 108a560b-89c0-4926-a2fc-cb749a6f8386 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 04:24:34 compute-0 podman[305564]: 2025-10-11 04:24:34.965040055 +0000 UTC m=+0.152562027 container init 4912a76241411c1fef0fd6d8998a34d67e00776b03d8cd740d4650b842c75232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dirac, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:24:34 compute-0 podman[305564]: 2025-10-11 04:24:34.977495256 +0000 UTC m=+0.165017178 container start 4912a76241411c1fef0fd6d8998a34d67e00776b03d8cd740d4650b842c75232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Oct 11 04:24:34 compute-0 eager_dirac[305580]: 167 167
Oct 11 04:24:34 compute-0 systemd[1]: libpod-4912a76241411c1fef0fd6d8998a34d67e00776b03d8cd740d4650b842c75232.scope: Deactivated successfully.
Oct 11 04:24:34 compute-0 podman[305564]: 2025-10-11 04:24:34.985282935 +0000 UTC m=+0.172804867 container attach 4912a76241411c1fef0fd6d8998a34d67e00776b03d8cd740d4650b842c75232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dirac, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:24:34 compute-0 podman[305564]: 2025-10-11 04:24:34.985649696 +0000 UTC m=+0.173171618 container died 4912a76241411c1fef0fd6d8998a34d67e00776b03d8cd740d4650b842c75232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dirac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 04:24:34 compute-0 nova_compute[259850]: 2025-10-11 04:24:34.989 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Updating ProviderTree inventory for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 04:24:34 compute-0 nova_compute[259850]: 2025-10-11 04:24:34.991 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Updating inventory in ProviderTree for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 04:24:35 compute-0 nova_compute[259850]: 2025-10-11 04:24:35.015 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Refreshing aggregate associations for resource provider 108a560b-89c0-4926-a2fc-cb749a6f8386, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 04:24:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cdbd01bb0d876baecbe8419086bf9380b9a691e5f2fb4707c3a318b775b4fb8-merged.mount: Deactivated successfully.
Oct 11 04:24:35 compute-0 podman[305564]: 2025-10-11 04:24:35.037887917 +0000 UTC m=+0.225409819 container remove 4912a76241411c1fef0fd6d8998a34d67e00776b03d8cd740d4650b842c75232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:24:35 compute-0 nova_compute[259850]: 2025-10-11 04:24:35.057 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Refreshing trait associations for resource provider 108a560b-89c0-4926-a2fc-cb749a6f8386, traits: COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE41,COMPUTE_NODE,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_BMI2,HW_CPU_X86_MMX,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_ABM,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 04:24:35 compute-0 systemd[1]: libpod-conmon-4912a76241411c1fef0fd6d8998a34d67e00776b03d8cd740d4650b842c75232.scope: Deactivated successfully.
Oct 11 04:24:35 compute-0 nova_compute[259850]: 2025-10-11 04:24:35.088 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:24:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e443 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:24:35 compute-0 podman[305606]: 2025-10-11 04:24:35.248487037 +0000 UTC m=+0.074874509 container create 60d2e5f986e4b36c851cf959e89f010cff3ad933f17d5db08daef484857bbc7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_panini, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:24:35 compute-0 ceph-mon[74273]: pgmap v1880: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:35 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1263266058' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:24:35 compute-0 systemd[1]: Started libpod-conmon-60d2e5f986e4b36c851cf959e89f010cff3ad933f17d5db08daef484857bbc7b.scope.
Oct 11 04:24:35 compute-0 podman[305606]: 2025-10-11 04:24:35.219379167 +0000 UTC m=+0.045766729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:24:35 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:24:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a51ebe148f83e0a363ac8be86615b85dbfb2f138b1b9dcc0125d3ccb34c841a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:24:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a51ebe148f83e0a363ac8be86615b85dbfb2f138b1b9dcc0125d3ccb34c841a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:24:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a51ebe148f83e0a363ac8be86615b85dbfb2f138b1b9dcc0125d3ccb34c841a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:24:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a51ebe148f83e0a363ac8be86615b85dbfb2f138b1b9dcc0125d3ccb34c841a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:24:35 compute-0 podman[305606]: 2025-10-11 04:24:35.341248679 +0000 UTC m=+0.167636161 container init 60d2e5f986e4b36c851cf959e89f010cff3ad933f17d5db08daef484857bbc7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_panini, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 04:24:35 compute-0 podman[305606]: 2025-10-11 04:24:35.349504902 +0000 UTC m=+0.175892364 container start 60d2e5f986e4b36c851cf959e89f010cff3ad933f17d5db08daef484857bbc7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 04:24:35 compute-0 podman[305606]: 2025-10-11 04:24:35.352184507 +0000 UTC m=+0.178571999 container attach 60d2e5f986e4b36c851cf959e89f010cff3ad933f17d5db08daef484857bbc7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:24:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:24:35 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/914700360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:24:35 compute-0 nova_compute[259850]: 2025-10-11 04:24:35.561 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:24:35 compute-0 nova_compute[259850]: 2025-10-11 04:24:35.570 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:24:35 compute-0 nova_compute[259850]: 2025-10-11 04:24:35.586 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:24:35 compute-0 nova_compute[259850]: 2025-10-11 04:24:35.607 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:24:35 compute-0 nova_compute[259850]: 2025-10-11 04:24:35.607 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:24:35 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:36 compute-0 distracted_panini[305642]: {
Oct 11 04:24:36 compute-0 distracted_panini[305642]:     "0": [
Oct 11 04:24:36 compute-0 distracted_panini[305642]:         {
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "devices": [
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "/dev/loop3"
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             ],
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "lv_name": "ceph_lv0",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "lv_size": "21470642176",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "name": "ceph_lv0",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "tags": {
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.cluster_name": "ceph",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.crush_device_class": "",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.encrypted": "0",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.osd_id": "0",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.type": "block",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.vdo": "0"
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             },
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "type": "block",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "vg_name": "ceph_vg0"
Oct 11 04:24:36 compute-0 distracted_panini[305642]:         }
Oct 11 04:24:36 compute-0 distracted_panini[305642]:     ],
Oct 11 04:24:36 compute-0 distracted_panini[305642]:     "1": [
Oct 11 04:24:36 compute-0 distracted_panini[305642]:         {
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "devices": [
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "/dev/loop4"
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             ],
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "lv_name": "ceph_lv1",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "lv_size": "21470642176",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "name": "ceph_lv1",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "tags": {
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.cluster_name": "ceph",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.crush_device_class": "",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.encrypted": "0",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.osd_id": "1",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.type": "block",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.vdo": "0"
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             },
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "type": "block",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "vg_name": "ceph_vg1"
Oct 11 04:24:36 compute-0 distracted_panini[305642]:         }
Oct 11 04:24:36 compute-0 distracted_panini[305642]:     ],
Oct 11 04:24:36 compute-0 distracted_panini[305642]:     "2": [
Oct 11 04:24:36 compute-0 distracted_panini[305642]:         {
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "devices": [
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "/dev/loop5"
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             ],
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "lv_name": "ceph_lv2",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "lv_size": "21470642176",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "name": "ceph_lv2",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "tags": {
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.cluster_name": "ceph",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.crush_device_class": "",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.encrypted": "0",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.osd_id": "2",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.type": "block",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:                 "ceph.vdo": "0"
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             },
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "type": "block",
Oct 11 04:24:36 compute-0 distracted_panini[305642]:             "vg_name": "ceph_vg2"
Oct 11 04:24:36 compute-0 distracted_panini[305642]:         }
Oct 11 04:24:36 compute-0 distracted_panini[305642]:     ]
Oct 11 04:24:36 compute-0 distracted_panini[305642]: }
Oct 11 04:24:36 compute-0 systemd[1]: libpod-60d2e5f986e4b36c851cf959e89f010cff3ad933f17d5db08daef484857bbc7b.scope: Deactivated successfully.
Oct 11 04:24:36 compute-0 podman[305606]: 2025-10-11 04:24:36.162887656 +0000 UTC m=+0.989275138 container died 60d2e5f986e4b36c851cf959e89f010cff3ad933f17d5db08daef484857bbc7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:24:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a51ebe148f83e0a363ac8be86615b85dbfb2f138b1b9dcc0125d3ccb34c841a-merged.mount: Deactivated successfully.
Oct 11 04:24:36 compute-0 podman[305606]: 2025-10-11 04:24:36.222887185 +0000 UTC m=+1.049274647 container remove 60d2e5f986e4b36c851cf959e89f010cff3ad933f17d5db08daef484857bbc7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:24:36 compute-0 systemd[1]: libpod-conmon-60d2e5f986e4b36c851cf959e89f010cff3ad933f17d5db08daef484857bbc7b.scope: Deactivated successfully.
Oct 11 04:24:36 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/914700360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:24:36 compute-0 sudo[305494]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:36 compute-0 sudo[305667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:24:36 compute-0 sudo[305667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:24:36 compute-0 sudo[305667]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:36 compute-0 sudo[305692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:24:36 compute-0 sudo[305692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:24:36 compute-0 sudo[305692]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:36 compute-0 sudo[305717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:24:36 compute-0 sudo[305717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:24:36 compute-0 sudo[305717]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:36 compute-0 nova_compute[259850]: 2025-10-11 04:24:36.608 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:24:36 compute-0 nova_compute[259850]: 2025-10-11 04:24:36.609 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:24:36 compute-0 nova_compute[259850]: 2025-10-11 04:24:36.609 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:24:36 compute-0 nova_compute[259850]: 2025-10-11 04:24:36.626 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:24:36 compute-0 nova_compute[259850]: 2025-10-11 04:24:36.627 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:24:36 compute-0 nova_compute[259850]: 2025-10-11 04:24:36.627 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:24:36 compute-0 sudo[305742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 04:24:36 compute-0 sudo[305742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:24:37 compute-0 podman[305807]: 2025-10-11 04:24:37.157786871 +0000 UTC m=+0.061478203 container create 486dd73764b091f4e9a4c43d09083f6eb93bdcb09cf4b917e2dc3924ae9f85cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:24:37 compute-0 systemd[1]: Started libpod-conmon-486dd73764b091f4e9a4c43d09083f6eb93bdcb09cf4b917e2dc3924ae9f85cd.scope.
Oct 11 04:24:37 compute-0 podman[305807]: 2025-10-11 04:24:37.135768891 +0000 UTC m=+0.039460283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:24:37 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:24:37 compute-0 podman[305807]: 2025-10-11 04:24:37.259421063 +0000 UTC m=+0.163112445 container init 486dd73764b091f4e9a4c43d09083f6eb93bdcb09cf4b917e2dc3924ae9f85cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leavitt, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:24:37 compute-0 ceph-mon[74273]: pgmap v1881: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:37 compute-0 podman[305807]: 2025-10-11 04:24:37.271704798 +0000 UTC m=+0.175396130 container start 486dd73764b091f4e9a4c43d09083f6eb93bdcb09cf4b917e2dc3924ae9f85cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 04:24:37 compute-0 podman[305807]: 2025-10-11 04:24:37.275780283 +0000 UTC m=+0.179471665 container attach 486dd73764b091f4e9a4c43d09083f6eb93bdcb09cf4b917e2dc3924ae9f85cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leavitt, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 04:24:37 compute-0 silly_leavitt[305823]: 167 167
Oct 11 04:24:37 compute-0 systemd[1]: libpod-486dd73764b091f4e9a4c43d09083f6eb93bdcb09cf4b917e2dc3924ae9f85cd.scope: Deactivated successfully.
Oct 11 04:24:37 compute-0 podman[305807]: 2025-10-11 04:24:37.281287148 +0000 UTC m=+0.184978480 container died 486dd73764b091f4e9a4c43d09083f6eb93bdcb09cf4b917e2dc3924ae9f85cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:24:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-a232a7394e7a3590dab7499b503c1bde7826e0074346fab6216058d57cb39d0d-merged.mount: Deactivated successfully.
Oct 11 04:24:37 compute-0 podman[305807]: 2025-10-11 04:24:37.33782689 +0000 UTC m=+0.241518222 container remove 486dd73764b091f4e9a4c43d09083f6eb93bdcb09cf4b917e2dc3924ae9f85cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 04:24:37 compute-0 systemd[1]: libpod-conmon-486dd73764b091f4e9a4c43d09083f6eb93bdcb09cf4b917e2dc3924ae9f85cd.scope: Deactivated successfully.
Oct 11 04:24:37 compute-0 podman[305847]: 2025-10-11 04:24:37.575260206 +0000 UTC m=+0.058067816 container create 3c6c035e193b3bf2d376c2cc0d7233fb193f8ab0d478d1f631953588dabcb361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:24:37 compute-0 systemd[1]: Started libpod-conmon-3c6c035e193b3bf2d376c2cc0d7233fb193f8ab0d478d1f631953588dabcb361.scope.
Oct 11 04:24:37 compute-0 podman[305847]: 2025-10-11 04:24:37.553648028 +0000 UTC m=+0.036455698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:24:37 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8a0f09eb24816677fab0aa730c76e4fb1a8e9176785e57d4369555a01a11c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8a0f09eb24816677fab0aa730c76e4fb1a8e9176785e57d4369555a01a11c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8a0f09eb24816677fab0aa730c76e4fb1a8e9176785e57d4369555a01a11c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8a0f09eb24816677fab0aa730c76e4fb1a8e9176785e57d4369555a01a11c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:24:37 compute-0 podman[305847]: 2025-10-11 04:24:37.681976081 +0000 UTC m=+0.164783761 container init 3c6c035e193b3bf2d376c2cc0d7233fb193f8ab0d478d1f631953588dabcb361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:24:37 compute-0 podman[305847]: 2025-10-11 04:24:37.697676594 +0000 UTC m=+0.180484194 container start 3c6c035e193b3bf2d376c2cc0d7233fb193f8ab0d478d1f631953588dabcb361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 04:24:37 compute-0 podman[305847]: 2025-10-11 04:24:37.701176382 +0000 UTC m=+0.183984042 container attach 3c6c035e193b3bf2d376c2cc0d7233fb193f8ab0d478d1f631953588dabcb361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_euler, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Oct 11 04:24:37 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:38 compute-0 nova_compute[259850]: 2025-10-11 04:24:38.058 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:24:38 compute-0 nova_compute[259850]: 2025-10-11 04:24:38.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:38 compute-0 musing_euler[305864]: {
Oct 11 04:24:38 compute-0 musing_euler[305864]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 04:24:38 compute-0 musing_euler[305864]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:24:38 compute-0 musing_euler[305864]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:24:38 compute-0 musing_euler[305864]:         "osd_id": 1,
Oct 11 04:24:38 compute-0 musing_euler[305864]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:24:38 compute-0 musing_euler[305864]:         "type": "bluestore"
Oct 11 04:24:38 compute-0 musing_euler[305864]:     },
Oct 11 04:24:38 compute-0 musing_euler[305864]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 04:24:38 compute-0 musing_euler[305864]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:24:38 compute-0 musing_euler[305864]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:24:38 compute-0 musing_euler[305864]:         "osd_id": 2,
Oct 11 04:24:38 compute-0 musing_euler[305864]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:24:38 compute-0 musing_euler[305864]:         "type": "bluestore"
Oct 11 04:24:38 compute-0 musing_euler[305864]:     },
Oct 11 04:24:38 compute-0 musing_euler[305864]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 04:24:38 compute-0 musing_euler[305864]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:24:38 compute-0 musing_euler[305864]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:24:38 compute-0 musing_euler[305864]:         "osd_id": 0,
Oct 11 04:24:38 compute-0 musing_euler[305864]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:24:38 compute-0 musing_euler[305864]:         "type": "bluestore"
Oct 11 04:24:38 compute-0 musing_euler[305864]:     }
Oct 11 04:24:38 compute-0 musing_euler[305864]: }
Oct 11 04:24:38 compute-0 systemd[1]: libpod-3c6c035e193b3bf2d376c2cc0d7233fb193f8ab0d478d1f631953588dabcb361.scope: Deactivated successfully.
Oct 11 04:24:38 compute-0 systemd[1]: libpod-3c6c035e193b3bf2d376c2cc0d7233fb193f8ab0d478d1f631953588dabcb361.scope: Consumed 1.166s CPU time.
Oct 11 04:24:38 compute-0 podman[305897]: 2025-10-11 04:24:38.917607496 +0000 UTC m=+0.037877258 container died 3c6c035e193b3bf2d376c2cc0d7233fb193f8ab0d478d1f631953588dabcb361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 04:24:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c8a0f09eb24816677fab0aa730c76e4fb1a8e9176785e57d4369555a01a11c5-merged.mount: Deactivated successfully.
Oct 11 04:24:38 compute-0 podman[305897]: 2025-10-11 04:24:38.992716961 +0000 UTC m=+0.112986663 container remove 3c6c035e193b3bf2d376c2cc0d7233fb193f8ab0d478d1f631953588dabcb361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_euler, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:24:39 compute-0 systemd[1]: libpod-conmon-3c6c035e193b3bf2d376c2cc0d7233fb193f8ab0d478d1f631953588dabcb361.scope: Deactivated successfully.
Oct 11 04:24:39 compute-0 sudo[305742]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:24:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:24:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:24:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:24:39 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 8a189bcb-e835-4fc4-8056-b15e60dc772e does not exist
Oct 11 04:24:39 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev a3fb8e9d-d7aa-46a2-95a2-d6dfe8aa0c6d does not exist
Oct 11 04:24:39 compute-0 sudo[305912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:24:39 compute-0 sudo[305912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:24:39 compute-0 sudo[305912]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:39 compute-0 sudo[305937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:24:39 compute-0 sudo[305937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:24:39 compute-0 sudo[305937]: pam_unix(sudo:session): session closed for user root
Oct 11 04:24:39 compute-0 ceph-mon[74273]: pgmap v1882: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:24:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:24:39 compute-0 nova_compute[259850]: 2025-10-11 04:24:39.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:39 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:40 compute-0 nova_compute[259850]: 2025-10-11 04:24:40.055 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:24:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e443 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:24:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e443 do_prune osdmap full prune enabled
Oct 11 04:24:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e444 e444: 3 total, 3 up, 3 in
Oct 11 04:24:40 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e444: 3 total, 3 up, 3 in
Oct 11 04:24:40 compute-0 podman[305962]: 2025-10-11 04:24:40.457488468 +0000 UTC m=+0.150379796 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:24:41 compute-0 ceph-mon[74273]: pgmap v1883: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:41 compute-0 ceph-mon[74273]: osdmap e444: 3 total, 3 up, 3 in
Oct 11 04:24:41 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:43 compute-0 nova_compute[259850]: 2025-10-11 04:24:43.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:24:43 compute-0 ceph-mon[74273]: pgmap v1885: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:43 compute-0 nova_compute[259850]: 2025-10-11 04:24:43.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:43 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.3 KiB/s wr, 32 op/s
Oct 11 04:24:44 compute-0 nova_compute[259850]: 2025-10-11 04:24:44.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e444 do_prune osdmap full prune enabled
Oct 11 04:24:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e445 e445: 3 total, 3 up, 3 in
Oct 11 04:24:44 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e445: 3 total, 3 up, 3 in
Oct 11 04:24:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:24:44 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2023097629' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:24:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:24:44 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2023097629' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:24:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e445 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:24:45 compute-0 ceph-mon[74273]: pgmap v1886: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.3 KiB/s wr, 32 op/s
Oct 11 04:24:45 compute-0 ceph-mon[74273]: osdmap e445: 3 total, 3 up, 3 in
Oct 11 04:24:45 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2023097629' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:24:45 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2023097629' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:24:45 compute-0 podman[305987]: 2025-10-11 04:24:45.377512691 +0000 UTC m=+0.082278318 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:24:45 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.9 KiB/s wr, 40 op/s
Oct 11 04:24:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:24:46 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/992955165' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:24:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:24:46 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/992955165' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:24:46 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/992955165' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:24:46 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/992955165' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:24:47 compute-0 ceph-mon[74273]: pgmap v1888: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.9 KiB/s wr, 40 op/s
Oct 11 04:24:47 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1889: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 4.1 KiB/s wr, 108 op/s
Oct 11 04:24:48 compute-0 nova_compute[259850]: 2025-10-11 04:24:48.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:49 compute-0 nova_compute[259850]: 2025-10-11 04:24:49.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:49 compute-0 ceph-mon[74273]: pgmap v1889: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 4.1 KiB/s wr, 108 op/s
Oct 11 04:24:49 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 3.4 KiB/s wr, 92 op/s
Oct 11 04:24:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e445 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:24:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:24:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/100816036' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:24:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:24:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/100816036' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:24:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:24:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:24:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:24:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:24:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:24:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:24:51 compute-0 ceph-mon[74273]: pgmap v1890: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 3.4 KiB/s wr, 92 op/s
Oct 11 04:24:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/100816036' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:24:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/100816036' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:24:51 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 3.3 KiB/s wr, 89 op/s
Oct 11 04:24:53 compute-0 ceph-mon[74273]: pgmap v1891: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 3.3 KiB/s wr, 89 op/s
Oct 11 04:24:53 compute-0 nova_compute[259850]: 2025-10-11 04:24:53.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:53 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 1023 B/s wr, 57 op/s
Oct 11 04:24:54 compute-0 nova_compute[259850]: 2025-10-11 04:24:54.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e445 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:24:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e445 do_prune osdmap full prune enabled
Oct 11 04:24:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e446 e446: 3 total, 3 up, 3 in
Oct 11 04:24:55 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e446: 3 total, 3 up, 3 in
Oct 11 04:24:55 compute-0 podman[306008]: 2025-10-11 04:24:55.361343656 +0000 UTC m=+0.063737306 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, org.label-schema.build-date=20251009)
Oct 11 04:24:55 compute-0 podman[306007]: 2025-10-11 04:24:55.361528251 +0000 UTC m=+0.068731597 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 11 04:24:55 compute-0 ceph-mon[74273]: pgmap v1892: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 1023 B/s wr, 57 op/s
Oct 11 04:24:55 compute-0 ceph-mon[74273]: osdmap e446: 3 total, 3 up, 3 in
Oct 11 04:24:55 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 1023 B/s wr, 57 op/s
Oct 11 04:24:57 compute-0 ceph-mon[74273]: pgmap v1894: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 1023 B/s wr, 57 op/s
Oct 11 04:24:57 compute-0 ovn_controller[152025]: 2025-10-11T04:24:57Z|00272|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Oct 11 04:24:58 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 2 op/s
Oct 11 04:24:58 compute-0 nova_compute[259850]: 2025-10-11 04:24:58.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:59 compute-0 nova_compute[259850]: 2025-10-11 04:24:59.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:24:59 compute-0 ceph-mon[74273]: pgmap v1895: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 2 op/s
Oct 11 04:25:00 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e446 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:25:01 compute-0 ceph-mon[74273]: pgmap v1896: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:02 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:03 compute-0 ceph-mon[74273]: pgmap v1897: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:03 compute-0 nova_compute[259850]: 2025-10-11 04:25:03.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:04 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:04 compute-0 nova_compute[259850]: 2025-10-11 04:25:04.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e446 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:25:05 compute-0 ceph-mon[74273]: pgmap v1898: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:06 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:06 compute-0 ceph-mon[74273]: pgmap v1899: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:06.697 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:61:6f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '92:f1:b6:e4:f1:16'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:25:06 compute-0 nova_compute[259850]: 2025-10-11 04:25:06.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:06 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:06.698 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:25:08 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:25:08.067068) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156708067326, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 2171, "num_deletes": 260, "total_data_size": 3396588, "memory_usage": 3462240, "flush_reason": "Manual Compaction"}
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156708094004, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 3336859, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36635, "largest_seqno": 38805, "table_properties": {"data_size": 3326822, "index_size": 6467, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20622, "raw_average_key_size": 20, "raw_value_size": 3306768, "raw_average_value_size": 3323, "num_data_blocks": 284, "num_entries": 995, "num_filter_entries": 995, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156506, "oldest_key_time": 1760156506, "file_creation_time": 1760156708, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 26990 microseconds, and 15255 cpu microseconds.
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:25:08.094081) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 3336859 bytes OK
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:25:08.094110) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:25:08.095675) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:25:08.095697) EVENT_LOG_v1 {"time_micros": 1760156708095689, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:25:08.095717) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 3387395, prev total WAL file size 3387395, number of live WAL files 2.
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:25:08.097299) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(3258KB)], [77(10035KB)]
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156708097593, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 13613111, "oldest_snapshot_seqno": -1}
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6983 keys, 11909131 bytes, temperature: kUnknown
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156708153533, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 11909131, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11855651, "index_size": 34936, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17477, "raw_key_size": 175807, "raw_average_key_size": 25, "raw_value_size": 11723620, "raw_average_value_size": 1678, "num_data_blocks": 1398, "num_entries": 6983, "num_filter_entries": 6983, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153731, "oldest_key_time": 0, "file_creation_time": 1760156708, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:25:08.154046) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 11909131 bytes
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:25:08.155694) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 242.8 rd, 212.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 9.8 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(7.6) write-amplify(3.6) OK, records in: 7511, records dropped: 528 output_compression: NoCompression
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:25:08.155726) EVENT_LOG_v1 {"time_micros": 1760156708155710, "job": 44, "event": "compaction_finished", "compaction_time_micros": 56057, "compaction_time_cpu_micros": 28519, "output_level": 6, "num_output_files": 1, "total_output_size": 11909131, "num_input_records": 7511, "num_output_records": 6983, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156708157275, "job": 44, "event": "table_file_deletion", "file_number": 79}
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156708161535, "job": 44, "event": "table_file_deletion", "file_number": 77}
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:25:08.097120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:25:08.161653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:25:08.161660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:25:08.161663) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:25:08.161666) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:25:08 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:25:08.161669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:25:08 compute-0 nova_compute[259850]: 2025-10-11 04:25:08.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:09 compute-0 ceph-mon[74273]: pgmap v1900: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:09 compute-0 nova_compute[259850]: 2025-10-11 04:25:09.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:10 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e446 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:25:11 compute-0 ceph-mon[74273]: pgmap v1901: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:11 compute-0 podman[306048]: 2025-10-11 04:25:11.417514382 +0000 UTC m=+0.120301428 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 11 04:25:12 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1902: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:13 compute-0 ceph-mon[74273]: pgmap v1902: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:13 compute-0 nova_compute[259850]: 2025-10-11 04:25:13.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:14 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:14 compute-0 nova_compute[259850]: 2025-10-11 04:25:14.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:15 compute-0 ceph-mon[74273]: pgmap v1903: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e446 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:25:16 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:16 compute-0 podman[306075]: 2025-10-11 04:25:16.355533241 +0000 UTC m=+0.069609981 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 11 04:25:16 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:16.700 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8a473e03-2208-47ae-afcd-05ad744a5969, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:25:17 compute-0 ceph-mon[74273]: pgmap v1904: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:18 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:18 compute-0 nova_compute[259850]: 2025-10-11 04:25:18.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:19 compute-0 ceph-mon[74273]: pgmap v1905: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:19 compute-0 nova_compute[259850]: 2025-10-11 04:25:19.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:20 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e446 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:25:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:25:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:25:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:25:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:25:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:25:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:25:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:25:20
Oct 11 04:25:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:25:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:25:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'vms', 'default.rgw.meta', 'images', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'default.rgw.log', 'volumes']
Oct 11 04:25:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:25:21 compute-0 ceph-mon[74273]: pgmap v1906: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:25:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:25:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:25:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:25:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:25:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:25:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:25:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:25:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:25:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:25:22 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:22.976 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:25:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:22.977 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:25:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:22.977 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:25:23 compute-0 ceph-mon[74273]: pgmap v1907: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:23 compute-0 nova_compute[259850]: 2025-10-11 04:25:23.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:24 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct 11 04:25:24 compute-0 nova_compute[259850]: 2025-10-11 04:25:24.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e446 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:25:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e446 do_prune osdmap full prune enabled
Oct 11 04:25:25 compute-0 ceph-mon[74273]: pgmap v1908: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct 11 04:25:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e447 e447: 3 total, 3 up, 3 in
Oct 11 04:25:25 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e447: 3 total, 3 up, 3 in
Oct 11 04:25:26 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 26 KiB/s wr, 5 op/s
Oct 11 04:25:26 compute-0 ceph-mon[74273]: osdmap e447: 3 total, 3 up, 3 in
Oct 11 04:25:26 compute-0 podman[306095]: 2025-10-11 04:25:26.386949536 +0000 UTC m=+0.080268901 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 04:25:26 compute-0 podman[306094]: 2025-10-11 04:25:26.389415685 +0000 UTC m=+0.092679050 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 11 04:25:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:25:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2229171830' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:25:27 compute-0 ceph-mon[74273]: pgmap v1910: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 26 KiB/s wr, 5 op/s
Oct 11 04:25:27 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2229171830' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:25:28 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 28 KiB/s wr, 34 op/s
Oct 11 04:25:28 compute-0 nova_compute[259850]: 2025-10-11 04:25:28.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:29 compute-0 ceph-mon[74273]: pgmap v1911: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 28 KiB/s wr, 34 op/s
Oct 11 04:25:29 compute-0 nova_compute[259850]: 2025-10-11 04:25:29.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:30 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 28 KiB/s wr, 34 op/s
Oct 11 04:25:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e447 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:25:31 compute-0 ceph-mon[74273]: pgmap v1912: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 28 KiB/s wr, 34 op/s
Oct 11 04:25:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:25:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:25:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:25:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:25:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:25:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:25:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00035159302353975384 of space, bias 1.0, pg target 0.10547790706192615 quantized to 32 (current 32)
Oct 11 04:25:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:25:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:25:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:25:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct 11 04:25:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:25:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:25:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:25:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:25:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:25:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:25:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:25:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:25:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:25:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:25:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:25:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:25:32 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1913: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 28 KiB/s wr, 34 op/s
Oct 11 04:25:33 compute-0 ceph-mon[74273]: pgmap v1913: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 28 KiB/s wr, 34 op/s
Oct 11 04:25:33 compute-0 nova_compute[259850]: 2025-10-11 04:25:33.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:34 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Oct 11 04:25:34 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:25:34 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 8518 writes, 38K keys, 8518 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 8518 writes, 8518 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1559 writes, 7028 keys, 1559 commit groups, 1.0 writes per commit group, ingest: 9.67 MB, 0.02 MB/s
                                           Interval WAL: 1559 writes, 1559 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    122.1      0.37              0.17        22    0.017       0      0       0.0       0.0
                                             L6      1/0   11.36 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.8    170.1    142.2      1.21              0.72        21    0.058    113K    12K       0.0       0.0
                                            Sum      1/0   11.36 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.8    130.4    137.5      1.59              0.89        43    0.037    113K    12K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.9    152.6    156.1      0.37              0.23        10    0.037     35K   2519       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    170.1    142.2      1.21              0.72        21    0.058    113K    12K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    123.2      0.37              0.17        21    0.017       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.2      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.044, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.21 GB write, 0.07 MB/s write, 0.20 GB read, 0.07 MB/s read, 1.6 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.09 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558495a5d1f0#2 capacity: 304.00 MB usage: 24.46 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000237 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1652,23.54 MB,7.74264%) FilterBlock(44,323.48 KB,0.103915%) IndexBlock(44,622.02 KB,0.199815%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 11 04:25:34 compute-0 nova_compute[259850]: 2025-10-11 04:25:34.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:34 compute-0 nova_compute[259850]: 2025-10-11 04:25:34.753 2 DEBUG oslo_concurrency.lockutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquiring lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:25:34 compute-0 nova_compute[259850]: 2025-10-11 04:25:34.754 2 DEBUG oslo_concurrency.lockutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:25:34 compute-0 nova_compute[259850]: 2025-10-11 04:25:34.782 2 DEBUG nova.compute.manager [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:25:34 compute-0 nova_compute[259850]: 2025-10-11 04:25:34.876 2 DEBUG oslo_concurrency.lockutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:25:34 compute-0 nova_compute[259850]: 2025-10-11 04:25:34.877 2 DEBUG oslo_concurrency.lockutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:25:34 compute-0 nova_compute[259850]: 2025-10-11 04:25:34.888 2 DEBUG nova.virt.hardware [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:25:34 compute-0 nova_compute[259850]: 2025-10-11 04:25:34.888 2 INFO nova.compute.claims [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:25:35 compute-0 nova_compute[259850]: 2025-10-11 04:25:35.013 2 DEBUG oslo_concurrency.processutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:25:35 compute-0 nova_compute[259850]: 2025-10-11 04:25:35.055 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:25:35 compute-0 nova_compute[259850]: 2025-10-11 04:25:35.058 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:25:35 compute-0 nova_compute[259850]: 2025-10-11 04:25:35.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:25:35 compute-0 nova_compute[259850]: 2025-10-11 04:25:35.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:25:35 compute-0 nova_compute[259850]: 2025-10-11 04:25:35.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:25:35 compute-0 nova_compute[259850]: 2025-10-11 04:25:35.099 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:25:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e447 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:25:35 compute-0 ceph-mon[74273]: pgmap v1914: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Oct 11 04:25:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:25:35 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1392066956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:25:35 compute-0 nova_compute[259850]: 2025-10-11 04:25:35.511 2 DEBUG oslo_concurrency.processutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:25:35 compute-0 nova_compute[259850]: 2025-10-11 04:25:35.518 2 DEBUG nova.compute.provider_tree [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:25:35 compute-0 nova_compute[259850]: 2025-10-11 04:25:35.536 2 DEBUG nova.scheduler.client.report [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:25:35 compute-0 nova_compute[259850]: 2025-10-11 04:25:35.576 2 DEBUG oslo_concurrency.lockutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:25:35 compute-0 nova_compute[259850]: 2025-10-11 04:25:35.578 2 DEBUG nova.compute.manager [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:25:35 compute-0 nova_compute[259850]: 2025-10-11 04:25:35.583 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.484s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:25:35 compute-0 nova_compute[259850]: 2025-10-11 04:25:35.583 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:25:35 compute-0 nova_compute[259850]: 2025-10-11 04:25:35.584 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:25:35 compute-0 nova_compute[259850]: 2025-10-11 04:25:35.584 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:25:35 compute-0 nova_compute[259850]: 2025-10-11 04:25:35.676 2 DEBUG nova.compute.manager [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:25:35 compute-0 nova_compute[259850]: 2025-10-11 04:25:35.677 2 DEBUG nova.network.neutron [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:25:35 compute-0 nova_compute[259850]: 2025-10-11 04:25:35.699 2 INFO nova.virt.libvirt.driver [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:25:35 compute-0 nova_compute[259850]: 2025-10-11 04:25:35.714 2 DEBUG nova.compute.manager [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:25:35 compute-0 nova_compute[259850]: 2025-10-11 04:25:35.825 2 DEBUG nova.compute.manager [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:25:35 compute-0 nova_compute[259850]: 2025-10-11 04:25:35.826 2 DEBUG nova.virt.libvirt.driver [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:25:35 compute-0 nova_compute[259850]: 2025-10-11 04:25:35.827 2 INFO nova.virt.libvirt.driver [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Creating image(s)
Oct 11 04:25:35 compute-0 nova_compute[259850]: 2025-10-11 04:25:35.856 2 DEBUG nova.storage.rbd_utils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] rbd image e9134216-e096-4ca2-a8aa-6fdafcd7b04c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:25:35 compute-0 nova_compute[259850]: 2025-10-11 04:25:35.890 2 DEBUG nova.storage.rbd_utils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] rbd image e9134216-e096-4ca2-a8aa-6fdafcd7b04c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:25:35 compute-0 nova_compute[259850]: 2025-10-11 04:25:35.924 2 DEBUG nova.storage.rbd_utils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] rbd image e9134216-e096-4ca2-a8aa-6fdafcd7b04c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:25:35 compute-0 nova_compute[259850]: 2025-10-11 04:25:35.930 2 DEBUG oslo_concurrency.processutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:25:35 compute-0 nova_compute[259850]: 2025-10-11 04:25:35.970 2 DEBUG nova.policy [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7bf17f3eb8514499a54d67542db6b88a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '226e6310b4ee4a68b552a6b3e940a458', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:25:36 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1915: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.5 KiB/s wr, 26 op/s
Oct 11 04:25:36 compute-0 nova_compute[259850]: 2025-10-11 04:25:36.024 2 DEBUG oslo_concurrency.processutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:25:36 compute-0 nova_compute[259850]: 2025-10-11 04:25:36.025 2 DEBUG oslo_concurrency.lockutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquiring lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:25:36 compute-0 nova_compute[259850]: 2025-10-11 04:25:36.026 2 DEBUG oslo_concurrency.lockutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:25:36 compute-0 nova_compute[259850]: 2025-10-11 04:25:36.027 2 DEBUG oslo_concurrency.lockutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:25:36 compute-0 nova_compute[259850]: 2025-10-11 04:25:36.055 2 DEBUG nova.storage.rbd_utils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] rbd image e9134216-e096-4ca2-a8aa-6fdafcd7b04c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:25:36 compute-0 nova_compute[259850]: 2025-10-11 04:25:36.059 2 DEBUG oslo_concurrency.processutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac e9134216-e096-4ca2-a8aa-6fdafcd7b04c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:25:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:25:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4000422274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:25:36 compute-0 nova_compute[259850]: 2025-10-11 04:25:36.095 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:25:36 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1392066956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:25:36 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4000422274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:25:36 compute-0 nova_compute[259850]: 2025-10-11 04:25:36.359 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:25:36 compute-0 nova_compute[259850]: 2025-10-11 04:25:36.361 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4358MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:25:36 compute-0 nova_compute[259850]: 2025-10-11 04:25:36.361 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:25:36 compute-0 nova_compute[259850]: 2025-10-11 04:25:36.361 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:25:36 compute-0 nova_compute[259850]: 2025-10-11 04:25:36.387 2 DEBUG oslo_concurrency.processutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1fdd1a1629caceb533dd1ed1ad40a6716b3e72ac e9134216-e096-4ca2-a8aa-6fdafcd7b04c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:25:36 compute-0 nova_compute[259850]: 2025-10-11 04:25:36.443 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Instance e9134216-e096-4ca2-a8aa-6fdafcd7b04c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 04:25:36 compute-0 nova_compute[259850]: 2025-10-11 04:25:36.444 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:25:36 compute-0 nova_compute[259850]: 2025-10-11 04:25:36.444 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:25:36 compute-0 nova_compute[259850]: 2025-10-11 04:25:36.450 2 DEBUG nova.storage.rbd_utils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] resizing rbd image e9134216-e096-4ca2-a8aa-6fdafcd7b04c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:25:36 compute-0 nova_compute[259850]: 2025-10-11 04:25:36.497 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:25:36 compute-0 nova_compute[259850]: 2025-10-11 04:25:36.583 2 DEBUG nova.objects.instance [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lazy-loading 'migration_context' on Instance uuid e9134216-e096-4ca2-a8aa-6fdafcd7b04c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:25:36 compute-0 nova_compute[259850]: 2025-10-11 04:25:36.597 2 DEBUG nova.virt.libvirt.driver [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:25:36 compute-0 nova_compute[259850]: 2025-10-11 04:25:36.598 2 DEBUG nova.virt.libvirt.driver [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Ensure instance console log exists: /var/lib/nova/instances/e9134216-e096-4ca2-a8aa-6fdafcd7b04c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:25:36 compute-0 nova_compute[259850]: 2025-10-11 04:25:36.599 2 DEBUG oslo_concurrency.lockutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:25:36 compute-0 nova_compute[259850]: 2025-10-11 04:25:36.599 2 DEBUG oslo_concurrency.lockutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:25:36 compute-0 nova_compute[259850]: 2025-10-11 04:25:36.599 2 DEBUG oslo_concurrency.lockutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:25:36 compute-0 nova_compute[259850]: 2025-10-11 04:25:36.899 2 DEBUG nova.network.neutron [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Successfully created port: 944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:25:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:25:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/86959827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:25:36 compute-0 nova_compute[259850]: 2025-10-11 04:25:36.945 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:25:36 compute-0 nova_compute[259850]: 2025-10-11 04:25:36.952 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:25:36 compute-0 nova_compute[259850]: 2025-10-11 04:25:36.966 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:25:36 compute-0 nova_compute[259850]: 2025-10-11 04:25:36.987 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:25:36 compute-0 nova_compute[259850]: 2025-10-11 04:25:36.987 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:25:37 compute-0 ceph-mon[74273]: pgmap v1915: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.5 KiB/s wr, 26 op/s
Oct 11 04:25:37 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/86959827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:25:37 compute-0 nova_compute[259850]: 2025-10-11 04:25:37.765 2 DEBUG nova.network.neutron [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Successfully updated port: 944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:25:37 compute-0 nova_compute[259850]: 2025-10-11 04:25:37.787 2 DEBUG oslo_concurrency.lockutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquiring lock "refresh_cache-e9134216-e096-4ca2-a8aa-6fdafcd7b04c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:25:37 compute-0 nova_compute[259850]: 2025-10-11 04:25:37.787 2 DEBUG oslo_concurrency.lockutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquired lock "refresh_cache-e9134216-e096-4ca2-a8aa-6fdafcd7b04c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:25:37 compute-0 nova_compute[259850]: 2025-10-11 04:25:37.787 2 DEBUG nova.network.neutron [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:25:37 compute-0 nova_compute[259850]: 2025-10-11 04:25:37.861 2 DEBUG nova.compute.manager [req-4eab73c3-8acd-4ed5-9692-8c53bc1022d7 req-5b6feca1-0973-44aa-bfd0-8e6284d502ee f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Received event network-changed-944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:25:37 compute-0 nova_compute[259850]: 2025-10-11 04:25:37.861 2 DEBUG nova.compute.manager [req-4eab73c3-8acd-4ed5-9692-8c53bc1022d7 req-5b6feca1-0973-44aa-bfd0-8e6284d502ee f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Refreshing instance network info cache due to event network-changed-944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:25:37 compute-0 nova_compute[259850]: 2025-10-11 04:25:37.861 2 DEBUG oslo_concurrency.lockutils [req-4eab73c3-8acd-4ed5-9692-8c53bc1022d7 req-5b6feca1-0973-44aa-bfd0-8e6284d502ee f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-e9134216-e096-4ca2-a8aa-6fdafcd7b04c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:25:37 compute-0 nova_compute[259850]: 2025-10-11 04:25:37.927 2 DEBUG nova.network.neutron [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:25:38 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 305 active+clean; 130 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.7 MiB/s wr, 39 op/s
Oct 11 04:25:38 compute-0 nova_compute[259850]: 2025-10-11 04:25:38.800 2 DEBUG nova.network.neutron [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Updating instance_info_cache with network_info: [{"id": "944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9", "address": "fa:16:3e:67:66:62", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap944dd3e5-9e", "ovs_interfaceid": "944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:25:38 compute-0 nova_compute[259850]: 2025-10-11 04:25:38.818 2 DEBUG oslo_concurrency.lockutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Releasing lock "refresh_cache-e9134216-e096-4ca2-a8aa-6fdafcd7b04c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:25:38 compute-0 nova_compute[259850]: 2025-10-11 04:25:38.818 2 DEBUG nova.compute.manager [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Instance network_info: |[{"id": "944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9", "address": "fa:16:3e:67:66:62", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap944dd3e5-9e", "ovs_interfaceid": "944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:25:38 compute-0 nova_compute[259850]: 2025-10-11 04:25:38.819 2 DEBUG oslo_concurrency.lockutils [req-4eab73c3-8acd-4ed5-9692-8c53bc1022d7 req-5b6feca1-0973-44aa-bfd0-8e6284d502ee f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-e9134216-e096-4ca2-a8aa-6fdafcd7b04c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:25:38 compute-0 nova_compute[259850]: 2025-10-11 04:25:38.819 2 DEBUG nova.network.neutron [req-4eab73c3-8acd-4ed5-9692-8c53bc1022d7 req-5b6feca1-0973-44aa-bfd0-8e6284d502ee f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Refreshing network info cache for port 944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:25:38 compute-0 nova_compute[259850]: 2025-10-11 04:25:38.822 2 DEBUG nova.virt.libvirt.driver [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Start _get_guest_xml network_info=[{"id": "944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9", "address": "fa:16:3e:67:66:62", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap944dd3e5-9e", "ovs_interfaceid": "944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'image_id': '1a107e2f-1a9d-4b6f-861d-e64bee7d56be'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:25:38 compute-0 nova_compute[259850]: 2025-10-11 04:25:38.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:38 compute-0 nova_compute[259850]: 2025-10-11 04:25:38.866 2 WARNING nova.virt.libvirt.driver [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:25:38 compute-0 nova_compute[259850]: 2025-10-11 04:25:38.872 2 DEBUG nova.virt.libvirt.host [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:25:38 compute-0 nova_compute[259850]: 2025-10-11 04:25:38.873 2 DEBUG nova.virt.libvirt.host [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:25:38 compute-0 nova_compute[259850]: 2025-10-11 04:25:38.877 2 DEBUG nova.virt.libvirt.host [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:25:38 compute-0 nova_compute[259850]: 2025-10-11 04:25:38.878 2 DEBUG nova.virt.libvirt.host [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:25:38 compute-0 nova_compute[259850]: 2025-10-11 04:25:38.878 2 DEBUG nova.virt.libvirt.driver [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:25:38 compute-0 nova_compute[259850]: 2025-10-11 04:25:38.879 2 DEBUG nova.virt.hardware [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T04:01:37Z,direct_url=<?>,disk_format='qcow2',id=1a107e2f-1a9d-4b6f-861d-e64bee7d56be,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e4ac9f6319b648399a8baca50902ce47',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T04:01:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:25:38 compute-0 nova_compute[259850]: 2025-10-11 04:25:38.879 2 DEBUG nova.virt.hardware [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:25:38 compute-0 nova_compute[259850]: 2025-10-11 04:25:38.880 2 DEBUG nova.virt.hardware [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:25:38 compute-0 nova_compute[259850]: 2025-10-11 04:25:38.880 2 DEBUG nova.virt.hardware [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:25:38 compute-0 nova_compute[259850]: 2025-10-11 04:25:38.880 2 DEBUG nova.virt.hardware [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:25:38 compute-0 nova_compute[259850]: 2025-10-11 04:25:38.881 2 DEBUG nova.virt.hardware [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:25:38 compute-0 nova_compute[259850]: 2025-10-11 04:25:38.881 2 DEBUG nova.virt.hardware [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:25:38 compute-0 nova_compute[259850]: 2025-10-11 04:25:38.881 2 DEBUG nova.virt.hardware [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:25:38 compute-0 nova_compute[259850]: 2025-10-11 04:25:38.882 2 DEBUG nova.virt.hardware [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:25:38 compute-0 nova_compute[259850]: 2025-10-11 04:25:38.882 2 DEBUG nova.virt.hardware [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:25:38 compute-0 nova_compute[259850]: 2025-10-11 04:25:38.882 2 DEBUG nova.virt.hardware [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:25:38 compute-0 nova_compute[259850]: 2025-10-11 04:25:38.887 2 DEBUG oslo_concurrency.processutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:25:38 compute-0 nova_compute[259850]: 2025-10-11 04:25:38.992 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:25:38 compute-0 nova_compute[259850]: 2025-10-11 04:25:38.993 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:25:38 compute-0 nova_compute[259850]: 2025-10-11 04:25:38.993 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.010 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.010 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.012 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.012 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:25:39 compute-0 ceph-mon[74273]: pgmap v1916: 305 pgs: 305 active+clean; 130 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.7 MiB/s wr, 39 op/s
Oct 11 04:25:39 compute-0 sudo[306385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:25:39 compute-0 sudo[306385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:25:39 compute-0 sudo[306385]: pam_unix(sudo:session): session closed for user root
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:25:39 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3159772619' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.368 2 DEBUG oslo_concurrency.processutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:25:39 compute-0 sudo[306410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:25:39 compute-0 sudo[306410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:25:39 compute-0 sudo[306410]: pam_unix(sudo:session): session closed for user root
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.396 2 DEBUG nova.storage.rbd_utils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] rbd image e9134216-e096-4ca2-a8aa-6fdafcd7b04c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.406 2 DEBUG oslo_concurrency.processutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:25:39 compute-0 sudo[306455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:25:39 compute-0 sudo[306455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:25:39 compute-0 sudo[306455]: pam_unix(sudo:session): session closed for user root
Oct 11 04:25:39 compute-0 sudo[306481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:25:39 compute-0 sudo[306481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:25:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:25:39 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2188506091' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.882 2 DEBUG oslo_concurrency.processutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.885 2 DEBUG nova.virt.libvirt.vif [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:25:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-713801075',display_name='tempest-TestEncryptedCinderVolumes-server-713801075',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-713801075',id=28,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMJMHbrfIvT0sr9OAjWIFKhwQHBpBnXld+yH6qFtLRHc/PGYRHvOBTdI+nR0jmE3fNmomIpDP4x5vIh6quRMKdDvyUtXcjH0R3ji2qLNxYjzRBvOcNgDEwVgf+rWJVcwAg==',key_name='tempest-keypair-1724388763',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='226e6310b4ee4a68b552a6b3e940a458',ramdisk_id='',reservation_id='r-pyej006m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1931311766',owner_user_name='tempest-TestEncryptedCinderVolumes-1931311766-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:25:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7bf17f3eb8514499a54d67542db6b88a',uuid=e9134216-e096-4ca2-a8aa-6fdafcd7b04c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9", "address": "fa:16:3e:67:66:62", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap944dd3e5-9e", "ovs_interfaceid": "944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.885 2 DEBUG nova.network.os_vif_util [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Converting VIF {"id": "944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9", "address": "fa:16:3e:67:66:62", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap944dd3e5-9e", "ovs_interfaceid": "944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.886 2 DEBUG nova.network.os_vif_util [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:66:62,bridge_name='br-int',has_traffic_filtering=True,id=944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9,network=Network(61e3c4a7-2f2f-451f-b913-c2cdac8efdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap944dd3e5-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.888 2 DEBUG nova.objects.instance [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lazy-loading 'pci_devices' on Instance uuid e9134216-e096-4ca2-a8aa-6fdafcd7b04c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.903 2 DEBUG nova.virt.libvirt.driver [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:25:39 compute-0 nova_compute[259850]:   <uuid>e9134216-e096-4ca2-a8aa-6fdafcd7b04c</uuid>
Oct 11 04:25:39 compute-0 nova_compute[259850]:   <name>instance-0000001c</name>
Oct 11 04:25:39 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:25:39 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:25:39 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <nova:name>tempest-TestEncryptedCinderVolumes-server-713801075</nova:name>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:25:38</nova:creationTime>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:25:39 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:25:39 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:25:39 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:25:39 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:25:39 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:25:39 compute-0 nova_compute[259850]:         <nova:user uuid="7bf17f3eb8514499a54d67542db6b88a">tempest-TestEncryptedCinderVolumes-1931311766-project-member</nova:user>
Oct 11 04:25:39 compute-0 nova_compute[259850]:         <nova:project uuid="226e6310b4ee4a68b552a6b3e940a458">tempest-TestEncryptedCinderVolumes-1931311766</nova:project>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <nova:root type="image" uuid="1a107e2f-1a9d-4b6f-861d-e64bee7d56be"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <nova:ports>
Oct 11 04:25:39 compute-0 nova_compute[259850]:         <nova:port uuid="944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9">
Oct 11 04:25:39 compute-0 nova_compute[259850]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:         </nova:port>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       </nova:ports>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:25:39 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:25:39 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <system>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <entry name="serial">e9134216-e096-4ca2-a8aa-6fdafcd7b04c</entry>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <entry name="uuid">e9134216-e096-4ca2-a8aa-6fdafcd7b04c</entry>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     </system>
Oct 11 04:25:39 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:25:39 compute-0 nova_compute[259850]:   <os>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:   </os>
Oct 11 04:25:39 compute-0 nova_compute[259850]:   <features>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:   </features>
Oct 11 04:25:39 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:25:39 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:25:39 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/e9134216-e096-4ca2-a8aa-6fdafcd7b04c_disk">
Oct 11 04:25:39 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       </source>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:25:39 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/e9134216-e096-4ca2-a8aa-6fdafcd7b04c_disk.config">
Oct 11 04:25:39 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       </source>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:25:39 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <interface type="ethernet">
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <mac address="fa:16:3e:67:66:62"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <mtu size="1442"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <target dev="tap944dd3e5-9e"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     </interface>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/e9134216-e096-4ca2-a8aa-6fdafcd7b04c/console.log" append="off"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <video>
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     </video>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:25:39 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:25:39 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:25:39 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:25:39 compute-0 nova_compute[259850]: </domain>
Oct 11 04:25:39 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.905 2 DEBUG nova.compute.manager [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Preparing to wait for external event network-vif-plugged-944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.905 2 DEBUG oslo_concurrency.lockutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquiring lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.905 2 DEBUG oslo_concurrency.lockutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.906 2 DEBUG oslo_concurrency.lockutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.906 2 DEBUG nova.virt.libvirt.vif [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:25:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-713801075',display_name='tempest-TestEncryptedCinderVolumes-server-713801075',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-713801075',id=28,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMJMHbrfIvT0sr9OAjWIFKhwQHBpBnXld+yH6qFtLRHc/PGYRHvOBTdI+nR0jmE3fNmomIpDP4x5vIh6quRMKdDvyUtXcjH0R3ji2qLNxYjzRBvOcNgDEwVgf+rWJVcwAg==',key_name='tempest-keypair-1724388763',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='226e6310b4ee4a68b552a6b3e940a458',ramdisk_id='',reservation_id='r-pyej006m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1931311766',owner_user_name='tempest-TestEncryptedCinderVolumes-1931311766-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:25:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7bf17f3eb8514499a54d67542db6b88a',uuid=e9134216-e096-4ca2-a8aa-6fdafcd7b04c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9", "address": "fa:16:3e:67:66:62", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap944dd3e5-9e", "ovs_interfaceid": "944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.907 2 DEBUG nova.network.os_vif_util [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Converting VIF {"id": "944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9", "address": "fa:16:3e:67:66:62", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap944dd3e5-9e", "ovs_interfaceid": "944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.907 2 DEBUG nova.network.os_vif_util [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:66:62,bridge_name='br-int',has_traffic_filtering=True,id=944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9,network=Network(61e3c4a7-2f2f-451f-b913-c2cdac8efdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap944dd3e5-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.908 2 DEBUG os_vif [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:66:62,bridge_name='br-int',has_traffic_filtering=True,id=944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9,network=Network(61e3c4a7-2f2f-451f-b913-c2cdac8efdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap944dd3e5-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.909 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.909 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.913 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap944dd3e5-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.914 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap944dd3e5-9e, col_values=(('external_ids', {'iface-id': '944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:66:62', 'vm-uuid': 'e9134216-e096-4ca2-a8aa-6fdafcd7b04c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:25:39 compute-0 NetworkManager[44920]: <info>  [1760156739.9391] manager: (tap944dd3e5-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/139)
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:39 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.948 2 INFO os_vif [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:66:62,bridge_name='br-int',has_traffic_filtering=True,id=944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9,network=Network(61e3c4a7-2f2f-451f-b913-c2cdac8efdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap944dd3e5-9e')
Oct 11 04:25:40 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.999 2 DEBUG nova.virt.libvirt.driver [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:25:40 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.999 2 DEBUG nova.virt.libvirt.driver [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:25:40 compute-0 nova_compute[259850]: 2025-10-11 04:25:39.999 2 DEBUG nova.virt.libvirt.driver [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] No VIF found with MAC fa:16:3e:67:66:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:25:40 compute-0 nova_compute[259850]: 2025-10-11 04:25:40.000 2 INFO nova.virt.libvirt.driver [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Using config drive
Oct 11 04:25:40 compute-0 nova_compute[259850]: 2025-10-11 04:25:40.018 2 DEBUG nova.storage.rbd_utils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] rbd image e9134216-e096-4ca2-a8aa-6fdafcd7b04c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:25:40 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 04:25:40 compute-0 nova_compute[259850]: 2025-10-11 04:25:40.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:25:40 compute-0 sudo[306481]: pam_unix(sudo:session): session closed for user root
Oct 11 04:25:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e447 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:25:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:25:40 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:25:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:25:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:25:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:25:40 compute-0 nova_compute[259850]: 2025-10-11 04:25:40.155 2 DEBUG nova.network.neutron [req-4eab73c3-8acd-4ed5-9692-8c53bc1022d7 req-5b6feca1-0973-44aa-bfd0-8e6284d502ee f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Updated VIF entry in instance network info cache for port 944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:25:40 compute-0 nova_compute[259850]: 2025-10-11 04:25:40.156 2 DEBUG nova.network.neutron [req-4eab73c3-8acd-4ed5-9692-8c53bc1022d7 req-5b6feca1-0973-44aa-bfd0-8e6284d502ee f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Updating instance_info_cache with network_info: [{"id": "944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9", "address": "fa:16:3e:67:66:62", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap944dd3e5-9e", "ovs_interfaceid": "944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:25:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:25:40 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 6d6e200c-3901-42c3-a288-b3d828fe6824 does not exist
Oct 11 04:25:40 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev d95ec0e9-36af-456a-976e-0a5427787e0c does not exist
Oct 11 04:25:40 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 96abda89-5c08-4344-92c3-5e349280d616 does not exist
Oct 11 04:25:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:25:40 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:25:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:25:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:25:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:25:40 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:25:40 compute-0 nova_compute[259850]: 2025-10-11 04:25:40.172 2 DEBUG oslo_concurrency.lockutils [req-4eab73c3-8acd-4ed5-9692-8c53bc1022d7 req-5b6feca1-0973-44aa-bfd0-8e6284d502ee f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-e9134216-e096-4ca2-a8aa-6fdafcd7b04c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:25:40 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3159772619' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:25:40 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2188506091' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:25:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:25:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:25:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:25:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:25:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:25:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:25:40 compute-0 sudo[306578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:25:40 compute-0 sudo[306578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:25:40 compute-0 sudo[306578]: pam_unix(sudo:session): session closed for user root
Oct 11 04:25:40 compute-0 sudo[306603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:25:40 compute-0 sudo[306603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:25:40 compute-0 sudo[306603]: pam_unix(sudo:session): session closed for user root
Oct 11 04:25:40 compute-0 sudo[306628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:25:40 compute-0 sudo[306628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:25:40 compute-0 sudo[306628]: pam_unix(sudo:session): session closed for user root
Oct 11 04:25:40 compute-0 sudo[306653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:25:40 compute-0 sudo[306653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:25:40 compute-0 nova_compute[259850]: 2025-10-11 04:25:40.543 2 INFO nova.virt.libvirt.driver [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Creating config drive at /var/lib/nova/instances/e9134216-e096-4ca2-a8aa-6fdafcd7b04c/disk.config
Oct 11 04:25:40 compute-0 nova_compute[259850]: 2025-10-11 04:25:40.559 2 DEBUG oslo_concurrency.processutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e9134216-e096-4ca2-a8aa-6fdafcd7b04c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphvfdl5iw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:25:40 compute-0 nova_compute[259850]: 2025-10-11 04:25:40.699 2 DEBUG oslo_concurrency.processutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e9134216-e096-4ca2-a8aa-6fdafcd7b04c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphvfdl5iw" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:25:40 compute-0 nova_compute[259850]: 2025-10-11 04:25:40.748 2 DEBUG nova.storage.rbd_utils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] rbd image e9134216-e096-4ca2-a8aa-6fdafcd7b04c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:25:40 compute-0 nova_compute[259850]: 2025-10-11 04:25:40.754 2 DEBUG oslo_concurrency.processutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e9134216-e096-4ca2-a8aa-6fdafcd7b04c/disk.config e9134216-e096-4ca2-a8aa-6fdafcd7b04c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:25:40 compute-0 nova_compute[259850]: 2025-10-11 04:25:40.910 2 DEBUG oslo_concurrency.processutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e9134216-e096-4ca2-a8aa-6fdafcd7b04c/disk.config e9134216-e096-4ca2-a8aa-6fdafcd7b04c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:25:40 compute-0 nova_compute[259850]: 2025-10-11 04:25:40.912 2 INFO nova.virt.libvirt.driver [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Deleting local config drive /var/lib/nova/instances/e9134216-e096-4ca2-a8aa-6fdafcd7b04c/disk.config because it was imported into RBD.
Oct 11 04:25:40 compute-0 kernel: tap944dd3e5-9e: entered promiscuous mode
Oct 11 04:25:40 compute-0 NetworkManager[44920]: <info>  [1760156740.9897] manager: (tap944dd3e5-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/140)
Oct 11 04:25:41 compute-0 ovn_controller[152025]: 2025-10-11T04:25:41Z|00273|binding|INFO|Claiming lport 944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9 for this chassis.
Oct 11 04:25:41 compute-0 ovn_controller[152025]: 2025-10-11T04:25:41Z|00274|binding|INFO|944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9: Claiming fa:16:3e:67:66:62 10.100.0.4
Oct 11 04:25:41 compute-0 nova_compute[259850]: 2025-10-11 04:25:41.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:41 compute-0 podman[306758]: 2025-10-11 04:25:41.018416183 +0000 UTC m=+0.078963885 container create 55863cbe8cf0360d6347d3bdd0abbd7d950f71c42ed75e487b011f081de1d4b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 04:25:41 compute-0 nova_compute[259850]: 2025-10-11 04:25:41.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:41.029 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:66:62 10.100.0.4'], port_security=['fa:16:3e:67:66:62 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e9134216-e096-4ca2-a8aa-6fdafcd7b04c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '226e6310b4ee4a68b552a6b3e940a458', 'neutron:revision_number': '2', 'neutron:security_group_ids': '561ab7dc-72c3-4dd2-96d6-1dfd15b5f2c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17f237ce-6320-4c27-9970-fd94aa8457a3, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:41.030 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9 in datapath 61e3c4a7-2f2f-451f-b913-c2cdac8efdf3 bound to our chassis
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:41.032 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 61e3c4a7-2f2f-451f-b913-c2cdac8efdf3
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:41.048 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[657592ea-610d-41db-a40e-f974ecf37f10]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:41.049 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap61e3c4a7-21 in ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:41.053 267637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap61e3c4a7-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:41.053 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa7f313-9b40-43d5-ba75-6e3261665300]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:41.054 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[76840a77-807e-4613-93a4-14b43b167ec7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:25:41 compute-0 podman[306758]: 2025-10-11 04:25:40.9647197 +0000 UTC m=+0.025267472 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:41.071 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[775dd3d2-e613-4d8a-b343-dd65a7650f01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:25:41 compute-0 nova_compute[259850]: 2025-10-11 04:25:41.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:41 compute-0 systemd[1]: Started libpod-conmon-55863cbe8cf0360d6347d3bdd0abbd7d950f71c42ed75e487b011f081de1d4b8.scope.
Oct 11 04:25:41 compute-0 ovn_controller[152025]: 2025-10-11T04:25:41Z|00275|binding|INFO|Setting lport 944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9 ovn-installed in OVS
Oct 11 04:25:41 compute-0 ovn_controller[152025]: 2025-10-11T04:25:41Z|00276|binding|INFO|Setting lport 944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9 up in Southbound
Oct 11 04:25:41 compute-0 nova_compute[259850]: 2025-10-11 04:25:41.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:41 compute-0 systemd-machined[214869]: New machine qemu-28-instance-0000001c.
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:41.104 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[2b6faafc-837f-49cd-b8e8-29a0812526bb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:25:41 compute-0 systemd[1]: Started Virtual Machine qemu-28-instance-0000001c.
Oct 11 04:25:41 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:41 compute-0 systemd-udevd[306795]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:25:41 compute-0 podman[306758]: 2025-10-11 04:25:41.142044515 +0000 UTC m=+0.202592247 container init 55863cbe8cf0360d6347d3bdd0abbd7d950f71c42ed75e487b011f081de1d4b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:25:41 compute-0 podman[306758]: 2025-10-11 04:25:41.152568252 +0000 UTC m=+0.213115944 container start 55863cbe8cf0360d6347d3bdd0abbd7d950f71c42ed75e487b011f081de1d4b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_varahamihira, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 04:25:41 compute-0 NetworkManager[44920]: <info>  [1760156741.1544] device (tap944dd3e5-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:41.153 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[175e650a-e8a6-4c11-9206-921cbecf4dc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:25:41 compute-0 NetworkManager[44920]: <info>  [1760156741.1555] device (tap944dd3e5-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:25:41 compute-0 podman[306758]: 2025-10-11 04:25:41.158812607 +0000 UTC m=+0.219360339 container attach 55863cbe8cf0360d6347d3bdd0abbd7d950f71c42ed75e487b011f081de1d4b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:25:41 compute-0 amazing_varahamihira[306790]: 167 167
Oct 11 04:25:41 compute-0 systemd[1]: libpod-55863cbe8cf0360d6347d3bdd0abbd7d950f71c42ed75e487b011f081de1d4b8.scope: Deactivated successfully.
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:41.162 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[4958b13c-d762-4bc5-aa78-3a4d6b0722d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:25:41 compute-0 podman[306758]: 2025-10-11 04:25:41.165321281 +0000 UTC m=+0.225868983 container died 55863cbe8cf0360d6347d3bdd0abbd7d950f71c42ed75e487b011f081de1d4b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:25:41 compute-0 NetworkManager[44920]: <info>  [1760156741.1663] manager: (tap61e3c4a7-20): new Veth device (/org/freedesktop/NetworkManager/Devices/141)
Oct 11 04:25:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-1102afdc07491f8bdacd3f6aeec046cc863b4c8b7dbd416a88d6869d608eefb4-merged.mount: Deactivated successfully.
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:41.215 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[747b8d89-e385-4a77-ac7c-18b425b8f2a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:41.220 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[70200949-851b-44aa-ae85-868492c4ae39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:25:41 compute-0 podman[306758]: 2025-10-11 04:25:41.229389925 +0000 UTC m=+0.289937657 container remove 55863cbe8cf0360d6347d3bdd0abbd7d950f71c42ed75e487b011f081de1d4b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 04:25:41 compute-0 ceph-mon[74273]: pgmap v1917: 305 pgs: 305 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 04:25:41 compute-0 systemd[1]: libpod-conmon-55863cbe8cf0360d6347d3bdd0abbd7d950f71c42ed75e487b011f081de1d4b8.scope: Deactivated successfully.
Oct 11 04:25:41 compute-0 NetworkManager[44920]: <info>  [1760156741.2542] device (tap61e3c4a7-20): carrier: link connected
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:41.261 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[445d22c7-8ea7-4fb7-97fc-3feb3309f974]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:41.278 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[8a1a43eb-7a41-4890-ae28-ef3886cd045c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61e3c4a7-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:30:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 90], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512629, 'reachable_time': 33858, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306839, 'error': None, 'target': 'ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:41.296 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[f7d4e1c7-2f89-4866-b1dd-545abf18a4fa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1d:3090'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 512629, 'tstamp': 512629}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306840, 'error': None, 'target': 'ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:25:41 compute-0 nova_compute[259850]: 2025-10-11 04:25:41.317 2 DEBUG nova.compute.manager [req-4b60ec80-6803-4d66-b38b-3652d22841f6 req-d3b44e60-ebd9-4438-8f0d-1ffc69603a2c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Received event network-vif-plugged-944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:25:41 compute-0 nova_compute[259850]: 2025-10-11 04:25:41.317 2 DEBUG oslo_concurrency.lockutils [req-4b60ec80-6803-4d66-b38b-3652d22841f6 req-d3b44e60-ebd9-4438-8f0d-1ffc69603a2c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:25:41 compute-0 nova_compute[259850]: 2025-10-11 04:25:41.318 2 DEBUG oslo_concurrency.lockutils [req-4b60ec80-6803-4d66-b38b-3652d22841f6 req-d3b44e60-ebd9-4438-8f0d-1ffc69603a2c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:25:41 compute-0 nova_compute[259850]: 2025-10-11 04:25:41.318 2 DEBUG oslo_concurrency.lockutils [req-4b60ec80-6803-4d66-b38b-3652d22841f6 req-d3b44e60-ebd9-4438-8f0d-1ffc69603a2c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:25:41 compute-0 nova_compute[259850]: 2025-10-11 04:25:41.319 2 DEBUG nova.compute.manager [req-4b60ec80-6803-4d66-b38b-3652d22841f6 req-d3b44e60-ebd9-4438-8f0d-1ffc69603a2c f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Processing event network-vif-plugged-944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:41.320 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[2717a950-2376-4cdb-8cea-814b80eca552]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61e3c4a7-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:30:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 90], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512629, 'reachable_time': 33858, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306841, 'error': None, 'target': 'ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:41.375 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[7db56ff2-58e6-4e7c-8ecf-93551690e441]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:25:41 compute-0 podman[306849]: 2025-10-11 04:25:41.432887336 +0000 UTC m=+0.051775769 container create 0ecb7723549e3497701cd2cec4c4b5c9eae5b3249d4ba84cda7774d8ae0d6140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_keller, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:41.452 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[c91f08fa-c180-4302-9126-012af82ee8b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:41.453 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61e3c4a7-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:41.453 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:41.454 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61e3c4a7-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:25:41 compute-0 NetworkManager[44920]: <info>  [1760156741.4567] manager: (tap61e3c4a7-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/142)
Oct 11 04:25:41 compute-0 kernel: tap61e3c4a7-20: entered promiscuous mode
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:41.461 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap61e3c4a7-20, col_values=(('external_ids', {'iface-id': 'd6a2f98f-398c-4cad-9cd4-adac499bc3d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:25:41 compute-0 ovn_controller[152025]: 2025-10-11T04:25:41Z|00277|binding|INFO|Releasing lport d6a2f98f-398c-4cad-9cd4-adac499bc3d4 from this chassis (sb_readonly=0)
Oct 11 04:25:41 compute-0 nova_compute[259850]: 2025-10-11 04:25:41.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:41 compute-0 nova_compute[259850]: 2025-10-11 04:25:41.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:41.476 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/61e3c4a7-2f2f-451f-b913-c2cdac8efdf3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/61e3c4a7-2f2f-451f-b913-c2cdac8efdf3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:41.477 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[fdc3d7f3-37e6-4de6-9c39-9fc8ee821a67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:41.478 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: global
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]:     log         /dev/log local0 debug
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]:     log-tag     haproxy-metadata-proxy-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]:     user        root
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]:     group       root
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]:     maxconn     1024
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]:     pidfile     /var/lib/neutron/external/pids/61e3c4a7-2f2f-451f-b913-c2cdac8efdf3.pid.haproxy
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]:     daemon
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: defaults
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]:     log global
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]:     mode http
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]:     option httplog
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]:     option dontlognull
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]:     option http-server-close
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]:     option forwardfor
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]:     retries                 3
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]:     timeout http-request    30s
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]:     timeout connect         30s
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]:     timeout client          32s
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]:     timeout server          32s
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]:     timeout http-keep-alive 30s
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: listen listener
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]:     bind 169.254.169.254:80
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]:     http-request add-header X-OVN-Network-ID 61e3c4a7-2f2f-451f-b913-c2cdac8efdf3
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 04:25:41 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:25:41.479 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3', 'env', 'PROCESS_TAG=haproxy-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/61e3c4a7-2f2f-451f-b913-c2cdac8efdf3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 04:25:41 compute-0 systemd[1]: Started libpod-conmon-0ecb7723549e3497701cd2cec4c4b5c9eae5b3249d4ba84cda7774d8ae0d6140.scope.
Oct 11 04:25:41 compute-0 podman[306849]: 2025-10-11 04:25:41.406055281 +0000 UTC m=+0.024943754 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:25:41 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef300e91390be7f9f9d0ae6c7d387de264dc51265a24419e3cf2eb1dcdedb89a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef300e91390be7f9f9d0ae6c7d387de264dc51265a24419e3cf2eb1dcdedb89a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef300e91390be7f9f9d0ae6c7d387de264dc51265a24419e3cf2eb1dcdedb89a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef300e91390be7f9f9d0ae6c7d387de264dc51265a24419e3cf2eb1dcdedb89a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef300e91390be7f9f9d0ae6c7d387de264dc51265a24419e3cf2eb1dcdedb89a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:41 compute-0 podman[306849]: 2025-10-11 04:25:41.546139877 +0000 UTC m=+0.165028350 container init 0ecb7723549e3497701cd2cec4c4b5c9eae5b3249d4ba84cda7774d8ae0d6140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:25:41 compute-0 podman[306849]: 2025-10-11 04:25:41.554316707 +0000 UTC m=+0.173205150 container start 0ecb7723549e3497701cd2cec4c4b5c9eae5b3249d4ba84cda7774d8ae0d6140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_keller, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 04:25:41 compute-0 podman[306849]: 2025-10-11 04:25:41.557869207 +0000 UTC m=+0.176757630 container attach 0ecb7723549e3497701cd2cec4c4b5c9eae5b3249d4ba84cda7774d8ae0d6140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_keller, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:25:41 compute-0 podman[306867]: 2025-10-11 04:25:41.621256742 +0000 UTC m=+0.138949524 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 04:25:41 compute-0 podman[306965]: 2025-10-11 04:25:41.876336848 +0000 UTC m=+0.053058616 container create f3171a96d183a04d64a8fcab286821806ea8b4af5244bdd27229216365bb85f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 11 04:25:41 compute-0 systemd[1]: Started libpod-conmon-f3171a96d183a04d64a8fcab286821806ea8b4af5244bdd27229216365bb85f1.scope.
Oct 11 04:25:41 compute-0 podman[306965]: 2025-10-11 04:25:41.848492363 +0000 UTC m=+0.025214131 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 04:25:41 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b01eb05d5107854d886d6c4833bbb6c9e2a0e0bef2b2a75205f8c4a1b9608809/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:41 compute-0 podman[306965]: 2025-10-11 04:25:41.964083409 +0000 UTC m=+0.140805187 container init f3171a96d183a04d64a8fcab286821806ea8b4af5244bdd27229216365bb85f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 11 04:25:41 compute-0 podman[306965]: 2025-10-11 04:25:41.969031509 +0000 UTC m=+0.145753287 container start f3171a96d183a04d64a8fcab286821806ea8b4af5244bdd27229216365bb85f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 11 04:25:41 compute-0 neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3[306981]: [NOTICE]   (306985) : New worker (306987) forked
Oct 11 04:25:41 compute-0 neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3[306981]: [NOTICE]   (306985) : Loading success.
Oct 11 04:25:42 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 305 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 04:25:42 compute-0 nova_compute[259850]: 2025-10-11 04:25:42.155 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156742.1550071, e9134216-e096-4ca2-a8aa-6fdafcd7b04c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:25:42 compute-0 nova_compute[259850]: 2025-10-11 04:25:42.156 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] VM Started (Lifecycle Event)
Oct 11 04:25:42 compute-0 nova_compute[259850]: 2025-10-11 04:25:42.159 2 DEBUG nova.compute.manager [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:25:42 compute-0 nova_compute[259850]: 2025-10-11 04:25:42.164 2 DEBUG nova.virt.libvirt.driver [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:25:42 compute-0 nova_compute[259850]: 2025-10-11 04:25:42.175 2 INFO nova.virt.libvirt.driver [-] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Instance spawned successfully.
Oct 11 04:25:42 compute-0 nova_compute[259850]: 2025-10-11 04:25:42.175 2 DEBUG nova.virt.libvirt.driver [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:25:42 compute-0 nova_compute[259850]: 2025-10-11 04:25:42.308 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:25:42 compute-0 nova_compute[259850]: 2025-10-11 04:25:42.313 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:25:42 compute-0 nova_compute[259850]: 2025-10-11 04:25:42.317 2 DEBUG nova.virt.libvirt.driver [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:25:42 compute-0 nova_compute[259850]: 2025-10-11 04:25:42.318 2 DEBUG nova.virt.libvirt.driver [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:25:42 compute-0 nova_compute[259850]: 2025-10-11 04:25:42.318 2 DEBUG nova.virt.libvirt.driver [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:25:42 compute-0 nova_compute[259850]: 2025-10-11 04:25:42.319 2 DEBUG nova.virt.libvirt.driver [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:25:42 compute-0 nova_compute[259850]: 2025-10-11 04:25:42.320 2 DEBUG nova.virt.libvirt.driver [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:25:42 compute-0 nova_compute[259850]: 2025-10-11 04:25:42.320 2 DEBUG nova.virt.libvirt.driver [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:25:42 compute-0 nova_compute[259850]: 2025-10-11 04:25:42.362 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:25:42 compute-0 nova_compute[259850]: 2025-10-11 04:25:42.363 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156742.156384, e9134216-e096-4ca2-a8aa-6fdafcd7b04c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:25:42 compute-0 nova_compute[259850]: 2025-10-11 04:25:42.363 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] VM Paused (Lifecycle Event)
Oct 11 04:25:42 compute-0 nova_compute[259850]: 2025-10-11 04:25:42.387 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:25:42 compute-0 nova_compute[259850]: 2025-10-11 04:25:42.391 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156742.1626854, e9134216-e096-4ca2-a8aa-6fdafcd7b04c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:25:42 compute-0 nova_compute[259850]: 2025-10-11 04:25:42.392 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] VM Resumed (Lifecycle Event)
Oct 11 04:25:42 compute-0 nova_compute[259850]: 2025-10-11 04:25:42.414 2 INFO nova.compute.manager [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Took 6.59 seconds to spawn the instance on the hypervisor.
Oct 11 04:25:42 compute-0 nova_compute[259850]: 2025-10-11 04:25:42.416 2 DEBUG nova.compute.manager [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:25:42 compute-0 nova_compute[259850]: 2025-10-11 04:25:42.417 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:25:42 compute-0 nova_compute[259850]: 2025-10-11 04:25:42.425 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:25:42 compute-0 nova_compute[259850]: 2025-10-11 04:25:42.448 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:25:42 compute-0 nova_compute[259850]: 2025-10-11 04:25:42.476 2 INFO nova.compute.manager [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Took 7.63 seconds to build instance.
Oct 11 04:25:42 compute-0 nova_compute[259850]: 2025-10-11 04:25:42.493 2 DEBUG oslo_concurrency.lockutils [None req-2587ab64-fb3f-4084-a6cf-cf3c7c6847c6 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:25:42 compute-0 elastic_keller[306876]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:25:42 compute-0 elastic_keller[306876]: --> relative data size: 1.0
Oct 11 04:25:42 compute-0 elastic_keller[306876]: --> All data devices are unavailable
Oct 11 04:25:42 compute-0 systemd[1]: libpod-0ecb7723549e3497701cd2cec4c4b5c9eae5b3249d4ba84cda7774d8ae0d6140.scope: Deactivated successfully.
Oct 11 04:25:42 compute-0 systemd[1]: libpod-0ecb7723549e3497701cd2cec4c4b5c9eae5b3249d4ba84cda7774d8ae0d6140.scope: Consumed 1.109s CPU time.
Oct 11 04:25:42 compute-0 conmon[306876]: conmon 0ecb7723549e3497701c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ecb7723549e3497701cd2cec4c4b5c9eae5b3249d4ba84cda7774d8ae0d6140.scope/container/memory.events
Oct 11 04:25:42 compute-0 podman[306849]: 2025-10-11 04:25:42.728831941 +0000 UTC m=+1.347720424 container died 0ecb7723549e3497701cd2cec4c4b5c9eae5b3249d4ba84cda7774d8ae0d6140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_keller, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:25:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef300e91390be7f9f9d0ae6c7d387de264dc51265a24419e3cf2eb1dcdedb89a-merged.mount: Deactivated successfully.
Oct 11 04:25:42 compute-0 podman[306849]: 2025-10-11 04:25:42.808646569 +0000 UTC m=+1.427535032 container remove 0ecb7723549e3497701cd2cec4c4b5c9eae5b3249d4ba84cda7774d8ae0d6140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_keller, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:25:42 compute-0 systemd[1]: libpod-conmon-0ecb7723549e3497701cd2cec4c4b5c9eae5b3249d4ba84cda7774d8ae0d6140.scope: Deactivated successfully.
Oct 11 04:25:42 compute-0 sudo[306653]: pam_unix(sudo:session): session closed for user root
Oct 11 04:25:42 compute-0 sudo[307034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:25:42 compute-0 sudo[307034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:25:42 compute-0 sudo[307034]: pam_unix(sudo:session): session closed for user root
Oct 11 04:25:42 compute-0 sudo[307059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:25:42 compute-0 sudo[307059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:25:42 compute-0 sudo[307059]: pam_unix(sudo:session): session closed for user root
Oct 11 04:25:43 compute-0 sudo[307084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:25:43 compute-0 sudo[307084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:25:43 compute-0 sudo[307084]: pam_unix(sudo:session): session closed for user root
Oct 11 04:25:43 compute-0 nova_compute[259850]: 2025-10-11 04:25:43.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:25:43 compute-0 sudo[307109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 04:25:43 compute-0 sudo[307109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:25:43 compute-0 ceph-mon[74273]: pgmap v1918: 305 pgs: 305 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 04:25:43 compute-0 nova_compute[259850]: 2025-10-11 04:25:43.410 2 DEBUG nova.compute.manager [req-b5a07273-eabc-41ed-955f-30d8d467b604 req-231aa070-cddc-46bf-ad8b-e9b393d464fb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Received event network-vif-plugged-944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:25:43 compute-0 nova_compute[259850]: 2025-10-11 04:25:43.412 2 DEBUG oslo_concurrency.lockutils [req-b5a07273-eabc-41ed-955f-30d8d467b604 req-231aa070-cddc-46bf-ad8b-e9b393d464fb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:25:43 compute-0 nova_compute[259850]: 2025-10-11 04:25:43.412 2 DEBUG oslo_concurrency.lockutils [req-b5a07273-eabc-41ed-955f-30d8d467b604 req-231aa070-cddc-46bf-ad8b-e9b393d464fb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:25:43 compute-0 nova_compute[259850]: 2025-10-11 04:25:43.412 2 DEBUG oslo_concurrency.lockutils [req-b5a07273-eabc-41ed-955f-30d8d467b604 req-231aa070-cddc-46bf-ad8b-e9b393d464fb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:25:43 compute-0 nova_compute[259850]: 2025-10-11 04:25:43.413 2 DEBUG nova.compute.manager [req-b5a07273-eabc-41ed-955f-30d8d467b604 req-231aa070-cddc-46bf-ad8b-e9b393d464fb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] No waiting events found dispatching network-vif-plugged-944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:25:43 compute-0 nova_compute[259850]: 2025-10-11 04:25:43.413 2 WARNING nova.compute.manager [req-b5a07273-eabc-41ed-955f-30d8d467b604 req-231aa070-cddc-46bf-ad8b-e9b393d464fb f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Received unexpected event network-vif-plugged-944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9 for instance with vm_state active and task_state None.
Oct 11 04:25:43 compute-0 podman[307174]: 2025-10-11 04:25:43.61349371 +0000 UTC m=+0.071658560 container create 94966c7475b9e35a2feda452976da75d3ea132d13b3d64f81dc3c38ad25d890d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:25:43 compute-0 systemd[1]: Started libpod-conmon-94966c7475b9e35a2feda452976da75d3ea132d13b3d64f81dc3c38ad25d890d.scope.
Oct 11 04:25:43 compute-0 podman[307174]: 2025-10-11 04:25:43.58618378 +0000 UTC m=+0.044348720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:25:43 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:43 compute-0 podman[307174]: 2025-10-11 04:25:43.716497511 +0000 UTC m=+0.174662451 container init 94966c7475b9e35a2feda452976da75d3ea132d13b3d64f81dc3c38ad25d890d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:25:43 compute-0 podman[307174]: 2025-10-11 04:25:43.727072579 +0000 UTC m=+0.185237459 container start 94966c7475b9e35a2feda452976da75d3ea132d13b3d64f81dc3c38ad25d890d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lehmann, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:25:43 compute-0 podman[307174]: 2025-10-11 04:25:43.730756083 +0000 UTC m=+0.188920973 container attach 94966c7475b9e35a2feda452976da75d3ea132d13b3d64f81dc3c38ad25d890d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:25:43 compute-0 gracious_lehmann[307191]: 167 167
Oct 11 04:25:43 compute-0 systemd[1]: libpod-94966c7475b9e35a2feda452976da75d3ea132d13b3d64f81dc3c38ad25d890d.scope: Deactivated successfully.
Oct 11 04:25:43 compute-0 podman[307174]: 2025-10-11 04:25:43.733729496 +0000 UTC m=+0.191894386 container died 94966c7475b9e35a2feda452976da75d3ea132d13b3d64f81dc3c38ad25d890d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lehmann, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 04:25:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-cce0f70597493cb6ff7db00890da594c0abd74f216e9a6accd6528173f6cd6d9-merged.mount: Deactivated successfully.
Oct 11 04:25:43 compute-0 podman[307174]: 2025-10-11 04:25:43.779158796 +0000 UTC m=+0.237323656 container remove 94966c7475b9e35a2feda452976da75d3ea132d13b3d64f81dc3c38ad25d890d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:25:43 compute-0 systemd[1]: libpod-conmon-94966c7475b9e35a2feda452976da75d3ea132d13b3d64f81dc3c38ad25d890d.scope: Deactivated successfully.
Oct 11 04:25:43 compute-0 nova_compute[259850]: 2025-10-11 04:25:43.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:43 compute-0 podman[307214]: 2025-10-11 04:25:43.972629926 +0000 UTC m=+0.047895810 container create 57ad162e99c97f3d685e3e98eac07b3dd8c108df8c6df55a16aaed5b2bec7329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ganguly, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:25:44 compute-0 systemd[1]: Started libpod-conmon-57ad162e99c97f3d685e3e98eac07b3dd8c108df8c6df55a16aaed5b2bec7329.scope.
Oct 11 04:25:44 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 11 04:25:44 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:44 compute-0 podman[307214]: 2025-10-11 04:25:43.952563251 +0000 UTC m=+0.027829155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038e6fcee9bc94a0e1630e8b063b30fa4bf70f400dcc732f68e3c95c17a33448/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038e6fcee9bc94a0e1630e8b063b30fa4bf70f400dcc732f68e3c95c17a33448/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038e6fcee9bc94a0e1630e8b063b30fa4bf70f400dcc732f68e3c95c17a33448/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038e6fcee9bc94a0e1630e8b063b30fa4bf70f400dcc732f68e3c95c17a33448/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:44 compute-0 podman[307214]: 2025-10-11 04:25:44.087943734 +0000 UTC m=+0.163209688 container init 57ad162e99c97f3d685e3e98eac07b3dd8c108df8c6df55a16aaed5b2bec7329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:25:44 compute-0 podman[307214]: 2025-10-11 04:25:44.099583942 +0000 UTC m=+0.174849846 container start 57ad162e99c97f3d685e3e98eac07b3dd8c108df8c6df55a16aaed5b2bec7329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ganguly, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 04:25:44 compute-0 podman[307214]: 2025-10-11 04:25:44.103113751 +0000 UTC m=+0.178379715 container attach 57ad162e99c97f3d685e3e98eac07b3dd8c108df8c6df55a16aaed5b2bec7329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:25:44 compute-0 NetworkManager[44920]: <info>  [1760156744.2254] manager: (patch-provnet-86cd831a-6a58-4ba8-a51c-57fa1a3acacc-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/143)
Oct 11 04:25:44 compute-0 NetworkManager[44920]: <info>  [1760156744.2264] manager: (patch-br-int-to-provnet-86cd831a-6a58-4ba8-a51c-57fa1a3acacc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/144)
Oct 11 04:25:44 compute-0 nova_compute[259850]: 2025-10-11 04:25:44.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:44 compute-0 nova_compute[259850]: 2025-10-11 04:25:44.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:44 compute-0 ovn_controller[152025]: 2025-10-11T04:25:44Z|00278|binding|INFO|Releasing lport d6a2f98f-398c-4cad-9cd4-adac499bc3d4 from this chassis (sb_readonly=0)
Oct 11 04:25:44 compute-0 nova_compute[259850]: 2025-10-11 04:25:44.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:44 compute-0 charming_ganguly[307231]: {
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:     "0": [
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:         {
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "devices": [
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "/dev/loop3"
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             ],
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "lv_name": "ceph_lv0",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "lv_size": "21470642176",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "name": "ceph_lv0",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "tags": {
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.cluster_name": "ceph",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.crush_device_class": "",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.encrypted": "0",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.osd_id": "0",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.type": "block",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.vdo": "0"
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             },
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "type": "block",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "vg_name": "ceph_vg0"
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:         }
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:     ],
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:     "1": [
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:         {
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "devices": [
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "/dev/loop4"
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             ],
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "lv_name": "ceph_lv1",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "lv_size": "21470642176",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "name": "ceph_lv1",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "tags": {
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.cluster_name": "ceph",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.crush_device_class": "",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.encrypted": "0",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.osd_id": "1",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.type": "block",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.vdo": "0"
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             },
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "type": "block",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "vg_name": "ceph_vg1"
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:         }
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:     ],
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:     "2": [
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:         {
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "devices": [
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "/dev/loop5"
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             ],
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "lv_name": "ceph_lv2",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "lv_size": "21470642176",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "name": "ceph_lv2",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "tags": {
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.cluster_name": "ceph",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.crush_device_class": "",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.encrypted": "0",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.osd_id": "2",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.type": "block",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:                 "ceph.vdo": "0"
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             },
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "type": "block",
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:             "vg_name": "ceph_vg2"
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:         }
Oct 11 04:25:44 compute-0 charming_ganguly[307231]:     ]
Oct 11 04:25:44 compute-0 charming_ganguly[307231]: }
Oct 11 04:25:44 compute-0 nova_compute[259850]: 2025-10-11 04:25:44.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:44 compute-0 systemd[1]: libpod-57ad162e99c97f3d685e3e98eac07b3dd8c108df8c6df55a16aaed5b2bec7329.scope: Deactivated successfully.
Oct 11 04:25:44 compute-0 podman[307214]: 2025-10-11 04:25:44.956587751 +0000 UTC m=+1.031853635 container died 57ad162e99c97f3d685e3e98eac07b3dd8c108df8c6df55a16aaed5b2bec7329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 04:25:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-038e6fcee9bc94a0e1630e8b063b30fa4bf70f400dcc732f68e3c95c17a33448-merged.mount: Deactivated successfully.
Oct 11 04:25:45 compute-0 podman[307214]: 2025-10-11 04:25:45.023101634 +0000 UTC m=+1.098367548 container remove 57ad162e99c97f3d685e3e98eac07b3dd8c108df8c6df55a16aaed5b2bec7329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ganguly, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:25:45 compute-0 systemd[1]: libpod-conmon-57ad162e99c97f3d685e3e98eac07b3dd8c108df8c6df55a16aaed5b2bec7329.scope: Deactivated successfully.
Oct 11 04:25:45 compute-0 sudo[307109]: pam_unix(sudo:session): session closed for user root
Oct 11 04:25:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e447 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:25:45 compute-0 sudo[307252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:25:45 compute-0 sudo[307252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:25:45 compute-0 sudo[307252]: pam_unix(sudo:session): session closed for user root
Oct 11 04:25:45 compute-0 sudo[307277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:25:45 compute-0 ceph-mon[74273]: pgmap v1919: 305 pgs: 305 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 11 04:25:45 compute-0 sudo[307277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:25:45 compute-0 sudo[307277]: pam_unix(sudo:session): session closed for user root
Oct 11 04:25:45 compute-0 sudo[307302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:25:45 compute-0 sudo[307302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:25:45 compute-0 sudo[307302]: pam_unix(sudo:session): session closed for user root
Oct 11 04:25:45 compute-0 sudo[307327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 04:25:45 compute-0 sudo[307327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:25:45 compute-0 nova_compute[259850]: 2025-10-11 04:25:45.482 2 DEBUG nova.compute.manager [req-b66fa024-0876-4c72-a4b8-c00dc74ca16c req-9d95207b-2fb6-40b1-b942-29e7640b0384 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Received event network-changed-944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:25:45 compute-0 nova_compute[259850]: 2025-10-11 04:25:45.482 2 DEBUG nova.compute.manager [req-b66fa024-0876-4c72-a4b8-c00dc74ca16c req-9d95207b-2fb6-40b1-b942-29e7640b0384 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Refreshing instance network info cache due to event network-changed-944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:25:45 compute-0 nova_compute[259850]: 2025-10-11 04:25:45.482 2 DEBUG oslo_concurrency.lockutils [req-b66fa024-0876-4c72-a4b8-c00dc74ca16c req-9d95207b-2fb6-40b1-b942-29e7640b0384 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-e9134216-e096-4ca2-a8aa-6fdafcd7b04c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:25:45 compute-0 nova_compute[259850]: 2025-10-11 04:25:45.483 2 DEBUG oslo_concurrency.lockutils [req-b66fa024-0876-4c72-a4b8-c00dc74ca16c req-9d95207b-2fb6-40b1-b942-29e7640b0384 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-e9134216-e096-4ca2-a8aa-6fdafcd7b04c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:25:45 compute-0 nova_compute[259850]: 2025-10-11 04:25:45.483 2 DEBUG nova.network.neutron [req-b66fa024-0876-4c72-a4b8-c00dc74ca16c req-9d95207b-2fb6-40b1-b942-29e7640b0384 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Refreshing network info cache for port 944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:25:45 compute-0 podman[307394]: 2025-10-11 04:25:45.935930567 +0000 UTC m=+0.069451758 container create 40d921f07f93e33dca6263e4477e526abfe1dd21476b5ab690dae2f7081b18d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sinoussi, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 04:25:45 compute-0 systemd[1]: Started libpod-conmon-40d921f07f93e33dca6263e4477e526abfe1dd21476b5ab690dae2f7081b18d9.scope.
Oct 11 04:25:45 compute-0 podman[307394]: 2025-10-11 04:25:45.902543616 +0000 UTC m=+0.036064847 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:25:46 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:46 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 11 04:25:46 compute-0 podman[307394]: 2025-10-11 04:25:46.033360311 +0000 UTC m=+0.166881562 container init 40d921f07f93e33dca6263e4477e526abfe1dd21476b5ab690dae2f7081b18d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:25:46 compute-0 podman[307394]: 2025-10-11 04:25:46.045106202 +0000 UTC m=+0.178627383 container start 40d921f07f93e33dca6263e4477e526abfe1dd21476b5ab690dae2f7081b18d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sinoussi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 04:25:46 compute-0 podman[307394]: 2025-10-11 04:25:46.049625229 +0000 UTC m=+0.183146420 container attach 40d921f07f93e33dca6263e4477e526abfe1dd21476b5ab690dae2f7081b18d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sinoussi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 04:25:46 compute-0 systemd[1]: libpod-40d921f07f93e33dca6263e4477e526abfe1dd21476b5ab690dae2f7081b18d9.scope: Deactivated successfully.
Oct 11 04:25:46 compute-0 eloquent_sinoussi[307410]: 167 167
Oct 11 04:25:46 compute-0 conmon[307410]: conmon 40d921f07f93e33dca62 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-40d921f07f93e33dca6263e4477e526abfe1dd21476b5ab690dae2f7081b18d9.scope/container/memory.events
Oct 11 04:25:46 compute-0 podman[307394]: 2025-10-11 04:25:46.054326982 +0000 UTC m=+0.187848163 container died 40d921f07f93e33dca6263e4477e526abfe1dd21476b5ab690dae2f7081b18d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sinoussi, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:25:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6f5bfb26f7df9d5b79300471b0f7e5ad02d63cfef23aa0ff544c14f43937969-merged.mount: Deactivated successfully.
Oct 11 04:25:46 compute-0 podman[307394]: 2025-10-11 04:25:46.102133358 +0000 UTC m=+0.235654539 container remove 40d921f07f93e33dca6263e4477e526abfe1dd21476b5ab690dae2f7081b18d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 04:25:46 compute-0 systemd[1]: libpod-conmon-40d921f07f93e33dca6263e4477e526abfe1dd21476b5ab690dae2f7081b18d9.scope: Deactivated successfully.
Oct 11 04:25:46 compute-0 podman[307434]: 2025-10-11 04:25:46.366498905 +0000 UTC m=+0.074396517 container create 3032bb26bc25b9bb97b71e77aba58885636a782024e578b8acd91897404439d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shamir, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 04:25:46 compute-0 podman[307434]: 2025-10-11 04:25:46.336830959 +0000 UTC m=+0.044728631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:25:46 compute-0 systemd[1]: Started libpod-conmon-3032bb26bc25b9bb97b71e77aba58885636a782024e578b8acd91897404439d3.scope.
Oct 11 04:25:46 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:25:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24e0b2d77eaeb4b312d7d32fdaeffcdd18fe08f9bf02626fda2dbcd58143c2a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24e0b2d77eaeb4b312d7d32fdaeffcdd18fe08f9bf02626fda2dbcd58143c2a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24e0b2d77eaeb4b312d7d32fdaeffcdd18fe08f9bf02626fda2dbcd58143c2a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24e0b2d77eaeb4b312d7d32fdaeffcdd18fe08f9bf02626fda2dbcd58143c2a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:46 compute-0 podman[307434]: 2025-10-11 04:25:46.479631142 +0000 UTC m=+0.187528774 container init 3032bb26bc25b9bb97b71e77aba58885636a782024e578b8acd91897404439d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shamir, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:25:46 compute-0 podman[307449]: 2025-10-11 04:25:46.493180313 +0000 UTC m=+0.076634109 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:25:46 compute-0 podman[307434]: 2025-10-11 04:25:46.49482031 +0000 UTC m=+0.202717892 container start 3032bb26bc25b9bb97b71e77aba58885636a782024e578b8acd91897404439d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shamir, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 04:25:46 compute-0 podman[307434]: 2025-10-11 04:25:46.49801824 +0000 UTC m=+0.205915842 container attach 3032bb26bc25b9bb97b71e77aba58885636a782024e578b8acd91897404439d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shamir, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:25:47 compute-0 ceph-mon[74273]: pgmap v1920: 305 pgs: 305 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 11 04:25:47 compute-0 youthful_shamir[307462]: {
Oct 11 04:25:47 compute-0 youthful_shamir[307462]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 04:25:47 compute-0 youthful_shamir[307462]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:25:47 compute-0 youthful_shamir[307462]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:25:47 compute-0 youthful_shamir[307462]:         "osd_id": 1,
Oct 11 04:25:47 compute-0 youthful_shamir[307462]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:25:47 compute-0 youthful_shamir[307462]:         "type": "bluestore"
Oct 11 04:25:47 compute-0 youthful_shamir[307462]:     },
Oct 11 04:25:47 compute-0 youthful_shamir[307462]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 04:25:47 compute-0 youthful_shamir[307462]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:25:47 compute-0 youthful_shamir[307462]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:25:47 compute-0 youthful_shamir[307462]:         "osd_id": 2,
Oct 11 04:25:47 compute-0 youthful_shamir[307462]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:25:47 compute-0 youthful_shamir[307462]:         "type": "bluestore"
Oct 11 04:25:47 compute-0 youthful_shamir[307462]:     },
Oct 11 04:25:47 compute-0 youthful_shamir[307462]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 04:25:47 compute-0 youthful_shamir[307462]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:25:47 compute-0 youthful_shamir[307462]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:25:47 compute-0 youthful_shamir[307462]:         "osd_id": 0,
Oct 11 04:25:47 compute-0 youthful_shamir[307462]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:25:47 compute-0 youthful_shamir[307462]:         "type": "bluestore"
Oct 11 04:25:47 compute-0 youthful_shamir[307462]:     }
Oct 11 04:25:47 compute-0 youthful_shamir[307462]: }
Oct 11 04:25:47 compute-0 systemd[1]: libpod-3032bb26bc25b9bb97b71e77aba58885636a782024e578b8acd91897404439d3.scope: Deactivated successfully.
Oct 11 04:25:47 compute-0 systemd[1]: libpod-3032bb26bc25b9bb97b71e77aba58885636a782024e578b8acd91897404439d3.scope: Consumed 1.045s CPU time.
Oct 11 04:25:47 compute-0 conmon[307462]: conmon 3032bb26bc25b9bb97b7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3032bb26bc25b9bb97b71e77aba58885636a782024e578b8acd91897404439d3.scope/container/memory.events
Oct 11 04:25:47 compute-0 podman[307501]: 2025-10-11 04:25:47.583082004 +0000 UTC m=+0.029562484 container died 3032bb26bc25b9bb97b71e77aba58885636a782024e578b8acd91897404439d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shamir, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 04:25:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-24e0b2d77eaeb4b312d7d32fdaeffcdd18fe08f9bf02626fda2dbcd58143c2a7-merged.mount: Deactivated successfully.
Oct 11 04:25:47 compute-0 podman[307501]: 2025-10-11 04:25:47.622871554 +0000 UTC m=+0.069352034 container remove 3032bb26bc25b9bb97b71e77aba58885636a782024e578b8acd91897404439d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:25:47 compute-0 systemd[1]: libpod-conmon-3032bb26bc25b9bb97b71e77aba58885636a782024e578b8acd91897404439d3.scope: Deactivated successfully.
Oct 11 04:25:47 compute-0 sudo[307327]: pam_unix(sudo:session): session closed for user root
Oct 11 04:25:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:25:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:25:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:25:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:25:47 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 52b96aa5-4e80-47c3-bbfa-49ec73ecb242 does not exist
Oct 11 04:25:47 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 3fc6f039-193b-40a1-ac3c-0f19321f3cfb does not exist
Oct 11 04:25:47 compute-0 sudo[307516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:25:47 compute-0 sudo[307516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:25:47 compute-0 sudo[307516]: pam_unix(sudo:session): session closed for user root
Oct 11 04:25:47 compute-0 sudo[307541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:25:47 compute-0 sudo[307541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:25:47 compute-0 sudo[307541]: pam_unix(sudo:session): session closed for user root
Oct 11 04:25:48 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 305 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 04:25:48 compute-0 nova_compute[259850]: 2025-10-11 04:25:48.125 2 DEBUG nova.network.neutron [req-b66fa024-0876-4c72-a4b8-c00dc74ca16c req-9d95207b-2fb6-40b1-b942-29e7640b0384 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Updated VIF entry in instance network info cache for port 944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:25:48 compute-0 nova_compute[259850]: 2025-10-11 04:25:48.125 2 DEBUG nova.network.neutron [req-b66fa024-0876-4c72-a4b8-c00dc74ca16c req-9d95207b-2fb6-40b1-b942-29e7640b0384 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Updating instance_info_cache with network_info: [{"id": "944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9", "address": "fa:16:3e:67:66:62", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap944dd3e5-9e", "ovs_interfaceid": "944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:25:48 compute-0 nova_compute[259850]: 2025-10-11 04:25:48.145 2 DEBUG oslo_concurrency.lockutils [req-b66fa024-0876-4c72-a4b8-c00dc74ca16c req-9d95207b-2fb6-40b1-b942-29e7640b0384 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-e9134216-e096-4ca2-a8aa-6fdafcd7b04c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:25:48 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:25:48 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:25:48 compute-0 ceph-mon[74273]: pgmap v1921: 305 pgs: 305 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 04:25:48 compute-0 nova_compute[259850]: 2025-10-11 04:25:48.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:49 compute-0 nova_compute[259850]: 2025-10-11 04:25:49.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:50 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 305 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 60 KiB/s wr, 85 op/s
Oct 11 04:25:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e447 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:25:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:25:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2355598557' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:25:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:25:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2355598557' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:25:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:25:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:25:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:25:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:25:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:25:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:25:51 compute-0 ceph-mon[74273]: pgmap v1922: 305 pgs: 305 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 60 KiB/s wr, 85 op/s
Oct 11 04:25:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2355598557' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:25:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2355598557' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:25:52 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 11 04:25:53 compute-0 ceph-mon[74273]: pgmap v1923: 305 pgs: 305 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 11 04:25:53 compute-0 nova_compute[259850]: 2025-10-11 04:25:53.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:54 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 142 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 671 KiB/s wr, 82 op/s
Oct 11 04:25:54 compute-0 ovn_controller[152025]: 2025-10-11T04:25:54Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:67:66:62 10.100.0.4
Oct 11 04:25:54 compute-0 ovn_controller[152025]: 2025-10-11T04:25:54Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:67:66:62 10.100.0.4
Oct 11 04:25:55 compute-0 nova_compute[259850]: 2025-10-11 04:25:55.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:55 compute-0 ceph-mon[74273]: pgmap v1924: 305 pgs: 305 active+clean; 142 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 671 KiB/s wr, 82 op/s
Oct 11 04:25:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e447 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:25:56 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1925: 305 pgs: 305 active+clean; 142 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 657 KiB/s wr, 70 op/s
Oct 11 04:25:57 compute-0 ceph-mon[74273]: pgmap v1925: 305 pgs: 305 active+clean; 142 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 657 KiB/s wr, 70 op/s
Oct 11 04:25:57 compute-0 podman[307568]: 2025-10-11 04:25:57.378109015 +0000 UTC m=+0.086883128 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, org.label-schema.vendor=CentOS)
Oct 11 04:25:57 compute-0 podman[307567]: 2025-10-11 04:25:57.386028298 +0000 UTC m=+0.088089622 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:25:58 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 166 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Oct 11 04:25:58 compute-0 nova_compute[259850]: 2025-10-11 04:25:58.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:25:59 compute-0 ceph-mon[74273]: pgmap v1926: 305 pgs: 305 active+clean; 166 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Oct 11 04:26:00 compute-0 nova_compute[259850]: 2025-10-11 04:26:00.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:00 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 167 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 04:26:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e447 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:26:00 compute-0 nova_compute[259850]: 2025-10-11 04:26:00.978 2 DEBUG oslo_concurrency.lockutils [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquiring lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:26:00 compute-0 nova_compute[259850]: 2025-10-11 04:26:00.978 2 DEBUG oslo_concurrency.lockutils [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:26:00 compute-0 nova_compute[259850]: 2025-10-11 04:26:00.992 2 DEBUG nova.objects.instance [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lazy-loading 'flavor' on Instance uuid e9134216-e096-4ca2-a8aa-6fdafcd7b04c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:26:01 compute-0 nova_compute[259850]: 2025-10-11 04:26:01.028 2 DEBUG oslo_concurrency.lockutils [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:26:01 compute-0 ceph-mon[74273]: pgmap v1927: 305 pgs: 305 active+clean; 167 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 04:26:01 compute-0 anacron[265197]: Job `cron.daily' started
Oct 11 04:26:01 compute-0 anacron[265197]: Job `cron.daily' terminated
Oct 11 04:26:01 compute-0 anacron[265197]: Normal exit (1 job run)
Oct 11 04:26:01 compute-0 nova_compute[259850]: 2025-10-11 04:26:01.260 2 DEBUG oslo_concurrency.lockutils [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquiring lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:26:01 compute-0 nova_compute[259850]: 2025-10-11 04:26:01.260 2 DEBUG oslo_concurrency.lockutils [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:26:01 compute-0 nova_compute[259850]: 2025-10-11 04:26:01.261 2 INFO nova.compute.manager [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Attaching volume 9ebf6f1f-de5b-44af-bd87-9160ece6230d to /dev/vdb
Oct 11 04:26:01 compute-0 nova_compute[259850]: 2025-10-11 04:26:01.371 2 DEBUG os_brick.utils [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:26:01 compute-0 nova_compute[259850]: 2025-10-11 04:26:01.373 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:26:01 compute-0 nova_compute[259850]: 2025-10-11 04:26:01.393 675 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:26:01 compute-0 nova_compute[259850]: 2025-10-11 04:26:01.393 675 DEBUG oslo.privsep.daemon [-] privsep: reply[3532b1f9-32cc-4b2f-8812-03b782eeba87]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:01 compute-0 nova_compute[259850]: 2025-10-11 04:26:01.395 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:26:01 compute-0 nova_compute[259850]: 2025-10-11 04:26:01.409 675 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:26:01 compute-0 nova_compute[259850]: 2025-10-11 04:26:01.409 675 DEBUG oslo.privsep.daemon [-] privsep: reply[58d42d95-1d71-4f9e-9476-fd57994b6856]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e727c2bd432c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:01 compute-0 nova_compute[259850]: 2025-10-11 04:26:01.411 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:26:01 compute-0 nova_compute[259850]: 2025-10-11 04:26:01.422 675 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:26:01 compute-0 nova_compute[259850]: 2025-10-11 04:26:01.423 675 DEBUG oslo.privsep.daemon [-] privsep: reply[2dbeae4b-f849-493c-9079-1680716203e1]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:01 compute-0 nova_compute[259850]: 2025-10-11 04:26:01.424 675 DEBUG oslo.privsep.daemon [-] privsep: reply[c84311c2-f533-4aa8-b78c-f5009ac06fb7]: (4, 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:01 compute-0 nova_compute[259850]: 2025-10-11 04:26:01.425 2 DEBUG oslo_concurrency.processutils [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:26:01 compute-0 nova_compute[259850]: 2025-10-11 04:26:01.460 2 DEBUG oslo_concurrency.processutils [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CMD "nvme version" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:26:01 compute-0 nova_compute[259850]: 2025-10-11 04:26:01.463 2 DEBUG os_brick.initiator.connectors.lightos [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:26:01 compute-0 nova_compute[259850]: 2025-10-11 04:26:01.464 2 DEBUG os_brick.initiator.connectors.lightos [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:26:01 compute-0 nova_compute[259850]: 2025-10-11 04:26:01.464 2 DEBUG os_brick.initiator.connectors.lightos [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:26:01 compute-0 nova_compute[259850]: 2025-10-11 04:26:01.465 2 DEBUG os_brick.utils [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] <== get_connector_properties: return (92ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e727c2bd432c', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:26:01 compute-0 nova_compute[259850]: 2025-10-11 04:26:01.465 2 DEBUG nova.virt.block_device [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Updating existing volume attachment record: 117b7573-07a1-4ebb-8f9a-06543e34aea4 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:26:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:26:02 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3965334243' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:26:02 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1928: 305 pgs: 305 active+clean; 167 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 04:26:02 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3965334243' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.229 2 DEBUG os_brick.encryptors [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Using volume encryption metadata '{'encryption_key_id': 'ab2d3253-4770-4f16-89cf-4c66e187efe2', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-9ebf6f1f-de5b-44af-bd87-9160ece6230d', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '9ebf6f1f-de5b-44af-bd87-9160ece6230d', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'e9134216-e096-4ca2-a8aa-6fdafcd7b04c', 'attached_at': '', 'detached_at': '', 'volume_id': '9ebf6f1f-de5b-44af-bd87-9160ece6230d', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.238 2 DEBUG barbicanclient.client [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.257 2 DEBUG barbicanclient.v1.secrets [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/ab2d3253-4770-4f16-89cf-4c66e187efe2 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.257 2 INFO barbicanclient.base [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/ab2d3253-4770-4f16-89cf-4c66e187efe2
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.373 2 DEBUG barbicanclient.client [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.374 2 INFO barbicanclient.base [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/ab2d3253-4770-4f16-89cf-4c66e187efe2
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.395 2 DEBUG barbicanclient.client [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.396 2 INFO barbicanclient.base [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/ab2d3253-4770-4f16-89cf-4c66e187efe2
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.417 2 DEBUG barbicanclient.client [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.417 2 INFO barbicanclient.base [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/ab2d3253-4770-4f16-89cf-4c66e187efe2
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.439 2 DEBUG barbicanclient.client [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.439 2 INFO barbicanclient.base [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/ab2d3253-4770-4f16-89cf-4c66e187efe2
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.467 2 DEBUG barbicanclient.client [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.468 2 INFO barbicanclient.base [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/ab2d3253-4770-4f16-89cf-4c66e187efe2
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.486 2 DEBUG barbicanclient.client [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.487 2 INFO barbicanclient.base [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/ab2d3253-4770-4f16-89cf-4c66e187efe2
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.504 2 DEBUG barbicanclient.client [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.505 2 INFO barbicanclient.base [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/ab2d3253-4770-4f16-89cf-4c66e187efe2
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.524 2 DEBUG barbicanclient.client [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.525 2 INFO barbicanclient.base [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/ab2d3253-4770-4f16-89cf-4c66e187efe2
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.542 2 DEBUG barbicanclient.client [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.542 2 INFO barbicanclient.base [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/ab2d3253-4770-4f16-89cf-4c66e187efe2
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.562 2 DEBUG barbicanclient.client [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.563 2 INFO barbicanclient.base [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/ab2d3253-4770-4f16-89cf-4c66e187efe2
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.581 2 DEBUG barbicanclient.client [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.582 2 INFO barbicanclient.base [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/ab2d3253-4770-4f16-89cf-4c66e187efe2
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.601 2 DEBUG barbicanclient.client [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.601 2 INFO barbicanclient.base [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/ab2d3253-4770-4f16-89cf-4c66e187efe2
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.623 2 DEBUG barbicanclient.client [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.624 2 INFO barbicanclient.base [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/ab2d3253-4770-4f16-89cf-4c66e187efe2
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.643 2 DEBUG barbicanclient.client [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.644 2 INFO barbicanclient.base [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/ab2d3253-4770-4f16-89cf-4c66e187efe2
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.663 2 DEBUG barbicanclient.client [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.664 2 DEBUG nova.virt.libvirt.host [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct 11 04:26:02 compute-0 nova_compute[259850]:   <usage type="volume">
Oct 11 04:26:02 compute-0 nova_compute[259850]:     <volume>9ebf6f1f-de5b-44af-bd87-9160ece6230d</volume>
Oct 11 04:26:02 compute-0 nova_compute[259850]:   </usage>
Oct 11 04:26:02 compute-0 nova_compute[259850]: </secret>
Oct 11 04:26:02 compute-0 nova_compute[259850]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.678 2 DEBUG nova.objects.instance [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lazy-loading 'flavor' on Instance uuid e9134216-e096-4ca2-a8aa-6fdafcd7b04c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.701 2 DEBUG nova.virt.libvirt.driver [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Attempting to attach volume 9ebf6f1f-de5b-44af-bd87-9160ece6230d with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 11 04:26:02 compute-0 nova_compute[259850]: 2025-10-11 04:26:02.706 2 DEBUG nova.virt.libvirt.guest [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] attach device xml: <disk type="network" device="disk">
Oct 11 04:26:02 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:26:02 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-9ebf6f1f-de5b-44af-bd87-9160ece6230d">
Oct 11 04:26:02 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:26:02 compute-0 nova_compute[259850]:   </source>
Oct 11 04:26:02 compute-0 nova_compute[259850]:   <auth username="openstack">
Oct 11 04:26:02 compute-0 nova_compute[259850]:     <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:26:02 compute-0 nova_compute[259850]:   </auth>
Oct 11 04:26:02 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:26:02 compute-0 nova_compute[259850]:   <serial>9ebf6f1f-de5b-44af-bd87-9160ece6230d</serial>
Oct 11 04:26:02 compute-0 nova_compute[259850]:   <encryption format="luks">
Oct 11 04:26:02 compute-0 nova_compute[259850]:     <secret type="passphrase" uuid="43f89c52-1e0b-4b16-9d6d-87fe3d6cc0a2"/>
Oct 11 04:26:02 compute-0 nova_compute[259850]:   </encryption>
Oct 11 04:26:02 compute-0 nova_compute[259850]: </disk>
Oct 11 04:26:02 compute-0 nova_compute[259850]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 11 04:26:03 compute-0 ceph-mon[74273]: pgmap v1928: 305 pgs: 305 active+clean; 167 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 04:26:03 compute-0 nova_compute[259850]: 2025-10-11 04:26:03.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:04 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 305 active+clean; 167 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 11 04:26:05 compute-0 nova_compute[259850]: 2025-10-11 04:26:05.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e447 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:26:05 compute-0 ceph-mon[74273]: pgmap v1929: 305 pgs: 305 active+clean; 167 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 11 04:26:05 compute-0 nova_compute[259850]: 2025-10-11 04:26:05.236 2 DEBUG nova.virt.libvirt.driver [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:26:05 compute-0 nova_compute[259850]: 2025-10-11 04:26:05.236 2 DEBUG nova.virt.libvirt.driver [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:26:05 compute-0 nova_compute[259850]: 2025-10-11 04:26:05.237 2 DEBUG nova.virt.libvirt.driver [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:26:05 compute-0 nova_compute[259850]: 2025-10-11 04:26:05.237 2 DEBUG nova.virt.libvirt.driver [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] No VIF found with MAC fa:16:3e:67:66:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:26:05 compute-0 nova_compute[259850]: 2025-10-11 04:26:05.427 2 DEBUG oslo_concurrency.lockutils [None req-f69d169c-75f1-4a46-b16b-9095e3555373 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:26:06 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1930: 305 pgs: 305 active+clean; 167 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 315 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Oct 11 04:26:06 compute-0 nova_compute[259850]: 2025-10-11 04:26:06.165 2 DEBUG oslo_concurrency.lockutils [None req-8a64a2c4-6da6-476d-8a88-84e5c395f897 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquiring lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:26:06 compute-0 nova_compute[259850]: 2025-10-11 04:26:06.165 2 DEBUG oslo_concurrency.lockutils [None req-8a64a2c4-6da6-476d-8a88-84e5c395f897 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:26:06 compute-0 nova_compute[259850]: 2025-10-11 04:26:06.183 2 INFO nova.compute.manager [None req-8a64a2c4-6da6-476d-8a88-84e5c395f897 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Detaching volume 9ebf6f1f-de5b-44af-bd87-9160ece6230d
Oct 11 04:26:06 compute-0 nova_compute[259850]: 2025-10-11 04:26:06.315 2 INFO nova.virt.block_device [None req-8a64a2c4-6da6-476d-8a88-84e5c395f897 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Attempting to driver detach volume 9ebf6f1f-de5b-44af-bd87-9160ece6230d from mountpoint /dev/vdb
Oct 11 04:26:06 compute-0 nova_compute[259850]: 2025-10-11 04:26:06.457 2 DEBUG os_brick.encryptors [None req-8a64a2c4-6da6-476d-8a88-84e5c395f897 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Using volume encryption metadata '{'encryption_key_id': 'ab2d3253-4770-4f16-89cf-4c66e187efe2', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-9ebf6f1f-de5b-44af-bd87-9160ece6230d', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '9ebf6f1f-de5b-44af-bd87-9160ece6230d', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'e9134216-e096-4ca2-a8aa-6fdafcd7b04c', 'attached_at': '', 'detached_at': '', 'volume_id': '9ebf6f1f-de5b-44af-bd87-9160ece6230d', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Oct 11 04:26:06 compute-0 nova_compute[259850]: 2025-10-11 04:26:06.468 2 DEBUG nova.virt.libvirt.driver [None req-8a64a2c4-6da6-476d-8a88-84e5c395f897 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Attempting to detach device vdb from instance e9134216-e096-4ca2-a8aa-6fdafcd7b04c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 11 04:26:06 compute-0 nova_compute[259850]: 2025-10-11 04:26:06.469 2 DEBUG nova.virt.libvirt.guest [None req-8a64a2c4-6da6-476d-8a88-84e5c395f897 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 04:26:06 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:26:06 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-9ebf6f1f-de5b-44af-bd87-9160ece6230d">
Oct 11 04:26:06 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:26:06 compute-0 nova_compute[259850]:   </source>
Oct 11 04:26:06 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:26:06 compute-0 nova_compute[259850]:   <serial>9ebf6f1f-de5b-44af-bd87-9160ece6230d</serial>
Oct 11 04:26:06 compute-0 nova_compute[259850]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 04:26:06 compute-0 nova_compute[259850]:   <encryption format="luks">
Oct 11 04:26:06 compute-0 nova_compute[259850]:     <secret type="passphrase" uuid="43f89c52-1e0b-4b16-9d6d-87fe3d6cc0a2"/>
Oct 11 04:26:06 compute-0 nova_compute[259850]:   </encryption>
Oct 11 04:26:06 compute-0 nova_compute[259850]: </disk>
Oct 11 04:26:06 compute-0 nova_compute[259850]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:26:06 compute-0 nova_compute[259850]: 2025-10-11 04:26:06.478 2 INFO nova.virt.libvirt.driver [None req-8a64a2c4-6da6-476d-8a88-84e5c395f897 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Successfully detached device vdb from instance e9134216-e096-4ca2-a8aa-6fdafcd7b04c from the persistent domain config.
Oct 11 04:26:06 compute-0 nova_compute[259850]: 2025-10-11 04:26:06.479 2 DEBUG nova.virt.libvirt.driver [None req-8a64a2c4-6da6-476d-8a88-84e5c395f897 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance e9134216-e096-4ca2-a8aa-6fdafcd7b04c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 11 04:26:06 compute-0 nova_compute[259850]: 2025-10-11 04:26:06.479 2 DEBUG nova.virt.libvirt.guest [None req-8a64a2c4-6da6-476d-8a88-84e5c395f897 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 04:26:06 compute-0 nova_compute[259850]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:26:06 compute-0 nova_compute[259850]:   <source protocol="rbd" name="volumes/volume-9ebf6f1f-de5b-44af-bd87-9160ece6230d">
Oct 11 04:26:06 compute-0 nova_compute[259850]:     <host name="192.168.122.100" port="6789"/>
Oct 11 04:26:06 compute-0 nova_compute[259850]:   </source>
Oct 11 04:26:06 compute-0 nova_compute[259850]:   <target dev="vdb" bus="virtio"/>
Oct 11 04:26:06 compute-0 nova_compute[259850]:   <serial>9ebf6f1f-de5b-44af-bd87-9160ece6230d</serial>
Oct 11 04:26:06 compute-0 nova_compute[259850]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 04:26:06 compute-0 nova_compute[259850]:   <encryption format="luks">
Oct 11 04:26:06 compute-0 nova_compute[259850]:     <secret type="passphrase" uuid="43f89c52-1e0b-4b16-9d6d-87fe3d6cc0a2"/>
Oct 11 04:26:06 compute-0 nova_compute[259850]:   </encryption>
Oct 11 04:26:06 compute-0 nova_compute[259850]: </disk>
Oct 11 04:26:06 compute-0 nova_compute[259850]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:26:06 compute-0 nova_compute[259850]: 2025-10-11 04:26:06.545 2 DEBUG nova.virt.libvirt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Received event <DeviceRemovedEvent: 1760156766.5448294, e9134216-e096-4ca2-a8aa-6fdafcd7b04c => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 11 04:26:06 compute-0 nova_compute[259850]: 2025-10-11 04:26:06.548 2 DEBUG nova.virt.libvirt.driver [None req-8a64a2c4-6da6-476d-8a88-84e5c395f897 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance e9134216-e096-4ca2-a8aa-6fdafcd7b04c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 11 04:26:06 compute-0 nova_compute[259850]: 2025-10-11 04:26:06.551 2 INFO nova.virt.libvirt.driver [None req-8a64a2c4-6da6-476d-8a88-84e5c395f897 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Successfully detached device vdb from instance e9134216-e096-4ca2-a8aa-6fdafcd7b04c from the live domain config.
Oct 11 04:26:06 compute-0 nova_compute[259850]: 2025-10-11 04:26:06.703 2 DEBUG nova.objects.instance [None req-8a64a2c4-6da6-476d-8a88-84e5c395f897 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lazy-loading 'flavor' on Instance uuid e9134216-e096-4ca2-a8aa-6fdafcd7b04c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:26:06 compute-0 nova_compute[259850]: 2025-10-11 04:26:06.744 2 DEBUG oslo_concurrency.lockutils [None req-8a64a2c4-6da6-476d-8a88-84e5c395f897 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:26:07 compute-0 ceph-mon[74273]: pgmap v1930: 305 pgs: 305 active+clean; 167 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 315 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Oct 11 04:26:07 compute-0 nova_compute[259850]: 2025-10-11 04:26:07.706 2 DEBUG oslo_concurrency.lockutils [None req-408b22d7-e3af-4c4d-8976-db7f1c8db33b 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquiring lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:26:07 compute-0 nova_compute[259850]: 2025-10-11 04:26:07.707 2 DEBUG oslo_concurrency.lockutils [None req-408b22d7-e3af-4c4d-8976-db7f1c8db33b 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:26:07 compute-0 nova_compute[259850]: 2025-10-11 04:26:07.708 2 DEBUG oslo_concurrency.lockutils [None req-408b22d7-e3af-4c4d-8976-db7f1c8db33b 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquiring lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:26:07 compute-0 nova_compute[259850]: 2025-10-11 04:26:07.708 2 DEBUG oslo_concurrency.lockutils [None req-408b22d7-e3af-4c4d-8976-db7f1c8db33b 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:26:07 compute-0 nova_compute[259850]: 2025-10-11 04:26:07.709 2 DEBUG oslo_concurrency.lockutils [None req-408b22d7-e3af-4c4d-8976-db7f1c8db33b 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:26:07 compute-0 nova_compute[259850]: 2025-10-11 04:26:07.711 2 INFO nova.compute.manager [None req-408b22d7-e3af-4c4d-8976-db7f1c8db33b 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Terminating instance
Oct 11 04:26:07 compute-0 nova_compute[259850]: 2025-10-11 04:26:07.712 2 DEBUG nova.compute.manager [None req-408b22d7-e3af-4c4d-8976-db7f1c8db33b 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:26:07 compute-0 kernel: tap944dd3e5-9e (unregistering): left promiscuous mode
Oct 11 04:26:07 compute-0 NetworkManager[44920]: <info>  [1760156767.7808] device (tap944dd3e5-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:26:07 compute-0 ovn_controller[152025]: 2025-10-11T04:26:07Z|00279|binding|INFO|Releasing lport 944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9 from this chassis (sb_readonly=0)
Oct 11 04:26:07 compute-0 ovn_controller[152025]: 2025-10-11T04:26:07Z|00280|binding|INFO|Setting lport 944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9 down in Southbound
Oct 11 04:26:07 compute-0 ovn_controller[152025]: 2025-10-11T04:26:07Z|00281|binding|INFO|Removing iface tap944dd3e5-9e ovn-installed in OVS
Oct 11 04:26:07 compute-0 nova_compute[259850]: 2025-10-11 04:26:07.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:07 compute-0 nova_compute[259850]: 2025-10-11 04:26:07.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:07 compute-0 nova_compute[259850]: 2025-10-11 04:26:07.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:07 compute-0 nova_compute[259850]: 2025-10-11 04:26:07.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:07 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Oct 11 04:26:07 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001c.scope: Consumed 15.935s CPU time.
Oct 11 04:26:07 compute-0 systemd-machined[214869]: Machine qemu-28-instance-0000001c terminated.
Oct 11 04:26:07 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:07.855 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:66:62 10.100.0.4'], port_security=['fa:16:3e:67:66:62 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e9134216-e096-4ca2-a8aa-6fdafcd7b04c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '226e6310b4ee4a68b552a6b3e940a458', 'neutron:revision_number': '4', 'neutron:security_group_ids': '561ab7dc-72c3-4dd2-96d6-1dfd15b5f2c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.238'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17f237ce-6320-4c27-9970-fd94aa8457a3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:26:07 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:07.857 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9 in datapath 61e3c4a7-2f2f-451f-b913-c2cdac8efdf3 unbound from our chassis
Oct 11 04:26:07 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:07.858 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 61e3c4a7-2f2f-451f-b913-c2cdac8efdf3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:26:07 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:07.859 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[c34f97bd-a4c1-41bd-97a5-2b2a6d9b4a77]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:07 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:07.860 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3 namespace which is not needed anymore
Oct 11 04:26:07 compute-0 nova_compute[259850]: 2025-10-11 04:26:07.964 2 INFO nova.virt.libvirt.driver [-] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Instance destroyed successfully.
Oct 11 04:26:07 compute-0 nova_compute[259850]: 2025-10-11 04:26:07.965 2 DEBUG nova.objects.instance [None req-408b22d7-e3af-4c4d-8976-db7f1c8db33b 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lazy-loading 'resources' on Instance uuid e9134216-e096-4ca2-a8aa-6fdafcd7b04c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:26:08 compute-0 nova_compute[259850]: 2025-10-11 04:26:08.004 2 DEBUG nova.virt.libvirt.vif [None req-408b22d7-e3af-4c4d-8976-db7f1c8db33b 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:25:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-713801075',display_name='tempest-TestEncryptedCinderVolumes-server-713801075',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-713801075',id=28,image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMJMHbrfIvT0sr9OAjWIFKhwQHBpBnXld+yH6qFtLRHc/PGYRHvOBTdI+nR0jmE3fNmomIpDP4x5vIh6quRMKdDvyUtXcjH0R3ji2qLNxYjzRBvOcNgDEwVgf+rWJVcwAg==',key_name='tempest-keypair-1724388763',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:25:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='226e6310b4ee4a68b552a6b3e940a458',ramdisk_id='',reservation_id='r-pyej006m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1a107e2f-1a9d-4b6f-861d-e64bee7d56be',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-1931311766',owner_user_name='tempest-TestEncryptedCinderVolumes-1931311766-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:25:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7bf17f3eb8514499a54d67542db6b88a',uuid=e9134216-e096-4ca2-a8aa-6fdafcd7b04c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9", "address": "fa:16:3e:67:66:62", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap944dd3e5-9e", "ovs_interfaceid": "944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:26:08 compute-0 nova_compute[259850]: 2025-10-11 04:26:08.005 2 DEBUG nova.network.os_vif_util [None req-408b22d7-e3af-4c4d-8976-db7f1c8db33b 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Converting VIF {"id": "944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9", "address": "fa:16:3e:67:66:62", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap944dd3e5-9e", "ovs_interfaceid": "944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:26:08 compute-0 nova_compute[259850]: 2025-10-11 04:26:08.006 2 DEBUG nova.network.os_vif_util [None req-408b22d7-e3af-4c4d-8976-db7f1c8db33b 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:67:66:62,bridge_name='br-int',has_traffic_filtering=True,id=944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9,network=Network(61e3c4a7-2f2f-451f-b913-c2cdac8efdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap944dd3e5-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:26:08 compute-0 nova_compute[259850]: 2025-10-11 04:26:08.006 2 DEBUG os_vif [None req-408b22d7-e3af-4c4d-8976-db7f1c8db33b 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:66:62,bridge_name='br-int',has_traffic_filtering=True,id=944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9,network=Network(61e3c4a7-2f2f-451f-b913-c2cdac8efdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap944dd3e5-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:26:08 compute-0 nova_compute[259850]: 2025-10-11 04:26:08.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:08 compute-0 nova_compute[259850]: 2025-10-11 04:26:08.009 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap944dd3e5-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:26:08 compute-0 nova_compute[259850]: 2025-10-11 04:26:08.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:08 compute-0 nova_compute[259850]: 2025-10-11 04:26:08.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:26:08 compute-0 nova_compute[259850]: 2025-10-11 04:26:08.018 2 INFO os_vif [None req-408b22d7-e3af-4c4d-8976-db7f1c8db33b 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:66:62,bridge_name='br-int',has_traffic_filtering=True,id=944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9,network=Network(61e3c4a7-2f2f-451f-b913-c2cdac8efdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap944dd3e5-9e')
Oct 11 04:26:08 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 305 active+clean; 167 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 1.5 MiB/s wr, 62 op/s
Oct 11 04:26:08 compute-0 neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3[306981]: [NOTICE]   (306985) : haproxy version is 2.8.14-c23fe91
Oct 11 04:26:08 compute-0 neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3[306981]: [NOTICE]   (306985) : path to executable is /usr/sbin/haproxy
Oct 11 04:26:08 compute-0 neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3[306981]: [WARNING]  (306985) : Exiting Master process...
Oct 11 04:26:08 compute-0 neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3[306981]: [WARNING]  (306985) : Exiting Master process...
Oct 11 04:26:08 compute-0 neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3[306981]: [ALERT]    (306985) : Current worker (306987) exited with code 143 (Terminated)
Oct 11 04:26:08 compute-0 neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3[306981]: [WARNING]  (306985) : All workers exited. Exiting... (0)
Oct 11 04:26:08 compute-0 systemd[1]: libpod-f3171a96d183a04d64a8fcab286821806ea8b4af5244bdd27229216365bb85f1.scope: Deactivated successfully.
Oct 11 04:26:08 compute-0 podman[307674]: 2025-10-11 04:26:08.064010764 +0000 UTC m=+0.062185873 container died f3171a96d183a04d64a8fcab286821806ea8b4af5244bdd27229216365bb85f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 04:26:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f3171a96d183a04d64a8fcab286821806ea8b4af5244bdd27229216365bb85f1-userdata-shm.mount: Deactivated successfully.
Oct 11 04:26:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-b01eb05d5107854d886d6c4833bbb6c9e2a0e0bef2b2a75205f8c4a1b9608809-merged.mount: Deactivated successfully.
Oct 11 04:26:08 compute-0 podman[307674]: 2025-10-11 04:26:08.12280646 +0000 UTC m=+0.120981589 container cleanup f3171a96d183a04d64a8fcab286821806ea8b4af5244bdd27229216365bb85f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 04:26:08 compute-0 systemd[1]: libpod-conmon-f3171a96d183a04d64a8fcab286821806ea8b4af5244bdd27229216365bb85f1.scope: Deactivated successfully.
Oct 11 04:26:08 compute-0 podman[307721]: 2025-10-11 04:26:08.205304954 +0000 UTC m=+0.051986266 container remove f3171a96d183a04d64a8fcab286821806ea8b4af5244bdd27229216365bb85f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 04:26:08 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:08.213 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[bba86e3c-bf00-4675-8a26-fcaa67ef301f]: (4, ('Sat Oct 11 04:26:07 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3 (f3171a96d183a04d64a8fcab286821806ea8b4af5244bdd27229216365bb85f1)\nf3171a96d183a04d64a8fcab286821806ea8b4af5244bdd27229216365bb85f1\nSat Oct 11 04:26:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3 (f3171a96d183a04d64a8fcab286821806ea8b4af5244bdd27229216365bb85f1)\nf3171a96d183a04d64a8fcab286821806ea8b4af5244bdd27229216365bb85f1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:08 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:08.215 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[a598a655-7025-49f6-8551-0a59dbdc9f4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:08 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:08.217 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61e3c4a7-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:26:08 compute-0 nova_compute[259850]: 2025-10-11 04:26:08.242 2 DEBUG nova.compute.manager [req-707b66ea-3270-49eb-8965-0c2764b5474f req-6e32ff4f-0820-469f-bfd5-ae988adca7dc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Received event network-vif-unplugged-944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:26:08 compute-0 nova_compute[259850]: 2025-10-11 04:26:08.242 2 DEBUG oslo_concurrency.lockutils [req-707b66ea-3270-49eb-8965-0c2764b5474f req-6e32ff4f-0820-469f-bfd5-ae988adca7dc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:26:08 compute-0 kernel: tap61e3c4a7-20: left promiscuous mode
Oct 11 04:26:08 compute-0 nova_compute[259850]: 2025-10-11 04:26:08.243 2 DEBUG oslo_concurrency.lockutils [req-707b66ea-3270-49eb-8965-0c2764b5474f req-6e32ff4f-0820-469f-bfd5-ae988adca7dc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:26:08 compute-0 nova_compute[259850]: 2025-10-11 04:26:08.243 2 DEBUG oslo_concurrency.lockutils [req-707b66ea-3270-49eb-8965-0c2764b5474f req-6e32ff4f-0820-469f-bfd5-ae988adca7dc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:26:08 compute-0 nova_compute[259850]: 2025-10-11 04:26:08.244 2 DEBUG nova.compute.manager [req-707b66ea-3270-49eb-8965-0c2764b5474f req-6e32ff4f-0820-469f-bfd5-ae988adca7dc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] No waiting events found dispatching network-vif-unplugged-944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:26:08 compute-0 nova_compute[259850]: 2025-10-11 04:26:08.244 2 DEBUG nova.compute.manager [req-707b66ea-3270-49eb-8965-0c2764b5474f req-6e32ff4f-0820-469f-bfd5-ae988adca7dc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Received event network-vif-unplugged-944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 04:26:08 compute-0 nova_compute[259850]: 2025-10-11 04:26:08.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:08 compute-0 nova_compute[259850]: 2025-10-11 04:26:08.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:08 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:08.268 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[5318a375-17c4-48a5-8eea-a6eef871b940]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:08 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:08.295 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[1b36d647-5046-4a8a-ac2b-a966793b2ed1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:08 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:08.297 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[60951cd6-798a-454c-883e-13dcd80db22a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:08 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:08.311 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:61:6f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '92:f1:b6:e4:f1:16'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:26:08 compute-0 nova_compute[259850]: 2025-10-11 04:26:08.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:08 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:08.323 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[d81b0793-d5ab-42df-9b41-2907a37b432b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512617, 'reachable_time': 17631, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307737, 'error': None, 'target': 'ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:08 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:08.327 162015 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 04:26:08 compute-0 systemd[1]: run-netns-ovnmeta\x2d61e3c4a7\x2d2f2f\x2d451f\x2db913\x2dc2cdac8efdf3.mount: Deactivated successfully.
Oct 11 04:26:08 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:08.328 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[133ed2f8-022f-4278-80fb-5c476021031e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:08 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:08.329 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:26:08 compute-0 nova_compute[259850]: 2025-10-11 04:26:08.502 2 INFO nova.virt.libvirt.driver [None req-408b22d7-e3af-4c4d-8976-db7f1c8db33b 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Deleting instance files /var/lib/nova/instances/e9134216-e096-4ca2-a8aa-6fdafcd7b04c_del
Oct 11 04:26:08 compute-0 nova_compute[259850]: 2025-10-11 04:26:08.502 2 INFO nova.virt.libvirt.driver [None req-408b22d7-e3af-4c4d-8976-db7f1c8db33b 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Deletion of /var/lib/nova/instances/e9134216-e096-4ca2-a8aa-6fdafcd7b04c_del complete
Oct 11 04:26:08 compute-0 nova_compute[259850]: 2025-10-11 04:26:08.565 2 INFO nova.compute.manager [None req-408b22d7-e3af-4c4d-8976-db7f1c8db33b 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Took 0.85 seconds to destroy the instance on the hypervisor.
Oct 11 04:26:08 compute-0 nova_compute[259850]: 2025-10-11 04:26:08.566 2 DEBUG oslo.service.loopingcall [None req-408b22d7-e3af-4c4d-8976-db7f1c8db33b 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:26:08 compute-0 nova_compute[259850]: 2025-10-11 04:26:08.567 2 DEBUG nova.compute.manager [-] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:26:08 compute-0 nova_compute[259850]: 2025-10-11 04:26:08.567 2 DEBUG nova.network.neutron [-] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:26:08 compute-0 nova_compute[259850]: 2025-10-11 04:26:08.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:09 compute-0 ceph-mon[74273]: pgmap v1931: 305 pgs: 305 active+clean; 167 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 1.5 MiB/s wr, 62 op/s
Oct 11 04:26:09 compute-0 nova_compute[259850]: 2025-10-11 04:26:09.251 2 DEBUG nova.network.neutron [-] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:26:09 compute-0 nova_compute[259850]: 2025-10-11 04:26:09.270 2 INFO nova.compute.manager [-] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Took 0.70 seconds to deallocate network for instance.
Oct 11 04:26:09 compute-0 nova_compute[259850]: 2025-10-11 04:26:09.327 2 DEBUG oslo_concurrency.lockutils [None req-408b22d7-e3af-4c4d-8976-db7f1c8db33b 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:26:09 compute-0 nova_compute[259850]: 2025-10-11 04:26:09.328 2 DEBUG oslo_concurrency.lockutils [None req-408b22d7-e3af-4c4d-8976-db7f1c8db33b 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:26:09 compute-0 nova_compute[259850]: 2025-10-11 04:26:09.342 2 DEBUG nova.compute.manager [req-aef9a7b2-7ba0-4267-b8da-9e01a7d48608 req-e9a26612-23a8-499d-8fc0-db9eb12e3c91 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Received event network-vif-deleted-944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:26:09 compute-0 nova_compute[259850]: 2025-10-11 04:26:09.400 2 DEBUG oslo_concurrency.processutils [None req-408b22d7-e3af-4c4d-8976-db7f1c8db33b 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:26:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:26:09 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/732259351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:26:09 compute-0 nova_compute[259850]: 2025-10-11 04:26:09.831 2 DEBUG oslo_concurrency.processutils [None req-408b22d7-e3af-4c4d-8976-db7f1c8db33b 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:26:09 compute-0 nova_compute[259850]: 2025-10-11 04:26:09.839 2 DEBUG nova.compute.provider_tree [None req-408b22d7-e3af-4c4d-8976-db7f1c8db33b 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:26:09 compute-0 nova_compute[259850]: 2025-10-11 04:26:09.853 2 DEBUG nova.scheduler.client.report [None req-408b22d7-e3af-4c4d-8976-db7f1c8db33b 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:26:09 compute-0 nova_compute[259850]: 2025-10-11 04:26:09.885 2 DEBUG oslo_concurrency.lockutils [None req-408b22d7-e3af-4c4d-8976-db7f1c8db33b 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:26:09 compute-0 nova_compute[259850]: 2025-10-11 04:26:09.922 2 INFO nova.scheduler.client.report [None req-408b22d7-e3af-4c4d-8976-db7f1c8db33b 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Deleted allocations for instance e9134216-e096-4ca2-a8aa-6fdafcd7b04c
Oct 11 04:26:10 compute-0 nova_compute[259850]: 2025-10-11 04:26:10.019 2 DEBUG oslo_concurrency.lockutils [None req-408b22d7-e3af-4c4d-8976-db7f1c8db33b 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:26:10 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 305 active+clean; 167 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 28 KiB/s wr, 14 op/s
Oct 11 04:26:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e447 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:26:10 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/732259351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:26:10 compute-0 nova_compute[259850]: 2025-10-11 04:26:10.298 2 DEBUG nova.compute.manager [req-d15be1b5-98d0-462d-8a55-d484a1db590f req-7750b598-3fc7-4db2-b8e5-8a4e16828eb2 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Received event network-vif-plugged-944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:26:10 compute-0 nova_compute[259850]: 2025-10-11 04:26:10.299 2 DEBUG oslo_concurrency.lockutils [req-d15be1b5-98d0-462d-8a55-d484a1db590f req-7750b598-3fc7-4db2-b8e5-8a4e16828eb2 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:26:10 compute-0 nova_compute[259850]: 2025-10-11 04:26:10.300 2 DEBUG oslo_concurrency.lockutils [req-d15be1b5-98d0-462d-8a55-d484a1db590f req-7750b598-3fc7-4db2-b8e5-8a4e16828eb2 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:26:10 compute-0 nova_compute[259850]: 2025-10-11 04:26:10.300 2 DEBUG oslo_concurrency.lockutils [req-d15be1b5-98d0-462d-8a55-d484a1db590f req-7750b598-3fc7-4db2-b8e5-8a4e16828eb2 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "e9134216-e096-4ca2-a8aa-6fdafcd7b04c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:26:10 compute-0 nova_compute[259850]: 2025-10-11 04:26:10.301 2 DEBUG nova.compute.manager [req-d15be1b5-98d0-462d-8a55-d484a1db590f req-7750b598-3fc7-4db2-b8e5-8a4e16828eb2 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] No waiting events found dispatching network-vif-plugged-944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:26:10 compute-0 nova_compute[259850]: 2025-10-11 04:26:10.301 2 WARNING nova.compute.manager [req-d15be1b5-98d0-462d-8a55-d484a1db590f req-7750b598-3fc7-4db2-b8e5-8a4e16828eb2 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Received unexpected event network-vif-plugged-944dd3e5-9e7c-412f-ad9e-57f8cd6a65b9 for instance with vm_state deleted and task_state None.
Oct 11 04:26:11 compute-0 ceph-mon[74273]: pgmap v1932: 305 pgs: 305 active+clean; 167 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 28 KiB/s wr, 14 op/s
Oct 11 04:26:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:26:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2866644315' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:26:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:26:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2866644315' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:26:12 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 305 active+clean; 167 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 14 KiB/s wr, 8 op/s
Oct 11 04:26:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2866644315' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:26:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2866644315' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:26:12 compute-0 podman[307761]: 2025-10-11 04:26:12.439626194 +0000 UTC m=+0.131244208 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:26:13 compute-0 nova_compute[259850]: 2025-10-11 04:26:13.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:13 compute-0 ceph-mon[74273]: pgmap v1933: 305 pgs: 305 active+clean; 167 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 14 KiB/s wr, 8 op/s
Oct 11 04:26:13 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:13.331 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8a473e03-2208-47ae-afcd-05ad744a5969, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:26:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:26:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266683990' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:26:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:26:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266683990' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:26:13 compute-0 nova_compute[259850]: 2025-10-11 04:26:13.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:14 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 16 KiB/s wr, 51 op/s
Oct 11 04:26:14 compute-0 unix_chkpwd[307789]: password check failed for user (root)
Oct 11 04:26:14 compute-0 sshd-session[307787]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Oct 11 04:26:14 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1266683990' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:26:14 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1266683990' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:26:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e447 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:26:15 compute-0 ceph-mon[74273]: pgmap v1934: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 16 KiB/s wr, 51 op/s
Oct 11 04:26:15 compute-0 sshd-session[307787]: Failed password for root from 80.94.93.233 port 40054 ssh2
Oct 11 04:26:16 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 3.2 KiB/s wr, 51 op/s
Oct 11 04:26:16 compute-0 unix_chkpwd[307790]: password check failed for user (root)
Oct 11 04:26:17 compute-0 ceph-mon[74273]: pgmap v1935: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 3.2 KiB/s wr, 51 op/s
Oct 11 04:26:17 compute-0 podman[307791]: 2025-10-11 04:26:17.395660461 +0000 UTC m=+0.091304253 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:26:18 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 3.4 KiB/s wr, 58 op/s
Oct 11 04:26:18 compute-0 nova_compute[259850]: 2025-10-11 04:26:18.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:18 compute-0 sshd-session[307787]: Failed password for root from 80.94.93.233 port 40054 ssh2
Oct 11 04:26:18 compute-0 nova_compute[259850]: 2025-10-11 04:26:18.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:19 compute-0 ceph-mon[74273]: pgmap v1936: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 3.4 KiB/s wr, 58 op/s
Oct 11 04:26:19 compute-0 unix_chkpwd[307810]: password check failed for user (root)
Oct 11 04:26:20 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.7 KiB/s wr, 52 op/s
Oct 11 04:26:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e447 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:26:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:26:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:26:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:26:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:26:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:26:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:26:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:26:20
Oct 11 04:26:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:26:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:26:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'images', 'default.rgw.log', '.mgr', 'volumes', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'backups']
Oct 11 04:26:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:26:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:26:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:26:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:26:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:26:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:26:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:26:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:26:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:26:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:26:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:26:21 compute-0 ceph-mon[74273]: pgmap v1937: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.7 KiB/s wr, 52 op/s
Oct 11 04:26:21 compute-0 sshd-session[307787]: Failed password for root from 80.94.93.233 port 40054 ssh2
Oct 11 04:26:22 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1938: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.7 KiB/s wr, 51 op/s
Oct 11 04:26:22 compute-0 nova_compute[259850]: 2025-10-11 04:26:22.959 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760156767.956474, e9134216-e096-4ca2-a8aa-6fdafcd7b04c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:26:22 compute-0 nova_compute[259850]: 2025-10-11 04:26:22.959 2 INFO nova.compute.manager [-] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] VM Stopped (Lifecycle Event)
Oct 11 04:26:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:22.978 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:26:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:22.979 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:26:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:22.979 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:26:22 compute-0 nova_compute[259850]: 2025-10-11 04:26:22.980 2 DEBUG nova.compute.manager [None req-eb490c3c-225c-4b22-995a-ce2fad128db8 - - - - - -] [instance: e9134216-e096-4ca2-a8aa-6fdafcd7b04c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:26:23 compute-0 nova_compute[259850]: 2025-10-11 04:26:23.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:23 compute-0 ceph-mon[74273]: pgmap v1938: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.7 KiB/s wr, 51 op/s
Oct 11 04:26:23 compute-0 sshd-session[307787]: Received disconnect from 80.94.93.233 port 40054:11:  [preauth]
Oct 11 04:26:23 compute-0 sshd-session[307787]: Disconnected from authenticating user root 80.94.93.233 port 40054 [preauth]
Oct 11 04:26:23 compute-0 sshd-session[307787]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Oct 11 04:26:23 compute-0 nova_compute[259850]: 2025-10-11 04:26:23.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:24 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 KiB/s wr, 58 op/s
Oct 11 04:26:24 compute-0 unix_chkpwd[307813]: password check failed for user (root)
Oct 11 04:26:24 compute-0 sshd-session[307811]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Oct 11 04:26:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e447 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:26:25 compute-0 ceph-mon[74273]: pgmap v1939: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 KiB/s wr, 58 op/s
Oct 11 04:26:26 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1940: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 255 B/s wr, 14 op/s
Oct 11 04:26:26 compute-0 sshd-session[307811]: Failed password for root from 80.94.93.233 port 31938 ssh2
Oct 11 04:26:27 compute-0 ceph-mon[74273]: pgmap v1940: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 255 B/s wr, 14 op/s
Oct 11 04:26:28 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 255 B/s wr, 14 op/s
Oct 11 04:26:28 compute-0 nova_compute[259850]: 2025-10-11 04:26:28.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:28 compute-0 unix_chkpwd[307848]: password check failed for user (root)
Oct 11 04:26:28 compute-0 podman[307814]: 2025-10-11 04:26:28.403000974 +0000 UTC m=+0.100351688 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 11 04:26:28 compute-0 podman[307815]: 2025-10-11 04:26:28.405013521 +0000 UTC m=+0.097295032 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, io.buildah.version=1.41.3)
Oct 11 04:26:28 compute-0 nova_compute[259850]: 2025-10-11 04:26:28.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:29 compute-0 ceph-mon[74273]: pgmap v1941: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 255 B/s wr, 14 op/s
Oct 11 04:26:30 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 11 04:26:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e447 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:26:30 compute-0 sshd-session[307811]: Failed password for root from 80.94.93.233 port 31938 ssh2
Oct 11 04:26:31 compute-0 ceph-mon[74273]: pgmap v1942: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 11 04:26:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:26:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:26:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:26:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:26:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:26:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:26:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00035146584212411276 of space, bias 1.0, pg target 0.10543975263723383 quantized to 32 (current 32)
Oct 11 04:26:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:26:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:26:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:26:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct 11 04:26:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:26:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:26:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:26:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:26:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:26:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:26:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:26:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:26:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:26:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:26:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:26:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:26:32 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 11 04:26:32 compute-0 unix_chkpwd[307853]: password check failed for user (root)
Oct 11 04:26:33 compute-0 nova_compute[259850]: 2025-10-11 04:26:33.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:33 compute-0 ceph-mon[74273]: pgmap v1943: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 11 04:26:33 compute-0 nova_compute[259850]: 2025-10-11 04:26:33.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:34 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1944: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Oct 11 04:26:34 compute-0 sshd-session[307811]: Failed password for root from 80.94.93.233 port 31938 ssh2
Oct 11 04:26:35 compute-0 nova_compute[259850]: 2025-10-11 04:26:35.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:26:35 compute-0 nova_compute[259850]: 2025-10-11 04:26:35.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:26:35 compute-0 nova_compute[259850]: 2025-10-11 04:26:35.101 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:26:35 compute-0 nova_compute[259850]: 2025-10-11 04:26:35.102 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:26:35 compute-0 nova_compute[259850]: 2025-10-11 04:26:35.102 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:26:35 compute-0 nova_compute[259850]: 2025-10-11 04:26:35.103 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:26:35 compute-0 nova_compute[259850]: 2025-10-11 04:26:35.103 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:26:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e447 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:26:35 compute-0 ceph-mon[74273]: pgmap v1944: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Oct 11 04:26:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:26:35 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3631488990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:26:35 compute-0 nova_compute[259850]: 2025-10-11 04:26:35.608 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:26:35 compute-0 nova_compute[259850]: 2025-10-11 04:26:35.784 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:26:35 compute-0 nova_compute[259850]: 2025-10-11 04:26:35.785 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4377MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:26:35 compute-0 nova_compute[259850]: 2025-10-11 04:26:35.785 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:26:35 compute-0 nova_compute[259850]: 2025-10-11 04:26:35.785 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:26:35 compute-0 nova_compute[259850]: 2025-10-11 04:26:35.937 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:26:35 compute-0 nova_compute[259850]: 2025-10-11 04:26:35.937 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:26:35 compute-0 nova_compute[259850]: 2025-10-11 04:26:35.958 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:26:35 compute-0 sshd-session[307811]: Received disconnect from 80.94.93.233 port 31938:11:  [preauth]
Oct 11 04:26:35 compute-0 sshd-session[307811]: Disconnected from authenticating user root 80.94.93.233 port 31938 [preauth]
Oct 11 04:26:35 compute-0 sshd-session[307811]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Oct 11 04:26:36 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1945: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct 11 04:26:36 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3631488990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:26:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:26:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3973650149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:26:36 compute-0 nova_compute[259850]: 2025-10-11 04:26:36.457 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:26:36 compute-0 nova_compute[259850]: 2025-10-11 04:26:36.463 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:26:36 compute-0 nova_compute[259850]: 2025-10-11 04:26:36.480 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:26:36 compute-0 nova_compute[259850]: 2025-10-11 04:26:36.498 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:26:36 compute-0 nova_compute[259850]: 2025-10-11 04:26:36.498 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:26:36 compute-0 unix_chkpwd[307901]: password check failed for user (root)
Oct 11 04:26:36 compute-0 sshd-session[307879]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Oct 11 04:26:37 compute-0 ceph-mon[74273]: pgmap v1945: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct 11 04:26:37 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3973650149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:26:37 compute-0 nova_compute[259850]: 2025-10-11 04:26:37.494 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:26:37 compute-0 nova_compute[259850]: 2025-10-11 04:26:37.494 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:26:37 compute-0 nova_compute[259850]: 2025-10-11 04:26:37.495 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:26:38 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct 11 04:26:38 compute-0 nova_compute[259850]: 2025-10-11 04:26:38.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:26:38 compute-0 nova_compute[259850]: 2025-10-11 04:26:38.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:26:38 compute-0 nova_compute[259850]: 2025-10-11 04:26:38.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:26:38 compute-0 nova_compute[259850]: 2025-10-11 04:26:38.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:38 compute-0 nova_compute[259850]: 2025-10-11 04:26:38.083 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:26:38 compute-0 nova_compute[259850]: 2025-10-11 04:26:38.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:39 compute-0 nova_compute[259850]: 2025-10-11 04:26:39.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:26:39 compute-0 nova_compute[259850]: 2025-10-11 04:26:39.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:26:39 compute-0 sshd-session[307879]: Failed password for root from 80.94.93.233 port 34324 ssh2
Oct 11 04:26:39 compute-0 ovn_controller[152025]: 2025-10-11T04:26:39Z|00282|memory_trim|INFO|Detected inactivity (last active 30026 ms ago): trimming memory
Oct 11 04:26:39 compute-0 ceph-mon[74273]: pgmap v1946: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct 11 04:26:40 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct 11 04:26:40 compute-0 nova_compute[259850]: 2025-10-11 04:26:40.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:26:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e447 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:26:40 compute-0 unix_chkpwd[307902]: password check failed for user (root)
Oct 11 04:26:41 compute-0 ceph-mon[74273]: pgmap v1947: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct 11 04:26:42 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct 11 04:26:42 compute-0 ceph-mon[74273]: pgmap v1948: 305 pgs: 305 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct 11 04:26:42 compute-0 sshd-session[307879]: Failed password for root from 80.94.93.233 port 34324 ssh2
Oct 11 04:26:43 compute-0 nova_compute[259850]: 2025-10-11 04:26:43.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:43 compute-0 podman[307903]: 2025-10-11 04:26:43.424203887 +0000 UTC m=+0.128367137 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 11 04:26:44 compute-0 nova_compute[259850]: 2025-10-11 04:26:44.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:44 compute-0 nova_compute[259850]: 2025-10-11 04:26:44.054 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:26:44 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 305 active+clean; 156 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 5.5 MiB/s wr, 35 op/s
Oct 11 04:26:44 compute-0 unix_chkpwd[307929]: password check failed for user (root)
Oct 11 04:26:45 compute-0 nova_compute[259850]: 2025-10-11 04:26:45.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:26:45 compute-0 ceph-mon[74273]: pgmap v1949: 305 pgs: 305 active+clean; 156 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 5.5 MiB/s wr, 35 op/s
Oct 11 04:26:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e447 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:26:45 compute-0 nova_compute[259850]: 2025-10-11 04:26:45.466 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:26:46 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 305 active+clean; 156 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.5 MiB/s wr, 30 op/s
Oct 11 04:26:46 compute-0 sshd-session[307879]: Failed password for root from 80.94.93.233 port 34324 ssh2
Oct 11 04:26:46 compute-0 sshd-session[307879]: Received disconnect from 80.94.93.233 port 34324:11:  [preauth]
Oct 11 04:26:46 compute-0 sshd-session[307879]: Disconnected from authenticating user root 80.94.93.233 port 34324 [preauth]
Oct 11 04:26:46 compute-0 sshd-session[307879]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Oct 11 04:26:47 compute-0 nova_compute[259850]: 2025-10-11 04:26:47.015 2 DEBUG oslo_concurrency.lockutils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquiring lock "116a010a-a523-4fa3-8dbc-de6caec760c9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:26:47 compute-0 nova_compute[259850]: 2025-10-11 04:26:47.015 2 DEBUG oslo_concurrency.lockutils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "116a010a-a523-4fa3-8dbc-de6caec760c9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:26:47 compute-0 nova_compute[259850]: 2025-10-11 04:26:47.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:26:47 compute-0 nova_compute[259850]: 2025-10-11 04:26:47.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 11 04:26:47 compute-0 ceph-mon[74273]: pgmap v1950: 305 pgs: 305 active+clean; 156 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.5 MiB/s wr, 30 op/s
Oct 11 04:26:47 compute-0 nova_compute[259850]: 2025-10-11 04:26:47.243 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 11 04:26:47 compute-0 nova_compute[259850]: 2025-10-11 04:26:47.244 2 DEBUG nova.compute.manager [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:26:47 compute-0 nova_compute[259850]: 2025-10-11 04:26:47.368 2 DEBUG oslo_concurrency.lockutils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:26:47 compute-0 nova_compute[259850]: 2025-10-11 04:26:47.369 2 DEBUG oslo_concurrency.lockutils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:26:47 compute-0 nova_compute[259850]: 2025-10-11 04:26:47.379 2 DEBUG nova.virt.hardware [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:26:47 compute-0 nova_compute[259850]: 2025-10-11 04:26:47.381 2 INFO nova.compute.claims [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:26:47 compute-0 nova_compute[259850]: 2025-10-11 04:26:47.488 2 DEBUG oslo_concurrency.processutils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:26:47 compute-0 sudo[307950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:47 compute-0 sudo[307950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:47 compute-0 sudo[307950]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:26:47 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3893759974' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.018 2 DEBUG oslo_concurrency.processutils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.028 2 DEBUG nova.compute.provider_tree [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:26:48 compute-0 podman[307974]: 2025-10-11 04:26:48.032002617 +0000 UTC m=+0.052061658 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.044 2 DEBUG nova.scheduler.client.report [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:26:48 compute-0 sudo[307986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:26:48 compute-0 sudo[307986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:48 compute-0 sudo[307986]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:48 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 202 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.069 2 DEBUG oslo_concurrency.lockutils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.070 2 DEBUG nova.compute.manager [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:48 compute-0 sudo[308020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:48 compute-0 sudo[308020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.117 2 DEBUG nova.compute.manager [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.118 2 DEBUG nova.network.neutron [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:26:48 compute-0 sudo[308020]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:48 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3893759974' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.147 2 INFO nova.virt.libvirt.driver [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.172 2 DEBUG nova.compute.manager [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:26:48 compute-0 sudo[308045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:26:48 compute-0 sudo[308045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.217 2 INFO nova.virt.block_device [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Booting with volume 2c8cda8e-9e0b-4e1d-b6e8-e5638bef6ce5 at /dev/vda
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.280 2 DEBUG nova.policy [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7bf17f3eb8514499a54d67542db6b88a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '226e6310b4ee4a68b552a6b3e940a458', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.339 2 DEBUG os_brick.utils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.340 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.351 675 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.352 675 DEBUG oslo.privsep.daemon [-] privsep: reply[71513c54-ca24-48ad-a16e-69dc9b44b3f5]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.353 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.360 675 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.360 675 DEBUG oslo.privsep.daemon [-] privsep: reply[a4952272-78bb-4b1f-af6e-3b0b51b0a53a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e727c2bd432c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.361 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.372 675 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.372 675 DEBUG oslo.privsep.daemon [-] privsep: reply[480d9875-8473-4d4d-b16a-8584305b99a9]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.373 675 DEBUG oslo.privsep.daemon [-] privsep: reply[2b7367ea-2f4f-47f8-84c3-549f9d36cba4]: (4, 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.374 2 DEBUG oslo_concurrency.processutils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.399 2 DEBUG oslo_concurrency.processutils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CMD "nvme version" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.401 2 DEBUG os_brick.initiator.connectors.lightos [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.401 2 DEBUG os_brick.initiator.connectors.lightos [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.401 2 DEBUG os_brick.initiator.connectors.lightos [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.402 2 DEBUG os_brick.utils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] <== get_connector_properties: return (61ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e727c2bd432c', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:26:48 compute-0 nova_compute[259850]: 2025-10-11 04:26:48.402 2 DEBUG nova.virt.block_device [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Updating existing volume attachment record: 2440ea58-96fa-4193-a2d7-a4cded90dbdf _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:26:48 compute-0 sudo[308045]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:26:48 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:26:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:26:48 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:26:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:26:48 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:26:48 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 096f0f38-5f4f-4308-9759-ed577f2e3294 does not exist
Oct 11 04:26:48 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 8ba21c6e-416f-4c2d-9911-1c85590013dc does not exist
Oct 11 04:26:48 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev f1cdfdad-9010-4d5a-9190-217d347f008e does not exist
Oct 11 04:26:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:26:48 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:26:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:26:48 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:26:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:26:48 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:26:48 compute-0 sudo[308108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:48 compute-0 sudo[308108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:48 compute-0 sudo[308108]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:48 compute-0 sudo[308133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:26:48 compute-0 sudo[308133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:48 compute-0 sudo[308133]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:26:49 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1160178966' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:26:49 compute-0 sudo[308158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:49 compute-0 sudo[308158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:49 compute-0 sudo[308158]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:49 compute-0 nova_compute[259850]: 2025-10-11 04:26:49.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:49 compute-0 nova_compute[259850]: 2025-10-11 04:26:49.098 2 DEBUG nova.network.neutron [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Successfully created port: aa158452-f9f5-45a1-9841-28136bfa13a6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:26:49 compute-0 sudo[308183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:26:49 compute-0 sudo[308183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:49 compute-0 ceph-mon[74273]: pgmap v1951: 305 pgs: 305 active+clean; 202 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Oct 11 04:26:49 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:26:49 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:26:49 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:26:49 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:26:49 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:26:49 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:26:49 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1160178966' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:26:49 compute-0 nova_compute[259850]: 2025-10-11 04:26:49.312 2 DEBUG nova.compute.manager [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:26:49 compute-0 nova_compute[259850]: 2025-10-11 04:26:49.315 2 DEBUG nova.virt.libvirt.driver [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:26:49 compute-0 nova_compute[259850]: 2025-10-11 04:26:49.315 2 INFO nova.virt.libvirt.driver [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Creating image(s)
Oct 11 04:26:49 compute-0 nova_compute[259850]: 2025-10-11 04:26:49.316 2 DEBUG nova.virt.libvirt.driver [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 11 04:26:49 compute-0 nova_compute[259850]: 2025-10-11 04:26:49.316 2 DEBUG nova.virt.libvirt.driver [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Ensure instance console log exists: /var/lib/nova/instances/116a010a-a523-4fa3-8dbc-de6caec760c9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:26:49 compute-0 nova_compute[259850]: 2025-10-11 04:26:49.316 2 DEBUG oslo_concurrency.lockutils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:26:49 compute-0 nova_compute[259850]: 2025-10-11 04:26:49.316 2 DEBUG oslo_concurrency.lockutils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:26:49 compute-0 nova_compute[259850]: 2025-10-11 04:26:49.317 2 DEBUG oslo_concurrency.lockutils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:26:49 compute-0 podman[308247]: 2025-10-11 04:26:49.53207231 +0000 UTC m=+0.054202948 container create d0e7a8872a3eaf45bafa422793302ded5676ec97f20069799b35bd6285f14d16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mendeleev, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Oct 11 04:26:49 compute-0 systemd[1]: Started libpod-conmon-d0e7a8872a3eaf45bafa422793302ded5676ec97f20069799b35bd6285f14d16.scope.
Oct 11 04:26:49 compute-0 podman[308247]: 2025-10-11 04:26:49.506062567 +0000 UTC m=+0.028193285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:49 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:49 compute-0 podman[308247]: 2025-10-11 04:26:49.641633776 +0000 UTC m=+0.163764444 container init d0e7a8872a3eaf45bafa422793302ded5676ec97f20069799b35bd6285f14d16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mendeleev, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 04:26:49 compute-0 podman[308247]: 2025-10-11 04:26:49.653718346 +0000 UTC m=+0.175849014 container start d0e7a8872a3eaf45bafa422793302ded5676ec97f20069799b35bd6285f14d16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mendeleev, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:26:49 compute-0 podman[308247]: 2025-10-11 04:26:49.658187992 +0000 UTC m=+0.180318660 container attach d0e7a8872a3eaf45bafa422793302ded5676ec97f20069799b35bd6285f14d16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mendeleev, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Oct 11 04:26:49 compute-0 cranky_mendeleev[308263]: 167 167
Oct 11 04:26:49 compute-0 systemd[1]: libpod-d0e7a8872a3eaf45bafa422793302ded5676ec97f20069799b35bd6285f14d16.scope: Deactivated successfully.
Oct 11 04:26:49 compute-0 conmon[308263]: conmon d0e7a8872a3eaf45bafa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d0e7a8872a3eaf45bafa422793302ded5676ec97f20069799b35bd6285f14d16.scope/container/memory.events
Oct 11 04:26:49 compute-0 podman[308247]: 2025-10-11 04:26:49.664908691 +0000 UTC m=+0.187039329 container died d0e7a8872a3eaf45bafa422793302ded5676ec97f20069799b35bd6285f14d16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mendeleev, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:26:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c1184d66f2cd4c3311d4edfe03d1ef66cbdbb5b1a7bc3b5b4b7832891c87a3a-merged.mount: Deactivated successfully.
Oct 11 04:26:49 compute-0 podman[308247]: 2025-10-11 04:26:49.714083836 +0000 UTC m=+0.236214474 container remove d0e7a8872a3eaf45bafa422793302ded5676ec97f20069799b35bd6285f14d16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:49 compute-0 systemd[1]: libpod-conmon-d0e7a8872a3eaf45bafa422793302ded5676ec97f20069799b35bd6285f14d16.scope: Deactivated successfully.
Oct 11 04:26:49 compute-0 podman[308287]: 2025-10-11 04:26:49.943256572 +0000 UTC m=+0.066644758 container create 69abb447d155c9bc046744b683bf014dc51c930ccdba9ede951f5095718c8fd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_raman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:50 compute-0 systemd[1]: Started libpod-conmon-69abb447d155c9bc046744b683bf014dc51c930ccdba9ede951f5095718c8fd4.scope.
Oct 11 04:26:50 compute-0 podman[308287]: 2025-10-11 04:26:49.920725747 +0000 UTC m=+0.044113923 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:50 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c964d5dd7ed2ae81b21006f7ff30a719c2aa881372606a2663e219dadcf8f450/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c964d5dd7ed2ae81b21006f7ff30a719c2aa881372606a2663e219dadcf8f450/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c964d5dd7ed2ae81b21006f7ff30a719c2aa881372606a2663e219dadcf8f450/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c964d5dd7ed2ae81b21006f7ff30a719c2aa881372606a2663e219dadcf8f450/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c964d5dd7ed2ae81b21006f7ff30a719c2aa881372606a2663e219dadcf8f450/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:50 compute-0 podman[308287]: 2025-10-11 04:26:50.047324883 +0000 UTC m=+0.170713059 container init 69abb447d155c9bc046744b683bf014dc51c930ccdba9ede951f5095718c8fd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_raman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 04:26:50 compute-0 podman[308287]: 2025-10-11 04:26:50.055450192 +0000 UTC m=+0.178838348 container start 69abb447d155c9bc046744b683bf014dc51c930ccdba9ede951f5095718c8fd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_raman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:50 compute-0 podman[308287]: 2025-10-11 04:26:50.059500306 +0000 UTC m=+0.182888522 container attach 69abb447d155c9bc046744b683bf014dc51c930ccdba9ede951f5095718c8fd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_raman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 04:26:50 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 305 active+clean; 202 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Oct 11 04:26:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e447 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:26:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:26:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/876591366' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:26:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:26:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/876591366' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:26:50 compute-0 nova_compute[259850]: 2025-10-11 04:26:50.768 2 DEBUG nova.network.neutron [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Successfully updated port: aa158452-f9f5-45a1-9841-28136bfa13a6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:26:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:26:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:26:50 compute-0 nova_compute[259850]: 2025-10-11 04:26:50.789 2 DEBUG oslo_concurrency.lockutils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquiring lock "refresh_cache-116a010a-a523-4fa3-8dbc-de6caec760c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:26:50 compute-0 nova_compute[259850]: 2025-10-11 04:26:50.789 2 DEBUG oslo_concurrency.lockutils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquired lock "refresh_cache-116a010a-a523-4fa3-8dbc-de6caec760c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:26:50 compute-0 nova_compute[259850]: 2025-10-11 04:26:50.790 2 DEBUG nova.network.neutron [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:26:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:26:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:26:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:26:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:26:50 compute-0 nova_compute[259850]: 2025-10-11 04:26:50.911 2 DEBUG nova.compute.manager [req-b25f30a1-260a-4c84-9b1e-67e7310603e2 req-f2e266b0-5c5d-412a-b6d0-8486ad74a340 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Received event network-changed-aa158452-f9f5-45a1-9841-28136bfa13a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:26:50 compute-0 nova_compute[259850]: 2025-10-11 04:26:50.911 2 DEBUG nova.compute.manager [req-b25f30a1-260a-4c84-9b1e-67e7310603e2 req-f2e266b0-5c5d-412a-b6d0-8486ad74a340 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Refreshing instance network info cache due to event network-changed-aa158452-f9f5-45a1-9841-28136bfa13a6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:26:50 compute-0 nova_compute[259850]: 2025-10-11 04:26:50.911 2 DEBUG oslo_concurrency.lockutils [req-b25f30a1-260a-4c84-9b1e-67e7310603e2 req-f2e266b0-5c5d-412a-b6d0-8486ad74a340 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-116a010a-a523-4fa3-8dbc-de6caec760c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:26:51 compute-0 vigilant_raman[308304]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:26:51 compute-0 vigilant_raman[308304]: --> relative data size: 1.0
Oct 11 04:26:51 compute-0 vigilant_raman[308304]: --> All data devices are unavailable
Oct 11 04:26:51 compute-0 systemd[1]: libpod-69abb447d155c9bc046744b683bf014dc51c930ccdba9ede951f5095718c8fd4.scope: Deactivated successfully.
Oct 11 04:26:51 compute-0 podman[308287]: 2025-10-11 04:26:51.081629477 +0000 UTC m=+1.205017633 container died 69abb447d155c9bc046744b683bf014dc51c930ccdba9ede951f5095718c8fd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-c964d5dd7ed2ae81b21006f7ff30a719c2aa881372606a2663e219dadcf8f450-merged.mount: Deactivated successfully.
Oct 11 04:26:51 compute-0 podman[308287]: 2025-10-11 04:26:51.136973366 +0000 UTC m=+1.260361522 container remove 69abb447d155c9bc046744b683bf014dc51c930ccdba9ede951f5095718c8fd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:26:51 compute-0 systemd[1]: libpod-conmon-69abb447d155c9bc046744b683bf014dc51c930ccdba9ede951f5095718c8fd4.scope: Deactivated successfully.
Oct 11 04:26:51 compute-0 ceph-mon[74273]: pgmap v1952: 305 pgs: 305 active+clean; 202 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Oct 11 04:26:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/876591366' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:26:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/876591366' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:26:51 compute-0 sudo[308183]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:51 compute-0 sudo[308347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:51 compute-0 sudo[308347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:51 compute-0 sudo[308347]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:51 compute-0 sudo[308372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:26:51 compute-0 sudo[308372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:51 compute-0 sudo[308372]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:51 compute-0 sudo[308397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:51 compute-0 sudo[308397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:51 compute-0 sudo[308397]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:51 compute-0 sudo[308422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 04:26:51 compute-0 sudo[308422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:51 compute-0 nova_compute[259850]: 2025-10-11 04:26:51.431 2 DEBUG nova.network.neutron [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:26:51 compute-0 podman[308488]: 2025-10-11 04:26:51.85566513 +0000 UTC m=+0.062971634 container create 98c3b6996fb6f9de26369e7de551038ada37a01ab94058a8b1852d6b13c4e9d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:26:51 compute-0 systemd[1]: Started libpod-conmon-98c3b6996fb6f9de26369e7de551038ada37a01ab94058a8b1852d6b13c4e9d0.scope.
Oct 11 04:26:51 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:51 compute-0 podman[308488]: 2025-10-11 04:26:51.835982316 +0000 UTC m=+0.043288870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:51 compute-0 podman[308488]: 2025-10-11 04:26:51.940518581 +0000 UTC m=+0.147825105 container init 98c3b6996fb6f9de26369e7de551038ada37a01ab94058a8b1852d6b13c4e9d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 04:26:51 compute-0 podman[308488]: 2025-10-11 04:26:51.945877151 +0000 UTC m=+0.153183655 container start 98c3b6996fb6f9de26369e7de551038ada37a01ab94058a8b1852d6b13c4e9d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 04:26:51 compute-0 podman[308488]: 2025-10-11 04:26:51.949126213 +0000 UTC m=+0.156432767 container attach 98c3b6996fb6f9de26369e7de551038ada37a01ab94058a8b1852d6b13c4e9d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rhodes, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:26:51 compute-0 stoic_rhodes[308504]: 167 167
Oct 11 04:26:51 compute-0 systemd[1]: libpod-98c3b6996fb6f9de26369e7de551038ada37a01ab94058a8b1852d6b13c4e9d0.scope: Deactivated successfully.
Oct 11 04:26:51 compute-0 podman[308488]: 2025-10-11 04:26:51.95152383 +0000 UTC m=+0.158830334 container died 98c3b6996fb6f9de26369e7de551038ada37a01ab94058a8b1852d6b13c4e9d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:26:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-b04a1ba80deb1b205e43c7d69eddd17d2819b464a0a0cb3b7aa7eb32b0b781ab-merged.mount: Deactivated successfully.
Oct 11 04:26:51 compute-0 podman[308488]: 2025-10-11 04:26:51.983560083 +0000 UTC m=+0.190866587 container remove 98c3b6996fb6f9de26369e7de551038ada37a01ab94058a8b1852d6b13c4e9d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:26:51 compute-0 systemd[1]: libpod-conmon-98c3b6996fb6f9de26369e7de551038ada37a01ab94058a8b1852d6b13c4e9d0.scope: Deactivated successfully.
Oct 11 04:26:52 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 305 active+clean; 202 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Oct 11 04:26:52 compute-0 podman[308528]: 2025-10-11 04:26:52.247531648 +0000 UTC m=+0.075484137 container create 9016e1dcb92f49762ac6f61cf7216acbc562c92a9ecbd56aed138f5647a1ab79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_blackburn, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 04:26:52 compute-0 systemd[1]: Started libpod-conmon-9016e1dcb92f49762ac6f61cf7216acbc562c92a9ecbd56aed138f5647a1ab79.scope.
Oct 11 04:26:52 compute-0 podman[308528]: 2025-10-11 04:26:52.216645578 +0000 UTC m=+0.044598117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a132af988a9b3dec8fb7525067c0c01ee8b3c5f5e0cea2bfb73611b80a31345/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a132af988a9b3dec8fb7525067c0c01ee8b3c5f5e0cea2bfb73611b80a31345/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a132af988a9b3dec8fb7525067c0c01ee8b3c5f5e0cea2bfb73611b80a31345/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a132af988a9b3dec8fb7525067c0c01ee8b3c5f5e0cea2bfb73611b80a31345/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:52 compute-0 podman[308528]: 2025-10-11 04:26:52.371881561 +0000 UTC m=+0.199834050 container init 9016e1dcb92f49762ac6f61cf7216acbc562c92a9ecbd56aed138f5647a1ab79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:26:52 compute-0 podman[308528]: 2025-10-11 04:26:52.38783056 +0000 UTC m=+0.215783049 container start 9016e1dcb92f49762ac6f61cf7216acbc562c92a9ecbd56aed138f5647a1ab79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_blackburn, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:26:52 compute-0 podman[308528]: 2025-10-11 04:26:52.392414749 +0000 UTC m=+0.220367238 container attach 9016e1dcb92f49762ac6f61cf7216acbc562c92a9ecbd56aed138f5647a1ab79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_blackburn, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 04:26:52 compute-0 nova_compute[259850]: 2025-10-11 04:26:52.619 2 DEBUG nova.network.neutron [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Updating instance_info_cache with network_info: [{"id": "aa158452-f9f5-45a1-9841-28136bfa13a6", "address": "fa:16:3e:7b:ce:34", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa158452-f9", "ovs_interfaceid": "aa158452-f9f5-45a1-9841-28136bfa13a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:26:52 compute-0 nova_compute[259850]: 2025-10-11 04:26:52.647 2 DEBUG oslo_concurrency.lockutils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Releasing lock "refresh_cache-116a010a-a523-4fa3-8dbc-de6caec760c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:26:52 compute-0 nova_compute[259850]: 2025-10-11 04:26:52.648 2 DEBUG nova.compute.manager [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Instance network_info: |[{"id": "aa158452-f9f5-45a1-9841-28136bfa13a6", "address": "fa:16:3e:7b:ce:34", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa158452-f9", "ovs_interfaceid": "aa158452-f9f5-45a1-9841-28136bfa13a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:26:52 compute-0 nova_compute[259850]: 2025-10-11 04:26:52.648 2 DEBUG oslo_concurrency.lockutils [req-b25f30a1-260a-4c84-9b1e-67e7310603e2 req-f2e266b0-5c5d-412a-b6d0-8486ad74a340 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-116a010a-a523-4fa3-8dbc-de6caec760c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:26:52 compute-0 nova_compute[259850]: 2025-10-11 04:26:52.649 2 DEBUG nova.network.neutron [req-b25f30a1-260a-4c84-9b1e-67e7310603e2 req-f2e266b0-5c5d-412a-b6d0-8486ad74a340 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Refreshing network info cache for port aa158452-f9f5-45a1-9841-28136bfa13a6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:26:52 compute-0 nova_compute[259850]: 2025-10-11 04:26:52.655 2 DEBUG nova.virt.libvirt.driver [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Start _get_guest_xml network_info=[{"id": "aa158452-f9f5-45a1-9841-28136bfa13a6", "address": "fa:16:3e:7b:ce:34", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa158452-f9", "ovs_interfaceid": "aa158452-f9f5-45a1-9841-28136bfa13a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-2c8cda8e-9e0b-4e1d-b6e8-e5638bef6ce5', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '2c8cda8e-9e0b-4e1d-b6e8-e5638bef6ce5', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '116a010a-a523-4fa3-8dbc-de6caec760c9', 'attached_at': '', 'detached_at': '', 'volume_id': '2c8cda8e-9e0b-4e1d-b6e8-e5638bef6ce5', 'serial': '2c8cda8e-9e0b-4e1d-b6e8-e5638bef6ce5'}, 'boot_index': 0, 'guest_format': None, 'attachment_id': '2440ea58-96fa-4193-a2d7-a4cded90dbdf', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:26:52 compute-0 nova_compute[259850]: 2025-10-11 04:26:52.663 2 WARNING nova.virt.libvirt.driver [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:26:52 compute-0 nova_compute[259850]: 2025-10-11 04:26:52.673 2 DEBUG nova.virt.libvirt.host [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:26:52 compute-0 nova_compute[259850]: 2025-10-11 04:26:52.674 2 DEBUG nova.virt.libvirt.host [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:26:52 compute-0 nova_compute[259850]: 2025-10-11 04:26:52.682 2 DEBUG nova.virt.libvirt.host [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:26:52 compute-0 nova_compute[259850]: 2025-10-11 04:26:52.683 2 DEBUG nova.virt.libvirt.host [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:26:52 compute-0 nova_compute[259850]: 2025-10-11 04:26:52.684 2 DEBUG nova.virt.libvirt.driver [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:26:52 compute-0 nova_compute[259850]: 2025-10-11 04:26:52.684 2 DEBUG nova.virt.hardware [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:26:52 compute-0 nova_compute[259850]: 2025-10-11 04:26:52.685 2 DEBUG nova.virt.hardware [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:26:52 compute-0 nova_compute[259850]: 2025-10-11 04:26:52.686 2 DEBUG nova.virt.hardware [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:26:52 compute-0 nova_compute[259850]: 2025-10-11 04:26:52.686 2 DEBUG nova.virt.hardware [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:26:52 compute-0 nova_compute[259850]: 2025-10-11 04:26:52.686 2 DEBUG nova.virt.hardware [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:26:52 compute-0 nova_compute[259850]: 2025-10-11 04:26:52.687 2 DEBUG nova.virt.hardware [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:26:52 compute-0 nova_compute[259850]: 2025-10-11 04:26:52.687 2 DEBUG nova.virt.hardware [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:26:52 compute-0 nova_compute[259850]: 2025-10-11 04:26:52.688 2 DEBUG nova.virt.hardware [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:26:52 compute-0 nova_compute[259850]: 2025-10-11 04:26:52.688 2 DEBUG nova.virt.hardware [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:26:52 compute-0 nova_compute[259850]: 2025-10-11 04:26:52.689 2 DEBUG nova.virt.hardware [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:26:52 compute-0 nova_compute[259850]: 2025-10-11 04:26:52.689 2 DEBUG nova.virt.hardware [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:26:52 compute-0 nova_compute[259850]: 2025-10-11 04:26:52.725 2 DEBUG nova.storage.rbd_utils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] rbd image 116a010a-a523-4fa3-8dbc-de6caec760c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:26:52 compute-0 nova_compute[259850]: 2025-10-11 04:26:52.731 2 DEBUG oslo_concurrency.processutils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]: {
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:     "0": [
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:         {
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "devices": [
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "/dev/loop3"
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             ],
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "lv_name": "ceph_lv0",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "lv_size": "21470642176",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "name": "ceph_lv0",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "tags": {
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.cluster_name": "ceph",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.crush_device_class": "",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.encrypted": "0",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.osd_id": "0",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.type": "block",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.vdo": "0"
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             },
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "type": "block",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "vg_name": "ceph_vg0"
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:         }
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:     ],
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:     "1": [
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:         {
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "devices": [
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "/dev/loop4"
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             ],
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "lv_name": "ceph_lv1",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "lv_size": "21470642176",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "name": "ceph_lv1",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "tags": {
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.cluster_name": "ceph",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.crush_device_class": "",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.encrypted": "0",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.osd_id": "1",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.type": "block",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.vdo": "0"
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             },
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "type": "block",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "vg_name": "ceph_vg1"
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:         }
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:     ],
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:     "2": [
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:         {
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "devices": [
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "/dev/loop5"
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             ],
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "lv_name": "ceph_lv2",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "lv_size": "21470642176",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "name": "ceph_lv2",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "tags": {
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.cluster_name": "ceph",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.crush_device_class": "",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.encrypted": "0",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.osd_id": "2",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.type": "block",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:                 "ceph.vdo": "0"
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             },
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "type": "block",
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:             "vg_name": "ceph_vg2"
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:         }
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]:     ]
Oct 11 04:26:53 compute-0 suspicious_blackburn[308544]: }
Oct 11 04:26:53 compute-0 ceph-mon[74273]: pgmap v1953: 305 pgs: 305 active+clean; 202 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Oct 11 04:26:53 compute-0 systemd[1]: libpod-9016e1dcb92f49762ac6f61cf7216acbc562c92a9ecbd56aed138f5647a1ab79.scope: Deactivated successfully.
Oct 11 04:26:53 compute-0 podman[308528]: 2025-10-11 04:26:53.189666096 +0000 UTC m=+1.017618555 container died 9016e1dcb92f49762ac6f61cf7216acbc562c92a9ecbd56aed138f5647a1ab79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 04:26:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:26:53 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2055942995' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:26:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a132af988a9b3dec8fb7525067c0c01ee8b3c5f5e0cea2bfb73611b80a31345-merged.mount: Deactivated successfully.
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.231 2 DEBUG oslo_concurrency.processutils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:26:53 compute-0 podman[308528]: 2025-10-11 04:26:53.249365267 +0000 UTC m=+1.077317726 container remove 9016e1dcb92f49762ac6f61cf7216acbc562c92a9ecbd56aed138f5647a1ab79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_blackburn, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:26:53 compute-0 systemd[1]: libpod-conmon-9016e1dcb92f49762ac6f61cf7216acbc562c92a9ecbd56aed138f5647a1ab79.scope: Deactivated successfully.
Oct 11 04:26:53 compute-0 sudo[308422]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.344 2 DEBUG os_brick.encryptors [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Using volume encryption metadata '{'encryption_key_id': '05562f17-7dd0-4288-ae40-8034558d64d0', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-2c8cda8e-9e0b-4e1d-b6e8-e5638bef6ce5', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '2c8cda8e-9e0b-4e1d-b6e8-e5638bef6ce5', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '116a010a-a523-4fa3-8dbc-de6caec760c9', 'attached_at': '', 'detached_at': '', 'volume_id': '2c8cda8e-9e0b-4e1d-b6e8-e5638bef6ce5', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.349 2 DEBUG barbicanclient.client [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.371 2 DEBUG barbicanclient.v1.secrets [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/05562f17-7dd0-4288-ae40-8034558d64d0 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.372 2 INFO barbicanclient.base [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/05562f17-7dd0-4288-ae40-8034558d64d0
Oct 11 04:26:53 compute-0 sudo[308604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:53 compute-0 sudo[308604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:53 compute-0 sudo[308604]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.394 2 DEBUG barbicanclient.client [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.395 2 INFO barbicanclient.base [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/05562f17-7dd0-4288-ae40-8034558d64d0
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.413 2 DEBUG barbicanclient.client [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.414 2 INFO barbicanclient.base [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/05562f17-7dd0-4288-ae40-8034558d64d0
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.433 2 DEBUG barbicanclient.client [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.434 2 INFO barbicanclient.base [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/05562f17-7dd0-4288-ae40-8034558d64d0
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.455 2 DEBUG barbicanclient.client [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.455 2 INFO barbicanclient.base [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/05562f17-7dd0-4288-ae40-8034558d64d0
Oct 11 04:26:53 compute-0 sudo[308629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:26:53 compute-0 sudo[308629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:53 compute-0 sudo[308629]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.476 2 DEBUG barbicanclient.client [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.478 2 INFO barbicanclient.base [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/05562f17-7dd0-4288-ae40-8034558d64d0
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.503 2 DEBUG barbicanclient.client [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.504 2 INFO barbicanclient.base [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/05562f17-7dd0-4288-ae40-8034558d64d0
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.525 2 DEBUG barbicanclient.client [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.525 2 INFO barbicanclient.base [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/05562f17-7dd0-4288-ae40-8034558d64d0
Oct 11 04:26:53 compute-0 sudo[308654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:53 compute-0 sudo[308654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:53 compute-0 sudo[308654]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.547 2 DEBUG barbicanclient.client [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.548 2 INFO barbicanclient.base [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/05562f17-7dd0-4288-ae40-8034558d64d0
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.574 2 DEBUG barbicanclient.client [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.574 2 INFO barbicanclient.base [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/05562f17-7dd0-4288-ae40-8034558d64d0
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.596 2 DEBUG barbicanclient.client [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.596 2 INFO barbicanclient.base [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/05562f17-7dd0-4288-ae40-8034558d64d0
Oct 11 04:26:53 compute-0 sudo[308679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 04:26:53 compute-0 sudo[308679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.616 2 DEBUG barbicanclient.client [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.617 2 INFO barbicanclient.base [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/05562f17-7dd0-4288-ae40-8034558d64d0
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.642 2 DEBUG barbicanclient.client [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.643 2 INFO barbicanclient.base [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/05562f17-7dd0-4288-ae40-8034558d64d0
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.668 2 DEBUG barbicanclient.client [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.668 2 INFO barbicanclient.base [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/05562f17-7dd0-4288-ae40-8034558d64d0
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.691 2 DEBUG barbicanclient.client [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.691 2 INFO barbicanclient.base [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/05562f17-7dd0-4288-ae40-8034558d64d0
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.718 2 DEBUG barbicanclient.client [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.719 2 DEBUG nova.virt.libvirt.host [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct 11 04:26:53 compute-0 nova_compute[259850]:   <usage type="volume">
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <volume>2c8cda8e-9e0b-4e1d-b6e8-e5638bef6ce5</volume>
Oct 11 04:26:53 compute-0 nova_compute[259850]:   </usage>
Oct 11 04:26:53 compute-0 nova_compute[259850]: </secret>
Oct 11 04:26:53 compute-0 nova_compute[259850]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.753 2 DEBUG nova.virt.libvirt.vif [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:26:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-787794379',display_name='tempest-TestEncryptedCinderVolumes-server-787794379',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-787794379',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3pFsCx6Lv4ZhABALE9kJlaC2VLcHHMajXk3FwO0YwDAD8GzEfOWx1nJYDa1BnjHeTckP7sy9/Wa8HAN31/eIMe4p7SlbrVdBBFvJpvxVBbmewtPKqpzKac1Jk+If2OOg==',key_name='tempest-TestEncryptedCinderVolumes-1373224468',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='226e6310b4ee4a68b552a6b3e940a458',ramdisk_id='',reservation_id='r-wwcbx9v9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1931311766',owner_user_name='tempest-TestEncryptedCinderVolumes-1931311766-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:26:48Z,user_data=None,user_id='7bf17f3eb8514499a54d67542db6b88a',uuid=116a010a-a523-4fa3-8dbc-de6caec760c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa158452-f9f5-45a1-9841-28136bfa13a6", "address": "fa:16:3e:7b:ce:34", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa158452-f9", "ovs_interfaceid": "aa158452-f9f5-45a1-9841-28136bfa13a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.754 2 DEBUG nova.network.os_vif_util [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Converting VIF {"id": "aa158452-f9f5-45a1-9841-28136bfa13a6", "address": "fa:16:3e:7b:ce:34", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa158452-f9", "ovs_interfaceid": "aa158452-f9f5-45a1-9841-28136bfa13a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.755 2 DEBUG nova.network.os_vif_util [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:ce:34,bridge_name='br-int',has_traffic_filtering=True,id=aa158452-f9f5-45a1-9841-28136bfa13a6,network=Network(61e3c4a7-2f2f-451f-b913-c2cdac8efdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa158452-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.757 2 DEBUG nova.objects.instance [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lazy-loading 'pci_devices' on Instance uuid 116a010a-a523-4fa3-8dbc-de6caec760c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.771 2 DEBUG nova.virt.libvirt.driver [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:26:53 compute-0 nova_compute[259850]:   <uuid>116a010a-a523-4fa3-8dbc-de6caec760c9</uuid>
Oct 11 04:26:53 compute-0 nova_compute[259850]:   <name>instance-0000001d</name>
Oct 11 04:26:53 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:26:53 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:26:53 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <nova:name>tempest-TestEncryptedCinderVolumes-server-787794379</nova:name>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:26:52</nova:creationTime>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:26:53 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:26:53 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:26:53 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:26:53 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:26:53 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:26:53 compute-0 nova_compute[259850]:         <nova:user uuid="7bf17f3eb8514499a54d67542db6b88a">tempest-TestEncryptedCinderVolumes-1931311766-project-member</nova:user>
Oct 11 04:26:53 compute-0 nova_compute[259850]:         <nova:project uuid="226e6310b4ee4a68b552a6b3e940a458">tempest-TestEncryptedCinderVolumes-1931311766</nova:project>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <nova:ports>
Oct 11 04:26:53 compute-0 nova_compute[259850]:         <nova:port uuid="aa158452-f9f5-45a1-9841-28136bfa13a6">
Oct 11 04:26:53 compute-0 nova_compute[259850]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:         </nova:port>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       </nova:ports>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:26:53 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:26:53 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <system>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <entry name="serial">116a010a-a523-4fa3-8dbc-de6caec760c9</entry>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <entry name="uuid">116a010a-a523-4fa3-8dbc-de6caec760c9</entry>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     </system>
Oct 11 04:26:53 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:26:53 compute-0 nova_compute[259850]:   <os>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:   </os>
Oct 11 04:26:53 compute-0 nova_compute[259850]:   <features>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:   </features>
Oct 11 04:26:53 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:26:53 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:26:53 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/116a010a-a523-4fa3-8dbc-de6caec760c9_disk.config">
Oct 11 04:26:53 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       </source>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:26:53 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <source protocol="rbd" name="volumes/volume-2c8cda8e-9e0b-4e1d-b6e8-e5638bef6ce5">
Oct 11 04:26:53 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       </source>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:26:53 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <serial>2c8cda8e-9e0b-4e1d-b6e8-e5638bef6ce5</serial>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <encryption format="luks">
Oct 11 04:26:53 compute-0 nova_compute[259850]:         <secret type="passphrase" uuid="f2112a55-b49d-4f67-8f28-4d09492fe3d0"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       </encryption>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <interface type="ethernet">
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <mac address="fa:16:3e:7b:ce:34"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <mtu size="1442"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <target dev="tapaa158452-f9"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     </interface>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/116a010a-a523-4fa3-8dbc-de6caec760c9/console.log" append="off"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <video>
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     </video>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:26:53 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:26:53 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:26:53 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:26:53 compute-0 nova_compute[259850]: </domain>
Oct 11 04:26:53 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.771 2 DEBUG nova.compute.manager [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Preparing to wait for external event network-vif-plugged-aa158452-f9f5-45a1-9841-28136bfa13a6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.772 2 DEBUG oslo_concurrency.lockutils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquiring lock "116a010a-a523-4fa3-8dbc-de6caec760c9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.772 2 DEBUG oslo_concurrency.lockutils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "116a010a-a523-4fa3-8dbc-de6caec760c9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.772 2 DEBUG oslo_concurrency.lockutils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "116a010a-a523-4fa3-8dbc-de6caec760c9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.773 2 DEBUG nova.virt.libvirt.vif [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:26:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-787794379',display_name='tempest-TestEncryptedCinderVolumes-server-787794379',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-787794379',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3pFsCx6Lv4ZhABALE9kJlaC2VLcHHMajXk3FwO0YwDAD8GzEfOWx1nJYDa1BnjHeTckP7sy9/Wa8HAN31/eIMe4p7SlbrVdBBFvJpvxVBbmewtPKqpzKac1Jk+If2OOg==',key_name='tempest-TestEncryptedCinderVolumes-1373224468',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='226e6310b4ee4a68b552a6b3e940a458',ramdisk_id='',reservation_id='r-wwcbx9v9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1931311766',owner_user_name='tempest-TestEncryptedCinderVolumes-1931311766-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:26:48Z,user_data=None,user_id='7bf17f3eb8514499a54d67542db6b88a',uuid=116a010a-a523-4fa3-8dbc-de6caec760c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa158452-f9f5-45a1-9841-28136bfa13a6", "address": "fa:16:3e:7b:ce:34", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa158452-f9", "ovs_interfaceid": "aa158452-f9f5-45a1-9841-28136bfa13a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.773 2 DEBUG nova.network.os_vif_util [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Converting VIF {"id": "aa158452-f9f5-45a1-9841-28136bfa13a6", "address": "fa:16:3e:7b:ce:34", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa158452-f9", "ovs_interfaceid": "aa158452-f9f5-45a1-9841-28136bfa13a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.774 2 DEBUG nova.network.os_vif_util [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:ce:34,bridge_name='br-int',has_traffic_filtering=True,id=aa158452-f9f5-45a1-9841-28136bfa13a6,network=Network(61e3c4a7-2f2f-451f-b913-c2cdac8efdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa158452-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.774 2 DEBUG os_vif [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:ce:34,bridge_name='br-int',has_traffic_filtering=True,id=aa158452-f9f5-45a1-9841-28136bfa13a6,network=Network(61e3c4a7-2f2f-451f-b913-c2cdac8efdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa158452-f9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.775 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.776 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.780 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa158452-f9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.780 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaa158452-f9, col_values=(('external_ids', {'iface-id': 'aa158452-f9f5-45a1-9841-28136bfa13a6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:ce:34', 'vm-uuid': '116a010a-a523-4fa3-8dbc-de6caec760c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:26:53 compute-0 NetworkManager[44920]: <info>  [1760156813.7858] manager: (tapaa158452-f9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/145)
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.797 2 INFO os_vif [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:ce:34,bridge_name='br-int',has_traffic_filtering=True,id=aa158452-f9f5-45a1-9841-28136bfa13a6,network=Network(61e3c4a7-2f2f-451f-b913-c2cdac8efdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa158452-f9')
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.838 2 DEBUG nova.virt.libvirt.driver [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.839 2 DEBUG nova.virt.libvirt.driver [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.839 2 DEBUG nova.virt.libvirt.driver [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] No VIF found with MAC fa:16:3e:7b:ce:34, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.839 2 INFO nova.virt.libvirt.driver [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Using config drive
Oct 11 04:26:53 compute-0 nova_compute[259850]: 2025-10-11 04:26:53.865 2 DEBUG nova.storage.rbd_utils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] rbd image 116a010a-a523-4fa3-8dbc-de6caec760c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:26:54 compute-0 nova_compute[259850]: 2025-10-11 04:26:54.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:26:54 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 202 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Oct 11 04:26:54 compute-0 podman[308760]: 2025-10-11 04:26:54.064851969 +0000 UTC m=+0.062485581 container create b92a8b32c38437f7b3ec3db98fab9ee6a84d7a0bf3c148b28682ec5ee52c9b27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mclaren, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:26:54 compute-0 nova_compute[259850]: 2025-10-11 04:26:54.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:54 compute-0 systemd[1]: Started libpod-conmon-b92a8b32c38437f7b3ec3db98fab9ee6a84d7a0bf3c148b28682ec5ee52c9b27.scope.
Oct 11 04:26:54 compute-0 podman[308760]: 2025-10-11 04:26:54.035033029 +0000 UTC m=+0.032666701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:54 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:54 compute-0 podman[308760]: 2025-10-11 04:26:54.157078046 +0000 UTC m=+0.154711628 container init b92a8b32c38437f7b3ec3db98fab9ee6a84d7a0bf3c148b28682ec5ee52c9b27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:26:54 compute-0 podman[308760]: 2025-10-11 04:26:54.168974992 +0000 UTC m=+0.166608594 container start b92a8b32c38437f7b3ec3db98fab9ee6a84d7a0bf3c148b28682ec5ee52c9b27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 04:26:54 compute-0 podman[308760]: 2025-10-11 04:26:54.172865501 +0000 UTC m=+0.170499103 container attach b92a8b32c38437f7b3ec3db98fab9ee6a84d7a0bf3c148b28682ec5ee52c9b27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mclaren, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:26:54 compute-0 clever_mclaren[308776]: 167 167
Oct 11 04:26:54 compute-0 systemd[1]: libpod-b92a8b32c38437f7b3ec3db98fab9ee6a84d7a0bf3c148b28682ec5ee52c9b27.scope: Deactivated successfully.
Oct 11 04:26:54 compute-0 podman[308760]: 2025-10-11 04:26:54.17745642 +0000 UTC m=+0.175090062 container died b92a8b32c38437f7b3ec3db98fab9ee6a84d7a0bf3c148b28682ec5ee52c9b27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mclaren, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:26:54 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2055942995' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:26:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f6fecb81c525088241b3265cd96a49aaf0c4d83063a061879d6453e15cde1f2-merged.mount: Deactivated successfully.
Oct 11 04:26:54 compute-0 podman[308760]: 2025-10-11 04:26:54.217496798 +0000 UTC m=+0.215130410 container remove b92a8b32c38437f7b3ec3db98fab9ee6a84d7a0bf3c148b28682ec5ee52c9b27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mclaren, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:26:54 compute-0 systemd[1]: libpod-conmon-b92a8b32c38437f7b3ec3db98fab9ee6a84d7a0bf3c148b28682ec5ee52c9b27.scope: Deactivated successfully.
Oct 11 04:26:54 compute-0 podman[308798]: 2025-10-11 04:26:54.446997793 +0000 UTC m=+0.063130619 container create 21fee900de7c20075fd8cf9e5905ca16c3ecbe7e03f2c084a37ea3c0c33da2e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcclintock, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:26:54 compute-0 systemd[1]: Started libpod-conmon-21fee900de7c20075fd8cf9e5905ca16c3ecbe7e03f2c084a37ea3c0c33da2e0.scope.
Oct 11 04:26:54 compute-0 podman[308798]: 2025-10-11 04:26:54.415901387 +0000 UTC m=+0.032034253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:54 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbcbf2f5001fd127aea4192a5455859ac04c30ff1d5258d36642c435f5a651ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbcbf2f5001fd127aea4192a5455859ac04c30ff1d5258d36642c435f5a651ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbcbf2f5001fd127aea4192a5455859ac04c30ff1d5258d36642c435f5a651ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbcbf2f5001fd127aea4192a5455859ac04c30ff1d5258d36642c435f5a651ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:54 compute-0 podman[308798]: 2025-10-11 04:26:54.559697128 +0000 UTC m=+0.175829994 container init 21fee900de7c20075fd8cf9e5905ca16c3ecbe7e03f2c084a37ea3c0c33da2e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 04:26:54 compute-0 podman[308798]: 2025-10-11 04:26:54.57504278 +0000 UTC m=+0.191175596 container start 21fee900de7c20075fd8cf9e5905ca16c3ecbe7e03f2c084a37ea3c0c33da2e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcclintock, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 04:26:54 compute-0 podman[308798]: 2025-10-11 04:26:54.578702693 +0000 UTC m=+0.194835519 container attach 21fee900de7c20075fd8cf9e5905ca16c3ecbe7e03f2c084a37ea3c0c33da2e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcclintock, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:26:54 compute-0 nova_compute[259850]: 2025-10-11 04:26:54.592 2 INFO nova.virt.libvirt.driver [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Creating config drive at /var/lib/nova/instances/116a010a-a523-4fa3-8dbc-de6caec760c9/disk.config
Oct 11 04:26:54 compute-0 nova_compute[259850]: 2025-10-11 04:26:54.598 2 DEBUG oslo_concurrency.processutils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/116a010a-a523-4fa3-8dbc-de6caec760c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyrazd4wd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:26:54 compute-0 nova_compute[259850]: 2025-10-11 04:26:54.636 2 DEBUG nova.network.neutron [req-b25f30a1-260a-4c84-9b1e-67e7310603e2 req-f2e266b0-5c5d-412a-b6d0-8486ad74a340 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Updated VIF entry in instance network info cache for port aa158452-f9f5-45a1-9841-28136bfa13a6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:26:54 compute-0 nova_compute[259850]: 2025-10-11 04:26:54.637 2 DEBUG nova.network.neutron [req-b25f30a1-260a-4c84-9b1e-67e7310603e2 req-f2e266b0-5c5d-412a-b6d0-8486ad74a340 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Updating instance_info_cache with network_info: [{"id": "aa158452-f9f5-45a1-9841-28136bfa13a6", "address": "fa:16:3e:7b:ce:34", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa158452-f9", "ovs_interfaceid": "aa158452-f9f5-45a1-9841-28136bfa13a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:26:54 compute-0 nova_compute[259850]: 2025-10-11 04:26:54.658 2 DEBUG oslo_concurrency.lockutils [req-b25f30a1-260a-4c84-9b1e-67e7310603e2 req-f2e266b0-5c5d-412a-b6d0-8486ad74a340 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-116a010a-a523-4fa3-8dbc-de6caec760c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:26:54 compute-0 nova_compute[259850]: 2025-10-11 04:26:54.748 2 DEBUG oslo_concurrency.processutils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/116a010a-a523-4fa3-8dbc-de6caec760c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyrazd4wd" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:26:54 compute-0 nova_compute[259850]: 2025-10-11 04:26:54.790 2 DEBUG nova.storage.rbd_utils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] rbd image 116a010a-a523-4fa3-8dbc-de6caec760c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:26:54 compute-0 nova_compute[259850]: 2025-10-11 04:26:54.795 2 DEBUG oslo_concurrency.processutils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/116a010a-a523-4fa3-8dbc-de6caec760c9/disk.config 116a010a-a523-4fa3-8dbc-de6caec760c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:26:54 compute-0 nova_compute[259850]: 2025-10-11 04:26:54.963 2 DEBUG oslo_concurrency.processutils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/116a010a-a523-4fa3-8dbc-de6caec760c9/disk.config 116a010a-a523-4fa3-8dbc-de6caec760c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:26:54 compute-0 nova_compute[259850]: 2025-10-11 04:26:54.965 2 INFO nova.virt.libvirt.driver [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Deleting local config drive /var/lib/nova/instances/116a010a-a523-4fa3-8dbc-de6caec760c9/disk.config because it was imported into RBD.
Oct 11 04:26:55 compute-0 kernel: tapaa158452-f9: entered promiscuous mode
Oct 11 04:26:55 compute-0 NetworkManager[44920]: <info>  [1760156815.0312] manager: (tapaa158452-f9): new Tun device (/org/freedesktop/NetworkManager/Devices/146)
Oct 11 04:26:55 compute-0 ovn_controller[152025]: 2025-10-11T04:26:55Z|00283|binding|INFO|Claiming lport aa158452-f9f5-45a1-9841-28136bfa13a6 for this chassis.
Oct 11 04:26:55 compute-0 ovn_controller[152025]: 2025-10-11T04:26:55Z|00284|binding|INFO|aa158452-f9f5-45a1-9841-28136bfa13a6: Claiming fa:16:3e:7b:ce:34 10.100.0.13
Oct 11 04:26:55 compute-0 nova_compute[259850]: 2025-10-11 04:26:55.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:55.046 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:ce:34 10.100.0.13'], port_security=['fa:16:3e:7b:ce:34 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '116a010a-a523-4fa3-8dbc-de6caec760c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '226e6310b4ee4a68b552a6b3e940a458', 'neutron:revision_number': '2', 'neutron:security_group_ids': '77d1d83e-ff49-437a-8a94-baa66143ce2b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17f237ce-6320-4c27-9970-fd94aa8457a3, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=aa158452-f9f5-45a1-9841-28136bfa13a6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:55.049 161902 INFO neutron.agent.ovn.metadata.agent [-] Port aa158452-f9f5-45a1-9841-28136bfa13a6 in datapath 61e3c4a7-2f2f-451f-b913-c2cdac8efdf3 bound to our chassis
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:55.051 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 61e3c4a7-2f2f-451f-b913-c2cdac8efdf3
Oct 11 04:26:55 compute-0 ovn_controller[152025]: 2025-10-11T04:26:55Z|00285|binding|INFO|Setting lport aa158452-f9f5-45a1-9841-28136bfa13a6 ovn-installed in OVS
Oct 11 04:26:55 compute-0 ovn_controller[152025]: 2025-10-11T04:26:55Z|00286|binding|INFO|Setting lport aa158452-f9f5-45a1-9841-28136bfa13a6 up in Southbound
Oct 11 04:26:55 compute-0 nova_compute[259850]: 2025-10-11 04:26:55.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:55.063 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[f29c4c81-c250-41a9-a220-0654e6cba79f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:55.064 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap61e3c4a7-21 in ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 04:26:55 compute-0 systemd-machined[214869]: New machine qemu-29-instance-0000001d.
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:55.066 267637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap61e3c4a7-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:55.067 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[a81fda7d-851b-42d3-84b0-aa27590dce47]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:55.068 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[9ee062f4-ec6c-4056-871c-6ca91f33c279]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:55.081 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[450a634b-e0f4-4161-8d40-eb7d1f00990a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:55 compute-0 systemd[1]: Started Virtual Machine qemu-29-instance-0000001d.
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:55.105 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[aa88949c-4886-4d4a-b2e7-636b935df5fa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:55 compute-0 systemd-udevd[308882]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:26:55 compute-0 NetworkManager[44920]: <info>  [1760156815.1488] device (tapaa158452-f9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:55.149 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[297bdad0-7e16-4e27-a012-38210d571a1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:55 compute-0 NetworkManager[44920]: <info>  [1760156815.1514] device (tapaa158452-f9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:55.154 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[eed52cdc-a85b-43e9-8a88-85d09fcbf072]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e447 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:26:55 compute-0 NetworkManager[44920]: <info>  [1760156815.1626] manager: (tap61e3c4a7-20): new Veth device (/org/freedesktop/NetworkManager/Devices/147)
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:26:55.176231) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156815176296, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1143, "num_deletes": 250, "total_data_size": 1598900, "memory_usage": 1620416, "flush_reason": "Manual Compaction"}
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156815183944, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 965266, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38806, "largest_seqno": 39948, "table_properties": {"data_size": 960988, "index_size": 1802, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11646, "raw_average_key_size": 20, "raw_value_size": 951517, "raw_average_value_size": 1702, "num_data_blocks": 81, "num_entries": 559, "num_filter_entries": 559, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156709, "oldest_key_time": 1760156709, "file_creation_time": 1760156815, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 7749 microseconds, and 3203 cpu microseconds.
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:26:55.183985) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 965266 bytes OK
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:26:55.184004) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:26:55.186202) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:26:55.186226) EVENT_LOG_v1 {"time_micros": 1760156815186220, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:26:55.186245) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 1593613, prev total WAL file size 1593613, number of live WAL files 2.
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:26:55.189360) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323532' seq:72057594037927935, type:22 .. '6D6772737461740031353033' seq:0, type:0; will stop at (end)
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(942KB)], [80(11MB)]
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156815189405, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 12874397, "oldest_snapshot_seqno": -1}
Oct 11 04:26:55 compute-0 ceph-mon[74273]: pgmap v1954: 305 pgs: 305 active+clean; 202 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:55.191 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[813e0d34-1ea6-4234-9606-a9584a3942b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:55.195 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[1ef9f8cf-0116-4bad-b62f-af0d291ef21f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:55 compute-0 NetworkManager[44920]: <info>  [1760156815.2165] device (tap61e3c4a7-20): carrier: link connected
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:55.221 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[98cc9f70-f64a-4465-858f-212ef1ee230c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:55.236 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[c305883d-11fd-4a17-838e-460496b522d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61e3c4a7-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:30:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520025, 'reachable_time': 41992, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308905, 'error': None, 'target': 'ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 7070 keys, 10187799 bytes, temperature: kUnknown
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156815243453, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 10187799, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10137135, "index_size": 31864, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17733, "raw_key_size": 177819, "raw_average_key_size": 25, "raw_value_size": 10007089, "raw_average_value_size": 1415, "num_data_blocks": 1273, "num_entries": 7070, "num_filter_entries": 7070, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153731, "oldest_key_time": 0, "file_creation_time": 1760156815, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:26:55.243868) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 10187799 bytes
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:26:55.245503) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 237.0 rd, 187.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 11.4 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(23.9) write-amplify(10.6) OK, records in: 7542, records dropped: 472 output_compression: NoCompression
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:26:55.245547) EVENT_LOG_v1 {"time_micros": 1760156815245539, "job": 46, "event": "compaction_finished", "compaction_time_micros": 54315, "compaction_time_cpu_micros": 32417, "output_level": 6, "num_output_files": 1, "total_output_size": 10187799, "num_input_records": 7542, "num_output_records": 7070, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156815245800, "job": 46, "event": "table_file_deletion", "file_number": 82}
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156815247502, "job": 46, "event": "table_file_deletion", "file_number": 80}
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:26:55.189284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:26:55.247541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:26:55.247545) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:26:55.247546) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:26:55.247548) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:26:55 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:26:55.247549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:55.252 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[8e2023d4-ade6-4e4c-8dfd-1fc3ef94f244]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1d:3090'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520025, 'tstamp': 520025}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308906, 'error': None, 'target': 'ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:55.268 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[8836f52d-eb2f-4892-b887-52ef6745362b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61e3c4a7-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:30:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520025, 'reachable_time': 41992, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308909, 'error': None, 'target': 'ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:55.298 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[09045610-a556-4468-abe6-f1e13e679805]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:55.351 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[b4e9d967-8b35-4e62-af9f-3087e17b5de6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:55.352 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61e3c4a7-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:55.352 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:55.353 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61e3c4a7-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:26:55 compute-0 nova_compute[259850]: 2025-10-11 04:26:55.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:55 compute-0 kernel: tap61e3c4a7-20: entered promiscuous mode
Oct 11 04:26:55 compute-0 NetworkManager[44920]: <info>  [1760156815.3551] manager: (tap61e3c4a7-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/148)
Oct 11 04:26:55 compute-0 nova_compute[259850]: 2025-10-11 04:26:55.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:55.357 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap61e3c4a7-20, col_values=(('external_ids', {'iface-id': 'd6a2f98f-398c-4cad-9cd4-adac499bc3d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:26:55 compute-0 nova_compute[259850]: 2025-10-11 04:26:55.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:55 compute-0 ovn_controller[152025]: 2025-10-11T04:26:55Z|00287|binding|INFO|Releasing lport d6a2f98f-398c-4cad-9cd4-adac499bc3d4 from this chassis (sb_readonly=0)
Oct 11 04:26:55 compute-0 nova_compute[259850]: 2025-10-11 04:26:55.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:55.372 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/61e3c4a7-2f2f-451f-b913-c2cdac8efdf3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/61e3c4a7-2f2f-451f-b913-c2cdac8efdf3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:55.373 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[a6f95669-41b5-4090-9e3b-7162554a0a69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:55.374 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: global
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]:     log         /dev/log local0 debug
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]:     log-tag     haproxy-metadata-proxy-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]:     user        root
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]:     group       root
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]:     maxconn     1024
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]:     pidfile     /var/lib/neutron/external/pids/61e3c4a7-2f2f-451f-b913-c2cdac8efdf3.pid.haproxy
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]:     daemon
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: defaults
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]:     log global
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]:     mode http
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]:     option httplog
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]:     option dontlognull
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]:     option http-server-close
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]:     option forwardfor
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]:     retries                 3
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]:     timeout http-request    30s
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]:     timeout connect         30s
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]:     timeout client          32s
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]:     timeout server          32s
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]:     timeout http-keep-alive 30s
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: listen listener
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]:     bind 169.254.169.254:80
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]:     http-request add-header X-OVN-Network-ID 61e3c4a7-2f2f-451f-b913-c2cdac8efdf3
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 04:26:55 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:26:55.374 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3', 'env', 'PROCESS_TAG=haproxy-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/61e3c4a7-2f2f-451f-b913-c2cdac8efdf3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 04:26:55 compute-0 nova_compute[259850]: 2025-10-11 04:26:55.405 2 DEBUG nova.compute.manager [req-c60b0ef5-45b0-44d0-ab7c-318963909d9f req-9f0659e5-c1ca-48f7-9a80-2c60f3bb5402 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Received event network-vif-plugged-aa158452-f9f5-45a1-9841-28136bfa13a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:26:55 compute-0 nova_compute[259850]: 2025-10-11 04:26:55.405 2 DEBUG oslo_concurrency.lockutils [req-c60b0ef5-45b0-44d0-ab7c-318963909d9f req-9f0659e5-c1ca-48f7-9a80-2c60f3bb5402 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "116a010a-a523-4fa3-8dbc-de6caec760c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:26:55 compute-0 nova_compute[259850]: 2025-10-11 04:26:55.405 2 DEBUG oslo_concurrency.lockutils [req-c60b0ef5-45b0-44d0-ab7c-318963909d9f req-9f0659e5-c1ca-48f7-9a80-2c60f3bb5402 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "116a010a-a523-4fa3-8dbc-de6caec760c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:26:55 compute-0 nova_compute[259850]: 2025-10-11 04:26:55.406 2 DEBUG oslo_concurrency.lockutils [req-c60b0ef5-45b0-44d0-ab7c-318963909d9f req-9f0659e5-c1ca-48f7-9a80-2c60f3bb5402 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "116a010a-a523-4fa3-8dbc-de6caec760c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:26:55 compute-0 nova_compute[259850]: 2025-10-11 04:26:55.406 2 DEBUG nova.compute.manager [req-c60b0ef5-45b0-44d0-ab7c-318963909d9f req-9f0659e5-c1ca-48f7-9a80-2c60f3bb5402 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Processing event network-vif-plugged-aa158452-f9f5-45a1-9841-28136bfa13a6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:26:55 compute-0 serene_mcclintock[308814]: {
Oct 11 04:26:55 compute-0 serene_mcclintock[308814]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 04:26:55 compute-0 serene_mcclintock[308814]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:26:55 compute-0 serene_mcclintock[308814]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:26:55 compute-0 serene_mcclintock[308814]:         "osd_id": 1,
Oct 11 04:26:55 compute-0 serene_mcclintock[308814]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:26:55 compute-0 serene_mcclintock[308814]:         "type": "bluestore"
Oct 11 04:26:55 compute-0 serene_mcclintock[308814]:     },
Oct 11 04:26:55 compute-0 serene_mcclintock[308814]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 04:26:55 compute-0 serene_mcclintock[308814]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:26:55 compute-0 serene_mcclintock[308814]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:26:55 compute-0 serene_mcclintock[308814]:         "osd_id": 2,
Oct 11 04:26:55 compute-0 serene_mcclintock[308814]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:26:55 compute-0 serene_mcclintock[308814]:         "type": "bluestore"
Oct 11 04:26:55 compute-0 serene_mcclintock[308814]:     },
Oct 11 04:26:55 compute-0 serene_mcclintock[308814]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 04:26:55 compute-0 serene_mcclintock[308814]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:26:55 compute-0 serene_mcclintock[308814]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:26:55 compute-0 serene_mcclintock[308814]:         "osd_id": 0,
Oct 11 04:26:55 compute-0 serene_mcclintock[308814]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:26:55 compute-0 serene_mcclintock[308814]:         "type": "bluestore"
Oct 11 04:26:55 compute-0 serene_mcclintock[308814]:     }
Oct 11 04:26:55 compute-0 serene_mcclintock[308814]: }
Oct 11 04:26:55 compute-0 systemd[1]: libpod-21fee900de7c20075fd8cf9e5905ca16c3ecbe7e03f2c084a37ea3c0c33da2e0.scope: Deactivated successfully.
Oct 11 04:26:55 compute-0 systemd[1]: libpod-21fee900de7c20075fd8cf9e5905ca16c3ecbe7e03f2c084a37ea3c0c33da2e0.scope: Consumed 1.018s CPU time.
Oct 11 04:26:55 compute-0 podman[308798]: 2025-10-11 04:26:55.631060076 +0000 UTC m=+1.247192862 container died 21fee900de7c20075fd8cf9e5905ca16c3ecbe7e03f2c084a37ea3c0c33da2e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcclintock, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 04:26:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbcbf2f5001fd127aea4192a5455859ac04c30ff1d5258d36642c435f5a651ae-merged.mount: Deactivated successfully.
Oct 11 04:26:55 compute-0 podman[308798]: 2025-10-11 04:26:55.709175606 +0000 UTC m=+1.325308402 container remove 21fee900de7c20075fd8cf9e5905ca16c3ecbe7e03f2c084a37ea3c0c33da2e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 04:26:55 compute-0 systemd[1]: libpod-conmon-21fee900de7c20075fd8cf9e5905ca16c3ecbe7e03f2c084a37ea3c0c33da2e0.scope: Deactivated successfully.
Oct 11 04:26:55 compute-0 sudo[308679]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:26:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:26:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:26:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:26:55 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 06177141-f012-4192-8ccb-2d7fe9c260ca does not exist
Oct 11 04:26:55 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 5009a34f-9398-4d90-89e3-b1240a3c3ce2 does not exist
Oct 11 04:26:55 compute-0 podman[309009]: 2025-10-11 04:26:55.763569348 +0000 UTC m=+0.056742449 container create 128d27466b7ceb10c4cec5874e46e51523f9b2dda99d2d89f2c400d84c4c4ac3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:26:55 compute-0 systemd[1]: Started libpod-conmon-128d27466b7ceb10c4cec5874e46e51523f9b2dda99d2d89f2c400d84c4c4ac3.scope.
Oct 11 04:26:55 compute-0 sudo[309022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:26:55 compute-0 sudo[309022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:55 compute-0 sudo[309022]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:55 compute-0 podman[309009]: 2025-10-11 04:26:55.736403153 +0000 UTC m=+0.029576264 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 04:26:55 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3ac786154799b719da6466dee7a8091d1fe7dee49b7104de01c9681461d0fee/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:55 compute-0 podman[309009]: 2025-10-11 04:26:55.8562915 +0000 UTC m=+0.149464621 container init 128d27466b7ceb10c4cec5874e46e51523f9b2dda99d2d89f2c400d84c4c4ac3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 04:26:55 compute-0 podman[309009]: 2025-10-11 04:26:55.86266672 +0000 UTC m=+0.155839821 container start 128d27466b7ceb10c4cec5874e46e51523f9b2dda99d2d89f2c400d84c4c4ac3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:26:55 compute-0 sudo[309052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:26:55 compute-0 sudo[309052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:26:55 compute-0 sudo[309052]: pam_unix(sudo:session): session closed for user root
Oct 11 04:26:55 compute-0 neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3[309048]: [NOTICE]   (309076) : New worker (309080) forked
Oct 11 04:26:55 compute-0 neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3[309048]: [NOTICE]   (309076) : Loading success.
Oct 11 04:26:56 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 202 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 3.8 MiB/s wr, 14 op/s
Oct 11 04:26:56 compute-0 nova_compute[259850]: 2025-10-11 04:26:56.074 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:26:56 compute-0 nova_compute[259850]: 2025-10-11 04:26:56.075 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 11 04:26:56 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:26:56 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:26:56 compute-0 ceph-mon[74273]: pgmap v1955: 305 pgs: 305 active+clean; 202 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 3.8 MiB/s wr, 14 op/s
Oct 11 04:26:57 compute-0 nova_compute[259850]: 2025-10-11 04:26:57.467 2 DEBUG nova.compute.manager [req-3cd30499-4c0c-4052-8ab8-ec213609500f req-4b8352f1-06d7-43ae-98a3-b83572eec114 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Received event network-vif-plugged-aa158452-f9f5-45a1-9841-28136bfa13a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:26:57 compute-0 nova_compute[259850]: 2025-10-11 04:26:57.468 2 DEBUG oslo_concurrency.lockutils [req-3cd30499-4c0c-4052-8ab8-ec213609500f req-4b8352f1-06d7-43ae-98a3-b83572eec114 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "116a010a-a523-4fa3-8dbc-de6caec760c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:26:57 compute-0 nova_compute[259850]: 2025-10-11 04:26:57.468 2 DEBUG oslo_concurrency.lockutils [req-3cd30499-4c0c-4052-8ab8-ec213609500f req-4b8352f1-06d7-43ae-98a3-b83572eec114 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "116a010a-a523-4fa3-8dbc-de6caec760c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:26:57 compute-0 nova_compute[259850]: 2025-10-11 04:26:57.469 2 DEBUG oslo_concurrency.lockutils [req-3cd30499-4c0c-4052-8ab8-ec213609500f req-4b8352f1-06d7-43ae-98a3-b83572eec114 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "116a010a-a523-4fa3-8dbc-de6caec760c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:26:57 compute-0 nova_compute[259850]: 2025-10-11 04:26:57.469 2 DEBUG nova.compute.manager [req-3cd30499-4c0c-4052-8ab8-ec213609500f req-4b8352f1-06d7-43ae-98a3-b83572eec114 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] No waiting events found dispatching network-vif-plugged-aa158452-f9f5-45a1-9841-28136bfa13a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:26:57 compute-0 nova_compute[259850]: 2025-10-11 04:26:57.470 2 WARNING nova.compute.manager [req-3cd30499-4c0c-4052-8ab8-ec213609500f req-4b8352f1-06d7-43ae-98a3-b83572eec114 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Received unexpected event network-vif-plugged-aa158452-f9f5-45a1-9841-28136bfa13a6 for instance with vm_state building and task_state spawning.
Oct 11 04:26:58 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1956: 305 pgs: 305 active+clean; 202 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 3.8 MiB/s wr, 23 op/s
Oct 11 04:26:58 compute-0 nova_compute[259850]: 2025-10-11 04:26:58.286 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156818.285558, 116a010a-a523-4fa3-8dbc-de6caec760c9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:26:58 compute-0 nova_compute[259850]: 2025-10-11 04:26:58.287 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] VM Started (Lifecycle Event)
Oct 11 04:26:58 compute-0 nova_compute[259850]: 2025-10-11 04:26:58.290 2 DEBUG nova.compute.manager [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:26:58 compute-0 nova_compute[259850]: 2025-10-11 04:26:58.293 2 DEBUG nova.virt.libvirt.driver [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:26:58 compute-0 nova_compute[259850]: 2025-10-11 04:26:58.300 2 INFO nova.virt.libvirt.driver [-] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Instance spawned successfully.
Oct 11 04:26:58 compute-0 nova_compute[259850]: 2025-10-11 04:26:58.301 2 DEBUG nova.virt.libvirt.driver [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:26:58 compute-0 nova_compute[259850]: 2025-10-11 04:26:58.309 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:26:58 compute-0 nova_compute[259850]: 2025-10-11 04:26:58.318 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:26:58 compute-0 nova_compute[259850]: 2025-10-11 04:26:58.328 2 DEBUG nova.virt.libvirt.driver [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:26:58 compute-0 nova_compute[259850]: 2025-10-11 04:26:58.329 2 DEBUG nova.virt.libvirt.driver [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:26:58 compute-0 nova_compute[259850]: 2025-10-11 04:26:58.330 2 DEBUG nova.virt.libvirt.driver [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:26:58 compute-0 nova_compute[259850]: 2025-10-11 04:26:58.331 2 DEBUG nova.virt.libvirt.driver [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:26:58 compute-0 nova_compute[259850]: 2025-10-11 04:26:58.332 2 DEBUG nova.virt.libvirt.driver [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:26:58 compute-0 nova_compute[259850]: 2025-10-11 04:26:58.333 2 DEBUG nova.virt.libvirt.driver [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:26:58 compute-0 nova_compute[259850]: 2025-10-11 04:26:58.338 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:26:58 compute-0 nova_compute[259850]: 2025-10-11 04:26:58.338 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156818.2856688, 116a010a-a523-4fa3-8dbc-de6caec760c9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:26:58 compute-0 nova_compute[259850]: 2025-10-11 04:26:58.339 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] VM Paused (Lifecycle Event)
Oct 11 04:26:58 compute-0 nova_compute[259850]: 2025-10-11 04:26:58.372 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:26:58 compute-0 nova_compute[259850]: 2025-10-11 04:26:58.375 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156818.2932992, 116a010a-a523-4fa3-8dbc-de6caec760c9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:26:58 compute-0 nova_compute[259850]: 2025-10-11 04:26:58.375 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] VM Resumed (Lifecycle Event)
Oct 11 04:26:58 compute-0 nova_compute[259850]: 2025-10-11 04:26:58.388 2 INFO nova.compute.manager [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Took 9.08 seconds to spawn the instance on the hypervisor.
Oct 11 04:26:58 compute-0 nova_compute[259850]: 2025-10-11 04:26:58.389 2 DEBUG nova.compute.manager [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:26:58 compute-0 nova_compute[259850]: 2025-10-11 04:26:58.398 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:26:58 compute-0 nova_compute[259850]: 2025-10-11 04:26:58.401 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:26:58 compute-0 nova_compute[259850]: 2025-10-11 04:26:58.429 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:26:58 compute-0 nova_compute[259850]: 2025-10-11 04:26:58.453 2 INFO nova.compute.manager [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Took 11.16 seconds to build instance.
Oct 11 04:26:58 compute-0 nova_compute[259850]: 2025-10-11 04:26:58.470 2 DEBUG oslo_concurrency.lockutils [None req-8616c69a-e994-4e15-bd76-7208f7f58907 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "116a010a-a523-4fa3-8dbc-de6caec760c9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.455s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:26:58 compute-0 nova_compute[259850]: 2025-10-11 04:26:58.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:59 compute-0 nova_compute[259850]: 2025-10-11 04:26:59.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:26:59 compute-0 ceph-mon[74273]: pgmap v1956: 305 pgs: 305 active+clean; 202 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 3.8 MiB/s wr, 23 op/s
Oct 11 04:26:59 compute-0 podman[309095]: 2025-10-11 04:26:59.363994294 +0000 UTC m=+0.072731860 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Oct 11 04:26:59 compute-0 podman[309096]: 2025-10-11 04:26:59.393519986 +0000 UTC m=+0.099030961 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 11 04:27:00 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 305 active+clean; 202 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 11 04:27:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e447 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:27:01 compute-0 ceph-mon[74273]: pgmap v1957: 305 pgs: 305 active+clean; 202 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 11 04:27:01 compute-0 nova_compute[259850]: 2025-10-11 04:27:01.706 2 DEBUG nova.compute.manager [req-bcf82156-a3dd-4c22-8150-f548503fdec2 req-8bd7251c-617e-48e1-860c-17dfe1642b54 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Received event network-changed-aa158452-f9f5-45a1-9841-28136bfa13a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:27:01 compute-0 nova_compute[259850]: 2025-10-11 04:27:01.706 2 DEBUG nova.compute.manager [req-bcf82156-a3dd-4c22-8150-f548503fdec2 req-8bd7251c-617e-48e1-860c-17dfe1642b54 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Refreshing instance network info cache due to event network-changed-aa158452-f9f5-45a1-9841-28136bfa13a6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:27:01 compute-0 nova_compute[259850]: 2025-10-11 04:27:01.707 2 DEBUG oslo_concurrency.lockutils [req-bcf82156-a3dd-4c22-8150-f548503fdec2 req-8bd7251c-617e-48e1-860c-17dfe1642b54 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-116a010a-a523-4fa3-8dbc-de6caec760c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:27:01 compute-0 nova_compute[259850]: 2025-10-11 04:27:01.707 2 DEBUG oslo_concurrency.lockutils [req-bcf82156-a3dd-4c22-8150-f548503fdec2 req-8bd7251c-617e-48e1-860c-17dfe1642b54 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-116a010a-a523-4fa3-8dbc-de6caec760c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:27:01 compute-0 nova_compute[259850]: 2025-10-11 04:27:01.707 2 DEBUG nova.network.neutron [req-bcf82156-a3dd-4c22-8150-f548503fdec2 req-8bd7251c-617e-48e1-860c-17dfe1642b54 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Refreshing network info cache for port aa158452-f9f5-45a1-9841-28136bfa13a6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:27:02 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 202 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 11 04:27:02 compute-0 nova_compute[259850]: 2025-10-11 04:27:02.584 2 DEBUG nova.network.neutron [req-bcf82156-a3dd-4c22-8150-f548503fdec2 req-8bd7251c-617e-48e1-860c-17dfe1642b54 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Updated VIF entry in instance network info cache for port aa158452-f9f5-45a1-9841-28136bfa13a6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:27:02 compute-0 nova_compute[259850]: 2025-10-11 04:27:02.585 2 DEBUG nova.network.neutron [req-bcf82156-a3dd-4c22-8150-f548503fdec2 req-8bd7251c-617e-48e1-860c-17dfe1642b54 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Updating instance_info_cache with network_info: [{"id": "aa158452-f9f5-45a1-9841-28136bfa13a6", "address": "fa:16:3e:7b:ce:34", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa158452-f9", "ovs_interfaceid": "aa158452-f9f5-45a1-9841-28136bfa13a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:27:02 compute-0 nova_compute[259850]: 2025-10-11 04:27:02.605 2 DEBUG oslo_concurrency.lockutils [req-bcf82156-a3dd-4c22-8150-f548503fdec2 req-8bd7251c-617e-48e1-860c-17dfe1642b54 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-116a010a-a523-4fa3-8dbc-de6caec760c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:27:03 compute-0 ceph-mon[74273]: pgmap v1958: 305 pgs: 305 active+clean; 202 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 11 04:27:03 compute-0 nova_compute[259850]: 2025-10-11 04:27:03.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:04 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 305 active+clean; 202 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 04:27:04 compute-0 nova_compute[259850]: 2025-10-11 04:27:04.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:05 compute-0 ceph-mon[74273]: pgmap v1959: 305 pgs: 305 active+clean; 202 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 04:27:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e447 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:27:06 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 305 active+clean; 202 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 04:27:06 compute-0 nova_compute[259850]: 2025-10-11 04:27:06.429 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:27:06 compute-0 nova_compute[259850]: 2025-10-11 04:27:06.455 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Triggering sync for uuid 116a010a-a523-4fa3-8dbc-de6caec760c9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 04:27:06 compute-0 nova_compute[259850]: 2025-10-11 04:27:06.457 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "116a010a-a523-4fa3-8dbc-de6caec760c9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:27:06 compute-0 nova_compute[259850]: 2025-10-11 04:27:06.457 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "116a010a-a523-4fa3-8dbc-de6caec760c9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:27:06 compute-0 nova_compute[259850]: 2025-10-11 04:27:06.486 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "116a010a-a523-4fa3-8dbc-de6caec760c9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:27:07 compute-0 ceph-mon[74273]: pgmap v1960: 305 pgs: 305 active+clean; 202 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 04:27:08 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 202 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 04:27:08 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct 11 04:27:08 compute-0 nova_compute[259850]: 2025-10-11 04:27:08.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:09 compute-0 nova_compute[259850]: 2025-10-11 04:27:09.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:09 compute-0 ovn_controller[152025]: 2025-10-11T04:27:09Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7b:ce:34 10.100.0.13
Oct 11 04:27:09 compute-0 ovn_controller[152025]: 2025-10-11T04:27:09Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7b:ce:34 10.100.0.13
Oct 11 04:27:09 compute-0 ceph-mon[74273]: pgmap v1961: 305 pgs: 305 active+clean; 202 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 04:27:10 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1962: 305 pgs: 305 active+clean; 202 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 65 op/s
Oct 11 04:27:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e447 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:27:10 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Oct 11 04:27:11 compute-0 ceph-mon[74273]: pgmap v1962: 305 pgs: 305 active+clean; 202 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 65 op/s
Oct 11 04:27:12 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 202 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 11 04:27:13 compute-0 ceph-mon[74273]: pgmap v1963: 305 pgs: 305 active+clean; 202 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 11 04:27:13 compute-0 nova_compute[259850]: 2025-10-11 04:27:13.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:14 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.8 MiB/s wr, 138 op/s
Oct 11 04:27:14 compute-0 nova_compute[259850]: 2025-10-11 04:27:14.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:14 compute-0 podman[309137]: 2025-10-11 04:27:14.446834402 +0000 UTC m=+0.142735631 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 11 04:27:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e447 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:27:15 compute-0 ceph-mon[74273]: pgmap v1964: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.8 MiB/s wr, 138 op/s
Oct 11 04:27:16 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1965: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 522 KiB/s rd, 5.8 MiB/s wr, 74 op/s
Oct 11 04:27:16 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:27:16 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 31K writes, 117K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s
                                           Cumulative WAL: 31K writes, 11K syncs, 2.63 writes per sync, written: 0.09 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5155 writes, 16K keys, 5155 commit groups, 1.0 writes per commit group, ingest: 18.86 MB, 0.03 MB/s
                                           Interval WAL: 5155 writes, 2191 syncs, 2.35 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 04:27:17 compute-0 ceph-mon[74273]: pgmap v1965: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 522 KiB/s rd, 5.8 MiB/s wr, 74 op/s
Oct 11 04:27:17 compute-0 nova_compute[259850]: 2025-10-11 04:27:17.449 2 DEBUG oslo_concurrency.lockutils [None req-c17d15aa-7781-41b3-a3c9-bcc45fa0833d 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquiring lock "116a010a-a523-4fa3-8dbc-de6caec760c9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:27:17 compute-0 nova_compute[259850]: 2025-10-11 04:27:17.450 2 DEBUG oslo_concurrency.lockutils [None req-c17d15aa-7781-41b3-a3c9-bcc45fa0833d 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "116a010a-a523-4fa3-8dbc-de6caec760c9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:27:17 compute-0 nova_compute[259850]: 2025-10-11 04:27:17.450 2 DEBUG oslo_concurrency.lockutils [None req-c17d15aa-7781-41b3-a3c9-bcc45fa0833d 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquiring lock "116a010a-a523-4fa3-8dbc-de6caec760c9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:27:17 compute-0 nova_compute[259850]: 2025-10-11 04:27:17.450 2 DEBUG oslo_concurrency.lockutils [None req-c17d15aa-7781-41b3-a3c9-bcc45fa0833d 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "116a010a-a523-4fa3-8dbc-de6caec760c9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:27:17 compute-0 nova_compute[259850]: 2025-10-11 04:27:17.450 2 DEBUG oslo_concurrency.lockutils [None req-c17d15aa-7781-41b3-a3c9-bcc45fa0833d 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "116a010a-a523-4fa3-8dbc-de6caec760c9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:27:17 compute-0 nova_compute[259850]: 2025-10-11 04:27:17.451 2 INFO nova.compute.manager [None req-c17d15aa-7781-41b3-a3c9-bcc45fa0833d 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Terminating instance
Oct 11 04:27:17 compute-0 nova_compute[259850]: 2025-10-11 04:27:17.452 2 DEBUG nova.compute.manager [None req-c17d15aa-7781-41b3-a3c9-bcc45fa0833d 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:27:17 compute-0 kernel: tapaa158452-f9 (unregistering): left promiscuous mode
Oct 11 04:27:17 compute-0 NetworkManager[44920]: <info>  [1760156837.5133] device (tapaa158452-f9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:27:17 compute-0 nova_compute[259850]: 2025-10-11 04:27:17.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:17 compute-0 nova_compute[259850]: 2025-10-11 04:27:17.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:17 compute-0 ovn_controller[152025]: 2025-10-11T04:27:17Z|00288|binding|INFO|Releasing lport aa158452-f9f5-45a1-9841-28136bfa13a6 from this chassis (sb_readonly=0)
Oct 11 04:27:17 compute-0 ovn_controller[152025]: 2025-10-11T04:27:17Z|00289|binding|INFO|Setting lport aa158452-f9f5-45a1-9841-28136bfa13a6 down in Southbound
Oct 11 04:27:17 compute-0 ovn_controller[152025]: 2025-10-11T04:27:17Z|00290|binding|INFO|Removing iface tapaa158452-f9 ovn-installed in OVS
Oct 11 04:27:17 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:17.534 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:ce:34 10.100.0.13'], port_security=['fa:16:3e:7b:ce:34 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '116a010a-a523-4fa3-8dbc-de6caec760c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '226e6310b4ee4a68b552a6b3e940a458', 'neutron:revision_number': '4', 'neutron:security_group_ids': '77d1d83e-ff49-437a-8a94-baa66143ce2b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.242'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17f237ce-6320-4c27-9970-fd94aa8457a3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=aa158452-f9f5-45a1-9841-28136bfa13a6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:27:17 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:17.536 161902 INFO neutron.agent.ovn.metadata.agent [-] Port aa158452-f9f5-45a1-9841-28136bfa13a6 in datapath 61e3c4a7-2f2f-451f-b913-c2cdac8efdf3 unbound from our chassis
Oct 11 04:27:17 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:17.537 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 61e3c4a7-2f2f-451f-b913-c2cdac8efdf3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:27:17 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:17.538 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[b7a068e9-aa0a-4efc-8467-a23446318c56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:27:17 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:17.539 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3 namespace which is not needed anymore
Oct 11 04:27:17 compute-0 nova_compute[259850]: 2025-10-11 04:27:17.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:17 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Oct 11 04:27:17 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001d.scope: Consumed 15.246s CPU time.
Oct 11 04:27:17 compute-0 systemd-machined[214869]: Machine qemu-29-instance-0000001d terminated.
Oct 11 04:27:17 compute-0 nova_compute[259850]: 2025-10-11 04:27:17.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:17 compute-0 nova_compute[259850]: 2025-10-11 04:27:17.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:17 compute-0 nova_compute[259850]: 2025-10-11 04:27:17.698 2 INFO nova.virt.libvirt.driver [-] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Instance destroyed successfully.
Oct 11 04:27:17 compute-0 nova_compute[259850]: 2025-10-11 04:27:17.698 2 DEBUG nova.objects.instance [None req-c17d15aa-7781-41b3-a3c9-bcc45fa0833d 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lazy-loading 'resources' on Instance uuid 116a010a-a523-4fa3-8dbc-de6caec760c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:27:17 compute-0 nova_compute[259850]: 2025-10-11 04:27:17.712 2 DEBUG nova.virt.libvirt.vif [None req-c17d15aa-7781-41b3-a3c9-bcc45fa0833d 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:26:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-787794379',display_name='tempest-TestEncryptedCinderVolumes-server-787794379',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-787794379',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3pFsCx6Lv4ZhABALE9kJlaC2VLcHHMajXk3FwO0YwDAD8GzEfOWx1nJYDa1BnjHeTckP7sy9/Wa8HAN31/eIMe4p7SlbrVdBBFvJpvxVBbmewtPKqpzKac1Jk+If2OOg==',key_name='tempest-TestEncryptedCinderVolumes-1373224468',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:26:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='226e6310b4ee4a68b552a6b3e940a458',ramdisk_id='',reservation_id='r-wwcbx9v9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestEncryptedCinderVolumes-1931311766',owner_user_name='tempest-TestEncryptedCinderVolumes-1931311766-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:26:58Z,user_data=None,user_id='7bf17f3eb8514499a54d67542db6b88a',uuid=116a010a-a523-4fa3-8dbc-de6caec760c9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aa158452-f9f5-45a1-9841-28136bfa13a6", "address": "fa:16:3e:7b:ce:34", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa158452-f9", "ovs_interfaceid": "aa158452-f9f5-45a1-9841-28136bfa13a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:27:17 compute-0 nova_compute[259850]: 2025-10-11 04:27:17.712 2 DEBUG nova.network.os_vif_util [None req-c17d15aa-7781-41b3-a3c9-bcc45fa0833d 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Converting VIF {"id": "aa158452-f9f5-45a1-9841-28136bfa13a6", "address": "fa:16:3e:7b:ce:34", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa158452-f9", "ovs_interfaceid": "aa158452-f9f5-45a1-9841-28136bfa13a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:27:17 compute-0 nova_compute[259850]: 2025-10-11 04:27:17.714 2 DEBUG nova.network.os_vif_util [None req-c17d15aa-7781-41b3-a3c9-bcc45fa0833d 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:ce:34,bridge_name='br-int',has_traffic_filtering=True,id=aa158452-f9f5-45a1-9841-28136bfa13a6,network=Network(61e3c4a7-2f2f-451f-b913-c2cdac8efdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa158452-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:27:17 compute-0 nova_compute[259850]: 2025-10-11 04:27:17.714 2 DEBUG os_vif [None req-c17d15aa-7781-41b3-a3c9-bcc45fa0833d 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:ce:34,bridge_name='br-int',has_traffic_filtering=True,id=aa158452-f9f5-45a1-9841-28136bfa13a6,network=Network(61e3c4a7-2f2f-451f-b913-c2cdac8efdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa158452-f9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:27:17 compute-0 nova_compute[259850]: 2025-10-11 04:27:17.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:17 compute-0 nova_compute[259850]: 2025-10-11 04:27:17.718 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa158452-f9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:27:17 compute-0 nova_compute[259850]: 2025-10-11 04:27:17.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:17 compute-0 nova_compute[259850]: 2025-10-11 04:27:17.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:27:17 compute-0 nova_compute[259850]: 2025-10-11 04:27:17.725 2 INFO os_vif [None req-c17d15aa-7781-41b3-a3c9-bcc45fa0833d 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:ce:34,bridge_name='br-int',has_traffic_filtering=True,id=aa158452-f9f5-45a1-9841-28136bfa13a6,network=Network(61e3c4a7-2f2f-451f-b913-c2cdac8efdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa158452-f9')
Oct 11 04:27:17 compute-0 neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3[309048]: [NOTICE]   (309076) : haproxy version is 2.8.14-c23fe91
Oct 11 04:27:17 compute-0 neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3[309048]: [NOTICE]   (309076) : path to executable is /usr/sbin/haproxy
Oct 11 04:27:17 compute-0 neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3[309048]: [WARNING]  (309076) : Exiting Master process...
Oct 11 04:27:17 compute-0 neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3[309048]: [ALERT]    (309076) : Current worker (309080) exited with code 143 (Terminated)
Oct 11 04:27:17 compute-0 neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3[309048]: [WARNING]  (309076) : All workers exited. Exiting... (0)
Oct 11 04:27:17 compute-0 systemd[1]: libpod-128d27466b7ceb10c4cec5874e46e51523f9b2dda99d2d89f2c400d84c4c4ac3.scope: Deactivated successfully.
Oct 11 04:27:17 compute-0 podman[309188]: 2025-10-11 04:27:17.747949838 +0000 UTC m=+0.061173945 container died 128d27466b7ceb10c4cec5874e46e51523f9b2dda99d2d89f2c400d84c4c4ac3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:27:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-128d27466b7ceb10c4cec5874e46e51523f9b2dda99d2d89f2c400d84c4c4ac3-userdata-shm.mount: Deactivated successfully.
Oct 11 04:27:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3ac786154799b719da6466dee7a8091d1fe7dee49b7104de01c9681461d0fee-merged.mount: Deactivated successfully.
Oct 11 04:27:17 compute-0 podman[309188]: 2025-10-11 04:27:17.869727408 +0000 UTC m=+0.182951545 container cleanup 128d27466b7ceb10c4cec5874e46e51523f9b2dda99d2d89f2c400d84c4c4ac3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 04:27:17 compute-0 systemd[1]: libpod-conmon-128d27466b7ceb10c4cec5874e46e51523f9b2dda99d2d89f2c400d84c4c4ac3.scope: Deactivated successfully.
Oct 11 04:27:18 compute-0 podman[309240]: 2025-10-11 04:27:18.004319238 +0000 UTC m=+0.096921610 container remove 128d27466b7ceb10c4cec5874e46e51523f9b2dda99d2d89f2c400d84c4c4ac3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:27:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:18.015 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[afc4d866-0a49-4043-b9b5-576434acf567]: (4, ('Sat Oct 11 04:27:17 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3 (128d27466b7ceb10c4cec5874e46e51523f9b2dda99d2d89f2c400d84c4c4ac3)\n128d27466b7ceb10c4cec5874e46e51523f9b2dda99d2d89f2c400d84c4c4ac3\nSat Oct 11 04:27:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3 (128d27466b7ceb10c4cec5874e46e51523f9b2dda99d2d89f2c400d84c4c4ac3)\n128d27466b7ceb10c4cec5874e46e51523f9b2dda99d2d89f2c400d84c4c4ac3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:27:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:18.017 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[54d82e44-71ac-47f6-914c-5b570999335d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:27:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:18.019 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61e3c4a7-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:27:18 compute-0 nova_compute[259850]: 2025-10-11 04:27:18.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:18 compute-0 kernel: tap61e3c4a7-20: left promiscuous mode
Oct 11 04:27:18 compute-0 nova_compute[259850]: 2025-10-11 04:27:18.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:18.032 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[29e8acce-72f8-4c71-b7db-35e022599e0f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:27:18 compute-0 nova_compute[259850]: 2025-10-11 04:27:18.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:18.074 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[9549524a-2051-4591-b459-a37d32545796]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:27:18 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 305 active+clean; 271 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 523 KiB/s rd, 5.8 MiB/s wr, 76 op/s
Oct 11 04:27:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:18.076 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[1e3f40b1-41f3-4651-8dca-ca1d75f22f91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:27:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:18.101 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[e524b069-a69c-49db-8c63-b450c4aad11b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520017, 'reachable_time': 41885, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309262, 'error': None, 'target': 'ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:27:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:18.105 162015 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 04:27:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:18.105 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[b288ac47-c1a6-42fc-8b0b-309a2f6c7425]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:27:18 compute-0 systemd[1]: run-netns-ovnmeta\x2d61e3c4a7\x2d2f2f\x2d451f\x2db913\x2dc2cdac8efdf3.mount: Deactivated successfully.
Oct 11 04:27:18 compute-0 podman[309254]: 2025-10-11 04:27:18.163360938 +0000 UTC m=+0.082714171 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 11 04:27:18 compute-0 nova_compute[259850]: 2025-10-11 04:27:18.169 2 INFO nova.virt.libvirt.driver [None req-c17d15aa-7781-41b3-a3c9-bcc45fa0833d 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Deleting instance files /var/lib/nova/instances/116a010a-a523-4fa3-8dbc-de6caec760c9_del
Oct 11 04:27:18 compute-0 nova_compute[259850]: 2025-10-11 04:27:18.170 2 INFO nova.virt.libvirt.driver [None req-c17d15aa-7781-41b3-a3c9-bcc45fa0833d 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Deletion of /var/lib/nova/instances/116a010a-a523-4fa3-8dbc-de6caec760c9_del complete
Oct 11 04:27:18 compute-0 nova_compute[259850]: 2025-10-11 04:27:18.243 2 INFO nova.compute.manager [None req-c17d15aa-7781-41b3-a3c9-bcc45fa0833d 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Took 0.79 seconds to destroy the instance on the hypervisor.
Oct 11 04:27:18 compute-0 nova_compute[259850]: 2025-10-11 04:27:18.243 2 DEBUG oslo.service.loopingcall [None req-c17d15aa-7781-41b3-a3c9-bcc45fa0833d 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:27:18 compute-0 nova_compute[259850]: 2025-10-11 04:27:18.245 2 DEBUG nova.compute.manager [-] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:27:18 compute-0 nova_compute[259850]: 2025-10-11 04:27:18.245 2 DEBUG nova.network.neutron [-] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:27:18 compute-0 nova_compute[259850]: 2025-10-11 04:27:18.345 2 DEBUG nova.compute.manager [req-fd5c2206-7c81-4a82-80ad-b688badfeb88 req-7c199cc9-e851-4c58-b775-106226de3792 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Received event network-vif-unplugged-aa158452-f9f5-45a1-9841-28136bfa13a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:27:18 compute-0 nova_compute[259850]: 2025-10-11 04:27:18.346 2 DEBUG oslo_concurrency.lockutils [req-fd5c2206-7c81-4a82-80ad-b688badfeb88 req-7c199cc9-e851-4c58-b775-106226de3792 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "116a010a-a523-4fa3-8dbc-de6caec760c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:27:18 compute-0 nova_compute[259850]: 2025-10-11 04:27:18.346 2 DEBUG oslo_concurrency.lockutils [req-fd5c2206-7c81-4a82-80ad-b688badfeb88 req-7c199cc9-e851-4c58-b775-106226de3792 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "116a010a-a523-4fa3-8dbc-de6caec760c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:27:18 compute-0 nova_compute[259850]: 2025-10-11 04:27:18.347 2 DEBUG oslo_concurrency.lockutils [req-fd5c2206-7c81-4a82-80ad-b688badfeb88 req-7c199cc9-e851-4c58-b775-106226de3792 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "116a010a-a523-4fa3-8dbc-de6caec760c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:27:18 compute-0 nova_compute[259850]: 2025-10-11 04:27:18.347 2 DEBUG nova.compute.manager [req-fd5c2206-7c81-4a82-80ad-b688badfeb88 req-7c199cc9-e851-4c58-b775-106226de3792 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] No waiting events found dispatching network-vif-unplugged-aa158452-f9f5-45a1-9841-28136bfa13a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:27:18 compute-0 nova_compute[259850]: 2025-10-11 04:27:18.347 2 DEBUG nova.compute.manager [req-fd5c2206-7c81-4a82-80ad-b688badfeb88 req-7c199cc9-e851-4c58-b775-106226de3792 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Received event network-vif-unplugged-aa158452-f9f5-45a1-9841-28136bfa13a6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 04:27:19 compute-0 nova_compute[259850]: 2025-10-11 04:27:19.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:19 compute-0 ceph-mon[74273]: pgmap v1966: 305 pgs: 305 active+clean; 271 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 523 KiB/s rd, 5.8 MiB/s wr, 76 op/s
Oct 11 04:27:19 compute-0 nova_compute[259850]: 2025-10-11 04:27:19.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:19.565 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:61:6f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '92:f1:b6:e4:f1:16'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:27:19 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:19.566 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:27:20 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 271 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 524 KiB/s rd, 5.8 MiB/s wr, 77 op/s
Oct 11 04:27:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e447 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:27:20 compute-0 nova_compute[259850]: 2025-10-11 04:27:20.450 2 DEBUG nova.compute.manager [req-b8fa8f50-f1c7-4b6f-88c7-3db9e3b9cac6 req-6f23a8a6-5bc0-4b59-a7d0-3fd933e6f24b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Received event network-vif-plugged-aa158452-f9f5-45a1-9841-28136bfa13a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:27:20 compute-0 nova_compute[259850]: 2025-10-11 04:27:20.451 2 DEBUG oslo_concurrency.lockutils [req-b8fa8f50-f1c7-4b6f-88c7-3db9e3b9cac6 req-6f23a8a6-5bc0-4b59-a7d0-3fd933e6f24b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "116a010a-a523-4fa3-8dbc-de6caec760c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:27:20 compute-0 nova_compute[259850]: 2025-10-11 04:27:20.451 2 DEBUG oslo_concurrency.lockutils [req-b8fa8f50-f1c7-4b6f-88c7-3db9e3b9cac6 req-6f23a8a6-5bc0-4b59-a7d0-3fd933e6f24b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "116a010a-a523-4fa3-8dbc-de6caec760c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:27:20 compute-0 nova_compute[259850]: 2025-10-11 04:27:20.451 2 DEBUG oslo_concurrency.lockutils [req-b8fa8f50-f1c7-4b6f-88c7-3db9e3b9cac6 req-6f23a8a6-5bc0-4b59-a7d0-3fd933e6f24b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "116a010a-a523-4fa3-8dbc-de6caec760c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:27:20 compute-0 nova_compute[259850]: 2025-10-11 04:27:20.452 2 DEBUG nova.compute.manager [req-b8fa8f50-f1c7-4b6f-88c7-3db9e3b9cac6 req-6f23a8a6-5bc0-4b59-a7d0-3fd933e6f24b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] No waiting events found dispatching network-vif-plugged-aa158452-f9f5-45a1-9841-28136bfa13a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:27:20 compute-0 nova_compute[259850]: 2025-10-11 04:27:20.452 2 WARNING nova.compute.manager [req-b8fa8f50-f1c7-4b6f-88c7-3db9e3b9cac6 req-6f23a8a6-5bc0-4b59-a7d0-3fd933e6f24b f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Received unexpected event network-vif-plugged-aa158452-f9f5-45a1-9841-28136bfa13a6 for instance with vm_state active and task_state deleting.
Oct 11 04:27:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:27:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:27:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:27:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:27:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:27:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:27:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:27:20
Oct 11 04:27:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:27:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:27:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['.mgr', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'vms', 'default.rgw.meta', 'volumes']
Oct 11 04:27:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:27:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:27:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:27:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:27:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:27:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:27:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:27:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:27:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:27:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:27:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:27:21 compute-0 ceph-mon[74273]: pgmap v1967: 305 pgs: 305 active+clean; 271 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 524 KiB/s rd, 5.8 MiB/s wr, 77 op/s
Oct 11 04:27:21 compute-0 nova_compute[259850]: 2025-10-11 04:27:21.966 2 DEBUG nova.network.neutron [-] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:27:21 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:27:21 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 28K writes, 110K keys, 28K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 28K writes, 10K syncs, 2.69 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3948 writes, 11K keys, 3948 commit groups, 1.0 writes per commit group, ingest: 15.22 MB, 0.03 MB/s
                                           Interval WAL: 3948 writes, 1718 syncs, 2.30 writes per sync, written: 0.01 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 04:27:21 compute-0 nova_compute[259850]: 2025-10-11 04:27:21.983 2 INFO nova.compute.manager [-] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Took 3.74 seconds to deallocate network for instance.
Oct 11 04:27:22 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 305 active+clean; 271 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 524 KiB/s rd, 5.8 MiB/s wr, 77 op/s
Oct 11 04:27:22 compute-0 nova_compute[259850]: 2025-10-11 04:27:22.085 2 DEBUG nova.compute.manager [req-42ce7e5e-3797-4518-b552-b9abdefc0432 req-7b2a11ee-174a-4dd7-85e1-319c60c1eefc f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Received event network-vif-deleted-aa158452-f9f5-45a1-9841-28136bfa13a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:27:22 compute-0 nova_compute[259850]: 2025-10-11 04:27:22.374 2 INFO nova.compute.manager [None req-c17d15aa-7781-41b3-a3c9-bcc45fa0833d 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Took 0.39 seconds to detach 1 volumes for instance.
Oct 11 04:27:22 compute-0 nova_compute[259850]: 2025-10-11 04:27:22.423 2 DEBUG oslo_concurrency.lockutils [None req-c17d15aa-7781-41b3-a3c9-bcc45fa0833d 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:27:22 compute-0 nova_compute[259850]: 2025-10-11 04:27:22.423 2 DEBUG oslo_concurrency.lockutils [None req-c17d15aa-7781-41b3-a3c9-bcc45fa0833d 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:27:22 compute-0 nova_compute[259850]: 2025-10-11 04:27:22.477 2 DEBUG oslo_concurrency.processutils [None req-c17d15aa-7781-41b3-a3c9-bcc45fa0833d 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:27:22 compute-0 nova_compute[259850]: 2025-10-11 04:27:22.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:27:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3718815871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:27:22 compute-0 nova_compute[259850]: 2025-10-11 04:27:22.932 2 DEBUG oslo_concurrency.processutils [None req-c17d15aa-7781-41b3-a3c9-bcc45fa0833d 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:27:22 compute-0 nova_compute[259850]: 2025-10-11 04:27:22.939 2 DEBUG nova.compute.provider_tree [None req-c17d15aa-7781-41b3-a3c9-bcc45fa0833d 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:27:22 compute-0 nova_compute[259850]: 2025-10-11 04:27:22.961 2 DEBUG nova.scheduler.client.report [None req-c17d15aa-7781-41b3-a3c9-bcc45fa0833d 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:27:22 compute-0 nova_compute[259850]: 2025-10-11 04:27:22.979 2 DEBUG oslo_concurrency.lockutils [None req-c17d15aa-7781-41b3-a3c9-bcc45fa0833d 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:27:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:22.979 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:27:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:22.980 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:27:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:22.980 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:27:23 compute-0 nova_compute[259850]: 2025-10-11 04:27:23.003 2 INFO nova.scheduler.client.report [None req-c17d15aa-7781-41b3-a3c9-bcc45fa0833d 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Deleted allocations for instance 116a010a-a523-4fa3-8dbc-de6caec760c9
Oct 11 04:27:23 compute-0 nova_compute[259850]: 2025-10-11 04:27:23.103 2 DEBUG oslo_concurrency.lockutils [None req-c17d15aa-7781-41b3-a3c9-bcc45fa0833d 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "116a010a-a523-4fa3-8dbc-de6caec760c9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:27:23 compute-0 ceph-mon[74273]: pgmap v1968: 305 pgs: 305 active+clean; 271 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 524 KiB/s rd, 5.8 MiB/s wr, 77 op/s
Oct 11 04:27:23 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3718815871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:27:23 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:23.568 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8a473e03-2208-47ae-afcd-05ad744a5969, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:27:24 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1969: 305 pgs: 305 active+clean; 271 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 533 KiB/s rd, 5.8 MiB/s wr, 89 op/s
Oct 11 04:27:24 compute-0 nova_compute[259850]: 2025-10-11 04:27:24.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e447 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:27:25.181617) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156845181659, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 488, "num_deletes": 256, "total_data_size": 478577, "memory_usage": 488120, "flush_reason": "Manual Compaction"}
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156845188617, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 474888, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39949, "largest_seqno": 40436, "table_properties": {"data_size": 472053, "index_size": 871, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6385, "raw_average_key_size": 18, "raw_value_size": 466459, "raw_average_value_size": 1336, "num_data_blocks": 38, "num_entries": 349, "num_filter_entries": 349, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156815, "oldest_key_time": 1760156815, "file_creation_time": 1760156845, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 7052 microseconds, and 3119 cpu microseconds.
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:27:25.188669) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 474888 bytes OK
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:27:25.188690) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:27:25.190503) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:27:25.190525) EVENT_LOG_v1 {"time_micros": 1760156845190518, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:27:25.190546) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 475697, prev total WAL file size 475697, number of live WAL files 2.
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:27:25.191214) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323533' seq:72057594037927935, type:22 .. '6C6F676D0031353035' seq:0, type:0; will stop at (end)
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(463KB)], [83(9949KB)]
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156845191266, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 10662687, "oldest_snapshot_seqno": -1}
Oct 11 04:27:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e447 do_prune osdmap full prune enabled
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6897 keys, 10499632 bytes, temperature: kUnknown
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156845250893, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 10499632, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10449334, "index_size": 31946, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17285, "raw_key_size": 175135, "raw_average_key_size": 25, "raw_value_size": 10321441, "raw_average_value_size": 1496, "num_data_blocks": 1274, "num_entries": 6897, "num_filter_entries": 6897, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153731, "oldest_key_time": 0, "file_creation_time": 1760156845, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:27:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 e448: 3 total, 3 up, 3 in
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:27:25.251190) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 10499632 bytes
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:27:25.253441) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.6 rd, 175.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.7 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(44.6) write-amplify(22.1) OK, records in: 7419, records dropped: 522 output_compression: NoCompression
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:27:25.253481) EVENT_LOG_v1 {"time_micros": 1760156845253463, "job": 48, "event": "compaction_finished", "compaction_time_micros": 59700, "compaction_time_cpu_micros": 34042, "output_level": 6, "num_output_files": 1, "total_output_size": 10499632, "num_input_records": 7419, "num_output_records": 6897, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156845253829, "job": 48, "event": "table_file_deletion", "file_number": 85}
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156845258234, "job": 48, "event": "table_file_deletion", "file_number": 83}
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:27:25.191061) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:27:25.258277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:27:25.258283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:27:25.258284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:27:25.258286) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:27:25 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:27:25.258288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:27:25 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e448: 3 total, 3 up, 3 in
Oct 11 04:27:25 compute-0 ceph-mon[74273]: pgmap v1969: 305 pgs: 305 active+clean; 271 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 533 KiB/s rd, 5.8 MiB/s wr, 89 op/s
Oct 11 04:27:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:27:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/176760735' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:27:26 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 271 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 17 KiB/s wr, 17 op/s
Oct 11 04:27:26 compute-0 ceph-mon[74273]: osdmap e448: 3 total, 3 up, 3 in
Oct 11 04:27:26 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/176760735' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:27:27 compute-0 ceph-osd[89722]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:27:27 compute-0 ceph-osd[89722]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 24K writes, 95K keys, 24K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 24K writes, 8676 syncs, 2.77 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3007 writes, 9551 keys, 3007 commit groups, 1.0 writes per commit group, ingest: 9.74 MB, 0.02 MB/s
                                           Interval WAL: 3007 writes, 1309 syncs, 2.30 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 04:27:27 compute-0 ceph-mon[74273]: pgmap v1971: 305 pgs: 305 active+clean; 271 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 17 KiB/s wr, 17 op/s
Oct 11 04:27:27 compute-0 nova_compute[259850]: 2025-10-11 04:27:27.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:28 compute-0 ceph-mgr[74563]: [devicehealth INFO root] Check health
Oct 11 04:27:28 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 305 active+clean; 271 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.2 KiB/s wr, 43 op/s
Oct 11 04:27:29 compute-0 nova_compute[259850]: 2025-10-11 04:27:29.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:29 compute-0 ceph-mon[74273]: pgmap v1972: 305 pgs: 305 active+clean; 271 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.2 KiB/s wr, 43 op/s
Oct 11 04:27:30 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 271 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.3 KiB/s wr, 43 op/s
Oct 11 04:27:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:27:30 compute-0 podman[309300]: 2025-10-11 04:27:30.369657448 +0000 UTC m=+0.074861579 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:27:30 compute-0 podman[309301]: 2025-10-11 04:27:30.397963476 +0000 UTC m=+0.097082196 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:27:30 compute-0 nova_compute[259850]: 2025-10-11 04:27:30.841 2 DEBUG oslo_concurrency.lockutils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquiring lock "bc8a8366-552e-41a7-bd24-afacb81114bc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:27:30 compute-0 nova_compute[259850]: 2025-10-11 04:27:30.842 2 DEBUG oslo_concurrency.lockutils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "bc8a8366-552e-41a7-bd24-afacb81114bc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:27:30 compute-0 nova_compute[259850]: 2025-10-11 04:27:30.855 2 DEBUG nova.compute.manager [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:27:30 compute-0 nova_compute[259850]: 2025-10-11 04:27:30.926 2 DEBUG oslo_concurrency.lockutils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:27:30 compute-0 nova_compute[259850]: 2025-10-11 04:27:30.927 2 DEBUG oslo_concurrency.lockutils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:27:30 compute-0 nova_compute[259850]: 2025-10-11 04:27:30.932 2 DEBUG nova.virt.hardware [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:27:30 compute-0 nova_compute[259850]: 2025-10-11 04:27:30.933 2 INFO nova.compute.claims [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:27:31 compute-0 nova_compute[259850]: 2025-10-11 04:27:31.034 2 DEBUG oslo_concurrency.processutils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:27:31 compute-0 ceph-mon[74273]: pgmap v1973: 305 pgs: 305 active+clean; 271 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.3 KiB/s wr, 43 op/s
Oct 11 04:27:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:27:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/710600796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:27:31 compute-0 nova_compute[259850]: 2025-10-11 04:27:31.450 2 DEBUG oslo_concurrency.processutils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:27:31 compute-0 nova_compute[259850]: 2025-10-11 04:27:31.459 2 DEBUG nova.compute.provider_tree [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:27:31 compute-0 nova_compute[259850]: 2025-10-11 04:27:31.481 2 DEBUG nova.scheduler.client.report [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:27:31 compute-0 nova_compute[259850]: 2025-10-11 04:27:31.515 2 DEBUG oslo_concurrency.lockutils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:27:31 compute-0 nova_compute[259850]: 2025-10-11 04:27:31.516 2 DEBUG nova.compute.manager [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:27:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:27:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:27:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:27:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894585429283063 of space, bias 1.0, pg target 0.8683756287849189 quantized to 32 (current 32)
Oct 11 04:27:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:27:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct 11 04:27:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:27:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:27:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:27:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:27:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:27:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:27:31 compute-0 nova_compute[259850]: 2025-10-11 04:27:31.587 2 DEBUG nova.compute.manager [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:27:31 compute-0 nova_compute[259850]: 2025-10-11 04:27:31.587 2 DEBUG nova.network.neutron [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:27:31 compute-0 nova_compute[259850]: 2025-10-11 04:27:31.632 2 INFO nova.virt.libvirt.driver [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:27:31 compute-0 nova_compute[259850]: 2025-10-11 04:27:31.660 2 DEBUG nova.compute.manager [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:27:31 compute-0 nova_compute[259850]: 2025-10-11 04:27:31.799 2 INFO nova.virt.block_device [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Booting with volume 8c2f9549-0680-4737-9249-b8499f4fddcc at /dev/vda
Oct 11 04:27:31 compute-0 nova_compute[259850]: 2025-10-11 04:27:31.928 2 DEBUG os_brick.utils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 04:27:31 compute-0 nova_compute[259850]: 2025-10-11 04:27:31.930 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:27:31 compute-0 nova_compute[259850]: 2025-10-11 04:27:31.949 675 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:27:31 compute-0 nova_compute[259850]: 2025-10-11 04:27:31.949 675 DEBUG oslo.privsep.daemon [-] privsep: reply[c6380735-af57-4447-ab3d-958728930f44]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:27:31 compute-0 nova_compute[259850]: 2025-10-11 04:27:31.951 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:27:31 compute-0 nova_compute[259850]: 2025-10-11 04:27:31.965 675 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:27:31 compute-0 nova_compute[259850]: 2025-10-11 04:27:31.966 675 DEBUG oslo.privsep.daemon [-] privsep: reply[b556aca4-6228-4436-8239-90a5845fd84b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e727c2bd432c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:27:31 compute-0 nova_compute[259850]: 2025-10-11 04:27:31.968 675 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:27:31 compute-0 nova_compute[259850]: 2025-10-11 04:27:31.983 675 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:27:31 compute-0 nova_compute[259850]: 2025-10-11 04:27:31.984 675 DEBUG oslo.privsep.daemon [-] privsep: reply[02605cea-d050-4594-8086-b07e1527bd7f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:27:31 compute-0 nova_compute[259850]: 2025-10-11 04:27:31.986 675 DEBUG oslo.privsep.daemon [-] privsep: reply[cf86108e-df57-4907-8d68-591520a3b6bd]: (4, 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:27:31 compute-0 nova_compute[259850]: 2025-10-11 04:27:31.986 2 DEBUG oslo_concurrency.processutils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:27:32 compute-0 nova_compute[259850]: 2025-10-11 04:27:32.024 2 DEBUG oslo_concurrency.processutils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CMD "nvme version" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:27:32 compute-0 nova_compute[259850]: 2025-10-11 04:27:32.028 2 DEBUG os_brick.initiator.connectors.lightos [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 04:27:32 compute-0 nova_compute[259850]: 2025-10-11 04:27:32.029 2 DEBUG os_brick.initiator.connectors.lightos [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 04:27:32 compute-0 nova_compute[259850]: 2025-10-11 04:27:32.029 2 DEBUG os_brick.initiator.connectors.lightos [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 04:27:32 compute-0 nova_compute[259850]: 2025-10-11 04:27:32.030 2 DEBUG os_brick.utils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] <== get_connector_properties: return (100ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e727c2bd432c', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'e4b2deed-ff06-4afb-a523-b61a9dddb9cc', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 04:27:32 compute-0 nova_compute[259850]: 2025-10-11 04:27:32.031 2 DEBUG nova.virt.block_device [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Updating existing volume attachment record: 78a6639d-6690-4aca-8524-98b4d4dd6498 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 04:27:32 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 305 active+clean; 271 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.3 KiB/s wr, 43 op/s
Oct 11 04:27:32 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/710600796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:27:32 compute-0 nova_compute[259850]: 2025-10-11 04:27:32.598 2 DEBUG nova.policy [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7bf17f3eb8514499a54d67542db6b88a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '226e6310b4ee4a68b552a6b3e940a458', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:27:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:27:32 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1313039603' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:27:32 compute-0 nova_compute[259850]: 2025-10-11 04:27:32.692 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760156837.6916158, 116a010a-a523-4fa3-8dbc-de6caec760c9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:27:32 compute-0 nova_compute[259850]: 2025-10-11 04:27:32.693 2 INFO nova.compute.manager [-] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] VM Stopped (Lifecycle Event)
Oct 11 04:27:32 compute-0 nova_compute[259850]: 2025-10-11 04:27:32.762 2 DEBUG nova.compute.manager [None req-98b65a38-8147-462f-b54d-873b51b8bac2 - - - - - -] [instance: 116a010a-a523-4fa3-8dbc-de6caec760c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:27:32 compute-0 nova_compute[259850]: 2025-10-11 04:27:32.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:33 compute-0 nova_compute[259850]: 2025-10-11 04:27:33.148 2 DEBUG nova.compute.manager [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:27:33 compute-0 nova_compute[259850]: 2025-10-11 04:27:33.150 2 DEBUG nova.virt.libvirt.driver [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:27:33 compute-0 nova_compute[259850]: 2025-10-11 04:27:33.151 2 INFO nova.virt.libvirt.driver [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Creating image(s)
Oct 11 04:27:33 compute-0 nova_compute[259850]: 2025-10-11 04:27:33.152 2 DEBUG nova.virt.libvirt.driver [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 11 04:27:33 compute-0 nova_compute[259850]: 2025-10-11 04:27:33.152 2 DEBUG nova.virt.libvirt.driver [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Ensure instance console log exists: /var/lib/nova/instances/bc8a8366-552e-41a7-bd24-afacb81114bc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:27:33 compute-0 nova_compute[259850]: 2025-10-11 04:27:33.153 2 DEBUG oslo_concurrency.lockutils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:27:33 compute-0 nova_compute[259850]: 2025-10-11 04:27:33.153 2 DEBUG oslo_concurrency.lockutils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:27:33 compute-0 nova_compute[259850]: 2025-10-11 04:27:33.154 2 DEBUG oslo_concurrency.lockutils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:27:33 compute-0 nova_compute[259850]: 2025-10-11 04:27:33.279 2 DEBUG nova.network.neutron [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Successfully created port: 7932da10-eaa5-4512-a329-16ee6de1e17c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:27:33 compute-0 ceph-mon[74273]: pgmap v1974: 305 pgs: 305 active+clean; 271 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.3 KiB/s wr, 43 op/s
Oct 11 04:27:33 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1313039603' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:27:34 compute-0 nova_compute[259850]: 2025-10-11 04:27:34.069 2 DEBUG nova.network.neutron [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Successfully updated port: 7932da10-eaa5-4512-a329-16ee6de1e17c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:27:34 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 271 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Oct 11 04:27:34 compute-0 nova_compute[259850]: 2025-10-11 04:27:34.091 2 DEBUG oslo_concurrency.lockutils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquiring lock "refresh_cache-bc8a8366-552e-41a7-bd24-afacb81114bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:27:34 compute-0 nova_compute[259850]: 2025-10-11 04:27:34.092 2 DEBUG oslo_concurrency.lockutils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquired lock "refresh_cache-bc8a8366-552e-41a7-bd24-afacb81114bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:27:34 compute-0 nova_compute[259850]: 2025-10-11 04:27:34.092 2 DEBUG nova.network.neutron [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:27:34 compute-0 nova_compute[259850]: 2025-10-11 04:27:34.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:34 compute-0 nova_compute[259850]: 2025-10-11 04:27:34.194 2 DEBUG nova.compute.manager [req-c0705dd8-5fc4-49d4-b2e7-101fe39961ac req-e049b7b4-ce66-4e5c-abee-73f1db26adbe f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Received event network-changed-7932da10-eaa5-4512-a329-16ee6de1e17c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:27:34 compute-0 nova_compute[259850]: 2025-10-11 04:27:34.195 2 DEBUG nova.compute.manager [req-c0705dd8-5fc4-49d4-b2e7-101fe39961ac req-e049b7b4-ce66-4e5c-abee-73f1db26adbe f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Refreshing instance network info cache due to event network-changed-7932da10-eaa5-4512-a329-16ee6de1e17c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:27:34 compute-0 nova_compute[259850]: 2025-10-11 04:27:34.195 2 DEBUG oslo_concurrency.lockutils [req-c0705dd8-5fc4-49d4-b2e7-101fe39961ac req-e049b7b4-ce66-4e5c-abee-73f1db26adbe f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-bc8a8366-552e-41a7-bd24-afacb81114bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:27:34 compute-0 nova_compute[259850]: 2025-10-11 04:27:34.536 2 DEBUG nova.network.neutron [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:27:35 compute-0 nova_compute[259850]: 2025-10-11 04:27:35.087 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:27:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:27:35 compute-0 ceph-mon[74273]: pgmap v1975: 305 pgs: 305 active+clean; 271 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Oct 11 04:27:35 compute-0 nova_compute[259850]: 2025-10-11 04:27:35.429 2 DEBUG nova.network.neutron [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Updating instance_info_cache with network_info: [{"id": "7932da10-eaa5-4512-a329-16ee6de1e17c", "address": "fa:16:3e:0c:2a:34", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7932da10-ea", "ovs_interfaceid": "7932da10-eaa5-4512-a329-16ee6de1e17c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:27:35 compute-0 nova_compute[259850]: 2025-10-11 04:27:35.452 2 DEBUG oslo_concurrency.lockutils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Releasing lock "refresh_cache-bc8a8366-552e-41a7-bd24-afacb81114bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:27:35 compute-0 nova_compute[259850]: 2025-10-11 04:27:35.453 2 DEBUG nova.compute.manager [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Instance network_info: |[{"id": "7932da10-eaa5-4512-a329-16ee6de1e17c", "address": "fa:16:3e:0c:2a:34", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7932da10-ea", "ovs_interfaceid": "7932da10-eaa5-4512-a329-16ee6de1e17c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:27:35 compute-0 nova_compute[259850]: 2025-10-11 04:27:35.454 2 DEBUG oslo_concurrency.lockutils [req-c0705dd8-5fc4-49d4-b2e7-101fe39961ac req-e049b7b4-ce66-4e5c-abee-73f1db26adbe f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-bc8a8366-552e-41a7-bd24-afacb81114bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:27:35 compute-0 nova_compute[259850]: 2025-10-11 04:27:35.454 2 DEBUG nova.network.neutron [req-c0705dd8-5fc4-49d4-b2e7-101fe39961ac req-e049b7b4-ce66-4e5c-abee-73f1db26adbe f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Refreshing network info cache for port 7932da10-eaa5-4512-a329-16ee6de1e17c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:27:35 compute-0 nova_compute[259850]: 2025-10-11 04:27:35.460 2 DEBUG nova.virt.libvirt.driver [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Start _get_guest_xml network_info=[{"id": "7932da10-eaa5-4512-a329-16ee6de1e17c", "address": "fa:16:3e:0c:2a:34", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7932da10-ea", "ovs_interfaceid": "7932da10-eaa5-4512-a329-16ee6de1e17c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-8c2f9549-0680-4737-9249-b8499f4fddcc', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '8c2f9549-0680-4737-9249-b8499f4fddcc', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'bc8a8366-552e-41a7-bd24-afacb81114bc', 'attached_at': '', 'detached_at': '', 'volume_id': '8c2f9549-0680-4737-9249-b8499f4fddcc', 'serial': '8c2f9549-0680-4737-9249-b8499f4fddcc'}, 'boot_index': 0, 'guest_format': None, 'attachment_id': '78a6639d-6690-4aca-8524-98b4d4dd6498', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:27:35 compute-0 nova_compute[259850]: 2025-10-11 04:27:35.470 2 WARNING nova.virt.libvirt.driver [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:27:35 compute-0 nova_compute[259850]: 2025-10-11 04:27:35.483 2 DEBUG nova.virt.libvirt.host [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:27:35 compute-0 nova_compute[259850]: 2025-10-11 04:27:35.484 2 DEBUG nova.virt.libvirt.host [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:27:35 compute-0 nova_compute[259850]: 2025-10-11 04:27:35.489 2 DEBUG nova.virt.libvirt.host [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:27:35 compute-0 nova_compute[259850]: 2025-10-11 04:27:35.490 2 DEBUG nova.virt.libvirt.host [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:27:35 compute-0 nova_compute[259850]: 2025-10-11 04:27:35.491 2 DEBUG nova.virt.libvirt.driver [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:27:35 compute-0 nova_compute[259850]: 2025-10-11 04:27:35.491 2 DEBUG nova.virt.hardware [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T04:01:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='178575de-f0e6-4acd-9fcd-d75e3e09ac2e',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:27:35 compute-0 nova_compute[259850]: 2025-10-11 04:27:35.492 2 DEBUG nova.virt.hardware [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:27:35 compute-0 nova_compute[259850]: 2025-10-11 04:27:35.492 2 DEBUG nova.virt.hardware [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:27:35 compute-0 nova_compute[259850]: 2025-10-11 04:27:35.492 2 DEBUG nova.virt.hardware [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:27:35 compute-0 nova_compute[259850]: 2025-10-11 04:27:35.493 2 DEBUG nova.virt.hardware [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:27:35 compute-0 nova_compute[259850]: 2025-10-11 04:27:35.493 2 DEBUG nova.virt.hardware [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:27:35 compute-0 nova_compute[259850]: 2025-10-11 04:27:35.493 2 DEBUG nova.virt.hardware [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:27:35 compute-0 nova_compute[259850]: 2025-10-11 04:27:35.494 2 DEBUG nova.virt.hardware [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:27:35 compute-0 nova_compute[259850]: 2025-10-11 04:27:35.494 2 DEBUG nova.virt.hardware [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:27:35 compute-0 nova_compute[259850]: 2025-10-11 04:27:35.495 2 DEBUG nova.virt.hardware [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:27:35 compute-0 nova_compute[259850]: 2025-10-11 04:27:35.495 2 DEBUG nova.virt.hardware [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:27:35 compute-0 nova_compute[259850]: 2025-10-11 04:27:35.538 2 DEBUG nova.storage.rbd_utils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] rbd image bc8a8366-552e-41a7-bd24-afacb81114bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:27:35 compute-0 nova_compute[259850]: 2025-10-11 04:27:35.546 2 DEBUG oslo_concurrency.processutils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:27:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:27:35 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3774172432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.014 2 DEBUG oslo_concurrency.processutils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.076 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.077 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.078 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.078 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.079 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:27:36 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 305 active+clean; 271 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.5 KiB/s wr, 26 op/s
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.115 2 DEBUG os_brick.encryptors [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Using volume encryption metadata '{'encryption_key_id': '4199ac44-f27e-4d5c-949b-a3c0225daaee', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-8c2f9549-0680-4737-9249-b8499f4fddcc', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '8c2f9549-0680-4737-9249-b8499f4fddcc', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'bc8a8366-552e-41a7-bd24-afacb81114bc', 'attached_at': '', 'detached_at': '', 'volume_id': '8c2f9549-0680-4737-9249-b8499f4fddcc', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.120 2 DEBUG barbicanclient.client [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.142 2 DEBUG barbicanclient.v1.secrets [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/4199ac44-f27e-4d5c-949b-a3c0225daaee get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.143 2 INFO barbicanclient.base [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/4199ac44-f27e-4d5c-949b-a3c0225daaee
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.164 2 DEBUG barbicanclient.client [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.166 2 INFO barbicanclient.base [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/4199ac44-f27e-4d5c-949b-a3c0225daaee
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.188 2 DEBUG barbicanclient.client [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.189 2 INFO barbicanclient.base [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/4199ac44-f27e-4d5c-949b-a3c0225daaee
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.208 2 DEBUG barbicanclient.client [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.208 2 INFO barbicanclient.base [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/4199ac44-f27e-4d5c-949b-a3c0225daaee
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.224 2 DEBUG barbicanclient.client [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.225 2 INFO barbicanclient.base [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/4199ac44-f27e-4d5c-949b-a3c0225daaee
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.252 2 DEBUG barbicanclient.client [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.253 2 INFO barbicanclient.base [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/4199ac44-f27e-4d5c-949b-a3c0225daaee
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.283 2 DEBUG barbicanclient.client [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.284 2 INFO barbicanclient.base [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/4199ac44-f27e-4d5c-949b-a3c0225daaee
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.305 2 DEBUG barbicanclient.client [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.306 2 INFO barbicanclient.base [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/4199ac44-f27e-4d5c-949b-a3c0225daaee
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.332 2 DEBUG barbicanclient.client [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.333 2 INFO barbicanclient.base [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/4199ac44-f27e-4d5c-949b-a3c0225daaee
Oct 11 04:27:36 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3774172432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.356 2 DEBUG barbicanclient.client [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.358 2 INFO barbicanclient.base [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/4199ac44-f27e-4d5c-949b-a3c0225daaee
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.397 2 DEBUG barbicanclient.client [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.398 2 INFO barbicanclient.base [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/4199ac44-f27e-4d5c-949b-a3c0225daaee
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.417 2 DEBUG barbicanclient.client [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.418 2 INFO barbicanclient.base [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/4199ac44-f27e-4d5c-949b-a3c0225daaee
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.450 2 DEBUG barbicanclient.client [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.452 2 INFO barbicanclient.base [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/4199ac44-f27e-4d5c-949b-a3c0225daaee
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.490 2 DEBUG barbicanclient.client [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.491 2 INFO barbicanclient.base [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/4199ac44-f27e-4d5c-949b-a3c0225daaee
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.510 2 DEBUG barbicanclient.client [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.511 2 INFO barbicanclient.base [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Calculated Secrets uuid ref: secrets/4199ac44-f27e-4d5c-949b-a3c0225daaee
Oct 11 04:27:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:27:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2609465911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.530 2 DEBUG barbicanclient.client [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.531 2 DEBUG nova.virt.libvirt.host [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct 11 04:27:36 compute-0 nova_compute[259850]:   <usage type="volume">
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <volume>8c2f9549-0680-4737-9249-b8499f4fddcc</volume>
Oct 11 04:27:36 compute-0 nova_compute[259850]:   </usage>
Oct 11 04:27:36 compute-0 nova_compute[259850]: </secret>
Oct 11 04:27:36 compute-0 nova_compute[259850]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.548 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.570 2 DEBUG nova.virt.libvirt.vif [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:27:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-838098928',display_name='tempest-TestEncryptedCinderVolumes-server-838098928',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-838098928',id=30,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3pFsCx6Lv4ZhABALE9kJlaC2VLcHHMajXk3FwO0YwDAD8GzEfOWx1nJYDa1BnjHeTckP7sy9/Wa8HAN31/eIMe4p7SlbrVdBBFvJpvxVBbmewtPKqpzKac1Jk+If2OOg==',key_name='tempest-TestEncryptedCinderVolumes-1373224468',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='226e6310b4ee4a68b552a6b3e940a458',ramdisk_id='',reservation_id='r-8kn13c8t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1931311766',owner_user_name='tempest-TestEncryptedCinderVolumes-1931311766-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:27:31Z,user_data=None,user_id='7bf17f3eb8514499a54d67542db6b88a',uuid=bc8a8366-552e-41a7-bd24-afacb81114bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7932da10-eaa5-4512-a329-16ee6de1e17c", "address": "fa:16:3e:0c:2a:34", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7932da10-ea", "ovs_interfaceid": "7932da10-eaa5-4512-a329-16ee6de1e17c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.571 2 DEBUG nova.network.os_vif_util [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Converting VIF {"id": "7932da10-eaa5-4512-a329-16ee6de1e17c", "address": "fa:16:3e:0c:2a:34", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7932da10-ea", "ovs_interfaceid": "7932da10-eaa5-4512-a329-16ee6de1e17c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.573 2 DEBUG nova.network.os_vif_util [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:2a:34,bridge_name='br-int',has_traffic_filtering=True,id=7932da10-eaa5-4512-a329-16ee6de1e17c,network=Network(61e3c4a7-2f2f-451f-b913-c2cdac8efdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7932da10-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.576 2 DEBUG nova.objects.instance [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lazy-loading 'pci_devices' on Instance uuid bc8a8366-552e-41a7-bd24-afacb81114bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.594 2 DEBUG nova.virt.libvirt.driver [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:27:36 compute-0 nova_compute[259850]:   <uuid>bc8a8366-552e-41a7-bd24-afacb81114bc</uuid>
Oct 11 04:27:36 compute-0 nova_compute[259850]:   <name>instance-0000001e</name>
Oct 11 04:27:36 compute-0 nova_compute[259850]:   <memory>131072</memory>
Oct 11 04:27:36 compute-0 nova_compute[259850]:   <vcpu>1</vcpu>
Oct 11 04:27:36 compute-0 nova_compute[259850]:   <metadata>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <nova:name>tempest-TestEncryptedCinderVolumes-server-838098928</nova:name>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <nova:creationTime>2025-10-11 04:27:35</nova:creationTime>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <nova:flavor name="m1.nano">
Oct 11 04:27:36 compute-0 nova_compute[259850]:         <nova:memory>128</nova:memory>
Oct 11 04:27:36 compute-0 nova_compute[259850]:         <nova:disk>1</nova:disk>
Oct 11 04:27:36 compute-0 nova_compute[259850]:         <nova:swap>0</nova:swap>
Oct 11 04:27:36 compute-0 nova_compute[259850]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:27:36 compute-0 nova_compute[259850]:         <nova:vcpus>1</nova:vcpus>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       </nova:flavor>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <nova:owner>
Oct 11 04:27:36 compute-0 nova_compute[259850]:         <nova:user uuid="7bf17f3eb8514499a54d67542db6b88a">tempest-TestEncryptedCinderVolumes-1931311766-project-member</nova:user>
Oct 11 04:27:36 compute-0 nova_compute[259850]:         <nova:project uuid="226e6310b4ee4a68b552a6b3e940a458">tempest-TestEncryptedCinderVolumes-1931311766</nova:project>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       </nova:owner>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <nova:ports>
Oct 11 04:27:36 compute-0 nova_compute[259850]:         <nova:port uuid="7932da10-eaa5-4512-a329-16ee6de1e17c">
Oct 11 04:27:36 compute-0 nova_compute[259850]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:         </nova:port>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       </nova:ports>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     </nova:instance>
Oct 11 04:27:36 compute-0 nova_compute[259850]:   </metadata>
Oct 11 04:27:36 compute-0 nova_compute[259850]:   <sysinfo type="smbios">
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <system>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <entry name="manufacturer">RDO</entry>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <entry name="product">OpenStack Compute</entry>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <entry name="serial">bc8a8366-552e-41a7-bd24-afacb81114bc</entry>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <entry name="uuid">bc8a8366-552e-41a7-bd24-afacb81114bc</entry>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <entry name="family">Virtual Machine</entry>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     </system>
Oct 11 04:27:36 compute-0 nova_compute[259850]:   </sysinfo>
Oct 11 04:27:36 compute-0 nova_compute[259850]:   <os>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <boot dev="hd"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <smbios mode="sysinfo"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:   </os>
Oct 11 04:27:36 compute-0 nova_compute[259850]:   <features>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <acpi/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <apic/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <vmcoreinfo/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:   </features>
Oct 11 04:27:36 compute-0 nova_compute[259850]:   <clock offset="utc">
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <timer name="hpet" present="no"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:   </clock>
Oct 11 04:27:36 compute-0 nova_compute[259850]:   <cpu mode="host-model" match="exact">
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:   </cpu>
Oct 11 04:27:36 compute-0 nova_compute[259850]:   <devices>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <disk type="network" device="cdrom">
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <driver type="raw" cache="none"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <source protocol="rbd" name="vms/bc8a8366-552e-41a7-bd24-afacb81114bc_disk.config">
Oct 11 04:27:36 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       </source>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:27:36 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <target dev="sda" bus="sata"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <disk type="network" device="disk">
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <source protocol="rbd" name="volumes/volume-8c2f9549-0680-4737-9249-b8499f4fddcc">
Oct 11 04:27:36 compute-0 nova_compute[259850]:         <host name="192.168.122.100" port="6789"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       </source>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <auth username="openstack">
Oct 11 04:27:36 compute-0 nova_compute[259850]:         <secret type="ceph" uuid="23b68101-59a9-532f-ab6b-9acf78fb2162"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       </auth>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <target dev="vda" bus="virtio"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <serial>8c2f9549-0680-4737-9249-b8499f4fddcc</serial>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <encryption format="luks">
Oct 11 04:27:36 compute-0 nova_compute[259850]:         <secret type="passphrase" uuid="074a063b-6894-4a5e-b6e9-c56c75dbe275"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       </encryption>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     </disk>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <interface type="ethernet">
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <mac address="fa:16:3e:0c:2a:34"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <mtu size="1442"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <target dev="tap7932da10-ea"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     </interface>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <serial type="pty">
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <log file="/var/lib/nova/instances/bc8a8366-552e-41a7-bd24-afacb81114bc/console.log" append="off"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     </serial>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <video>
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <model type="virtio"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     </video>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <input type="tablet" bus="usb"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <rng model="virtio">
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <backend model="random">/dev/urandom</backend>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     </rng>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <controller type="usb" index="0"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     <memballoon model="virtio">
Oct 11 04:27:36 compute-0 nova_compute[259850]:       <stats period="10"/>
Oct 11 04:27:36 compute-0 nova_compute[259850]:     </memballoon>
Oct 11 04:27:36 compute-0 nova_compute[259850]:   </devices>
Oct 11 04:27:36 compute-0 nova_compute[259850]: </domain>
Oct 11 04:27:36 compute-0 nova_compute[259850]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.597 2 DEBUG nova.compute.manager [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Preparing to wait for external event network-vif-plugged-7932da10-eaa5-4512-a329-16ee6de1e17c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.598 2 DEBUG oslo_concurrency.lockutils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquiring lock "bc8a8366-552e-41a7-bd24-afacb81114bc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.598 2 DEBUG oslo_concurrency.lockutils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "bc8a8366-552e-41a7-bd24-afacb81114bc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.599 2 DEBUG oslo_concurrency.lockutils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "bc8a8366-552e-41a7-bd24-afacb81114bc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.600 2 DEBUG nova.virt.libvirt.vif [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:27:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-838098928',display_name='tempest-TestEncryptedCinderVolumes-server-838098928',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-838098928',id=30,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3pFsCx6Lv4ZhABALE9kJlaC2VLcHHMajXk3FwO0YwDAD8GzEfOWx1nJYDa1BnjHeTckP7sy9/Wa8HAN31/eIMe4p7SlbrVdBBFvJpvxVBbmewtPKqpzKac1Jk+If2OOg==',key_name='tempest-TestEncryptedCinderVolumes-1373224468',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='226e6310b4ee4a68b552a6b3e940a458',ramdisk_id='',reservation_id='r-8kn13c8t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1931311766',owner_user_name='tempest-TestEncryptedCinderVolumes-1931311766-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:27:31Z,user_data=None,user_id='7bf17f3eb8514499a54d67542db6b88a',uuid=bc8a8366-552e-41a7-bd24-afacb81114bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7932da10-eaa5-4512-a329-16ee6de1e17c", "address": "fa:16:3e:0c:2a:34", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7932da10-ea", "ovs_interfaceid": "7932da10-eaa5-4512-a329-16ee6de1e17c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.601 2 DEBUG nova.network.os_vif_util [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Converting VIF {"id": "7932da10-eaa5-4512-a329-16ee6de1e17c", "address": "fa:16:3e:0c:2a:34", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7932da10-ea", "ovs_interfaceid": "7932da10-eaa5-4512-a329-16ee6de1e17c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.602 2 DEBUG nova.network.os_vif_util [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:2a:34,bridge_name='br-int',has_traffic_filtering=True,id=7932da10-eaa5-4512-a329-16ee6de1e17c,network=Network(61e3c4a7-2f2f-451f-b913-c2cdac8efdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7932da10-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.603 2 DEBUG os_vif [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:2a:34,bridge_name='br-int',has_traffic_filtering=True,id=7932da10-eaa5-4512-a329-16ee6de1e17c,network=Network(61e3c4a7-2f2f-451f-b913-c2cdac8efdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7932da10-ea') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.605 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.606 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.611 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7932da10-ea, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.612 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7932da10-ea, col_values=(('external_ids', {'iface-id': '7932da10-eaa5-4512-a329-16ee6de1e17c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0c:2a:34', 'vm-uuid': 'bc8a8366-552e-41a7-bd24-afacb81114bc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:36 compute-0 NetworkManager[44920]: <info>  [1760156856.6149] manager: (tap7932da10-ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/149)
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.624 2 INFO os_vif [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:2a:34,bridge_name='br-int',has_traffic_filtering=True,id=7932da10-eaa5-4512-a329-16ee6de1e17c,network=Network(61e3c4a7-2f2f-451f-b913-c2cdac8efdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7932da10-ea')
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.677 2 DEBUG nova.virt.libvirt.driver [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.677 2 DEBUG nova.virt.libvirt.driver [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.678 2 DEBUG nova.virt.libvirt.driver [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] No VIF found with MAC fa:16:3e:0c:2a:34, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.679 2 INFO nova.virt.libvirt.driver [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Using config drive
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.708 2 DEBUG nova.storage.rbd_utils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] rbd image bc8a8366-552e-41a7-bd24-afacb81114bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.900 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.901 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4342MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.902 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:27:36 compute-0 nova_compute[259850]: 2025-10-11 04:27:36.902 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:27:37 compute-0 nova_compute[259850]: 2025-10-11 04:27:37.001 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Instance bc8a8366-552e-41a7-bd24-afacb81114bc actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 04:27:37 compute-0 nova_compute[259850]: 2025-10-11 04:27:37.002 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:27:37 compute-0 nova_compute[259850]: 2025-10-11 04:27:37.002 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:27:37 compute-0 nova_compute[259850]: 2025-10-11 04:27:37.039 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:27:37 compute-0 ceph-mon[74273]: pgmap v1976: 305 pgs: 305 active+clean; 271 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.5 KiB/s wr, 26 op/s
Oct 11 04:27:37 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2609465911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:27:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:27:37 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3758724885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:27:37 compute-0 nova_compute[259850]: 2025-10-11 04:27:37.491 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:27:37 compute-0 nova_compute[259850]: 2025-10-11 04:27:37.501 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:27:37 compute-0 nova_compute[259850]: 2025-10-11 04:27:37.521 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:27:37 compute-0 nova_compute[259850]: 2025-10-11 04:27:37.550 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:27:37 compute-0 nova_compute[259850]: 2025-10-11 04:27:37.551 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:27:37 compute-0 nova_compute[259850]: 2025-10-11 04:27:37.597 2 DEBUG nova.network.neutron [req-c0705dd8-5fc4-49d4-b2e7-101fe39961ac req-e049b7b4-ce66-4e5c-abee-73f1db26adbe f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Updated VIF entry in instance network info cache for port 7932da10-eaa5-4512-a329-16ee6de1e17c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:27:37 compute-0 nova_compute[259850]: 2025-10-11 04:27:37.599 2 DEBUG nova.network.neutron [req-c0705dd8-5fc4-49d4-b2e7-101fe39961ac req-e049b7b4-ce66-4e5c-abee-73f1db26adbe f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Updating instance_info_cache with network_info: [{"id": "7932da10-eaa5-4512-a329-16ee6de1e17c", "address": "fa:16:3e:0c:2a:34", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7932da10-ea", "ovs_interfaceid": "7932da10-eaa5-4512-a329-16ee6de1e17c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:27:37 compute-0 nova_compute[259850]: 2025-10-11 04:27:37.607 2 INFO nova.virt.libvirt.driver [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Creating config drive at /var/lib/nova/instances/bc8a8366-552e-41a7-bd24-afacb81114bc/disk.config
Oct 11 04:27:37 compute-0 nova_compute[259850]: 2025-10-11 04:27:37.614 2 DEBUG oslo_concurrency.processutils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bc8a8366-552e-41a7-bd24-afacb81114bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8xcn4j7r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:27:37 compute-0 nova_compute[259850]: 2025-10-11 04:27:37.637 2 DEBUG oslo_concurrency.lockutils [req-c0705dd8-5fc4-49d4-b2e7-101fe39961ac req-e049b7b4-ce66-4e5c-abee-73f1db26adbe f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-bc8a8366-552e-41a7-bd24-afacb81114bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:27:37 compute-0 nova_compute[259850]: 2025-10-11 04:27:37.742 2 DEBUG oslo_concurrency.processutils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bc8a8366-552e-41a7-bd24-afacb81114bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8xcn4j7r" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:27:37 compute-0 nova_compute[259850]: 2025-10-11 04:27:37.782 2 DEBUG nova.storage.rbd_utils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] rbd image bc8a8366-552e-41a7-bd24-afacb81114bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:27:37 compute-0 nova_compute[259850]: 2025-10-11 04:27:37.786 2 DEBUG oslo_concurrency.processutils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bc8a8366-552e-41a7-bd24-afacb81114bc/disk.config bc8a8366-552e-41a7-bd24-afacb81114bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:27:37 compute-0 nova_compute[259850]: 2025-10-11 04:27:37.946 2 DEBUG oslo_concurrency.processutils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bc8a8366-552e-41a7-bd24-afacb81114bc/disk.config bc8a8366-552e-41a7-bd24-afacb81114bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:27:37 compute-0 nova_compute[259850]: 2025-10-11 04:27:37.947 2 INFO nova.virt.libvirt.driver [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Deleting local config drive /var/lib/nova/instances/bc8a8366-552e-41a7-bd24-afacb81114bc/disk.config because it was imported into RBD.
Oct 11 04:27:38 compute-0 kernel: tap7932da10-ea: entered promiscuous mode
Oct 11 04:27:38 compute-0 NetworkManager[44920]: <info>  [1760156858.0194] manager: (tap7932da10-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/150)
Oct 11 04:27:38 compute-0 nova_compute[259850]: 2025-10-11 04:27:38.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:38 compute-0 ovn_controller[152025]: 2025-10-11T04:27:38Z|00291|binding|INFO|Claiming lport 7932da10-eaa5-4512-a329-16ee6de1e17c for this chassis.
Oct 11 04:27:38 compute-0 ovn_controller[152025]: 2025-10-11T04:27:38Z|00292|binding|INFO|7932da10-eaa5-4512-a329-16ee6de1e17c: Claiming fa:16:3e:0c:2a:34 10.100.0.12
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:38.033 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:2a:34 10.100.0.12'], port_security=['fa:16:3e:0c:2a:34 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'bc8a8366-552e-41a7-bd24-afacb81114bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '226e6310b4ee4a68b552a6b3e940a458', 'neutron:revision_number': '2', 'neutron:security_group_ids': '77d1d83e-ff49-437a-8a94-baa66143ce2b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17f237ce-6320-4c27-9970-fd94aa8457a3, chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=7932da10-eaa5-4512-a329-16ee6de1e17c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:38.035 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 7932da10-eaa5-4512-a329-16ee6de1e17c in datapath 61e3c4a7-2f2f-451f-b913-c2cdac8efdf3 bound to our chassis
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:38.037 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 61e3c4a7-2f2f-451f-b913-c2cdac8efdf3
Oct 11 04:27:38 compute-0 ovn_controller[152025]: 2025-10-11T04:27:38Z|00293|binding|INFO|Setting lport 7932da10-eaa5-4512-a329-16ee6de1e17c ovn-installed in OVS
Oct 11 04:27:38 compute-0 ovn_controller[152025]: 2025-10-11T04:27:38Z|00294|binding|INFO|Setting lport 7932da10-eaa5-4512-a329-16ee6de1e17c up in Southbound
Oct 11 04:27:38 compute-0 nova_compute[259850]: 2025-10-11 04:27:38.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:38 compute-0 nova_compute[259850]: 2025-10-11 04:27:38.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:38.057 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[67e874f9-3b48-457b-9edb-f41d9e0fd6f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:38.058 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap61e3c4a7-21 in ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:38.061 267637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap61e3c4a7-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:38.061 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[bbd08a42-8219-4506-9c19-a5685d3caada]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:38.062 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[74b9892e-7320-40a2-b1d9-4d796e6f692f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:27:38 compute-0 systemd-udevd[309526]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:27:38 compute-0 NetworkManager[44920]: <info>  [1760156858.0834] device (tap7932da10-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:27:38 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 271 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.3 KiB/s wr, 24 op/s
Oct 11 04:27:38 compute-0 NetworkManager[44920]: <info>  [1760156858.0847] device (tap7932da10-ea): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:38.085 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[efc28d20-12f9-497d-89df-53173c0c36d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:27:38 compute-0 systemd-machined[214869]: New machine qemu-30-instance-0000001e.
Oct 11 04:27:38 compute-0 systemd[1]: Started Virtual Machine qemu-30-instance-0000001e.
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:38.117 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[47be82cb-e24f-49c4-a093-be354e5150a8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:38.163 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[e6da5d11-a5bd-463a-9280-c90da1eda5c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:38.172 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[ea16f028-c593-4ce8-a407-45f5abd5e1bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:27:38 compute-0 NetworkManager[44920]: <info>  [1760156858.1739] manager: (tap61e3c4a7-20): new Veth device (/org/freedesktop/NetworkManager/Devices/151)
Oct 11 04:27:38 compute-0 systemd-udevd[309530]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:38.238 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[2b43bcf5-8f56-4d9d-8d2e-4d42a8ef8469]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:38.242 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[fa69af0e-dd2d-4b4d-8784-612b007cc532]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:27:38 compute-0 NetworkManager[44920]: <info>  [1760156858.2675] device (tap61e3c4a7-20): carrier: link connected
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:38.276 267785 DEBUG oslo.privsep.daemon [-] privsep: reply[bb93f019-1453-42fe-93e2-93a9fd178b0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:38.301 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[1ae5c876-a7fb-430e-bda5-0669217d683f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61e3c4a7-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:30:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524330, 'reachable_time': 36529, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309559, 'error': None, 'target': 'ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:38.314 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[73d69518-469d-4c9a-aa6a-b1295238c91a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1d:3090'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 524330, 'tstamp': 524330}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309560, 'error': None, 'target': 'ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:38.332 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[8860723b-abc4-4538-b74b-05d18eb93a62]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61e3c4a7-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:30:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524330, 'reachable_time': 36529, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309561, 'error': None, 'target': 'ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:27:38 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3758724885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:38.365 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[d1ef5a95-4881-4402-b504-aa2d79aaf415]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:38.439 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[bbae36a6-464d-447d-99e8-807c7127b74b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:38.441 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61e3c4a7-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:38.441 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:38.442 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61e3c4a7-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:27:38 compute-0 nova_compute[259850]: 2025-10-11 04:27:38.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:38 compute-0 NetworkManager[44920]: <info>  [1760156858.4915] manager: (tap61e3c4a7-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/152)
Oct 11 04:27:38 compute-0 kernel: tap61e3c4a7-20: entered promiscuous mode
Oct 11 04:27:38 compute-0 nova_compute[259850]: 2025-10-11 04:27:38.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:38.498 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap61e3c4a7-20, col_values=(('external_ids', {'iface-id': 'd6a2f98f-398c-4cad-9cd4-adac499bc3d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:27:38 compute-0 ovn_controller[152025]: 2025-10-11T04:27:38Z|00295|binding|INFO|Releasing lport d6a2f98f-398c-4cad-9cd4-adac499bc3d4 from this chassis (sb_readonly=0)
Oct 11 04:27:38 compute-0 nova_compute[259850]: 2025-10-11 04:27:38.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:38 compute-0 nova_compute[259850]: 2025-10-11 04:27:38.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:38.503 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/61e3c4a7-2f2f-451f-b913-c2cdac8efdf3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/61e3c4a7-2f2f-451f-b913-c2cdac8efdf3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:38.504 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[857808a9-4391-4392-83da-add9256a9768]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:38.505 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: global
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]:     log         /dev/log local0 debug
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]:     log-tag     haproxy-metadata-proxy-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]:     user        root
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]:     group       root
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]:     maxconn     1024
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]:     pidfile     /var/lib/neutron/external/pids/61e3c4a7-2f2f-451f-b913-c2cdac8efdf3.pid.haproxy
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]:     daemon
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: defaults
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]:     log global
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]:     mode http
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]:     option httplog
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]:     option dontlognull
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]:     option http-server-close
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]:     option forwardfor
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]:     retries                 3
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]:     timeout http-request    30s
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]:     timeout connect         30s
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]:     timeout client          32s
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]:     timeout server          32s
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]:     timeout http-keep-alive 30s
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: listen listener
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]:     bind 169.254.169.254:80
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]:     http-request add-header X-OVN-Network-ID 61e3c4a7-2f2f-451f-b913-c2cdac8efdf3
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 04:27:38 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:27:38.507 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3', 'env', 'PROCESS_TAG=haproxy-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/61e3c4a7-2f2f-451f-b913-c2cdac8efdf3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 04:27:38 compute-0 nova_compute[259850]: 2025-10-11 04:27:38.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:38 compute-0 nova_compute[259850]: 2025-10-11 04:27:38.552 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:27:38 compute-0 nova_compute[259850]: 2025-10-11 04:27:38.553 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:27:38 compute-0 nova_compute[259850]: 2025-10-11 04:27:38.554 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:27:38 compute-0 nova_compute[259850]: 2025-10-11 04:27:38.554 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:27:38 compute-0 nova_compute[259850]: 2025-10-11 04:27:38.593 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 11 04:27:38 compute-0 nova_compute[259850]: 2025-10-11 04:27:38.593 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:27:38 compute-0 nova_compute[259850]: 2025-10-11 04:27:38.595 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:27:38 compute-0 nova_compute[259850]: 2025-10-11 04:27:38.595 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:27:38 compute-0 nova_compute[259850]: 2025-10-11 04:27:38.751 2 DEBUG nova.compute.manager [req-7ac94088-0be5-4f51-a42e-8b1eaea23306 req-e12a0779-e994-40e0-8ccb-10cda0e1a2ef f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Received event network-vif-plugged-7932da10-eaa5-4512-a329-16ee6de1e17c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:27:38 compute-0 nova_compute[259850]: 2025-10-11 04:27:38.752 2 DEBUG oslo_concurrency.lockutils [req-7ac94088-0be5-4f51-a42e-8b1eaea23306 req-e12a0779-e994-40e0-8ccb-10cda0e1a2ef f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "bc8a8366-552e-41a7-bd24-afacb81114bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:27:38 compute-0 nova_compute[259850]: 2025-10-11 04:27:38.753 2 DEBUG oslo_concurrency.lockutils [req-7ac94088-0be5-4f51-a42e-8b1eaea23306 req-e12a0779-e994-40e0-8ccb-10cda0e1a2ef f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "bc8a8366-552e-41a7-bd24-afacb81114bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:27:38 compute-0 nova_compute[259850]: 2025-10-11 04:27:38.754 2 DEBUG oslo_concurrency.lockutils [req-7ac94088-0be5-4f51-a42e-8b1eaea23306 req-e12a0779-e994-40e0-8ccb-10cda0e1a2ef f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "bc8a8366-552e-41a7-bd24-afacb81114bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:27:38 compute-0 nova_compute[259850]: 2025-10-11 04:27:38.755 2 DEBUG nova.compute.manager [req-7ac94088-0be5-4f51-a42e-8b1eaea23306 req-e12a0779-e994-40e0-8ccb-10cda0e1a2ef f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Processing event network-vif-plugged-7932da10-eaa5-4512-a329-16ee6de1e17c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:27:38 compute-0 podman[309629]: 2025-10-11 04:27:38.930441246 +0000 UTC m=+0.044142524 container create c62cc7fe2c0bb78fdfcc83e62767dbba62e09df04937434deaf07bca06583364 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 11 04:27:38 compute-0 systemd[1]: Started libpod-conmon-c62cc7fe2c0bb78fdfcc83e62767dbba62e09df04937434deaf07bca06583364.scope.
Oct 11 04:27:39 compute-0 podman[309629]: 2025-10-11 04:27:38.906647816 +0000 UTC m=+0.020349114 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 04:27:39 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1431f464a01116d5d8a65b0a7c5048cdd4152ce27dcdc294b1f1a89929705a28/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:39 compute-0 podman[309629]: 2025-10-11 04:27:39.043018217 +0000 UTC m=+0.156719575 container init c62cc7fe2c0bb78fdfcc83e62767dbba62e09df04937434deaf07bca06583364 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:39 compute-0 podman[309629]: 2025-10-11 04:27:39.050350044 +0000 UTC m=+0.164051362 container start c62cc7fe2c0bb78fdfcc83e62767dbba62e09df04937434deaf07bca06583364 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 11 04:27:39 compute-0 nova_compute[259850]: 2025-10-11 04:27:39.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:27:39 compute-0 neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3[309645]: [NOTICE]   (309649) : New worker (309651) forked
Oct 11 04:27:39 compute-0 neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3[309645]: [NOTICE]   (309649) : Loading success.
Oct 11 04:27:39 compute-0 nova_compute[259850]: 2025-10-11 04:27:39.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:39 compute-0 ceph-mon[74273]: pgmap v1977: 305 pgs: 305 active+clean; 271 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.3 KiB/s wr, 24 op/s
Oct 11 04:27:40 compute-0 nova_compute[259850]: 2025-10-11 04:27:40.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:27:40 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 271 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 11 04:27:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:27:40 compute-0 nova_compute[259850]: 2025-10-11 04:27:40.861 2 DEBUG nova.compute.manager [req-d70ed207-2118-43a4-85bc-be134b0e5f4f req-c38b9845-892b-492c-a418-44eb38cb9a2f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Received event network-vif-plugged-7932da10-eaa5-4512-a329-16ee6de1e17c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:27:40 compute-0 nova_compute[259850]: 2025-10-11 04:27:40.862 2 DEBUG oslo_concurrency.lockutils [req-d70ed207-2118-43a4-85bc-be134b0e5f4f req-c38b9845-892b-492c-a418-44eb38cb9a2f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "bc8a8366-552e-41a7-bd24-afacb81114bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:27:40 compute-0 nova_compute[259850]: 2025-10-11 04:27:40.863 2 DEBUG oslo_concurrency.lockutils [req-d70ed207-2118-43a4-85bc-be134b0e5f4f req-c38b9845-892b-492c-a418-44eb38cb9a2f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "bc8a8366-552e-41a7-bd24-afacb81114bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:27:40 compute-0 nova_compute[259850]: 2025-10-11 04:27:40.863 2 DEBUG oslo_concurrency.lockutils [req-d70ed207-2118-43a4-85bc-be134b0e5f4f req-c38b9845-892b-492c-a418-44eb38cb9a2f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "bc8a8366-552e-41a7-bd24-afacb81114bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:27:40 compute-0 nova_compute[259850]: 2025-10-11 04:27:40.864 2 DEBUG nova.compute.manager [req-d70ed207-2118-43a4-85bc-be134b0e5f4f req-c38b9845-892b-492c-a418-44eb38cb9a2f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] No waiting events found dispatching network-vif-plugged-7932da10-eaa5-4512-a329-16ee6de1e17c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:27:40 compute-0 nova_compute[259850]: 2025-10-11 04:27:40.865 2 WARNING nova.compute.manager [req-d70ed207-2118-43a4-85bc-be134b0e5f4f req-c38b9845-892b-492c-a418-44eb38cb9a2f f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Received unexpected event network-vif-plugged-7932da10-eaa5-4512-a329-16ee6de1e17c for instance with vm_state building and task_state spawning.
Oct 11 04:27:41 compute-0 nova_compute[259850]: 2025-10-11 04:27:41.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:27:41 compute-0 ceph-mon[74273]: pgmap v1978: 305 pgs: 305 active+clean; 271 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 11 04:27:41 compute-0 nova_compute[259850]: 2025-10-11 04:27:41.551 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156861.5507224, bc8a8366-552e-41a7-bd24-afacb81114bc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:27:41 compute-0 nova_compute[259850]: 2025-10-11 04:27:41.551 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] VM Started (Lifecycle Event)
Oct 11 04:27:41 compute-0 nova_compute[259850]: 2025-10-11 04:27:41.553 2 DEBUG nova.compute.manager [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:27:41 compute-0 nova_compute[259850]: 2025-10-11 04:27:41.561 2 DEBUG nova.virt.libvirt.driver [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:27:41 compute-0 nova_compute[259850]: 2025-10-11 04:27:41.565 2 INFO nova.virt.libvirt.driver [-] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Instance spawned successfully.
Oct 11 04:27:41 compute-0 nova_compute[259850]: 2025-10-11 04:27:41.566 2 DEBUG nova.virt.libvirt.driver [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:27:41 compute-0 nova_compute[259850]: 2025-10-11 04:27:41.572 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:27:41 compute-0 nova_compute[259850]: 2025-10-11 04:27:41.575 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:27:41 compute-0 nova_compute[259850]: 2025-10-11 04:27:41.584 2 DEBUG nova.virt.libvirt.driver [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:27:41 compute-0 nova_compute[259850]: 2025-10-11 04:27:41.584 2 DEBUG nova.virt.libvirt.driver [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:27:41 compute-0 nova_compute[259850]: 2025-10-11 04:27:41.585 2 DEBUG nova.virt.libvirt.driver [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:27:41 compute-0 nova_compute[259850]: 2025-10-11 04:27:41.585 2 DEBUG nova.virt.libvirt.driver [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:27:41 compute-0 nova_compute[259850]: 2025-10-11 04:27:41.586 2 DEBUG nova.virt.libvirt.driver [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:27:41 compute-0 nova_compute[259850]: 2025-10-11 04:27:41.586 2 DEBUG nova.virt.libvirt.driver [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:27:41 compute-0 nova_compute[259850]: 2025-10-11 04:27:41.591 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:27:41 compute-0 nova_compute[259850]: 2025-10-11 04:27:41.591 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156861.5529525, bc8a8366-552e-41a7-bd24-afacb81114bc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:27:41 compute-0 nova_compute[259850]: 2025-10-11 04:27:41.592 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] VM Paused (Lifecycle Event)
Oct 11 04:27:41 compute-0 nova_compute[259850]: 2025-10-11 04:27:41.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:41 compute-0 nova_compute[259850]: 2025-10-11 04:27:41.637 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:27:41 compute-0 nova_compute[259850]: 2025-10-11 04:27:41.642 2 DEBUG nova.virt.driver [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] Emitting event <LifecycleEvent: 1760156861.5610776, bc8a8366-552e-41a7-bd24-afacb81114bc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:27:41 compute-0 nova_compute[259850]: 2025-10-11 04:27:41.642 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] VM Resumed (Lifecycle Event)
Oct 11 04:27:41 compute-0 nova_compute[259850]: 2025-10-11 04:27:41.665 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:27:41 compute-0 nova_compute[259850]: 2025-10-11 04:27:41.668 2 DEBUG nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:27:41 compute-0 nova_compute[259850]: 2025-10-11 04:27:41.684 2 INFO nova.compute.manager [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Took 8.54 seconds to spawn the instance on the hypervisor.
Oct 11 04:27:41 compute-0 nova_compute[259850]: 2025-10-11 04:27:41.684 2 DEBUG nova.compute.manager [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:27:41 compute-0 nova_compute[259850]: 2025-10-11 04:27:41.696 2 INFO nova.compute.manager [None req-3700afd8-f980-4cf2-b41e-54ecc7fbe9a2 - - - - - -] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:27:41 compute-0 nova_compute[259850]: 2025-10-11 04:27:41.747 2 INFO nova.compute.manager [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Took 10.84 seconds to build instance.
Oct 11 04:27:41 compute-0 nova_compute[259850]: 2025-10-11 04:27:41.766 2 DEBUG oslo_concurrency.lockutils [None req-99f7b22c-b8ce-4bd1-b37a-a02bc2daa7b9 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "bc8a8366-552e-41a7-bd24-afacb81114bc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:27:42 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 271 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 11 04:27:43 compute-0 ceph-mon[74273]: pgmap v1979: 305 pgs: 305 active+clean; 271 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 11 04:27:44 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 271 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 13 KiB/s wr, 64 op/s
Oct 11 04:27:44 compute-0 nova_compute[259850]: 2025-10-11 04:27:44.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:45 compute-0 nova_compute[259850]: 2025-10-11 04:27:45.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:27:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:27:45 compute-0 nova_compute[259850]: 2025-10-11 04:27:45.321 2 DEBUG nova.compute.manager [req-203e2e50-31b2-4933-ab99-194377e828d4 req-6db3df7f-536f-40b5-9986-07ca6ca5f992 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Received event network-changed-7932da10-eaa5-4512-a329-16ee6de1e17c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:27:45 compute-0 nova_compute[259850]: 2025-10-11 04:27:45.322 2 DEBUG nova.compute.manager [req-203e2e50-31b2-4933-ab99-194377e828d4 req-6db3df7f-536f-40b5-9986-07ca6ca5f992 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Refreshing instance network info cache due to event network-changed-7932da10-eaa5-4512-a329-16ee6de1e17c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:27:45 compute-0 nova_compute[259850]: 2025-10-11 04:27:45.323 2 DEBUG oslo_concurrency.lockutils [req-203e2e50-31b2-4933-ab99-194377e828d4 req-6db3df7f-536f-40b5-9986-07ca6ca5f992 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "refresh_cache-bc8a8366-552e-41a7-bd24-afacb81114bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:27:45 compute-0 nova_compute[259850]: 2025-10-11 04:27:45.323 2 DEBUG oslo_concurrency.lockutils [req-203e2e50-31b2-4933-ab99-194377e828d4 req-6db3df7f-536f-40b5-9986-07ca6ca5f992 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquired lock "refresh_cache-bc8a8366-552e-41a7-bd24-afacb81114bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:27:45 compute-0 nova_compute[259850]: 2025-10-11 04:27:45.324 2 DEBUG nova.network.neutron [req-203e2e50-31b2-4933-ab99-194377e828d4 req-6db3df7f-536f-40b5-9986-07ca6ca5f992 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Refreshing network info cache for port 7932da10-eaa5-4512-a329-16ee6de1e17c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:27:45 compute-0 ceph-mon[74273]: pgmap v1980: 305 pgs: 305 active+clean; 271 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 13 KiB/s wr, 64 op/s
Oct 11 04:27:45 compute-0 podman[309666]: 2025-10-11 04:27:45.454119263 +0000 UTC m=+0.151283463 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:27:46 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 271 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 13 KiB/s wr, 64 op/s
Oct 11 04:27:46 compute-0 nova_compute[259850]: 2025-10-11 04:27:46.462 2 DEBUG nova.network.neutron [req-203e2e50-31b2-4933-ab99-194377e828d4 req-6db3df7f-536f-40b5-9986-07ca6ca5f992 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Updated VIF entry in instance network info cache for port 7932da10-eaa5-4512-a329-16ee6de1e17c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:27:46 compute-0 nova_compute[259850]: 2025-10-11 04:27:46.463 2 DEBUG nova.network.neutron [req-203e2e50-31b2-4933-ab99-194377e828d4 req-6db3df7f-536f-40b5-9986-07ca6ca5f992 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Updating instance_info_cache with network_info: [{"id": "7932da10-eaa5-4512-a329-16ee6de1e17c", "address": "fa:16:3e:0c:2a:34", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7932da10-ea", "ovs_interfaceid": "7932da10-eaa5-4512-a329-16ee6de1e17c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:27:46 compute-0 nova_compute[259850]: 2025-10-11 04:27:46.480 2 DEBUG oslo_concurrency.lockutils [req-203e2e50-31b2-4933-ab99-194377e828d4 req-6db3df7f-536f-40b5-9986-07ca6ca5f992 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Releasing lock "refresh_cache-bc8a8366-552e-41a7-bd24-afacb81114bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:27:46 compute-0 nova_compute[259850]: 2025-10-11 04:27:46.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:47 compute-0 ceph-mon[74273]: pgmap v1981: 305 pgs: 305 active+clean; 271 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 13 KiB/s wr, 64 op/s
Oct 11 04:27:48 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 271 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Oct 11 04:27:48 compute-0 podman[309692]: 2025-10-11 04:27:48.355189179 +0000 UTC m=+0.065234969 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 11 04:27:49 compute-0 nova_compute[259850]: 2025-10-11 04:27:49.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:49 compute-0 ceph-mon[74273]: pgmap v1982: 305 pgs: 305 active+clean; 271 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Oct 11 04:27:50 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 271 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Oct 11 04:27:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:27:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:27:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2623681621' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:27:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:27:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2623681621' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:27:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:27:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:27:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:27:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:27:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:27:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:27:51 compute-0 ceph-mon[74273]: pgmap v1983: 305 pgs: 305 active+clean; 271 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Oct 11 04:27:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2623681621' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:27:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2623681621' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:27:51 compute-0 nova_compute[259850]: 2025-10-11 04:27:51.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:52 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 271 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Oct 11 04:27:52 compute-0 ovn_controller[152025]: 2025-10-11T04:27:52Z|00072|pinctrl(ovn_pinctrl0)|WARN|Dropped 2 log messages in last 549 seconds (most recently, 545 seconds ago) due to excessive rate
Oct 11 04:27:52 compute-0 ovn_controller[152025]: 2025-10-11T04:27:52Z|00073|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.12
Oct 11 04:27:52 compute-0 ovn_controller[152025]: 2025-10-11T04:27:52Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:0c:2a:34 10.100.0.12
Oct 11 04:27:53 compute-0 ceph-mon[74273]: pgmap v1984: 305 pgs: 305 active+clean; 271 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Oct 11 04:27:54 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 283 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.0 MiB/s wr, 119 op/s
Oct 11 04:27:54 compute-0 nova_compute[259850]: 2025-10-11 04:27:54.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:27:55 compute-0 ceph-mon[74273]: pgmap v1985: 305 pgs: 305 active+clean; 283 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.0 MiB/s wr, 119 op/s
Oct 11 04:27:55 compute-0 sudo[309711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:55 compute-0 sudo[309711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:55 compute-0 sudo[309711]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:56 compute-0 sudo[309736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:56 compute-0 sudo[309736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:56 compute-0 sudo[309736]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:56 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 283 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 55 op/s
Oct 11 04:27:56 compute-0 sudo[309761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:56 compute-0 sudo[309761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:56 compute-0 sudo[309761]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:56 compute-0 sudo[309786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:27:56 compute-0 sudo[309786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:56 compute-0 nova_compute[259850]: 2025-10-11 04:27:56.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:56 compute-0 sudo[309786]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:27:56 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:27:56 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:27:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:27:56 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:27:56 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev e0cd072b-f96f-486f-bf95-1f90eb051da2 does not exist
Oct 11 04:27:56 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 409d9503-1d4e-4820-a333-882f63288949 does not exist
Oct 11 04:27:56 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev bef55f3a-bc8d-4dd5-adf9-08c1943e7f29 does not exist
Oct 11 04:27:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:27:56 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:27:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:27:56 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:27:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:27:56 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:57 compute-0 sudo[309842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:57 compute-0 sudo[309842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:57 compute-0 sudo[309842]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:57 compute-0 sudo[309867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:57 compute-0 sudo[309867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:57 compute-0 sudo[309867]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:57 compute-0 sudo[309892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:57 compute-0 sudo[309892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:57 compute-0 sudo[309892]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:57 compute-0 sudo[309917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:27:57 compute-0 sudo[309917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:57 compute-0 ovn_controller[152025]: 2025-10-11T04:27:57Z|00075|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.12
Oct 11 04:27:57 compute-0 ovn_controller[152025]: 2025-10-11T04:27:57Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:0c:2a:34 10.100.0.12
Oct 11 04:27:57 compute-0 ceph-mon[74273]: pgmap v1986: 305 pgs: 305 active+clean; 283 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 55 op/s
Oct 11 04:27:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:27:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:27:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:27:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:27:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:57 compute-0 ovn_controller[152025]: 2025-10-11T04:27:57Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0c:2a:34 10.100.0.12
Oct 11 04:27:57 compute-0 ovn_controller[152025]: 2025-10-11T04:27:57Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0c:2a:34 10.100.0.12
Oct 11 04:27:57 compute-0 podman[309986]: 2025-10-11 04:27:57.712630026 +0000 UTC m=+0.062287216 container create 54dea2aafb557450372384bbafeeb6b8a3c3ab786958098ff74d264961cb0cec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_euclid, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:57 compute-0 systemd[1]: Started libpod-conmon-54dea2aafb557450372384bbafeeb6b8a3c3ab786958098ff74d264961cb0cec.scope.
Oct 11 04:27:57 compute-0 podman[309986]: 2025-10-11 04:27:57.684949256 +0000 UTC m=+0.034606486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:57 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:57 compute-0 podman[309986]: 2025-10-11 04:27:57.82178431 +0000 UTC m=+0.171441540 container init 54dea2aafb557450372384bbafeeb6b8a3c3ab786958098ff74d264961cb0cec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 04:27:57 compute-0 podman[309986]: 2025-10-11 04:27:57.83454269 +0000 UTC m=+0.184199870 container start 54dea2aafb557450372384bbafeeb6b8a3c3ab786958098ff74d264961cb0cec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:27:57 compute-0 podman[309986]: 2025-10-11 04:27:57.837543804 +0000 UTC m=+0.187201004 container attach 54dea2aafb557450372384bbafeeb6b8a3c3ab786958098ff74d264961cb0cec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_euclid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 04:27:57 compute-0 confident_euclid[310002]: 167 167
Oct 11 04:27:57 compute-0 systemd[1]: libpod-54dea2aafb557450372384bbafeeb6b8a3c3ab786958098ff74d264961cb0cec.scope: Deactivated successfully.
Oct 11 04:27:57 compute-0 conmon[310002]: conmon 54dea2aafb5574503723 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-54dea2aafb557450372384bbafeeb6b8a3c3ab786958098ff74d264961cb0cec.scope/container/memory.events
Oct 11 04:27:57 compute-0 podman[309986]: 2025-10-11 04:27:57.845771666 +0000 UTC m=+0.195428846 container died 54dea2aafb557450372384bbafeeb6b8a3c3ab786958098ff74d264961cb0cec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 04:27:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-a547799c18799b59babb077cf181cb6f78036fc076f5d2fb0cc8cb71cd65ec43-merged.mount: Deactivated successfully.
Oct 11 04:27:57 compute-0 podman[309986]: 2025-10-11 04:27:57.889117137 +0000 UTC m=+0.238774317 container remove 54dea2aafb557450372384bbafeeb6b8a3c3ab786958098ff74d264961cb0cec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_euclid, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:27:57 compute-0 systemd[1]: libpod-conmon-54dea2aafb557450372384bbafeeb6b8a3c3ab786958098ff74d264961cb0cec.scope: Deactivated successfully.
Oct 11 04:27:58 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 283 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.0 MiB/s wr, 60 op/s
Oct 11 04:27:58 compute-0 podman[310026]: 2025-10-11 04:27:58.161960233 +0000 UTC m=+0.094901945 container create 114a4682abaa53fcb24fa3a663b651d530d8f7fa921365def8ccb13288a235af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_easley, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 04:27:58 compute-0 systemd[1]: Started libpod-conmon-114a4682abaa53fcb24fa3a663b651d530d8f7fa921365def8ccb13288a235af.scope.
Oct 11 04:27:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:27:58 compute-0 podman[310026]: 2025-10-11 04:27:58.134130509 +0000 UTC m=+0.067072281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7996c918ddc170faa57b1fcf4be2c8b559175da854abba9aa968e045978882f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7996c918ddc170faa57b1fcf4be2c8b559175da854abba9aa968e045978882f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7996c918ddc170faa57b1fcf4be2c8b559175da854abba9aa968e045978882f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7996c918ddc170faa57b1fcf4be2c8b559175da854abba9aa968e045978882f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7996c918ddc170faa57b1fcf4be2c8b559175da854abba9aa968e045978882f0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:58 compute-0 podman[310026]: 2025-10-11 04:27:58.246945096 +0000 UTC m=+0.179886858 container init 114a4682abaa53fcb24fa3a663b651d530d8f7fa921365def8ccb13288a235af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_easley, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 04:27:58 compute-0 podman[310026]: 2025-10-11 04:27:58.257883684 +0000 UTC m=+0.190825356 container start 114a4682abaa53fcb24fa3a663b651d530d8f7fa921365def8ccb13288a235af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_easley, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:27:58 compute-0 podman[310026]: 2025-10-11 04:27:58.261208308 +0000 UTC m=+0.194150010 container attach 114a4682abaa53fcb24fa3a663b651d530d8f7fa921365def8ccb13288a235af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_easley, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 04:27:59 compute-0 nova_compute[259850]: 2025-10-11 04:27:59.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:27:59 compute-0 loving_easley[310044]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:27:59 compute-0 loving_easley[310044]: --> relative data size: 1.0
Oct 11 04:27:59 compute-0 loving_easley[310044]: --> All data devices are unavailable
Oct 11 04:27:59 compute-0 systemd[1]: libpod-114a4682abaa53fcb24fa3a663b651d530d8f7fa921365def8ccb13288a235af.scope: Deactivated successfully.
Oct 11 04:27:59 compute-0 systemd[1]: libpod-114a4682abaa53fcb24fa3a663b651d530d8f7fa921365def8ccb13288a235af.scope: Consumed 1.027s CPU time.
Oct 11 04:27:59 compute-0 podman[310026]: 2025-10-11 04:27:59.382864403 +0000 UTC m=+1.315806085 container died 114a4682abaa53fcb24fa3a663b651d530d8f7fa921365def8ccb13288a235af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_easley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 04:27:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-7996c918ddc170faa57b1fcf4be2c8b559175da854abba9aa968e045978882f0-merged.mount: Deactivated successfully.
Oct 11 04:27:59 compute-0 ceph-mon[74273]: pgmap v1987: 305 pgs: 305 active+clean; 283 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.0 MiB/s wr, 60 op/s
Oct 11 04:27:59 compute-0 podman[310026]: 2025-10-11 04:27:59.461889749 +0000 UTC m=+1.394831461 container remove 114a4682abaa53fcb24fa3a663b651d530d8f7fa921365def8ccb13288a235af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_easley, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 04:27:59 compute-0 systemd[1]: libpod-conmon-114a4682abaa53fcb24fa3a663b651d530d8f7fa921365def8ccb13288a235af.scope: Deactivated successfully.
Oct 11 04:27:59 compute-0 sudo[309917]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:59 compute-0 sudo[310087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:59 compute-0 sudo[310087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:59 compute-0 sudo[310087]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:59 compute-0 sudo[310112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:27:59 compute-0 sudo[310112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:59 compute-0 sudo[310112]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:59 compute-0 sudo[310137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:27:59 compute-0 sudo[310137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:27:59 compute-0 sudo[310137]: pam_unix(sudo:session): session closed for user root
Oct 11 04:27:59 compute-0 sudo[310162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 04:27:59 compute-0 sudo[310162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:28:00 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 287 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 49 op/s
Oct 11 04:28:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:28:00 compute-0 podman[310227]: 2025-10-11 04:28:00.187630511 +0000 UTC m=+0.061723479 container create 32df1c6c9b9ecfe8f11bf1add73685986731f1d06c296296c6f610b1f06f968e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lamarr, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 04:28:00 compute-0 systemd[1]: Started libpod-conmon-32df1c6c9b9ecfe8f11bf1add73685986731f1d06c296296c6f610b1f06f968e.scope.
Oct 11 04:28:00 compute-0 podman[310227]: 2025-10-11 04:28:00.162037881 +0000 UTC m=+0.036130939 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:28:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:28:00 compute-0 podman[310227]: 2025-10-11 04:28:00.297721272 +0000 UTC m=+0.171814250 container init 32df1c6c9b9ecfe8f11bf1add73685986731f1d06c296296c6f610b1f06f968e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lamarr, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 04:28:00 compute-0 podman[310227]: 2025-10-11 04:28:00.309752751 +0000 UTC m=+0.183845719 container start 32df1c6c9b9ecfe8f11bf1add73685986731f1d06c296296c6f610b1f06f968e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:28:00 compute-0 podman[310227]: 2025-10-11 04:28:00.313092155 +0000 UTC m=+0.187185163 container attach 32df1c6c9b9ecfe8f11bf1add73685986731f1d06c296296c6f610b1f06f968e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lamarr, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:28:00 compute-0 vigorous_lamarr[310243]: 167 167
Oct 11 04:28:00 compute-0 podman[310227]: 2025-10-11 04:28:00.317795958 +0000 UTC m=+0.191888936 container died 32df1c6c9b9ecfe8f11bf1add73685986731f1d06c296296c6f610b1f06f968e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lamarr, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:28:00 compute-0 systemd[1]: libpod-32df1c6c9b9ecfe8f11bf1add73685986731f1d06c296296c6f610b1f06f968e.scope: Deactivated successfully.
Oct 11 04:28:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb5ffceae6b3598f4c3ebdde174fa8e918268bef9a98889c13c5709f9d0ee1db-merged.mount: Deactivated successfully.
Oct 11 04:28:00 compute-0 podman[310227]: 2025-10-11 04:28:00.368435464 +0000 UTC m=+0.242528472 container remove 32df1c6c9b9ecfe8f11bf1add73685986731f1d06c296296c6f610b1f06f968e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lamarr, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:28:00 compute-0 systemd[1]: libpod-conmon-32df1c6c9b9ecfe8f11bf1add73685986731f1d06c296296c6f610b1f06f968e.scope: Deactivated successfully.
Oct 11 04:28:00 compute-0 podman[310262]: 2025-10-11 04:28:00.474397399 +0000 UTC m=+0.056115422 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 11 04:28:00 compute-0 podman[310263]: 2025-10-11 04:28:00.498088066 +0000 UTC m=+0.069299923 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 04:28:00 compute-0 podman[310305]: 2025-10-11 04:28:00.552812268 +0000 UTC m=+0.044612638 container create 275b376fba814ef2ab53a11cdbb85c334b7e5d5ff619aa288cf28b030d57c855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_nash, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 04:28:00 compute-0 systemd[1]: Started libpod-conmon-275b376fba814ef2ab53a11cdbb85c334b7e5d5ff619aa288cf28b030d57c855.scope.
Oct 11 04:28:00 compute-0 podman[310305]: 2025-10-11 04:28:00.531004814 +0000 UTC m=+0.022805174 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:28:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:28:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/006509f47f88faea282528425ffeef0d3c9c5faa71d16bf09663dc6b44933a1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/006509f47f88faea282528425ffeef0d3c9c5faa71d16bf09663dc6b44933a1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/006509f47f88faea282528425ffeef0d3c9c5faa71d16bf09663dc6b44933a1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/006509f47f88faea282528425ffeef0d3c9c5faa71d16bf09663dc6b44933a1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:00 compute-0 podman[310305]: 2025-10-11 04:28:00.65726389 +0000 UTC m=+0.149064310 container init 275b376fba814ef2ab53a11cdbb85c334b7e5d5ff619aa288cf28b030d57c855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_nash, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:28:00 compute-0 podman[310305]: 2025-10-11 04:28:00.670255316 +0000 UTC m=+0.162055646 container start 275b376fba814ef2ab53a11cdbb85c334b7e5d5ff619aa288cf28b030d57c855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 04:28:00 compute-0 podman[310305]: 2025-10-11 04:28:00.674401663 +0000 UTC m=+0.166202093 container attach 275b376fba814ef2ab53a11cdbb85c334b7e5d5ff619aa288cf28b030d57c855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_nash, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:28:01 compute-0 condescending_nash[310321]: {
Oct 11 04:28:01 compute-0 condescending_nash[310321]:     "0": [
Oct 11 04:28:01 compute-0 condescending_nash[310321]:         {
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "devices": [
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "/dev/loop3"
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             ],
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "lv_name": "ceph_lv0",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "lv_size": "21470642176",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "name": "ceph_lv0",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "tags": {
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.cluster_name": "ceph",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.crush_device_class": "",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.encrypted": "0",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.osd_id": "0",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.type": "block",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.vdo": "0"
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             },
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "type": "block",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "vg_name": "ceph_vg0"
Oct 11 04:28:01 compute-0 condescending_nash[310321]:         }
Oct 11 04:28:01 compute-0 condescending_nash[310321]:     ],
Oct 11 04:28:01 compute-0 condescending_nash[310321]:     "1": [
Oct 11 04:28:01 compute-0 condescending_nash[310321]:         {
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "devices": [
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "/dev/loop4"
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             ],
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "lv_name": "ceph_lv1",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "lv_size": "21470642176",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "name": "ceph_lv1",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "tags": {
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.cluster_name": "ceph",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.crush_device_class": "",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.encrypted": "0",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.osd_id": "1",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.type": "block",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.vdo": "0"
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             },
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "type": "block",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "vg_name": "ceph_vg1"
Oct 11 04:28:01 compute-0 condescending_nash[310321]:         }
Oct 11 04:28:01 compute-0 condescending_nash[310321]:     ],
Oct 11 04:28:01 compute-0 condescending_nash[310321]:     "2": [
Oct 11 04:28:01 compute-0 condescending_nash[310321]:         {
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "devices": [
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "/dev/loop5"
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             ],
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "lv_name": "ceph_lv2",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "lv_size": "21470642176",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "name": "ceph_lv2",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "tags": {
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.cluster_name": "ceph",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.crush_device_class": "",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.encrypted": "0",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.osd_id": "2",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.type": "block",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:                 "ceph.vdo": "0"
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             },
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "type": "block",
Oct 11 04:28:01 compute-0 condescending_nash[310321]:             "vg_name": "ceph_vg2"
Oct 11 04:28:01 compute-0 condescending_nash[310321]:         }
Oct 11 04:28:01 compute-0 condescending_nash[310321]:     ]
Oct 11 04:28:01 compute-0 condescending_nash[310321]: }
Oct 11 04:28:01 compute-0 systemd[1]: libpod-275b376fba814ef2ab53a11cdbb85c334b7e5d5ff619aa288cf28b030d57c855.scope: Deactivated successfully.
Oct 11 04:28:01 compute-0 conmon[310321]: conmon 275b376fba814ef2ab53 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-275b376fba814ef2ab53a11cdbb85c334b7e5d5ff619aa288cf28b030d57c855.scope/container/memory.events
Oct 11 04:28:01 compute-0 podman[310305]: 2025-10-11 04:28:01.44898486 +0000 UTC m=+0.940785210 container died 275b376fba814ef2ab53a11cdbb85c334b7e5d5ff619aa288cf28b030d57c855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_nash, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:28:01 compute-0 ceph-mon[74273]: pgmap v1988: 305 pgs: 305 active+clean; 287 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 49 op/s
Oct 11 04:28:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-006509f47f88faea282528425ffeef0d3c9c5faa71d16bf09663dc6b44933a1c-merged.mount: Deactivated successfully.
Oct 11 04:28:01 compute-0 podman[310305]: 2025-10-11 04:28:01.526222866 +0000 UTC m=+1.018023196 container remove 275b376fba814ef2ab53a11cdbb85c334b7e5d5ff619aa288cf28b030d57c855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_nash, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 04:28:01 compute-0 systemd[1]: libpod-conmon-275b376fba814ef2ab53a11cdbb85c334b7e5d5ff619aa288cf28b030d57c855.scope: Deactivated successfully.
Oct 11 04:28:01 compute-0 sudo[310162]: pam_unix(sudo:session): session closed for user root
Oct 11 04:28:01 compute-0 sudo[310344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:28:01 compute-0 sudo[310344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:28:01 compute-0 sudo[310344]: pam_unix(sudo:session): session closed for user root
Oct 11 04:28:01 compute-0 nova_compute[259850]: 2025-10-11 04:28:01.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:01 compute-0 sudo[310369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:28:01 compute-0 sudo[310369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:28:01 compute-0 sudo[310369]: pam_unix(sudo:session): session closed for user root
Oct 11 04:28:01 compute-0 sudo[310394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:28:01 compute-0 sudo[310394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:28:01 compute-0 sudo[310394]: pam_unix(sudo:session): session closed for user root
Oct 11 04:28:01 compute-0 sudo[310419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 04:28:01 compute-0 sudo[310419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:28:02 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 305 active+clean; 287 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 49 op/s
Oct 11 04:28:02 compute-0 podman[310484]: 2025-10-11 04:28:02.238111048 +0000 UTC m=+0.075635441 container create ceb29a52c1f6b868884ef8e28eb1a3ac5f8e6246cc83dfaf0eae8c37e97dc5af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ganguly, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 04:28:02 compute-0 systemd[1]: Started libpod-conmon-ceb29a52c1f6b868884ef8e28eb1a3ac5f8e6246cc83dfaf0eae8c37e97dc5af.scope.
Oct 11 04:28:02 compute-0 podman[310484]: 2025-10-11 04:28:02.203101732 +0000 UTC m=+0.040626145 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:28:02 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:28:02 compute-0 podman[310484]: 2025-10-11 04:28:02.345892844 +0000 UTC m=+0.183417267 container init ceb29a52c1f6b868884ef8e28eb1a3ac5f8e6246cc83dfaf0eae8c37e97dc5af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:28:02 compute-0 podman[310484]: 2025-10-11 04:28:02.359653682 +0000 UTC m=+0.197178055 container start ceb29a52c1f6b868884ef8e28eb1a3ac5f8e6246cc83dfaf0eae8c37e97dc5af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ganguly, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 04:28:02 compute-0 podman[310484]: 2025-10-11 04:28:02.363743447 +0000 UTC m=+0.201267940 container attach ceb29a52c1f6b868884ef8e28eb1a3ac5f8e6246cc83dfaf0eae8c37e97dc5af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ganguly, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:28:02 compute-0 condescending_ganguly[310500]: 167 167
Oct 11 04:28:02 compute-0 systemd[1]: libpod-ceb29a52c1f6b868884ef8e28eb1a3ac5f8e6246cc83dfaf0eae8c37e97dc5af.scope: Deactivated successfully.
Oct 11 04:28:02 compute-0 podman[310484]: 2025-10-11 04:28:02.369817458 +0000 UTC m=+0.207341861 container died ceb29a52c1f6b868884ef8e28eb1a3ac5f8e6246cc83dfaf0eae8c37e97dc5af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:28:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-e75152c939fa2a6c19819802761b7824d510003281e4b5003d25d244b25353b1-merged.mount: Deactivated successfully.
Oct 11 04:28:02 compute-0 podman[310484]: 2025-10-11 04:28:02.45223855 +0000 UTC m=+0.289762933 container remove ceb29a52c1f6b868884ef8e28eb1a3ac5f8e6246cc83dfaf0eae8c37e97dc5af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ganguly, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 04:28:02 compute-0 ceph-mon[74273]: pgmap v1989: 305 pgs: 305 active+clean; 287 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 49 op/s
Oct 11 04:28:02 compute-0 systemd[1]: libpod-conmon-ceb29a52c1f6b868884ef8e28eb1a3ac5f8e6246cc83dfaf0eae8c37e97dc5af.scope: Deactivated successfully.
Oct 11 04:28:02 compute-0 podman[310524]: 2025-10-11 04:28:02.736499357 +0000 UTC m=+0.062974765 container create 1ef4e2f4daf6e6b2bd447ffb506e8b65a1631cf43746f018577a40ea81bf62a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_meitner, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:28:02 compute-0 systemd[1]: Started libpod-conmon-1ef4e2f4daf6e6b2bd447ffb506e8b65a1631cf43746f018577a40ea81bf62a6.scope.
Oct 11 04:28:02 compute-0 podman[310524]: 2025-10-11 04:28:02.712998765 +0000 UTC m=+0.039474173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:28:02 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:28:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f6e8c285990b2cf8a56fbc6d13d6240688d2e4d6cebefc2d3f743d154ae9bfc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f6e8c285990b2cf8a56fbc6d13d6240688d2e4d6cebefc2d3f743d154ae9bfc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f6e8c285990b2cf8a56fbc6d13d6240688d2e4d6cebefc2d3f743d154ae9bfc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f6e8c285990b2cf8a56fbc6d13d6240688d2e4d6cebefc2d3f743d154ae9bfc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:02 compute-0 podman[310524]: 2025-10-11 04:28:02.835472515 +0000 UTC m=+0.161947903 container init 1ef4e2f4daf6e6b2bd447ffb506e8b65a1631cf43746f018577a40ea81bf62a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_meitner, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:28:02 compute-0 podman[310524]: 2025-10-11 04:28:02.849257803 +0000 UTC m=+0.175733211 container start 1ef4e2f4daf6e6b2bd447ffb506e8b65a1631cf43746f018577a40ea81bf62a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 04:28:02 compute-0 podman[310524]: 2025-10-11 04:28:02.857232398 +0000 UTC m=+0.183707796 container attach 1ef4e2f4daf6e6b2bd447ffb506e8b65a1631cf43746f018577a40ea81bf62a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Oct 11 04:28:03 compute-0 distracted_meitner[310540]: {
Oct 11 04:28:03 compute-0 distracted_meitner[310540]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 04:28:03 compute-0 distracted_meitner[310540]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:28:03 compute-0 distracted_meitner[310540]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:28:03 compute-0 distracted_meitner[310540]:         "osd_id": 1,
Oct 11 04:28:03 compute-0 distracted_meitner[310540]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:28:03 compute-0 distracted_meitner[310540]:         "type": "bluestore"
Oct 11 04:28:03 compute-0 distracted_meitner[310540]:     },
Oct 11 04:28:03 compute-0 distracted_meitner[310540]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 04:28:03 compute-0 distracted_meitner[310540]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:28:03 compute-0 distracted_meitner[310540]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:28:03 compute-0 distracted_meitner[310540]:         "osd_id": 2,
Oct 11 04:28:03 compute-0 distracted_meitner[310540]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:28:03 compute-0 distracted_meitner[310540]:         "type": "bluestore"
Oct 11 04:28:03 compute-0 distracted_meitner[310540]:     },
Oct 11 04:28:03 compute-0 distracted_meitner[310540]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 04:28:03 compute-0 distracted_meitner[310540]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:28:03 compute-0 distracted_meitner[310540]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:28:03 compute-0 distracted_meitner[310540]:         "osd_id": 0,
Oct 11 04:28:03 compute-0 distracted_meitner[310540]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:28:03 compute-0 distracted_meitner[310540]:         "type": "bluestore"
Oct 11 04:28:03 compute-0 distracted_meitner[310540]:     }
Oct 11 04:28:03 compute-0 distracted_meitner[310540]: }
Oct 11 04:28:03 compute-0 systemd[1]: libpod-1ef4e2f4daf6e6b2bd447ffb506e8b65a1631cf43746f018577a40ea81bf62a6.scope: Deactivated successfully.
Oct 11 04:28:03 compute-0 podman[310524]: 2025-10-11 04:28:03.874436321 +0000 UTC m=+1.200911709 container died 1ef4e2f4daf6e6b2bd447ffb506e8b65a1631cf43746f018577a40ea81bf62a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_meitner, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 04:28:03 compute-0 systemd[1]: libpod-1ef4e2f4daf6e6b2bd447ffb506e8b65a1631cf43746f018577a40ea81bf62a6.scope: Consumed 1.035s CPU time.
Oct 11 04:28:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f6e8c285990b2cf8a56fbc6d13d6240688d2e4d6cebefc2d3f743d154ae9bfc-merged.mount: Deactivated successfully.
Oct 11 04:28:03 compute-0 podman[310524]: 2025-10-11 04:28:03.947325554 +0000 UTC m=+1.273800952 container remove 1ef4e2f4daf6e6b2bd447ffb506e8b65a1631cf43746f018577a40ea81bf62a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:28:03 compute-0 systemd[1]: libpod-conmon-1ef4e2f4daf6e6b2bd447ffb506e8b65a1631cf43746f018577a40ea81bf62a6.scope: Deactivated successfully.
Oct 11 04:28:03 compute-0 sudo[310419]: pam_unix(sudo:session): session closed for user root
Oct 11 04:28:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:28:03 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:28:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:28:03 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:28:03 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 6dcc0bb1-883f-4090-8347-082f86622af8 does not exist
Oct 11 04:28:04 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 119c45ec-fc2e-4890-a1bb-5b244db1e5a9 does not exist
Oct 11 04:28:04 compute-0 sudo[310587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:28:04 compute-0 sudo[310587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:28:04 compute-0 sudo[310587]: pam_unix(sudo:session): session closed for user root
Oct 11 04:28:04 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 287 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 49 op/s
Oct 11 04:28:04 compute-0 sudo[310612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:28:04 compute-0 sudo[310612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:28:04 compute-0 sudo[310612]: pam_unix(sudo:session): session closed for user root
Oct 11 04:28:04 compute-0 nova_compute[259850]: 2025-10-11 04:28:04.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:04 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:28:04 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:28:04 compute-0 ceph-mon[74273]: pgmap v1990: 305 pgs: 305 active+clean; 287 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 49 op/s
Oct 11 04:28:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:28:06 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 287 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 382 KiB/s rd, 354 KiB/s wr, 6 op/s
Oct 11 04:28:06 compute-0 nova_compute[259850]: 2025-10-11 04:28:06.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:07 compute-0 ceph-mon[74273]: pgmap v1991: 305 pgs: 305 active+clean; 287 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 382 KiB/s rd, 354 KiB/s wr, 6 op/s
Oct 11 04:28:08 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 287 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 811 KiB/s rd, 362 KiB/s wr, 10 op/s
Oct 11 04:28:09 compute-0 ceph-mon[74273]: pgmap v1992: 305 pgs: 305 active+clean; 287 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 811 KiB/s rd, 362 KiB/s wr, 10 op/s
Oct 11 04:28:09 compute-0 nova_compute[259850]: 2025-10-11 04:28:09.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:10 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 305 active+clean; 295 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 782 KiB/s wr, 5 op/s
Oct 11 04:28:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:28:11 compute-0 ceph-mon[74273]: pgmap v1993: 305 pgs: 305 active+clean; 295 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 782 KiB/s wr, 5 op/s
Oct 11 04:28:11 compute-0 nova_compute[259850]: 2025-10-11 04:28:11.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:12 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 295 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 437 KiB/s wr, 4 op/s
Oct 11 04:28:13 compute-0 ceph-mon[74273]: pgmap v1994: 305 pgs: 305 active+clean; 295 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 437 KiB/s wr, 4 op/s
Oct 11 04:28:14 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 295 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 440 KiB/s wr, 4 op/s
Oct 11 04:28:14 compute-0 nova_compute[259850]: 2025-10-11 04:28:14.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:15 compute-0 ovn_controller[152025]: 2025-10-11T04:28:15Z|00296|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 11 04:28:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:28:15 compute-0 ceph-mon[74273]: pgmap v1995: 305 pgs: 305 active+clean; 295 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 440 KiB/s wr, 4 op/s
Oct 11 04:28:16 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 305 active+clean; 295 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 440 KiB/s wr, 4 op/s
Oct 11 04:28:16 compute-0 podman[310638]: 2025-10-11 04:28:16.46179846 +0000 UTC m=+0.155522101 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 11 04:28:16 compute-0 nova_compute[259850]: 2025-10-11 04:28:16.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:17 compute-0 ceph-mon[74273]: pgmap v1996: 305 pgs: 305 active+clean; 295 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 440 KiB/s wr, 4 op/s
Oct 11 04:28:17 compute-0 nova_compute[259850]: 2025-10-11 04:28:17.694 2 DEBUG oslo_concurrency.lockutils [None req-1bcf7889-bd07-482e-b193-773d4c316d98 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquiring lock "bc8a8366-552e-41a7-bd24-afacb81114bc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:28:17 compute-0 nova_compute[259850]: 2025-10-11 04:28:17.694 2 DEBUG oslo_concurrency.lockutils [None req-1bcf7889-bd07-482e-b193-773d4c316d98 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "bc8a8366-552e-41a7-bd24-afacb81114bc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:28:17 compute-0 nova_compute[259850]: 2025-10-11 04:28:17.695 2 DEBUG oslo_concurrency.lockutils [None req-1bcf7889-bd07-482e-b193-773d4c316d98 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquiring lock "bc8a8366-552e-41a7-bd24-afacb81114bc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:28:17 compute-0 nova_compute[259850]: 2025-10-11 04:28:17.695 2 DEBUG oslo_concurrency.lockutils [None req-1bcf7889-bd07-482e-b193-773d4c316d98 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "bc8a8366-552e-41a7-bd24-afacb81114bc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:28:17 compute-0 nova_compute[259850]: 2025-10-11 04:28:17.695 2 DEBUG oslo_concurrency.lockutils [None req-1bcf7889-bd07-482e-b193-773d4c316d98 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "bc8a8366-552e-41a7-bd24-afacb81114bc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:28:17 compute-0 nova_compute[259850]: 2025-10-11 04:28:17.697 2 INFO nova.compute.manager [None req-1bcf7889-bd07-482e-b193-773d4c316d98 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Terminating instance
Oct 11 04:28:17 compute-0 nova_compute[259850]: 2025-10-11 04:28:17.699 2 DEBUG nova.compute.manager [None req-1bcf7889-bd07-482e-b193-773d4c316d98 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:28:17 compute-0 kernel: tap7932da10-ea (unregistering): left promiscuous mode
Oct 11 04:28:17 compute-0 NetworkManager[44920]: <info>  [1760156897.7787] device (tap7932da10-ea): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:28:17 compute-0 nova_compute[259850]: 2025-10-11 04:28:17.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:17 compute-0 ovn_controller[152025]: 2025-10-11T04:28:17Z|00297|binding|INFO|Releasing lport 7932da10-eaa5-4512-a329-16ee6de1e17c from this chassis (sb_readonly=0)
Oct 11 04:28:17 compute-0 ovn_controller[152025]: 2025-10-11T04:28:17Z|00298|binding|INFO|Setting lport 7932da10-eaa5-4512-a329-16ee6de1e17c down in Southbound
Oct 11 04:28:17 compute-0 ovn_controller[152025]: 2025-10-11T04:28:17Z|00299|binding|INFO|Removing iface tap7932da10-ea ovn-installed in OVS
Oct 11 04:28:17 compute-0 nova_compute[259850]: 2025-10-11 04:28:17.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:17 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:28:17.805 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:2a:34 10.100.0.12'], port_security=['fa:16:3e:0c:2a:34 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'bc8a8366-552e-41a7-bd24-afacb81114bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '226e6310b4ee4a68b552a6b3e940a458', 'neutron:revision_number': '4', 'neutron:security_group_ids': '77d1d83e-ff49-437a-8a94-baa66143ce2b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.209'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17f237ce-6320-4c27-9970-fd94aa8457a3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>], logical_port=7932da10-eaa5-4512-a329-16ee6de1e17c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22fde8afd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:28:17 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:28:17.808 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 7932da10-eaa5-4512-a329-16ee6de1e17c in datapath 61e3c4a7-2f2f-451f-b913-c2cdac8efdf3 unbound from our chassis
Oct 11 04:28:17 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:28:17.809 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 61e3c4a7-2f2f-451f-b913-c2cdac8efdf3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:28:17 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:28:17.811 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[44b667fb-aa38-48cd-9024-f4792444646d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:28:17 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:28:17.812 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3 namespace which is not needed anymore
Oct 11 04:28:17 compute-0 nova_compute[259850]: 2025-10-11 04:28:17.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:17 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Oct 11 04:28:17 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001e.scope: Consumed 16.373s CPU time.
Oct 11 04:28:17 compute-0 systemd-machined[214869]: Machine qemu-30-instance-0000001e terminated.
Oct 11 04:28:17 compute-0 nova_compute[259850]: 2025-10-11 04:28:17.946 2 INFO nova.virt.libvirt.driver [-] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Instance destroyed successfully.
Oct 11 04:28:17 compute-0 nova_compute[259850]: 2025-10-11 04:28:17.948 2 DEBUG nova.objects.instance [None req-1bcf7889-bd07-482e-b193-773d4c316d98 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lazy-loading 'resources' on Instance uuid bc8a8366-552e-41a7-bd24-afacb81114bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:28:17 compute-0 nova_compute[259850]: 2025-10-11 04:28:17.961 2 DEBUG nova.virt.libvirt.vif [None req-1bcf7889-bd07-482e-b193-773d4c316d98 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:27:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-838098928',display_name='tempest-TestEncryptedCinderVolumes-server-838098928',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-838098928',id=30,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3pFsCx6Lv4ZhABALE9kJlaC2VLcHHMajXk3FwO0YwDAD8GzEfOWx1nJYDa1BnjHeTckP7sy9/Wa8HAN31/eIMe4p7SlbrVdBBFvJpvxVBbmewtPKqpzKac1Jk+If2OOg==',key_name='tempest-TestEncryptedCinderVolumes-1373224468',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:27:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='226e6310b4ee4a68b552a6b3e940a458',ramdisk_id='',reservation_id='r-8kn13c8t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestEncryptedCinderVolumes-1931311766',owner_user_name='tempest-TestEncryptedCinderVolumes-1931311766-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:27:41Z,user_data=None,user_id='7bf17f3eb8514499a54d67542db6b88a',uuid=bc8a8366-552e-41a7-bd24-afacb81114bc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7932da10-eaa5-4512-a329-16ee6de1e17c", "address": "fa:16:3e:0c:2a:34", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7932da10-ea", "ovs_interfaceid": "7932da10-eaa5-4512-a329-16ee6de1e17c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:28:17 compute-0 nova_compute[259850]: 2025-10-11 04:28:17.962 2 DEBUG nova.network.os_vif_util [None req-1bcf7889-bd07-482e-b193-773d4c316d98 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Converting VIF {"id": "7932da10-eaa5-4512-a329-16ee6de1e17c", "address": "fa:16:3e:0c:2a:34", "network": {"id": "61e3c4a7-2f2f-451f-b913-c2cdac8efdf3", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1503027769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226e6310b4ee4a68b552a6b3e940a458", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7932da10-ea", "ovs_interfaceid": "7932da10-eaa5-4512-a329-16ee6de1e17c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:28:17 compute-0 nova_compute[259850]: 2025-10-11 04:28:17.964 2 DEBUG nova.network.os_vif_util [None req-1bcf7889-bd07-482e-b193-773d4c316d98 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0c:2a:34,bridge_name='br-int',has_traffic_filtering=True,id=7932da10-eaa5-4512-a329-16ee6de1e17c,network=Network(61e3c4a7-2f2f-451f-b913-c2cdac8efdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7932da10-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:28:17 compute-0 nova_compute[259850]: 2025-10-11 04:28:17.965 2 DEBUG os_vif [None req-1bcf7889-bd07-482e-b193-773d4c316d98 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:2a:34,bridge_name='br-int',has_traffic_filtering=True,id=7932da10-eaa5-4512-a329-16ee6de1e17c,network=Network(61e3c4a7-2f2f-451f-b913-c2cdac8efdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7932da10-ea') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:28:17 compute-0 nova_compute[259850]: 2025-10-11 04:28:17.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:17 compute-0 nova_compute[259850]: 2025-10-11 04:28:17.967 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7932da10-ea, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:28:18 compute-0 neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3[309645]: [NOTICE]   (309649) : haproxy version is 2.8.14-c23fe91
Oct 11 04:28:18 compute-0 neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3[309645]: [NOTICE]   (309649) : path to executable is /usr/sbin/haproxy
Oct 11 04:28:18 compute-0 neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3[309645]: [ALERT]    (309649) : Current worker (309651) exited with code 143 (Terminated)
Oct 11 04:28:18 compute-0 neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3[309645]: [WARNING]  (309649) : All workers exited. Exiting... (0)
Oct 11 04:28:18 compute-0 systemd[1]: libpod-c62cc7fe2c0bb78fdfcc83e62767dbba62e09df04937434deaf07bca06583364.scope: Deactivated successfully.
Oct 11 04:28:18 compute-0 nova_compute[259850]: 2025-10-11 04:28:18.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:18 compute-0 conmon[309645]: conmon c62cc7fe2c0bb78fdfcc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c62cc7fe2c0bb78fdfcc83e62767dbba62e09df04937434deaf07bca06583364.scope/container/memory.events
Oct 11 04:28:18 compute-0 nova_compute[259850]: 2025-10-11 04:28:18.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:28:18 compute-0 podman[310690]: 2025-10-11 04:28:18.011750239 +0000 UTC m=+0.087292450 container died c62cc7fe2c0bb78fdfcc83e62767dbba62e09df04937434deaf07bca06583364 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 11 04:28:18 compute-0 nova_compute[259850]: 2025-10-11 04:28:18.011 2 INFO os_vif [None req-1bcf7889-bd07-482e-b193-773d4c316d98 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:2a:34,bridge_name='br-int',has_traffic_filtering=True,id=7932da10-eaa5-4512-a329-16ee6de1e17c,network=Network(61e3c4a7-2f2f-451f-b913-c2cdac8efdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7932da10-ea')
Oct 11 04:28:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-1431f464a01116d5d8a65b0a7c5048cdd4152ce27dcdc294b1f1a89929705a28-merged.mount: Deactivated successfully.
Oct 11 04:28:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c62cc7fe2c0bb78fdfcc83e62767dbba62e09df04937434deaf07bca06583364-userdata-shm.mount: Deactivated successfully.
Oct 11 04:28:18 compute-0 nova_compute[259850]: 2025-10-11 04:28:18.043 2 DEBUG nova.compute.manager [req-5dc22341-756e-4ab1-9c39-adfb5e7f33df req-1a20c804-6ac9-47a5-84d5-a5b291ae2ad4 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Received event network-vif-unplugged-7932da10-eaa5-4512-a329-16ee6de1e17c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:28:18 compute-0 nova_compute[259850]: 2025-10-11 04:28:18.044 2 DEBUG oslo_concurrency.lockutils [req-5dc22341-756e-4ab1-9c39-adfb5e7f33df req-1a20c804-6ac9-47a5-84d5-a5b291ae2ad4 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "bc8a8366-552e-41a7-bd24-afacb81114bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:28:18 compute-0 nova_compute[259850]: 2025-10-11 04:28:18.044 2 DEBUG oslo_concurrency.lockutils [req-5dc22341-756e-4ab1-9c39-adfb5e7f33df req-1a20c804-6ac9-47a5-84d5-a5b291ae2ad4 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "bc8a8366-552e-41a7-bd24-afacb81114bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:28:18 compute-0 nova_compute[259850]: 2025-10-11 04:28:18.045 2 DEBUG oslo_concurrency.lockutils [req-5dc22341-756e-4ab1-9c39-adfb5e7f33df req-1a20c804-6ac9-47a5-84d5-a5b291ae2ad4 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "bc8a8366-552e-41a7-bd24-afacb81114bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:28:18 compute-0 nova_compute[259850]: 2025-10-11 04:28:18.045 2 DEBUG nova.compute.manager [req-5dc22341-756e-4ab1-9c39-adfb5e7f33df req-1a20c804-6ac9-47a5-84d5-a5b291ae2ad4 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] No waiting events found dispatching network-vif-unplugged-7932da10-eaa5-4512-a329-16ee6de1e17c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:28:18 compute-0 nova_compute[259850]: 2025-10-11 04:28:18.045 2 DEBUG nova.compute.manager [req-5dc22341-756e-4ab1-9c39-adfb5e7f33df req-1a20c804-6ac9-47a5-84d5-a5b291ae2ad4 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Received event network-vif-unplugged-7932da10-eaa5-4512-a329-16ee6de1e17c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 04:28:18 compute-0 podman[310690]: 2025-10-11 04:28:18.052891898 +0000 UTC m=+0.128434149 container cleanup c62cc7fe2c0bb78fdfcc83e62767dbba62e09df04937434deaf07bca06583364 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:28:18 compute-0 systemd[1]: libpod-conmon-c62cc7fe2c0bb78fdfcc83e62767dbba62e09df04937434deaf07bca06583364.scope: Deactivated successfully.
Oct 11 04:28:18 compute-0 podman[310747]: 2025-10-11 04:28:18.121576532 +0000 UTC m=+0.042025954 container remove c62cc7fe2c0bb78fdfcc83e62767dbba62e09df04937434deaf07bca06583364 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 11 04:28:18 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 295 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 497 KiB/s rd, 440 KiB/s wr, 7 op/s
Oct 11 04:28:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:28:18.130 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[536dfcc6-3066-44f9-952c-48d259510590]: (4, ('Sat Oct 11 04:28:17 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3 (c62cc7fe2c0bb78fdfcc83e62767dbba62e09df04937434deaf07bca06583364)\nc62cc7fe2c0bb78fdfcc83e62767dbba62e09df04937434deaf07bca06583364\nSat Oct 11 04:28:18 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3 (c62cc7fe2c0bb78fdfcc83e62767dbba62e09df04937434deaf07bca06583364)\nc62cc7fe2c0bb78fdfcc83e62767dbba62e09df04937434deaf07bca06583364\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:28:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:28:18.134 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[d4fbd347-fc4a-47c9-b1d5-a6ec386fba32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:28:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:28:18.136 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61e3c4a7-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:28:18 compute-0 nova_compute[259850]: 2025-10-11 04:28:18.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:18 compute-0 kernel: tap61e3c4a7-20: left promiscuous mode
Oct 11 04:28:18 compute-0 nova_compute[259850]: 2025-10-11 04:28:18.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:28:18.162 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[cc934aa4-263b-48ab-a2a1-47c610b40c92]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:28:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:28:18.193 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[8575fc2e-815e-4dbd-aa9a-f164d8d4279b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:28:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:28:18.195 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[075a1382-b4b0-466b-bf89-40def64b2d51]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:28:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:28:18.213 267637 DEBUG oslo.privsep.daemon [-] privsep: reply[56b28e39-2771-4bfe-86e4-10312b440d10]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524319, 'reachable_time': 29639, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310762, 'error': None, 'target': 'ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:28:18 compute-0 systemd[1]: run-netns-ovnmeta\x2d61e3c4a7\x2d2f2f\x2d451f\x2db913\x2dc2cdac8efdf3.mount: Deactivated successfully.
Oct 11 04:28:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:28:18.216 162015 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-61e3c4a7-2f2f-451f-b913-c2cdac8efdf3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 04:28:18 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:28:18.217 162015 DEBUG oslo.privsep.daemon [-] privsep: reply[5030fa9c-34c5-4b09-82bb-fccb70d95122]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:28:18 compute-0 nova_compute[259850]: 2025-10-11 04:28:18.222 2 INFO nova.virt.libvirt.driver [None req-1bcf7889-bd07-482e-b193-773d4c316d98 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Deleting instance files /var/lib/nova/instances/bc8a8366-552e-41a7-bd24-afacb81114bc_del
Oct 11 04:28:18 compute-0 nova_compute[259850]: 2025-10-11 04:28:18.223 2 INFO nova.virt.libvirt.driver [None req-1bcf7889-bd07-482e-b193-773d4c316d98 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Deletion of /var/lib/nova/instances/bc8a8366-552e-41a7-bd24-afacb81114bc_del complete
Oct 11 04:28:18 compute-0 nova_compute[259850]: 2025-10-11 04:28:18.278 2 INFO nova.compute.manager [None req-1bcf7889-bd07-482e-b193-773d4c316d98 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Took 0.58 seconds to destroy the instance on the hypervisor.
Oct 11 04:28:18 compute-0 nova_compute[259850]: 2025-10-11 04:28:18.279 2 DEBUG oslo.service.loopingcall [None req-1bcf7889-bd07-482e-b193-773d4c316d98 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:28:18 compute-0 nova_compute[259850]: 2025-10-11 04:28:18.279 2 DEBUG nova.compute.manager [-] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:28:18 compute-0 nova_compute[259850]: 2025-10-11 04:28:18.279 2 DEBUG nova.network.neutron [-] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:28:19 compute-0 ceph-mon[74273]: pgmap v1997: 305 pgs: 305 active+clean; 295 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 497 KiB/s rd, 440 KiB/s wr, 7 op/s
Oct 11 04:28:19 compute-0 nova_compute[259850]: 2025-10-11 04:28:19.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:19 compute-0 podman[310764]: 2025-10-11 04:28:19.38572491 +0000 UTC m=+0.081997870 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 11 04:28:20 compute-0 nova_compute[259850]: 2025-10-11 04:28:20.122 2 DEBUG nova.compute.manager [req-9c9770dd-59b8-4c46-9f60-495705367049 req-c2057b9d-7a10-4cff-b2da-36a820f6e029 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Received event network-vif-plugged-7932da10-eaa5-4512-a329-16ee6de1e17c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:28:20 compute-0 nova_compute[259850]: 2025-10-11 04:28:20.123 2 DEBUG oslo_concurrency.lockutils [req-9c9770dd-59b8-4c46-9f60-495705367049 req-c2057b9d-7a10-4cff-b2da-36a820f6e029 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Acquiring lock "bc8a8366-552e-41a7-bd24-afacb81114bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:28:20 compute-0 nova_compute[259850]: 2025-10-11 04:28:20.123 2 DEBUG oslo_concurrency.lockutils [req-9c9770dd-59b8-4c46-9f60-495705367049 req-c2057b9d-7a10-4cff-b2da-36a820f6e029 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "bc8a8366-552e-41a7-bd24-afacb81114bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:28:20 compute-0 nova_compute[259850]: 2025-10-11 04:28:20.123 2 DEBUG oslo_concurrency.lockutils [req-9c9770dd-59b8-4c46-9f60-495705367049 req-c2057b9d-7a10-4cff-b2da-36a820f6e029 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] Lock "bc8a8366-552e-41a7-bd24-afacb81114bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:28:20 compute-0 nova_compute[259850]: 2025-10-11 04:28:20.124 2 DEBUG nova.compute.manager [req-9c9770dd-59b8-4c46-9f60-495705367049 req-c2057b9d-7a10-4cff-b2da-36a820f6e029 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] No waiting events found dispatching network-vif-plugged-7932da10-eaa5-4512-a329-16ee6de1e17c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:28:20 compute-0 nova_compute[259850]: 2025-10-11 04:28:20.124 2 WARNING nova.compute.manager [req-9c9770dd-59b8-4c46-9f60-495705367049 req-c2057b9d-7a10-4cff-b2da-36a820f6e029 f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Received unexpected event network-vif-plugged-7932da10-eaa5-4512-a329-16ee6de1e17c for instance with vm_state active and task_state deleting.
Oct 11 04:28:20 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 305 active+clean; 295 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 211 KiB/s rd, 432 KiB/s wr, 7 op/s
Oct 11 04:28:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:28:20 compute-0 nova_compute[259850]: 2025-10-11 04:28:20.612 2 DEBUG nova.network.neutron [-] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:28:20 compute-0 nova_compute[259850]: 2025-10-11 04:28:20.639 2 INFO nova.compute.manager [-] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Took 2.36 seconds to deallocate network for instance.
Oct 11 04:28:20 compute-0 nova_compute[259850]: 2025-10-11 04:28:20.689 2 DEBUG nova.compute.manager [req-1f909326-4390-4668-a418-1c535013ad2f req-6c22ed07-150c-4cb5-a55f-0137a331c77d f4159c198f2c491aba3076e803251bb4 551ef16ca7c64c7d98e9560ddab5abd8 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Received event network-vif-deleted-7932da10-eaa5-4512-a329-16ee6de1e17c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:28:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:28:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:28:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:28:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:28:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:28:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:28:20 compute-0 nova_compute[259850]: 2025-10-11 04:28:20.840 2 INFO nova.compute.manager [None req-1bcf7889-bd07-482e-b193-773d4c316d98 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Took 0.20 seconds to detach 1 volumes for instance.
Oct 11 04:28:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:28:20
Oct 11 04:28:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:28:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:28:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['.rgw.root', 'vms', 'images', 'cephfs.cephfs.data', '.mgr', 'backups', 'default.rgw.control', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta']
Oct 11 04:28:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:28:20 compute-0 nova_compute[259850]: 2025-10-11 04:28:20.904 2 DEBUG oslo_concurrency.lockutils [None req-1bcf7889-bd07-482e-b193-773d4c316d98 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:28:20 compute-0 nova_compute[259850]: 2025-10-11 04:28:20.905 2 DEBUG oslo_concurrency.lockutils [None req-1bcf7889-bd07-482e-b193-773d4c316d98 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:28:20 compute-0 nova_compute[259850]: 2025-10-11 04:28:20.975 2 DEBUG oslo_concurrency.processutils [None req-1bcf7889-bd07-482e-b193-773d4c316d98 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:28:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:28:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:28:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:28:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:28:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:28:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:28:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:28:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:28:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:28:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:28:21 compute-0 ceph-mon[74273]: pgmap v1998: 305 pgs: 305 active+clean; 295 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 211 KiB/s rd, 432 KiB/s wr, 7 op/s
Oct 11 04:28:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:28:21 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2407052511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:28:21 compute-0 nova_compute[259850]: 2025-10-11 04:28:21.472 2 DEBUG oslo_concurrency.processutils [None req-1bcf7889-bd07-482e-b193-773d4c316d98 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:28:21 compute-0 nova_compute[259850]: 2025-10-11 04:28:21.482 2 DEBUG nova.compute.provider_tree [None req-1bcf7889-bd07-482e-b193-773d4c316d98 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:28:21 compute-0 nova_compute[259850]: 2025-10-11 04:28:21.498 2 DEBUG nova.scheduler.client.report [None req-1bcf7889-bd07-482e-b193-773d4c316d98 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:28:21 compute-0 nova_compute[259850]: 2025-10-11 04:28:21.523 2 DEBUG oslo_concurrency.lockutils [None req-1bcf7889-bd07-482e-b193-773d4c316d98 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:28:21 compute-0 nova_compute[259850]: 2025-10-11 04:28:21.554 2 INFO nova.scheduler.client.report [None req-1bcf7889-bd07-482e-b193-773d4c316d98 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Deleted allocations for instance bc8a8366-552e-41a7-bd24-afacb81114bc
Oct 11 04:28:21 compute-0 nova_compute[259850]: 2025-10-11 04:28:21.607 2 DEBUG oslo_concurrency.lockutils [None req-1bcf7889-bd07-482e-b193-773d4c316d98 7bf17f3eb8514499a54d67542db6b88a 226e6310b4ee4a68b552a6b3e940a458 - - default default] Lock "bc8a8366-552e-41a7-bd24-afacb81114bc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:28:22 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 295 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 211 KiB/s rd, 3.0 KiB/s wr, 6 op/s
Oct 11 04:28:22 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2407052511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:28:22 compute-0 nova_compute[259850]: 2025-10-11 04:28:22.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:28:22.814 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:61:6f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '92:f1:b6:e4:f1:16'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:28:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:28:22.816 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:28:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:28:22.980 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:28:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:28:22.981 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:28:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:28:22.981 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:28:23 compute-0 nova_compute[259850]: 2025-10-11 04:28:23.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:23 compute-0 ceph-mon[74273]: pgmap v1999: 305 pgs: 305 active+clean; 295 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 211 KiB/s rd, 3.0 KiB/s wr, 6 op/s
Oct 11 04:28:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:28:23 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1375096658' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:28:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:28:23 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1375096658' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:28:24 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 295 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 224 KiB/s rd, 3.6 KiB/s wr, 25 op/s
Oct 11 04:28:24 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1375096658' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:28:24 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1375096658' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:28:24 compute-0 nova_compute[259850]: 2025-10-11 04:28:24.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:28:25 compute-0 ceph-mon[74273]: pgmap v2000: 305 pgs: 305 active+clean; 295 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 224 KiB/s rd, 3.6 KiB/s wr, 25 op/s
Oct 11 04:28:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:28:25 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1002605384' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:28:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:28:25 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1002605384' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:28:26 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 305 active+clean; 295 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 224 KiB/s rd, 597 B/s wr, 25 op/s
Oct 11 04:28:26 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1002605384' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:28:26 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1002605384' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:28:27 compute-0 ceph-mon[74273]: pgmap v2001: 305 pgs: 305 active+clean; 295 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 224 KiB/s rd, 597 B/s wr, 25 op/s
Oct 11 04:28:28 compute-0 nova_compute[259850]: 2025-10-11 04:28:28.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:28 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 305 active+clean; 283 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 268 KiB/s rd, 1.5 KiB/s wr, 92 op/s
Oct 11 04:28:28 compute-0 nova_compute[259850]: 2025-10-11 04:28:28.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:28 compute-0 nova_compute[259850]: 2025-10-11 04:28:28.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:28 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:28:28.818 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8a473e03-2208-47ae-afcd-05ad744a5969, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:28:29 compute-0 ceph-mon[74273]: pgmap v2002: 305 pgs: 305 active+clean; 283 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 268 KiB/s rd, 1.5 KiB/s wr, 92 op/s
Oct 11 04:28:29 compute-0 nova_compute[259850]: 2025-10-11 04:28:29.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:30 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 208 KiB/s rd, 1.5 KiB/s wr, 102 op/s
Oct 11 04:28:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:28:31 compute-0 ceph-mon[74273]: pgmap v2003: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 208 KiB/s rd, 1.5 KiB/s wr, 102 op/s
Oct 11 04:28:31 compute-0 podman[310807]: 2025-10-11 04:28:31.394536532 +0000 UTC m=+0.088781961 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, container_name=iscsid)
Oct 11 04:28:31 compute-0 podman[310806]: 2025-10-11 04:28:31.400537332 +0000 UTC m=+0.100705118 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:28:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:28:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:28:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:28:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:28:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:28:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:28:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894458247867422 of space, bias 1.0, pg target 0.8683374743602266 quantized to 32 (current 32)
Oct 11 04:28:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:28:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:28:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:28:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct 11 04:28:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:28:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:28:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:28:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:28:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:28:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:28:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:28:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:28:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:28:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:28:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:28:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:28:32 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 1.5 KiB/s wr, 99 op/s
Oct 11 04:28:32 compute-0 nova_compute[259850]: 2025-10-11 04:28:32.945 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760156897.9433255, bc8a8366-552e-41a7-bd24-afacb81114bc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:28:32 compute-0 nova_compute[259850]: 2025-10-11 04:28:32.945 2 INFO nova.compute.manager [-] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] VM Stopped (Lifecycle Event)
Oct 11 04:28:32 compute-0 nova_compute[259850]: 2025-10-11 04:28:32.969 2 DEBUG nova.compute.manager [None req-4b8ad7e5-b511-4006-9ea3-20150187efe7 - - - - - -] [instance: bc8a8366-552e-41a7-bd24-afacb81114bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:28:33 compute-0 nova_compute[259850]: 2025-10-11 04:28:33.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:33 compute-0 ceph-mon[74273]: pgmap v2004: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 1.5 KiB/s wr, 99 op/s
Oct 11 04:28:34 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 1.5 KiB/s wr, 99 op/s
Oct 11 04:28:34 compute-0 nova_compute[259850]: 2025-10-11 04:28:34.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:28:35 compute-0 ceph-mon[74273]: pgmap v2005: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 1.5 KiB/s wr, 99 op/s
Oct 11 04:28:36 compute-0 nova_compute[259850]: 2025-10-11 04:28:36.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:28:36 compute-0 nova_compute[259850]: 2025-10-11 04:28:36.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:28:36 compute-0 nova_compute[259850]: 2025-10-11 04:28:36.090 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:28:36 compute-0 nova_compute[259850]: 2025-10-11 04:28:36.090 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:28:36 compute-0 nova_compute[259850]: 2025-10-11 04:28:36.091 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:28:36 compute-0 nova_compute[259850]: 2025-10-11 04:28:36.091 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:28:36 compute-0 nova_compute[259850]: 2025-10-11 04:28:36.092 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:28:36 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 938 B/s wr, 79 op/s
Oct 11 04:28:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:28:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2596719516' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:28:36 compute-0 nova_compute[259850]: 2025-10-11 04:28:36.557 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:28:36 compute-0 nova_compute[259850]: 2025-10-11 04:28:36.754 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:28:36 compute-0 nova_compute[259850]: 2025-10-11 04:28:36.756 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4282MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:28:36 compute-0 nova_compute[259850]: 2025-10-11 04:28:36.756 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:28:36 compute-0 nova_compute[259850]: 2025-10-11 04:28:36.757 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:28:36 compute-0 nova_compute[259850]: 2025-10-11 04:28:36.818 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:28:36 compute-0 nova_compute[259850]: 2025-10-11 04:28:36.819 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:28:36 compute-0 nova_compute[259850]: 2025-10-11 04:28:36.837 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:28:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:28:37 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/221397313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:28:37 compute-0 nova_compute[259850]: 2025-10-11 04:28:37.280 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:28:37 compute-0 nova_compute[259850]: 2025-10-11 04:28:37.289 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:28:37 compute-0 nova_compute[259850]: 2025-10-11 04:28:37.310 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:28:37 compute-0 ceph-mon[74273]: pgmap v2006: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 938 B/s wr, 79 op/s
Oct 11 04:28:37 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2596719516' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:28:37 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/221397313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:28:37 compute-0 nova_compute[259850]: 2025-10-11 04:28:37.332 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:28:37 compute-0 nova_compute[259850]: 2025-10-11 04:28:37.333 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:28:38 compute-0 nova_compute[259850]: 2025-10-11 04:28:38.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:38 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 938 B/s wr, 79 op/s
Oct 11 04:28:38 compute-0 nova_compute[259850]: 2025-10-11 04:28:38.333 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:28:38 compute-0 nova_compute[259850]: 2025-10-11 04:28:38.334 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:28:38 compute-0 nova_compute[259850]: 2025-10-11 04:28:38.334 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:28:39 compute-0 nova_compute[259850]: 2025-10-11 04:28:39.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:28:39 compute-0 ceph-mon[74273]: pgmap v2007: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 938 B/s wr, 79 op/s
Oct 11 04:28:39 compute-0 nova_compute[259850]: 2025-10-11 04:28:39.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:40 compute-0 nova_compute[259850]: 2025-10-11 04:28:40.060 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:28:40 compute-0 nova_compute[259850]: 2025-10-11 04:28:40.061 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:28:40 compute-0 nova_compute[259850]: 2025-10-11 04:28:40.061 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:28:40 compute-0 nova_compute[259850]: 2025-10-11 04:28:40.081 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:28:40 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 7.3 KiB/s rd, 0 B/s wr, 12 op/s
Oct 11 04:28:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:28:41 compute-0 nova_compute[259850]: 2025-10-11 04:28:41.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:28:41 compute-0 ceph-mon[74273]: pgmap v2008: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 7.3 KiB/s rd, 0 B/s wr, 12 op/s
Oct 11 04:28:42 compute-0 nova_compute[259850]: 2025-10-11 04:28:42.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:28:42 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:28:43 compute-0 nova_compute[259850]: 2025-10-11 04:28:43.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:43 compute-0 ceph-mon[74273]: pgmap v2009: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:28:44 compute-0 nova_compute[259850]: 2025-10-11 04:28:44.054 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:28:44 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:28:44 compute-0 nova_compute[259850]: 2025-10-11 04:28:44.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:45 compute-0 nova_compute[259850]: 2025-10-11 04:28:45.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:28:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:28:45 compute-0 ceph-mon[74273]: pgmap v2010: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:28:46 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:28:47 compute-0 ceph-mon[74273]: pgmap v2011: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:28:47.383481) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156927383518, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 936, "num_deletes": 251, "total_data_size": 1247440, "memory_usage": 1266784, "flush_reason": "Manual Compaction"}
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156927395343, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 1235120, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40437, "largest_seqno": 41372, "table_properties": {"data_size": 1230486, "index_size": 2222, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10353, "raw_average_key_size": 19, "raw_value_size": 1221104, "raw_average_value_size": 2330, "num_data_blocks": 99, "num_entries": 524, "num_filter_entries": 524, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760156845, "oldest_key_time": 1760156845, "file_creation_time": 1760156927, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 11904 microseconds, and 3608 cpu microseconds.
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:28:47.395383) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 1235120 bytes OK
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:28:47.395402) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:28:47.396941) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:28:47.396952) EVENT_LOG_v1 {"time_micros": 1760156927396948, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:28:47.396966) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 1242929, prev total WAL file size 1242929, number of live WAL files 2.
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:28:47.397474) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(1206KB)], [86(10MB)]
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156927397502, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11734752, "oldest_snapshot_seqno": -1}
Oct 11 04:28:47 compute-0 podman[310890]: 2025-10-11 04:28:47.418465538 +0000 UTC m=+0.126773482 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6903 keys, 9969285 bytes, temperature: kUnknown
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156927448772, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 9969285, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9919763, "index_size": 31170, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17285, "raw_key_size": 175953, "raw_average_key_size": 25, "raw_value_size": 9792522, "raw_average_value_size": 1418, "num_data_blocks": 1233, "num_entries": 6903, "num_filter_entries": 6903, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153731, "oldest_key_time": 0, "file_creation_time": 1760156927, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c5ef686a-ea96-43f3-b64e-136aeef6150d", "db_session_id": "5PENSWPAYOBU0GJSS6OB", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:28:47.449101) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 9969285 bytes
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:28:47.450582) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 228.4 rd, 194.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 10.0 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(17.6) write-amplify(8.1) OK, records in: 7421, records dropped: 518 output_compression: NoCompression
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:28:47.450611) EVENT_LOG_v1 {"time_micros": 1760156927450597, "job": 50, "event": "compaction_finished", "compaction_time_micros": 51377, "compaction_time_cpu_micros": 21783, "output_level": 6, "num_output_files": 1, "total_output_size": 9969285, "num_input_records": 7421, "num_output_records": 6903, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156927451121, "job": 50, "event": "table_file_deletion", "file_number": 88}
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760156927454618, "job": 50, "event": "table_file_deletion", "file_number": 86}
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:28:47.397396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:28:47.454727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:28:47.454738) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:28:47.454741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:28:47.454744) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:28:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/11-04:28:47.454747) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:28:48 compute-0 nova_compute[259850]: 2025-10-11 04:28:48.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:48 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:28:49 compute-0 nova_compute[259850]: 2025-10-11 04:28:49.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:49 compute-0 ceph-mon[74273]: pgmap v2012: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:28:50 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:28:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:28:50 compute-0 podman[310916]: 2025-10-11 04:28:50.357983557 +0000 UTC m=+0.061695809 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 11 04:28:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:28:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2323430645' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:28:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:28:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2323430645' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:28:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:28:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:28:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:28:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:28:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:28:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:28:51 compute-0 ceph-mon[74273]: pgmap v2013: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:28:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2323430645' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:28:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2323430645' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:28:52 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:28:53 compute-0 nova_compute[259850]: 2025-10-11 04:28:53.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:53 compute-0 ceph-mon[74273]: pgmap v2014: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:28:54 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:28:54 compute-0 nova_compute[259850]: 2025-10-11 04:28:54.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:28:55 compute-0 ceph-mon[74273]: pgmap v2015: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:28:56 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2016: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:28:57 compute-0 ceph-mon[74273]: pgmap v2016: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:28:58 compute-0 nova_compute[259850]: 2025-10-11 04:28:58.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:58 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:28:59 compute-0 nova_compute[259850]: 2025-10-11 04:28:59.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:28:59 compute-0 ceph-mon[74273]: pgmap v2017: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:00 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:29:01 compute-0 ceph-mon[74273]: pgmap v2018: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:02 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2019: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:02 compute-0 podman[310936]: 2025-10-11 04:29:02.371313823 +0000 UTC m=+0.073813420 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 04:29:02 compute-0 podman[310937]: 2025-10-11 04:29:02.385305838 +0000 UTC m=+0.074806699 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 11 04:29:03 compute-0 nova_compute[259850]: 2025-10-11 04:29:03.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:29:03 compute-0 ovn_controller[152025]: 2025-10-11T04:29:03Z|00300|memory_trim|INFO|Detected inactivity (last active 30015 ms ago): trimming memory
Oct 11 04:29:03 compute-0 ceph-mon[74273]: pgmap v2019: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:04 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:04 compute-0 sudo[310976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:29:04 compute-0 sudo[310976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:04 compute-0 sudo[310976]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:04 compute-0 sudo[311001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:29:04 compute-0 sudo[311001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:04 compute-0 nova_compute[259850]: 2025-10-11 04:29:04.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:29:04 compute-0 sudo[311001]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:04 compute-0 sudo[311026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:29:04 compute-0 sudo[311026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:04 compute-0 sudo[311026]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:04 compute-0 sudo[311051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 11 04:29:04 compute-0 sudo[311051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:04 compute-0 sudo[311051]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:29:04 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:29:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:29:04 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:29:04 compute-0 sudo[311096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:29:04 compute-0 sudo[311096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:04 compute-0 sudo[311096]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:05 compute-0 sudo[311121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:29:05 compute-0 sudo[311121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:05 compute-0 sudo[311121]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:05 compute-0 sudo[311146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:29:05 compute-0 sudo[311146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:05 compute-0 sudo[311146]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:29:05 compute-0 sudo[311172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:29:05 compute-0 sudo[311172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:05 compute-0 ceph-mon[74273]: pgmap v2020: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:05 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:29:05 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:29:05 compute-0 sudo[311172]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 04:29:05 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 04:29:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:29:05 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:29:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:29:05 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:29:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:29:05 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:29:05 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev f4ea8016-b364-488b-8c77-b7a02326ed26 does not exist
Oct 11 04:29:05 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 79846981-be3e-4044-9355-b8d39b8bc428 does not exist
Oct 11 04:29:05 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 2ec3003e-65b8-46d8-96d9-1b2825104dab does not exist
Oct 11 04:29:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:29:05 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:29:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:29:05 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:29:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:29:05 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:29:05 compute-0 sudo[311228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:29:05 compute-0 sudo[311228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:05 compute-0 sudo[311228]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:06 compute-0 sudo[311253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:29:06 compute-0 sudo[311253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:06 compute-0 sudo[311253]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:06 compute-0 sudo[311278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:29:06 compute-0 sudo[311278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:06 compute-0 sudo[311278]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:06 compute-0 sudo[311303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:29:06 compute-0 sudo[311303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:06 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 04:29:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:29:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:29:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:29:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:29:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:29:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:29:06 compute-0 podman[311369]: 2025-10-11 04:29:06.619284059 +0000 UTC m=+0.072077682 container create c402bc0f93c2b6b5e21fb4fad70513c74a89cf1f5b8ef3bc1c15fa42af5e6ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:29:06 compute-0 systemd[1]: Started libpod-conmon-c402bc0f93c2b6b5e21fb4fad70513c74a89cf1f5b8ef3bc1c15fa42af5e6ea8.scope.
Oct 11 04:29:06 compute-0 podman[311369]: 2025-10-11 04:29:06.587502633 +0000 UTC m=+0.040296326 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:29:06 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:29:06 compute-0 podman[311369]: 2025-10-11 04:29:06.73398719 +0000 UTC m=+0.186780833 container init c402bc0f93c2b6b5e21fb4fad70513c74a89cf1f5b8ef3bc1c15fa42af5e6ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gauss, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:29:06 compute-0 podman[311369]: 2025-10-11 04:29:06.748429126 +0000 UTC m=+0.201222779 container start c402bc0f93c2b6b5e21fb4fad70513c74a89cf1f5b8ef3bc1c15fa42af5e6ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 04:29:06 compute-0 podman[311369]: 2025-10-11 04:29:06.752266265 +0000 UTC m=+0.205059958 container attach c402bc0f93c2b6b5e21fb4fad70513c74a89cf1f5b8ef3bc1c15fa42af5e6ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:29:06 compute-0 busy_gauss[311386]: 167 167
Oct 11 04:29:06 compute-0 systemd[1]: libpod-c402bc0f93c2b6b5e21fb4fad70513c74a89cf1f5b8ef3bc1c15fa42af5e6ea8.scope: Deactivated successfully.
Oct 11 04:29:06 compute-0 podman[311369]: 2025-10-11 04:29:06.759305203 +0000 UTC m=+0.212098846 container died c402bc0f93c2b6b5e21fb4fad70513c74a89cf1f5b8ef3bc1c15fa42af5e6ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gauss, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 04:29:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-a665ce86dcd0e0e85906489f72b6337c42005211f61db8aff7f937ce8f79f458-merged.mount: Deactivated successfully.
Oct 11 04:29:06 compute-0 podman[311369]: 2025-10-11 04:29:06.815771763 +0000 UTC m=+0.268565416 container remove c402bc0f93c2b6b5e21fb4fad70513c74a89cf1f5b8ef3bc1c15fa42af5e6ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 04:29:06 compute-0 systemd[1]: libpod-conmon-c402bc0f93c2b6b5e21fb4fad70513c74a89cf1f5b8ef3bc1c15fa42af5e6ea8.scope: Deactivated successfully.
Oct 11 04:29:07 compute-0 podman[311410]: 2025-10-11 04:29:07.027909689 +0000 UTC m=+0.055823864 container create 5acdb679cc51c87e9c19bfc388879740820ed6db2750425d4fb1d489520af6ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 04:29:07 compute-0 systemd[1]: Started libpod-conmon-5acdb679cc51c87e9c19bfc388879740820ed6db2750425d4fb1d489520af6ec.scope.
Oct 11 04:29:07 compute-0 podman[311410]: 2025-10-11 04:29:07.00061396 +0000 UTC m=+0.028528165 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:29:07 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:29:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aebd40e451c9274f6d935e8a3a2e88222d759f4142b4b1de32b29d7e47755b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aebd40e451c9274f6d935e8a3a2e88222d759f4142b4b1de32b29d7e47755b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aebd40e451c9274f6d935e8a3a2e88222d759f4142b4b1de32b29d7e47755b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aebd40e451c9274f6d935e8a3a2e88222d759f4142b4b1de32b29d7e47755b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aebd40e451c9274f6d935e8a3a2e88222d759f4142b4b1de32b29d7e47755b0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:07 compute-0 podman[311410]: 2025-10-11 04:29:07.140743327 +0000 UTC m=+0.168657572 container init 5acdb679cc51c87e9c19bfc388879740820ed6db2750425d4fb1d489520af6ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_tu, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 04:29:07 compute-0 podman[311410]: 2025-10-11 04:29:07.157852409 +0000 UTC m=+0.185766584 container start 5acdb679cc51c87e9c19bfc388879740820ed6db2750425d4fb1d489520af6ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:29:07 compute-0 podman[311410]: 2025-10-11 04:29:07.162925982 +0000 UTC m=+0.190840207 container attach 5acdb679cc51c87e9c19bfc388879740820ed6db2750425d4fb1d489520af6ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:29:07 compute-0 ceph-mon[74273]: pgmap v2021: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:08 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:08 compute-0 nova_compute[259850]: 2025-10-11 04:29:08.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:29:08 compute-0 sweet_tu[311427]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:29:08 compute-0 sweet_tu[311427]: --> relative data size: 1.0
Oct 11 04:29:08 compute-0 sweet_tu[311427]: --> All data devices are unavailable
Oct 11 04:29:08 compute-0 systemd[1]: libpod-5acdb679cc51c87e9c19bfc388879740820ed6db2750425d4fb1d489520af6ec.scope: Deactivated successfully.
Oct 11 04:29:08 compute-0 podman[311410]: 2025-10-11 04:29:08.342522809 +0000 UTC m=+1.370436984 container died 5acdb679cc51c87e9c19bfc388879740820ed6db2750425d4fb1d489520af6ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_tu, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 04:29:08 compute-0 systemd[1]: libpod-5acdb679cc51c87e9c19bfc388879740820ed6db2750425d4fb1d489520af6ec.scope: Consumed 1.098s CPU time.
Oct 11 04:29:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-7aebd40e451c9274f6d935e8a3a2e88222d759f4142b4b1de32b29d7e47755b0-merged.mount: Deactivated successfully.
Oct 11 04:29:08 compute-0 podman[311410]: 2025-10-11 04:29:08.424106277 +0000 UTC m=+1.452020432 container remove 5acdb679cc51c87e9c19bfc388879740820ed6db2750425d4fb1d489520af6ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:29:08 compute-0 systemd[1]: libpod-conmon-5acdb679cc51c87e9c19bfc388879740820ed6db2750425d4fb1d489520af6ec.scope: Deactivated successfully.
Oct 11 04:29:08 compute-0 sudo[311303]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:08 compute-0 sudo[311470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:29:08 compute-0 sudo[311470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:08 compute-0 sudo[311470]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:08 compute-0 sudo[311495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:29:08 compute-0 sudo[311495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:08 compute-0 sudo[311495]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:08 compute-0 sudo[311520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:29:08 compute-0 sudo[311520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:08 compute-0 sudo[311520]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:08 compute-0 sudo[311545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 04:29:08 compute-0 sudo[311545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:09 compute-0 podman[311613]: 2025-10-11 04:29:09.366144021 +0000 UTC m=+0.061696129 container create aeaa0e3179dfe793dd5a46873b3c80d9c293728830ec77c110d386f62fe8c18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_brattain, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 04:29:09 compute-0 nova_compute[259850]: 2025-10-11 04:29:09.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:29:09 compute-0 systemd[1]: Started libpod-conmon-aeaa0e3179dfe793dd5a46873b3c80d9c293728830ec77c110d386f62fe8c18b.scope.
Oct 11 04:29:09 compute-0 podman[311613]: 2025-10-11 04:29:09.339614874 +0000 UTC m=+0.035167042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:29:09 compute-0 ceph-mon[74273]: pgmap v2022: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:09 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:29:09 compute-0 podman[311613]: 2025-10-11 04:29:09.49638763 +0000 UTC m=+0.191939798 container init aeaa0e3179dfe793dd5a46873b3c80d9c293728830ec77c110d386f62fe8c18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:29:09 compute-0 podman[311613]: 2025-10-11 04:29:09.503319295 +0000 UTC m=+0.198871383 container start aeaa0e3179dfe793dd5a46873b3c80d9c293728830ec77c110d386f62fe8c18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:29:09 compute-0 podman[311613]: 2025-10-11 04:29:09.510798726 +0000 UTC m=+0.206350884 container attach aeaa0e3179dfe793dd5a46873b3c80d9c293728830ec77c110d386f62fe8c18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:29:09 compute-0 reverent_brattain[311629]: 167 167
Oct 11 04:29:09 compute-0 systemd[1]: libpod-aeaa0e3179dfe793dd5a46873b3c80d9c293728830ec77c110d386f62fe8c18b.scope: Deactivated successfully.
Oct 11 04:29:09 compute-0 podman[311613]: 2025-10-11 04:29:09.512348839 +0000 UTC m=+0.207900947 container died aeaa0e3179dfe793dd5a46873b3c80d9c293728830ec77c110d386f62fe8c18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:29:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d06962de2e1b0e7ee556b943b594fed416a4dc1528fa5177f3aa3b1a0abcb66-merged.mount: Deactivated successfully.
Oct 11 04:29:09 compute-0 podman[311613]: 2025-10-11 04:29:09.566025181 +0000 UTC m=+0.261577299 container remove aeaa0e3179dfe793dd5a46873b3c80d9c293728830ec77c110d386f62fe8c18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:29:09 compute-0 systemd[1]: libpod-conmon-aeaa0e3179dfe793dd5a46873b3c80d9c293728830ec77c110d386f62fe8c18b.scope: Deactivated successfully.
Oct 11 04:29:09 compute-0 podman[311653]: 2025-10-11 04:29:09.807354019 +0000 UTC m=+0.074234482 container create a9c38d13528e11b24155537a76e11161089738070b60216e1da7616b289d749d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:29:09 compute-0 podman[311653]: 2025-10-11 04:29:09.774910075 +0000 UTC m=+0.041790578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:29:09 compute-0 systemd[1]: Started libpod-conmon-a9c38d13528e11b24155537a76e11161089738070b60216e1da7616b289d749d.scope.
Oct 11 04:29:09 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:29:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd1402146769d4baac9b0c63cd3dce9684cee0e96321fbd6f0751cacd2f90bb9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd1402146769d4baac9b0c63cd3dce9684cee0e96321fbd6f0751cacd2f90bb9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd1402146769d4baac9b0c63cd3dce9684cee0e96321fbd6f0751cacd2f90bb9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd1402146769d4baac9b0c63cd3dce9684cee0e96321fbd6f0751cacd2f90bb9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:09 compute-0 podman[311653]: 2025-10-11 04:29:09.931597489 +0000 UTC m=+0.198477992 container init a9c38d13528e11b24155537a76e11161089738070b60216e1da7616b289d749d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_stonebraker, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 04:29:09 compute-0 podman[311653]: 2025-10-11 04:29:09.944528203 +0000 UTC m=+0.211408666 container start a9c38d13528e11b24155537a76e11161089738070b60216e1da7616b289d749d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:29:09 compute-0 podman[311653]: 2025-10-11 04:29:09.9490439 +0000 UTC m=+0.215924333 container attach a9c38d13528e11b24155537a76e11161089738070b60216e1da7616b289d749d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_stonebraker, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 04:29:10 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2023: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:29:10 compute-0 ceph-mon[74273]: pgmap v2023: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]: {
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:     "0": [
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:         {
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "devices": [
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "/dev/loop3"
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             ],
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "lv_name": "ceph_lv0",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "lv_size": "21470642176",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "name": "ceph_lv0",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "tags": {
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.cluster_name": "ceph",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.crush_device_class": "",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.encrypted": "0",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.osd_id": "0",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.type": "block",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.vdo": "0"
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             },
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "type": "block",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "vg_name": "ceph_vg0"
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:         }
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:     ],
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:     "1": [
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:         {
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "devices": [
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "/dev/loop4"
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             ],
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "lv_name": "ceph_lv1",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "lv_size": "21470642176",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "name": "ceph_lv1",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "tags": {
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.cluster_name": "ceph",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.crush_device_class": "",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.encrypted": "0",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.osd_id": "1",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.type": "block",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.vdo": "0"
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             },
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "type": "block",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "vg_name": "ceph_vg1"
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:         }
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:     ],
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:     "2": [
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:         {
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "devices": [
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "/dev/loop5"
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             ],
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "lv_name": "ceph_lv2",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "lv_size": "21470642176",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "name": "ceph_lv2",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "tags": {
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.cluster_name": "ceph",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.crush_device_class": "",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.encrypted": "0",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.osd_id": "2",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.type": "block",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:                 "ceph.vdo": "0"
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             },
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "type": "block",
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:             "vg_name": "ceph_vg2"
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:         }
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]:     ]
Oct 11 04:29:10 compute-0 quizzical_stonebraker[311670]: }
Oct 11 04:29:10 compute-0 systemd[1]: libpod-a9c38d13528e11b24155537a76e11161089738070b60216e1da7616b289d749d.scope: Deactivated successfully.
Oct 11 04:29:10 compute-0 podman[311653]: 2025-10-11 04:29:10.790550084 +0000 UTC m=+1.057430557 container died a9c38d13528e11b24155537a76e11161089738070b60216e1da7616b289d749d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_stonebraker, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:29:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd1402146769d4baac9b0c63cd3dce9684cee0e96321fbd6f0751cacd2f90bb9-merged.mount: Deactivated successfully.
Oct 11 04:29:10 compute-0 podman[311653]: 2025-10-11 04:29:10.868489889 +0000 UTC m=+1.135370362 container remove a9c38d13528e11b24155537a76e11161089738070b60216e1da7616b289d749d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_stonebraker, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:29:10 compute-0 systemd[1]: libpod-conmon-a9c38d13528e11b24155537a76e11161089738070b60216e1da7616b289d749d.scope: Deactivated successfully.
Oct 11 04:29:10 compute-0 sudo[311545]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:11 compute-0 sudo[311691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:29:11 compute-0 sudo[311691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:11 compute-0 sudo[311691]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:11 compute-0 sudo[311716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:29:11 compute-0 sudo[311716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:11 compute-0 sudo[311716]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:11 compute-0 sudo[311741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:29:11 compute-0 sudo[311741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:11 compute-0 sudo[311741]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:11 compute-0 sudo[311766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 04:29:11 compute-0 sudo[311766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:11 compute-0 podman[311831]: 2025-10-11 04:29:11.69818009 +0000 UTC m=+0.049719692 container create dcdf3865f5c2452d118a30c82c275d8d639f9f527ded05838d79987f2ce6f68c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_shockley, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 04:29:11 compute-0 systemd[1]: Started libpod-conmon-dcdf3865f5c2452d118a30c82c275d8d639f9f527ded05838d79987f2ce6f68c.scope.
Oct 11 04:29:11 compute-0 podman[311831]: 2025-10-11 04:29:11.671062756 +0000 UTC m=+0.022602448 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:29:11 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:29:11 compute-0 podman[311831]: 2025-10-11 04:29:11.800873532 +0000 UTC m=+0.152413224 container init dcdf3865f5c2452d118a30c82c275d8d639f9f527ded05838d79987f2ce6f68c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:29:11 compute-0 podman[311831]: 2025-10-11 04:29:11.811976665 +0000 UTC m=+0.163516307 container start dcdf3865f5c2452d118a30c82c275d8d639f9f527ded05838d79987f2ce6f68c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:29:11 compute-0 podman[311831]: 2025-10-11 04:29:11.815998538 +0000 UTC m=+0.167538180 container attach dcdf3865f5c2452d118a30c82c275d8d639f9f527ded05838d79987f2ce6f68c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_shockley, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 04:29:11 compute-0 amazing_shockley[311848]: 167 167
Oct 11 04:29:11 compute-0 systemd[1]: libpod-dcdf3865f5c2452d118a30c82c275d8d639f9f527ded05838d79987f2ce6f68c.scope: Deactivated successfully.
Oct 11 04:29:11 compute-0 podman[311831]: 2025-10-11 04:29:11.821459072 +0000 UTC m=+0.172998734 container died dcdf3865f5c2452d118a30c82c275d8d639f9f527ded05838d79987f2ce6f68c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 04:29:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-635af70bfb3ae684f7827feb3316fc822b228882bb0fd9efe21be4fba1f9a0ff-merged.mount: Deactivated successfully.
Oct 11 04:29:11 compute-0 podman[311831]: 2025-10-11 04:29:11.86680597 +0000 UTC m=+0.218345602 container remove dcdf3865f5c2452d118a30c82c275d8d639f9f527ded05838d79987f2ce6f68c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:29:11 compute-0 systemd[1]: libpod-conmon-dcdf3865f5c2452d118a30c82c275d8d639f9f527ded05838d79987f2ce6f68c.scope: Deactivated successfully.
Oct 11 04:29:12 compute-0 podman[311872]: 2025-10-11 04:29:12.129336324 +0000 UTC m=+0.072556334 container create 32c2aa4179061d9be1382566fbf85a3bd4296bd6942bc1efc659dda0f0b02d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_villani, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 04:29:12 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:12 compute-0 systemd[1]: Started libpod-conmon-32c2aa4179061d9be1382566fbf85a3bd4296bd6942bc1efc659dda0f0b02d56.scope.
Oct 11 04:29:12 compute-0 podman[311872]: 2025-10-11 04:29:12.101928702 +0000 UTC m=+0.045148762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:29:12 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e39a4e633d54cf298a775f716f3a473b1ceaf337c22053daf4f6cb1571918cbc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e39a4e633d54cf298a775f716f3a473b1ceaf337c22053daf4f6cb1571918cbc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e39a4e633d54cf298a775f716f3a473b1ceaf337c22053daf4f6cb1571918cbc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e39a4e633d54cf298a775f716f3a473b1ceaf337c22053daf4f6cb1571918cbc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:12 compute-0 podman[311872]: 2025-10-11 04:29:12.261187458 +0000 UTC m=+0.204407488 container init 32c2aa4179061d9be1382566fbf85a3bd4296bd6942bc1efc659dda0f0b02d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_villani, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:29:12 compute-0 podman[311872]: 2025-10-11 04:29:12.276311225 +0000 UTC m=+0.219531225 container start 32c2aa4179061d9be1382566fbf85a3bd4296bd6942bc1efc659dda0f0b02d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_villani, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:29:12 compute-0 podman[311872]: 2025-10-11 04:29:12.280329678 +0000 UTC m=+0.223549688 container attach 32c2aa4179061d9be1382566fbf85a3bd4296bd6942bc1efc659dda0f0b02d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_villani, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 04:29:13 compute-0 nova_compute[259850]: 2025-10-11 04:29:13.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:29:13 compute-0 ceph-mon[74273]: pgmap v2024: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:13 compute-0 nervous_villani[311889]: {
Oct 11 04:29:13 compute-0 nervous_villani[311889]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 04:29:13 compute-0 nervous_villani[311889]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:29:13 compute-0 nervous_villani[311889]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:29:13 compute-0 nervous_villani[311889]:         "osd_id": 1,
Oct 11 04:29:13 compute-0 nervous_villani[311889]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:29:13 compute-0 nervous_villani[311889]:         "type": "bluestore"
Oct 11 04:29:13 compute-0 nervous_villani[311889]:     },
Oct 11 04:29:13 compute-0 nervous_villani[311889]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 04:29:13 compute-0 nervous_villani[311889]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:29:13 compute-0 nervous_villani[311889]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:29:13 compute-0 nervous_villani[311889]:         "osd_id": 2,
Oct 11 04:29:13 compute-0 nervous_villani[311889]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:29:13 compute-0 nervous_villani[311889]:         "type": "bluestore"
Oct 11 04:29:13 compute-0 nervous_villani[311889]:     },
Oct 11 04:29:13 compute-0 nervous_villani[311889]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 04:29:13 compute-0 nervous_villani[311889]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:29:13 compute-0 nervous_villani[311889]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:29:13 compute-0 nervous_villani[311889]:         "osd_id": 0,
Oct 11 04:29:13 compute-0 nervous_villani[311889]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:29:13 compute-0 nervous_villani[311889]:         "type": "bluestore"
Oct 11 04:29:13 compute-0 nervous_villani[311889]:     }
Oct 11 04:29:13 compute-0 nervous_villani[311889]: }
Oct 11 04:29:13 compute-0 systemd[1]: libpod-32c2aa4179061d9be1382566fbf85a3bd4296bd6942bc1efc659dda0f0b02d56.scope: Deactivated successfully.
Oct 11 04:29:13 compute-0 podman[311872]: 2025-10-11 04:29:13.4211171 +0000 UTC m=+1.364337070 container died 32c2aa4179061d9be1382566fbf85a3bd4296bd6942bc1efc659dda0f0b02d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:29:13 compute-0 systemd[1]: libpod-32c2aa4179061d9be1382566fbf85a3bd4296bd6942bc1efc659dda0f0b02d56.scope: Consumed 1.152s CPU time.
Oct 11 04:29:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-e39a4e633d54cf298a775f716f3a473b1ceaf337c22053daf4f6cb1571918cbc-merged.mount: Deactivated successfully.
Oct 11 04:29:13 compute-0 podman[311872]: 2025-10-11 04:29:13.484343691 +0000 UTC m=+1.427563671 container remove 32c2aa4179061d9be1382566fbf85a3bd4296bd6942bc1efc659dda0f0b02d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 04:29:13 compute-0 systemd[1]: libpod-conmon-32c2aa4179061d9be1382566fbf85a3bd4296bd6942bc1efc659dda0f0b02d56.scope: Deactivated successfully.
Oct 11 04:29:13 compute-0 sudo[311766]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:29:13 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:29:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:29:13 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:29:13 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 6f564b9a-a665-436a-a5ed-c5d53a0b2e01 does not exist
Oct 11 04:29:13 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev ea8c8629-cba0-43a2-912a-71bc6b105001 does not exist
Oct 11 04:29:13 compute-0 sudo[311936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:29:13 compute-0 sudo[311936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:13 compute-0 sudo[311936]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:13 compute-0 sudo[311961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 04:29:13 compute-0 sudo[311961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:29:13 compute-0 sudo[311961]: pam_unix(sudo:session): session closed for user root
Oct 11 04:29:14 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:14 compute-0 nova_compute[259850]: 2025-10-11 04:29:14.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:29:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:29:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:29:14 compute-0 ceph-mon[74273]: pgmap v2025: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:29:16 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:17 compute-0 ceph-mon[74273]: pgmap v2026: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:18 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2027: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:18 compute-0 nova_compute[259850]: 2025-10-11 04:29:18.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:29:18 compute-0 podman[311986]: 2025-10-11 04:29:18.459621575 +0000 UTC m=+0.150323756 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 04:29:19 compute-0 ceph-mon[74273]: pgmap v2027: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:19 compute-0 nova_compute[259850]: 2025-10-11 04:29:19.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:29:20 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2028: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:29:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:29:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:29:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:29:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:29:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:29:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:29:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:29:20
Oct 11 04:29:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:29:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:29:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['vms', 'volumes', '.mgr', 'default.rgw.log', 'images', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data']
Oct 11 04:29:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:29:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:29:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:29:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:29:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:29:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:29:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:29:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:29:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:29:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:29:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:29:21 compute-0 ceph-mon[74273]: pgmap v2028: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:21 compute-0 podman[312012]: 2025-10-11 04:29:21.370165027 +0000 UTC m=+0.066440363 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:29:22 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2029: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:29:22.982 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:29:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:29:22.982 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:29:22 compute-0 ovn_metadata_agent[161897]: 2025-10-11 04:29:22.982 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:29:23 compute-0 ceph-mon[74273]: pgmap v2029: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:23 compute-0 nova_compute[259850]: 2025-10-11 04:29:23.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:29:24 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2030: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:24 compute-0 nova_compute[259850]: 2025-10-11 04:29:24.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:29:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:29:25 compute-0 ceph-mon[74273]: pgmap v2030: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:26 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:27 compute-0 ceph-mon[74273]: pgmap v2031: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:28 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2032: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:28 compute-0 nova_compute[259850]: 2025-10-11 04:29:28.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:29:29 compute-0 ceph-mon[74273]: pgmap v2032: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:29 compute-0 nova_compute[259850]: 2025-10-11 04:29:29.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:29:30 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:29:31 compute-0 ceph-mon[74273]: pgmap v2033: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:29:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:29:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:29:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894458247867422 of space, bias 1.0, pg target 0.8683374743602266 quantized to 32 (current 32)
Oct 11 04:29:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:29:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct 11 04:29:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:29:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:29:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:29:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:29:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:29:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:31 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:29:32 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2034: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:33 compute-0 ceph-mon[74273]: pgmap v2034: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:33 compute-0 nova_compute[259850]: 2025-10-11 04:29:33.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:29:33 compute-0 podman[312032]: 2025-10-11 04:29:33.383535566 +0000 UTC m=+0.077495324 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid)
Oct 11 04:29:33 compute-0 podman[312031]: 2025-10-11 04:29:33.391735227 +0000 UTC m=+0.091053036 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 04:29:34 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:34 compute-0 nova_compute[259850]: 2025-10-11 04:29:34.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:29:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:29:35 compute-0 ceph-mon[74273]: pgmap v2035: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:36 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:37 compute-0 nova_compute[259850]: 2025-10-11 04:29:37.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:29:37 compute-0 ceph-mon[74273]: pgmap v2036: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:38 compute-0 nova_compute[259850]: 2025-10-11 04:29:38.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:29:38 compute-0 nova_compute[259850]: 2025-10-11 04:29:38.059 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:29:38 compute-0 nova_compute[259850]: 2025-10-11 04:29:38.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:29:38 compute-0 nova_compute[259850]: 2025-10-11 04:29:38.142 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:29:38 compute-0 nova_compute[259850]: 2025-10-11 04:29:38.143 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:29:38 compute-0 nova_compute[259850]: 2025-10-11 04:29:38.143 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:29:38 compute-0 nova_compute[259850]: 2025-10-11 04:29:38.143 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:29:38 compute-0 nova_compute[259850]: 2025-10-11 04:29:38.144 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:29:38 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2037: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:38 compute-0 nova_compute[259850]: 2025-10-11 04:29:38.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:29:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:29:38 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/253214653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:29:38 compute-0 nova_compute[259850]: 2025-10-11 04:29:38.589 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:29:38 compute-0 nova_compute[259850]: 2025-10-11 04:29:38.815 2 WARNING nova.virt.libvirt.driver [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:29:38 compute-0 nova_compute[259850]: 2025-10-11 04:29:38.817 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4295MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:29:38 compute-0 nova_compute[259850]: 2025-10-11 04:29:38.817 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:29:38 compute-0 nova_compute[259850]: 2025-10-11 04:29:38.818 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:29:38 compute-0 nova_compute[259850]: 2025-10-11 04:29:38.938 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:29:38 compute-0 nova_compute[259850]: 2025-10-11 04:29:38.938 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:29:38 compute-0 nova_compute[259850]: 2025-10-11 04:29:38.993 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Refreshing inventories for resource provider 108a560b-89c0-4926-a2fc-cb749a6f8386 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 04:29:39 compute-0 nova_compute[259850]: 2025-10-11 04:29:39.016 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Updating ProviderTree inventory for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 04:29:39 compute-0 nova_compute[259850]: 2025-10-11 04:29:39.016 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Updating inventory in ProviderTree for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 04:29:39 compute-0 nova_compute[259850]: 2025-10-11 04:29:39.037 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Refreshing aggregate associations for resource provider 108a560b-89c0-4926-a2fc-cb749a6f8386, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 04:29:39 compute-0 nova_compute[259850]: 2025-10-11 04:29:39.069 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Refreshing trait associations for resource provider 108a560b-89c0-4926-a2fc-cb749a6f8386, traits: COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE41,COMPUTE_NODE,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_BMI2,HW_CPU_X86_MMX,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_ABM,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 04:29:39 compute-0 nova_compute[259850]: 2025-10-11 04:29:39.101 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:29:39 compute-0 ceph-mon[74273]: pgmap v2037: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:39 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/253214653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:29:39 compute-0 nova_compute[259850]: 2025-10-11 04:29:39.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:29:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:29:39 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1597225767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:29:39 compute-0 nova_compute[259850]: 2025-10-11 04:29:39.583 2 DEBUG oslo_concurrency.processutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:29:39 compute-0 nova_compute[259850]: 2025-10-11 04:29:39.591 2 DEBUG nova.compute.provider_tree [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed in ProviderTree for provider: 108a560b-89c0-4926-a2fc-cb749a6f8386 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:29:39 compute-0 nova_compute[259850]: 2025-10-11 04:29:39.611 2 DEBUG nova.scheduler.client.report [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Inventory has not changed for provider 108a560b-89c0-4926-a2fc-cb749a6f8386 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:29:39 compute-0 nova_compute[259850]: 2025-10-11 04:29:39.614 2 DEBUG nova.compute.resource_tracker [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:29:39 compute-0 nova_compute[259850]: 2025-10-11 04:29:39.615 2 DEBUG oslo_concurrency.lockutils [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:29:40 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:29:40 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1597225767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:29:40 compute-0 nova_compute[259850]: 2025-10-11 04:29:40.616 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:29:40 compute-0 nova_compute[259850]: 2025-10-11 04:29:40.617 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:29:41 compute-0 nova_compute[259850]: 2025-10-11 04:29:41.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:29:41 compute-0 nova_compute[259850]: 2025-10-11 04:29:41.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:29:41 compute-0 nova_compute[259850]: 2025-10-11 04:29:41.060 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:29:41 compute-0 nova_compute[259850]: 2025-10-11 04:29:41.178 2 DEBUG nova.compute.manager [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:29:41 compute-0 nova_compute[259850]: 2025-10-11 04:29:41.179 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:29:41 compute-0 ceph-mon[74273]: pgmap v2038: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:42 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2039: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:43 compute-0 ceph-mon[74273]: pgmap v2039: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:43 compute-0 nova_compute[259850]: 2025-10-11 04:29:43.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:29:44 compute-0 nova_compute[259850]: 2025-10-11 04:29:44.059 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:29:44 compute-0 sshd-session[312109]: Accepted publickey for zuul from 192.168.122.10 port 47574 ssh2: ECDSA SHA256:qo9+RMabHfLAOt2q/80W97JXaZUdeUCREBuTRaqgxBY
Oct 11 04:29:44 compute-0 systemd-logind[820]: New session 53 of user zuul.
Oct 11 04:29:44 compute-0 systemd[1]: Started Session 53 of User zuul.
Oct 11 04:29:44 compute-0 sshd-session[312109]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 04:29:44 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2040: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:44 compute-0 sudo[312113]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 11 04:29:44 compute-0 sudo[312113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 04:29:44 compute-0 nova_compute[259850]: 2025-10-11 04:29:44.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:29:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:29:45 compute-0 ceph-mon[74273]: pgmap v2040: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:46 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:47 compute-0 nova_compute[259850]: 2025-10-11 04:29:47.058 2 DEBUG oslo_service.periodic_task [None req-a15c22ab-259d-4b26-83d0-eb5f92102649 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:29:47 compute-0 ceph-mon[74273]: pgmap v2041: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:47 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19209 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:29:48 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19211 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:29:48 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:48 compute-0 nova_compute[259850]: 2025-10-11 04:29:48.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:29:48 compute-0 ceph-mon[74273]: from='client.19209 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:29:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 11 04:29:48 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1566637113' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 11 04:29:49 compute-0 podman[312364]: 2025-10-11 04:29:49.369829342 +0000 UTC m=+0.135121117 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 04:29:49 compute-0 ceph-mon[74273]: from='client.19211 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:29:49 compute-0 ceph-mon[74273]: pgmap v2042: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:49 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1566637113' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 11 04:29:49 compute-0 nova_compute[259850]: 2025-10-11 04:29:49.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:29:50 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:29:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:29:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4074440777' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:29:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:29:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4074440777' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:29:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:29:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:29:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:29:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:29:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:29:50 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:29:51 compute-0 ceph-mon[74273]: pgmap v2043: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4074440777' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:29:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4074440777' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:29:52 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:52 compute-0 podman[312439]: 2025-10-11 04:29:52.380935589 +0000 UTC m=+0.089078431 container health_status aa44bb3cffcfb092b9d85ae52a9bd9f36c87e07262632420f97b48a886109eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 11 04:29:53 compute-0 nova_compute[259850]: 2025-10-11 04:29:53.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:29:53 compute-0 ceph-mon[74273]: pgmap v2044: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:53 compute-0 ovs-vsctl[312487]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 11 04:29:54 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:54 compute-0 nova_compute[259850]: 2025-10-11 04:29:54.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:29:54 compute-0 virtqemud[259597]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 11 04:29:55 compute-0 virtqemud[259597]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 11 04:29:55 compute-0 virtqemud[259597]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 11 04:29:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:29:55 compute-0 ceph-mon[74273]: pgmap v2045: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:55 compute-0 ceph-mds[100691]: mds.cephfs.compute-0.lkhlqa asok_command: cache status {prefix=cache status} (starting...)
Oct 11 04:29:55 compute-0 ceph-mds[100691]: mds.cephfs.compute-0.lkhlqa asok_command: client ls {prefix=client ls} (starting...)
Oct 11 04:29:55 compute-0 lvm[312818]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 11 04:29:55 compute-0 lvm[312818]: VG ceph_vg0 finished
Oct 11 04:29:55 compute-0 lvm[312839]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 11 04:29:55 compute-0 lvm[312839]: VG ceph_vg2 finished
Oct 11 04:29:56 compute-0 lvm[312846]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 11 04:29:56 compute-0 lvm[312846]: VG ceph_vg1 finished
Oct 11 04:29:56 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2046: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:56 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19219 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:29:56 compute-0 kernel: block vda: the capability attribute has been deprecated.
Oct 11 04:29:56 compute-0 ceph-mds[100691]: mds.cephfs.compute-0.lkhlqa asok_command: damage ls {prefix=damage ls} (starting...)
Oct 11 04:29:56 compute-0 ceph-mds[100691]: mds.cephfs.compute-0.lkhlqa asok_command: dump loads {prefix=dump loads} (starting...)
Oct 11 04:29:56 compute-0 ceph-mds[100691]: mds.cephfs.compute-0.lkhlqa asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 11 04:29:56 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19221 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:29:56 compute-0 ceph-mds[100691]: mds.cephfs.compute-0.lkhlqa asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 11 04:29:56 compute-0 ceph-mds[100691]: mds.cephfs.compute-0.lkhlqa asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 11 04:29:57 compute-0 ceph-mds[100691]: mds.cephfs.compute-0.lkhlqa asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 11 04:29:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 11 04:29:57 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3210837450' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 11 04:29:57 compute-0 ceph-mds[100691]: mds.cephfs.compute-0.lkhlqa asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 11 04:29:57 compute-0 ceph-mds[100691]: mds.cephfs.compute-0.lkhlqa asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 11 04:29:57 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19227 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:29:57 compute-0 ceph-mgr[74563]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 11 04:29:57 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T04:29:57.431+0000 7f0cd7a88640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 11 04:29:57 compute-0 ceph-mon[74273]: pgmap v2046: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:57 compute-0 ceph-mon[74273]: from='client.19219 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:29:57 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3210837450' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 11 04:29:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:29:57 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2886021555' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:29:57 compute-0 ceph-mds[100691]: mds.cephfs.compute-0.lkhlqa asok_command: ops {prefix=ops} (starting...)
Oct 11 04:29:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct 11 04:29:57 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2858373010' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 11 04:29:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct 11 04:29:57 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/88457110' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 11 04:29:58 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct 11 04:29:58 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1140170438' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 11 04:29:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 11 04:29:58 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2806729692' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 11 04:29:58 compute-0 nova_compute[259850]: 2025-10-11 04:29:58.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:29:58 compute-0 ceph-mds[100691]: mds.cephfs.compute-0.lkhlqa asok_command: session ls {prefix=session ls} (starting...)
Oct 11 04:29:58 compute-0 ceph-mon[74273]: from='client.19221 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:29:58 compute-0 ceph-mon[74273]: from='client.19227 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:29:58 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2886021555' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:29:58 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2858373010' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 11 04:29:58 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/88457110' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 11 04:29:58 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1140170438' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 11 04:29:58 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2806729692' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 11 04:29:58 compute-0 ceph-mds[100691]: mds.cephfs.compute-0.lkhlqa asok_command: status {prefix=status} (starting...)
Oct 11 04:29:58 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19240 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:29:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 11 04:29:58 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2941937694' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 11 04:29:59 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19243 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:29:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 11 04:29:59 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1812791017' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 11 04:29:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 11 04:29:59 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/882578944' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 11 04:29:59 compute-0 ceph-mon[74273]: pgmap v2047: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:29:59 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2941937694' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 11 04:29:59 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1812791017' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 11 04:29:59 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/882578944' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 11 04:29:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 11 04:29:59 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2617777880' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 04:29:59 compute-0 nova_compute[259850]: 2025-10-11 04:29:59.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:29:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct 11 04:29:59 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3220398332' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 11 04:29:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 11 04:29:59 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/533249348' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 11 04:30:00 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:30:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:30:00 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19255 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:00 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T04:30:00.308+0000 7f0cd7a88640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 11 04:30:00 compute-0 ceph-mgr[74563]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 11 04:30:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 11 04:30:00 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/276357818' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 11 04:30:00 compute-0 ceph-mon[74273]: from='client.19240 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:00 compute-0 ceph-mon[74273]: from='client.19243 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:00 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2617777880' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 04:30:00 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3220398332' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 11 04:30:00 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/533249348' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 11 04:30:00 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/276357818' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 11 04:30:00 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19261 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct 11 04:30:00 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2132377893' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 11 04:30:01 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19263 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct 11 04:30:01 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/605022715' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 11 04:30:01 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19267 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:01 compute-0 ceph-mon[74273]: pgmap v2048: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:30:01 compute-0 ceph-mon[74273]: from='client.19255 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:01 compute-0 ceph-mon[74273]: from='client.19261 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:01 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2132377893' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 11 04:30:01 compute-0 ceph-mon[74273]: from='client.19263 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:01 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/605022715' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 11 04:30:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 11 04:30:01 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4159813985' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 176 heartbeat osd_stat(store_statfs(0x4f9fea000/0x0/0x4ffc00000, data 0x15870cd/0x1683000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 103473152 unmapped: 31318016 heap: 134791168 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:04:57.620637+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 176 heartbeat osd_stat(store_statfs(0x4f9fea000/0x0/0x4ffc00000, data 0x15870aa/0x1682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 103473152 unmapped: 31318016 heap: 134791168 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:04:58.620807+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249268 data_alloc: 234881024 data_used: 12365824
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 103473152 unmapped: 31318016 heap: 134791168 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:04:59.621143+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d32400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 176 handle_osd_map epochs [176,177], i have 176, src has [1,177]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.376828194s of 11.887859344s, submitted: 139
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 103473152 unmapped: 31318016 heap: 134791168 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 177 ms_handle_reset con 0x55f107d32400 session 0x55f105b8cb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:00.621376+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 104546304 unmapped: 30244864 heap: 134791168 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:01.621572+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 31227904 heap: 134791168 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:02.621770+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 31170560 heap: 134791168 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:03.621974+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472459 data_alloc: 234881024 data_used: 12369920
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 177 heartbeat osd_stat(store_statfs(0x4f7fe9000/0x0/0x4ffc00000, data 0x3588b0d/0x3685000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 31178752 heap: 134791168 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 177 heartbeat osd_stat(store_statfs(0x4f7fe9000/0x0/0x4ffc00000, data 0x3588b0d/0x3685000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:04.622242+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 103792640 unmapped: 30998528 heap: 134791168 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:05.622623+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 104013824 unmapped: 30777344 heap: 134791168 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:06.622800+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 104194048 unmapped: 30597120 heap: 134791168 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 177 heartbeat osd_stat(store_statfs(0x4f37e9000/0x0/0x4ffc00000, data 0x7d88b0d/0x7e85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:07.622959+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 177 heartbeat osd_stat(store_statfs(0x4f1fe9000/0x0/0x4ffc00000, data 0x9588b0d/0x9685000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 104431616 unmapped: 30359552 heap: 134791168 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:08.623077+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2459723 data_alloc: 234881024 data_used: 12369920
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 177 heartbeat osd_stat(store_statfs(0x4ef7e9000/0x0/0x4ffc00000, data 0xbd88b0d/0xbe85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 30138368 heap: 134791168 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:09.623254+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.381105423s of 10.115213394s, submitted: 49
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 29810688 heap: 134791168 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:10.623431+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 29810688 heap: 134791168 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:11.623621+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 105267200 unmapped: 29523968 heap: 134791168 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 177 heartbeat osd_stat(store_statfs(0x4ebfe9000/0x0/0x4ffc00000, data 0xf588b0d/0xf685000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:12.623751+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 105586688 unmapped: 29204480 heap: 134791168 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:13.623891+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3172747 data_alloc: 234881024 data_used: 12369920
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 177 heartbeat osd_stat(store_statfs(0x4e87e9000/0x0/0x4ffc00000, data 0x12d88b0d/0x12e85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 105586688 unmapped: 29204480 heap: 134791168 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:14.624093+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 28983296 heap: 134791168 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 177 ms_handle_reset con 0x55f107d33800 session 0x55f10795f2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:15.624305+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 28827648 heap: 134791168 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:16.624522+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a4c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 28622848 heap: 134791168 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:17.624633+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 177 handle_osd_map epochs [178,178], i have 177, src has [1,178]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 178 ms_handle_reset con 0x55f107d33400 session 0x55f107946f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 178 ms_handle_reset con 0x55f1081a4c00 session 0x55f106d385a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 28590080 heap: 134791168 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 178 ms_handle_reset con 0x55f105d2b800 session 0x55f10698e3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:18.624768+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617262 data_alloc: 234881024 data_used: 12382208
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 28622848 heap: 134791168 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d32400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:19.624941+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.528376579s of 10.002378464s, submitted: 36
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 178 heartbeat osd_stat(store_statfs(0x4e47e3000/0x0/0x4ffc00000, data 0x16d8a8a0/0x16e8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,0,1,7,1])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 48005120 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:20.625108+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 178 handle_osd_map epochs [179,179], i have 178, src has [1,179]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 109355008 unmapped: 50634752 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:21.625397+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 179 ms_handle_reset con 0x55f107d33800 session 0x55f10772fc20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107e5b000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 179 ms_handle_reset con 0x55f107e5b000 session 0x55f10561e780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a6800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 39395328 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 179 ms_handle_reset con 0x55f1081a6800 session 0x55f10561e3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:22.625587+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 179 heartbeat osd_stat(store_statfs(0x4effe2000/0x0/0x4ffc00000, data 0xb58c461/0xb68c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,1])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 109338624 unmapped: 50651136 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:23.625806+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a4400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 179 ms_handle_reset con 0x55f1081a4400 session 0x55f10561e000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 179 ms_handle_reset con 0x55f105d2b800 session 0x55f10561f860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 179 ms_handle_reset con 0x55f107d33800 session 0x55f105b9f2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2834713 data_alloc: 234881024 data_used: 12386304
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107e5b000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 179 ms_handle_reset con 0x55f107e5b000 session 0x55f105bb6000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a6800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 49029120 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 179 ms_handle_reset con 0x55f1081a6800 session 0x55f105bb63c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a5c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 179 ms_handle_reset con 0x55f1081a5c00 session 0x55f1081634a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 179 ms_handle_reset con 0x55f105d2b800 session 0x55f108162d20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 179 heartbeat osd_stat(store_statfs(0x4ed3e2000/0x0/0x4ffc00000, data 0xe18c461/0xe28c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,2,2])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:24.625924+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 179 ms_handle_reset con 0x55f107d33800 session 0x55f1081630e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107e5b000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 179 ms_handle_reset con 0x55f107e5b000 session 0x55f1081621e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 179 handle_osd_map epochs [180,180], i have 179, src has [1,180]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 110542848 unmapped: 49446912 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:25.626210+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 180 ms_handle_reset con 0x55f107d32400 session 0x55f1069814a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 180 ms_handle_reset con 0x55f107d33400 session 0x55f106981c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 48791552 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:26.626355+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 111190016 unmapped: 48799744 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d32400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 180 ms_handle_reset con 0x55f107d32400 session 0x55f10561e000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 180 ms_handle_reset con 0x55f105d2b800 session 0x55f10799d2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:27.626513+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 180 ms_handle_reset con 0x55f107d33800 session 0x55f105b0b0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 180 heartbeat osd_stat(store_statfs(0x4f9e8f000/0x0/0x4ffc00000, data 0x16dde62/0x17de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 49119232 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107e5b000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 180 ms_handle_reset con 0x55f107e5b000 session 0x55f105b9d680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a6800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:28.626670+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a5800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 180 ms_handle_reset con 0x55f1081a5800 session 0x55f10561f860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 180 ms_handle_reset con 0x55f105d2b800 session 0x55f10772fc20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425581 data_alloc: 234881024 data_used: 12398592
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 180 handle_osd_map epochs [180,181], i have 180, src has [1,181]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 181 ms_handle_reset con 0x55f1081a6800 session 0x55f107708960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d32400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 181 heartbeat osd_stat(store_statfs(0x4f9e8f000/0x0/0x4ffc00000, data 0x16dde62/0x17de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 111190016 unmapped: 48799744 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:29.626871+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 181 heartbeat osd_stat(store_statfs(0x4f9e66000/0x0/0x4ffc00000, data 0x1703a56/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 111394816 unmapped: 48594944 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:30.627027+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 111525888 unmapped: 48463872 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:31.627205+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107e5b000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 181 ms_handle_reset con 0x55f107e5b000 session 0x55f1081632c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a4800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.531565666s of 12.360223770s, submitted: 556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 181 ms_handle_reset con 0x55f1081a4800 session 0x55f108163c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 111542272 unmapped: 48447488 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 181 ms_handle_reset con 0x55f1081b7800 session 0x55f1081625a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:32.627371+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 181 ms_handle_reset con 0x55f1081b7800 session 0x55f108162f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 181 ms_handle_reset con 0x55f105d2b800 session 0x55f1063174a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 181 heartbeat osd_stat(store_statfs(0x4f9e68000/0x0/0x4ffc00000, data 0x1703a56/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 112238592 unmapped: 47751168 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:33.627552+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 181 ms_handle_reset con 0x55f1081b7400 session 0x55f1079f2b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439993 data_alloc: 234881024 data_used: 13594624
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 112238592 unmapped: 47751168 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 181 ms_handle_reset con 0x55f1081b7000 session 0x55f109b9a780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:34.627685+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 181 handle_osd_map epochs [181,182], i have 181, src has [1,182]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 112271360 unmapped: 47718400 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:35.627822+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 182 ms_handle_reset con 0x55f107d32400 session 0x55f1076cf680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 182 ms_handle_reset con 0x55f107d33800 session 0x55f10698e3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 111812608 unmapped: 48177152 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 182 ms_handle_reset con 0x55f1081b7000 session 0x55f106d385a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 182 ms_handle_reset con 0x55f105d2b800 session 0x55f1076e6000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:36.627931+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 182 ms_handle_reset con 0x55f1081b7400 session 0x55f10562b860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 182 heartbeat osd_stat(store_statfs(0x4f9f55000/0x0/0x4ffc00000, data 0x16134a6/0x1717000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 182 ms_handle_reset con 0x55f1081b7800 session 0x55f1079f3a40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 182 ms_handle_reset con 0x55f105d2b800 session 0x55f105b8c780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 111919104 unmapped: 48070656 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 182 ms_handle_reset con 0x55f107d33800 session 0x55f1069810e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:37.628085+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 182 ms_handle_reset con 0x55f1081b7000 session 0x55f1079732c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 112050176 unmapped: 47939584 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:38.628253+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1545284 data_alloc: 234881024 data_used: 12427264
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 182 handle_osd_map epochs [183,183], i have 182, src has [1,183]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 111878144 unmapped: 48111616 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 183 ms_handle_reset con 0x55f1081b7400 session 0x55f1076e6960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:39.628415+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b6c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 183 ms_handle_reset con 0x55f1081b6c00 session 0x55f109955680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 183 ms_handle_reset con 0x55f105d2b800 session 0x55f10767e780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 111886336 unmapped: 48103424 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 183 ms_handle_reset con 0x55f107d33800 session 0x55f10767e960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:40.628596+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 111886336 unmapped: 48103424 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b6c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 183 ms_handle_reset con 0x55f1081b6c00 session 0x55f1077210e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:41.628844+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 183 ms_handle_reset con 0x55f1081b7000 session 0x55f104fb12c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 183 ms_handle_reset con 0x55f1081b7400 session 0x55f1076fad20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 183 heartbeat osd_stat(store_statfs(0x4f8f37000/0x0/0x4ffc00000, data 0x2632023/0x2737000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 111902720 unmapped: 48087040 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:42.629027+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 183 ms_handle_reset con 0x55f105d2b800 session 0x55f10799d860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.418343544s of 10.912449837s, submitted: 111
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 183 ms_handle_reset con 0x55f107d33800 session 0x55f107972960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 48078848 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:43.629143+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1549492 data_alloc: 234881024 data_used: 12435456
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b6c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 183 ms_handle_reset con 0x55f1081b6c00 session 0x55f105b0b4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 183 ms_handle_reset con 0x55f1081b7000 session 0x55f105b9c000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b6000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 183 ms_handle_reset con 0x55f1081b6000 session 0x55f10799c5a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 183 ms_handle_reset con 0x55f105d2b800 session 0x55f10795f0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 111919104 unmapped: 48070656 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 183 ms_handle_reset con 0x55f107d33800 session 0x55f105b9a3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b6c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 183 ms_handle_reset con 0x55f1081b6c00 session 0x55f10772e3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 183 ms_handle_reset con 0x55f1081b7c00 session 0x55f106990d20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:44.629297+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b6800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 183 handle_osd_map epochs [184,184], i have 183, src has [1,184]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 184 ms_handle_reset con 0x55f1081b6800 session 0x55f1079d6f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 184 ms_handle_reset con 0x55f105d2b800 session 0x55f1063163c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 184 ms_handle_reset con 0x55f1081b7000 session 0x55f10795f2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 112386048 unmapped: 47603712 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:45.629413+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 112386048 unmapped: 47603712 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:46.629699+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 184 heartbeat osd_stat(store_statfs(0x4f8bd8000/0x0/0x4ffc00000, data 0x298dbb0/0x2a95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 112386048 unmapped: 47603712 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 184 ms_handle_reset con 0x55f107d33800 session 0x55f10562a1e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:47.629822+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b6c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 184 ms_handle_reset con 0x55f1081b6c00 session 0x55f1079f32c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 184 ms_handle_reset con 0x55f1081b7c00 session 0x55f1063161e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 112001024 unmapped: 47988736 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:48.629986+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 184 heartbeat osd_stat(store_statfs(0x4f8bd9000/0x0/0x4ffc00000, data 0x298dbb0/0x2a95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1588537 data_alloc: 234881024 data_used: 12443648
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 112001024 unmapped: 47988736 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 184 heartbeat osd_stat(store_statfs(0x4f8bd9000/0x0/0x4ffc00000, data 0x298dbb0/0x2a95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:49.630132+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 184 ms_handle_reset con 0x55f105d2b800 session 0x55f106ae54a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 47636480 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b6c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:50.630335+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 112369664 unmapped: 47620096 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:51.630511+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 184 ms_handle_reset con 0x55f1081b7000 session 0x55f107708780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 47742976 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf7c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 184 ms_handle_reset con 0x55f107bf7c00 session 0x55f107708000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:52.630648+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 47742976 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.266264915s of 10.393281937s, submitted: 27
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 184 ms_handle_reset con 0x55f1081b7c00 session 0x55f107709c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 184 heartbeat osd_stat(store_statfs(0x4f8baf000/0x0/0x4ffc00000, data 0x29b7bb0/0x2abf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:53.630783+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 184 ms_handle_reset con 0x55f1081b7800 session 0x55f107708b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1619944 data_alloc: 234881024 data_used: 15765504
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 184 heartbeat osd_stat(store_statfs(0x4f8bae000/0x0/0x4ffc00000, data 0x29b7bc0/0x2ac0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 184 ms_handle_reset con 0x55f105d2b800 session 0x55f107709a40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 112271360 unmapped: 47718400 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:54.630898+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 184 heartbeat osd_stat(store_statfs(0x4f8bae000/0x0/0x4ffc00000, data 0x29b7bc0/0x2ac0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 112271360 unmapped: 47718400 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:55.631032+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 184 heartbeat osd_stat(store_statfs(0x4f8bae000/0x0/0x4ffc00000, data 0x29b7bc0/0x2ac0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf7c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 184 ms_handle_reset con 0x55f107bf7c00 session 0x55f105bab0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 184 ms_handle_reset con 0x55f1081b7000 session 0x55f105b9c1e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 112287744 unmapped: 47702016 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:56.631190+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 112287744 unmapped: 47702016 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:57.631326+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 112287744 unmapped: 47702016 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:58.631471+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1618452 data_alloc: 234881024 data_used: 15765504
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 112287744 unmapped: 47702016 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:59.631698+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 184 heartbeat osd_stat(store_statfs(0x4f8bae000/0x0/0x4ffc00000, data 0x29b7bb0/0x2abf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 112287744 unmapped: 47702016 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:00.631931+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 112287744 unmapped: 47702016 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:01.632215+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 43270144 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:02.632393+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 118669312 unmapped: 41320448 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:03.632572+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 184 heartbeat osd_stat(store_statfs(0x4f7eee000/0x0/0x4ffc00000, data 0x3678bb0/0x3780000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1735354 data_alloc: 234881024 data_used: 16539648
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 184 heartbeat osd_stat(store_statfs(0x4f7eee000/0x0/0x4ffc00000, data 0x3678bb0/0x3780000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 118702080 unmapped: 41287680 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:04.632715+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 118702080 unmapped: 41287680 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:05.632930+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 118702080 unmapped: 41287680 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:06.633066+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 118702080 unmapped: 41287680 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:07.633244+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 118702080 unmapped: 41287680 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:08.633439+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.807895660s of 15.122739792s, submitted: 125
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1730106 data_alloc: 234881024 data_used: 16539648
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 42000384 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:09.633619+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 184 heartbeat osd_stat(store_statfs(0x4f7ec9000/0x0/0x4ffc00000, data 0x369dbb0/0x37a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 42000384 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:10.633856+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 42000384 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:11.634059+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 42000384 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 184 ms_handle_reset con 0x55f1081b7c00 session 0x55f105b9d4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:12.634200+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 42000384 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:13.634388+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1731580 data_alloc: 234881024 data_used: 16539648
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 184 heartbeat osd_stat(store_statfs(0x4f7ec9000/0x0/0x4ffc00000, data 0x369dbb0/0x37a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 42000384 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:14.634573+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 184 handle_osd_map epochs [184,185], i have 184, src has [1,185]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109aba000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 185 ms_handle_reset con 0x55f109aba000 session 0x55f105b9ed20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 42000384 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:15.634783+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109aba000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 185 ms_handle_reset con 0x55f109aba000 session 0x55f10799cb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 117997568 unmapped: 41992192 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:16.634901+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 185 ms_handle_reset con 0x55f105d2b800 session 0x55f10772ed20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 185 handle_osd_map epochs [186,186], i have 185, src has [1,186]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 117997568 unmapped: 41992192 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:17.635196+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 186 handle_osd_map epochs [187,187], i have 186, src has [1,187]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 41984000 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:18.635368+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf7c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 187 ms_handle_reset con 0x55f107bf7c00 session 0x55f1069805a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1741902 data_alloc: 234881024 data_used: 16547840
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.481925964s of 10.552505493s, submitted: 15
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 187 ms_handle_reset con 0x55f1081b7000 session 0x55f106980780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 187 ms_handle_reset con 0x55f1081b7c00 session 0x55f105b9f2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 187 ms_handle_reset con 0x55f1081b7c00 session 0x55f1056bd2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 187 ms_handle_reset con 0x55f105d2b800 session 0x55f105baad20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 118079488 unmapped: 41910272 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf7c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 187 ms_handle_reset con 0x55f107bf7c00 session 0x55f105baa3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:19.635592+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 187 heartbeat osd_stat(store_statfs(0x4f7625000/0x0/0x4ffc00000, data 0x3f3be27/0x4047000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 187 handle_osd_map epochs [188,188], i have 187, src has [1,188]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 188 ms_handle_reset con 0x55f1081b7000 session 0x55f106990000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 118087680 unmapped: 41902080 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:20.635790+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 118087680 unmapped: 41902080 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:21.636468+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109aba000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 188 handle_osd_map epochs [189,189], i have 188, src has [1,189]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 189 ms_handle_reset con 0x55f109aba000 session 0x55f106990f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 189 handle_osd_map epochs [189,190], i have 189, src has [1,190]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 116613120 unmapped: 43376640 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:22.636633+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 116678656 unmapped: 43311104 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:23.636779+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 190 heartbeat osd_stat(store_statfs(0x4f7208000/0x0/0x4ffc00000, data 0x3f441aa/0x4053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109aba000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 190 ms_handle_reset con 0x55f109aba000 session 0x55f106991c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1816784 data_alloc: 234881024 data_used: 16551936
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 190 ms_handle_reset con 0x55f107d33800 session 0x55f105b9f0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 190 ms_handle_reset con 0x55f1081b6c00 session 0x55f105b8cb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 116711424 unmapped: 43278336 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:24.636898+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf7c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 190 handle_osd_map epochs [190,191], i have 190, src has [1,191]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 191 ms_handle_reset con 0x55f105d2b800 session 0x55f1079721e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 46153728 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:25.637038+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 191 ms_handle_reset con 0x55f1081b7c00 session 0x55f106d381e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 117350400 unmapped: 42639360 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:26.637398+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 191 ms_handle_reset con 0x55f105d2b800 session 0x55f106d7f2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 117473280 unmapped: 42516480 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:27.637691+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 117473280 unmapped: 42516480 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:28.637901+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 191 heartbeat osd_stat(store_statfs(0x4f827c000/0x0/0x4ffc00000, data 0x2ed2bfd/0x2fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1719106 data_alloc: 234881024 data_used: 21442560
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.973114014s of 10.281463623s, submitted: 103
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 191 ms_handle_reset con 0x55f107d33800 session 0x55f10562b860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 117489664 unmapped: 42500096 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:29.638271+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 191 heartbeat osd_stat(store_statfs(0x4f8ac8000/0x0/0x4ffc00000, data 0x2686bfd/0x2796000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b6c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 117604352 unmapped: 42385408 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:30.638403+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109aba000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 191 ms_handle_reset con 0x55f1081b6c00 session 0x55f1077210e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109aba400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 191 ms_handle_reset con 0x55f109aba400 session 0x55f1079d6000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 191 handle_osd_map epochs [192,192], i have 191, src has [1,192]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 192 ms_handle_reset con 0x55f109aba000 session 0x55f1069914a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 118358016 unmapped: 41631744 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:31.638851+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 192 ms_handle_reset con 0x55f105d2b800 session 0x55f109954960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b6c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 192 ms_handle_reset con 0x55f1081b6c00 session 0x55f10698fc20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109aba400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 118358016 unmapped: 41631744 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:32.638985+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 192 ms_handle_reset con 0x55f109aba400 session 0x55f1076cfa40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 192 handle_osd_map epochs [192,193], i have 192, src has [1,193]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 193 ms_handle_reset con 0x55f107d33800 session 0x55f1056bd0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109aba800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 41607168 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:33.639353+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 193 handle_osd_map epochs [193,194], i have 193, src has [1,194]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1676724 data_alloc: 234881024 data_used: 21463040
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 194 ms_handle_reset con 0x55f109aba800 session 0x55f10562be00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 194 heartbeat osd_stat(store_statfs(0x4f8a78000/0x0/0x4ffc00000, data 0x268a34b/0x279c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 118390784 unmapped: 41598976 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:34.639702+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 118407168 unmapped: 41582592 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:35.639905+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 37691392 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:36.640194+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 123625472 unmapped: 36364288 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:37.640416+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 123797504 unmapped: 36192256 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:38.640683+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 194 heartbeat osd_stat(store_statfs(0x4f8484000/0x0/0x4ffc00000, data 0x2cc6ee4/0x2dda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1732806 data_alloc: 234881024 data_used: 22200320
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 123797504 unmapped: 36192256 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:39.640910+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 194 heartbeat osd_stat(store_statfs(0x4f8484000/0x0/0x4ffc00000, data 0x2cc6ee4/0x2dda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 36175872 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:40.641201+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 36175872 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:41.641431+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 194 heartbeat osd_stat(store_statfs(0x4f8484000/0x0/0x4ffc00000, data 0x2cc6ee4/0x2dda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.117969513s of 12.527305603s, submitted: 133
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 194 ms_handle_reset con 0x55f105d2b800 session 0x55f10772ed20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 194 ms_handle_reset con 0x55f107d33800 session 0x55f1056bdc20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b6c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 194 ms_handle_reset con 0x55f1081b6c00 session 0x55f105baa780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109aba400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 194 ms_handle_reset con 0x55f109aba400 session 0x55f10799cb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abb000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 194 ms_handle_reset con 0x55f109abb000 session 0x55f10562bc20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 194 ms_handle_reset con 0x55f109abac00 session 0x55f105b0a3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 36036608 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:42.641626+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 36036608 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:43.641733+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 194 heartbeat osd_stat(store_statfs(0x4f8035000/0x0/0x4ffc00000, data 0x3113f56/0x3229000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 194 handle_osd_map epochs [195,195], i have 194, src has [1,195]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 194 handle_osd_map epochs [195,195], i have 195, src has [1,195]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 195 ms_handle_reset con 0x55f105d2b800 session 0x55f106ae4000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1772632 data_alloc: 234881024 data_used: 22208512
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 123682816 unmapped: 36306944 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:44.641873+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 195 handle_osd_map epochs [196,196], i have 195, src has [1,196]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 196 heartbeat osd_stat(store_statfs(0x4f8031000/0x0/0x4ffc00000, data 0x3115aef/0x322c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 123682816 unmapped: 36306944 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:45.642051+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 196 handle_osd_map epochs [196,197], i have 196, src has [1,197]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b6c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 197 ms_handle_reset con 0x55f107d33800 session 0x55f10795f680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109aba400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 197 ms_handle_reset con 0x55f109aba400 session 0x55f10799c3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:46.642194+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 124092416 unmapped: 35897344 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abb400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abb800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 197 heartbeat osd_stat(store_statfs(0x4f7ffe000/0x0/0x4ffc00000, data 0x31430f2/0x325d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abbc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 197 ms_handle_reset con 0x55f109abbc00 session 0x55f106883c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 197 ms_handle_reset con 0x55f107d33800 session 0x55f105baa5a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:47.642303+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 124141568 unmapped: 35848192 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 197 heartbeat osd_stat(store_statfs(0x4f8001000/0x0/0x4ffc00000, data 0x31430f2/0x325d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 197 handle_osd_map epochs [198,198], i have 197, src has [1,198]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 198 ms_handle_reset con 0x55f105d2b800 session 0x55f106883860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 198 ms_handle_reset con 0x55f1081b5000 session 0x55f105b8da40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:48.642471+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 127844352 unmapped: 32145408 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 198 handle_osd_map epochs [198,199], i have 198, src has [1,199]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 199 ms_handle_reset con 0x55f1081b5800 session 0x55f105b8dc20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1823514 data_alloc: 234881024 data_used: 26738688
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b4800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 199 ms_handle_reset con 0x55f1081b4800 session 0x55f1069805a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 199 ms_handle_reset con 0x55f1081b6c00 session 0x55f105b9e1e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 199 ms_handle_reset con 0x55f105d2b800 session 0x55f105b9ed20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:49.642730+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 127877120 unmapped: 32112640 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 199 heartbeat osd_stat(store_statfs(0x4f7ffa000/0x0/0x4ffc00000, data 0x3146884/0x3262000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:50.642946+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 127909888 unmapped: 32079872 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 199 ms_handle_reset con 0x55f107bf7c00 session 0x55f10698e5a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 199 ms_handle_reset con 0x55f1081b7000 session 0x55f1068834a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 199 ms_handle_reset con 0x55f1081b5000 session 0x55f105b8de00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 199 ms_handle_reset con 0x55f107d33800 session 0x55f105bab860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:51.643196+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 121659392 unmapped: 38330368 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.265389442s of 10.640091896s, submitted: 124
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:52.643505+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 121659392 unmapped: 38330368 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 199 handle_osd_map epochs [200,200], i have 199, src has [1,200]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 200 ms_handle_reset con 0x55f105d2b800 session 0x55f105bb6780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:53.643695+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 121667584 unmapped: 38322176 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 200 heartbeat osd_stat(store_statfs(0x4f8ec6000/0x0/0x4ffc00000, data 0x227a41d/0x2397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf7c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 200 ms_handle_reset con 0x55f107bf7c00 session 0x55f105bb63c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1650899 data_alloc: 234881024 data_used: 17035264
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:54.643897+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 121675776 unmapped: 38313984 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 200 handle_osd_map epochs [201,201], i have 200, src has [1,201]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 201 ms_handle_reset con 0x55f1081b7000 session 0x55f10795fe00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b6c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 201 ms_handle_reset con 0x55f1081b6c00 session 0x55f105b9a3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b6c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 201 ms_handle_reset con 0x55f1081b6c00 session 0x55f10646f680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:55.644089+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 121692160 unmapped: 38297600 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 201 ms_handle_reset con 0x55f105d2b800 session 0x55f1079f23c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf7c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 201 ms_handle_reset con 0x55f107d33800 session 0x55f106d38b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:56.644332+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 38281216 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 201 handle_osd_map epochs [202,202], i have 201, src has [1,202]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 202 ms_handle_reset con 0x55f1081b7000 session 0x55f106980f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 202 ms_handle_reset con 0x55f107bf7c00 session 0x55f1076ced20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 44K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 3104 syncs, 3.50 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5192 writes, 21K keys, 5192 commit groups, 1.0 writes per commit group, ingest: 12.46 MB, 0.02 MB/s
                                           Interval WAL: 5192 writes, 2217 syncs, 2.34 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 202 heartbeat osd_stat(store_statfs(0x4f8ec4000/0x0/0x4ffc00000, data 0x227be80/0x239a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:57.644484+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 121724928 unmapped: 38264832 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 202 ms_handle_reset con 0x55f105d2b800 session 0x55f106990f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 202 heartbeat osd_stat(store_statfs(0x4f8ec1000/0x0/0x4ffc00000, data 0x227d9ef/0x239c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 202 ms_handle_reset con 0x55f107d33800 session 0x55f107720d20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:58.644712+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 37666816 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1694960 data_alloc: 234881024 data_used: 17186816
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:59.644903+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 120963072 unmapped: 39026688 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:00.645134+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 120963072 unmapped: 39026688 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:01.645355+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 120963072 unmapped: 39026688 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:02.645581+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 120963072 unmapped: 39026688 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: mgrc ms_handle_reset ms_handle_reset con 0x55f105187c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3360631616
Oct 11 04:30:01 compute-0 ceph-osd[89722]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3360631616,v1:192.168.122.100:6801/3360631616]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: get_auth_request con 0x55f107bf7c00 auth_method 0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: mgrc handle_mgr_configure stats_period=5
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:03.645856+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 202 heartbeat osd_stat(store_statfs(0x4f8a68000/0x0/0x4ffc00000, data 0x26d79ef/0x27f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 121167872 unmapped: 38821888 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1700208 data_alloc: 234881024 data_used: 17186816
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:04.646045+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 121167872 unmapped: 38821888 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 202 handle_osd_map epochs [202,203], i have 202, src has [1,203]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.673974037s of 12.131446838s, submitted: 142
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b6c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 203 ms_handle_reset con 0x55f1081b6c00 session 0x55f105b8cf00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:05.646282+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 121184256 unmapped: 38805504 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:06.646476+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 121184256 unmapped: 38805504 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:07.646690+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 121184256 unmapped: 38805504 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 203 heartbeat osd_stat(store_statfs(0x4f8a63000/0x0/0x4ffc00000, data 0x26d9462/0x27fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 203 ms_handle_reset con 0x55f1081b7000 session 0x55f105b8c5a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b4c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 203 ms_handle_reset con 0x55f1081b5400 session 0x55f105baa1e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:08.646866+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 38871040 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704562 data_alloc: 234881024 data_used: 17195008
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:09.647058+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 38871040 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 203 handle_osd_map epochs [203,204], i have 203, src has [1,204]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 204 ms_handle_reset con 0x55f105d2b800 session 0x55f105b8de00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 204 ms_handle_reset con 0x55f1081b5400 session 0x55f105baa000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 204 ms_handle_reset con 0x55f107d33800 session 0x55f105bab0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:10.647244+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 121126912 unmapped: 38862848 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b6c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:11.647407+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 121126912 unmapped: 38862848 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 204 ms_handle_reset con 0x55f1081b7000 session 0x55f10767fa40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 204 handle_osd_map epochs [205,205], i have 204, src has [1,205]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 205 ms_handle_reset con 0x55f1081b6c00 session 0x55f105bab4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 205 heartbeat osd_stat(store_statfs(0x4f8a60000/0x0/0x4ffc00000, data 0x26db031/0x27fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:12.647581+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 121135104 unmapped: 38854656 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 205 handle_osd_map epochs [206,206], i have 205, src has [1,206]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 206 ms_handle_reset con 0x55f1081b5800 session 0x55f105baad20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 206 ms_handle_reset con 0x55f105d2b800 session 0x55f10799c780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:13.647742+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 121151488 unmapped: 38838272 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 206 handle_osd_map epochs [207,207], i have 206, src has [1,207]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 207 ms_handle_reset con 0x55f107d33800 session 0x55f10799de00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 207 heartbeat osd_stat(store_statfs(0x4f8a59000/0x0/0x4ffc00000, data 0x26de72b/0x2803000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1719316 data_alloc: 234881024 data_used: 17215488
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 207 ms_handle_reset con 0x55f1081b7000 session 0x55f108162780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:14.647935+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a7000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 207 heartbeat osd_stat(store_statfs(0x4f8a57000/0x0/0x4ffc00000, data 0x26e02fc/0x2806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 121143296 unmapped: 38846464 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 207 handle_osd_map epochs [208,208], i have 207, src has [1,208]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.010853767s of 10.163574219s, submitted: 49
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 208 ms_handle_reset con 0x55f1081a7000 session 0x55f108163e00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 208 ms_handle_reset con 0x55f1081b5400 session 0x55f1056bde00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:15.648141+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 121176064 unmapped: 38813696 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 208 handle_osd_map epochs [208,209], i have 208, src has [1,209]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 209 ms_handle_reset con 0x55f105d2b800 session 0x55f10772fc20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 209 ms_handle_reset con 0x55f107d33800 session 0x55f1063161e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:16.648389+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 121208832 unmapped: 38780928 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 209 handle_osd_map epochs [210,210], i have 209, src has [1,210]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 210 ms_handle_reset con 0x55f1081b5800 session 0x55f10562b860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:17.648581+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 38731776 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 210 ms_handle_reset con 0x55f1081b4c00 session 0x55f106316b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:18.648841+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 121257984 unmapped: 38731776 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 210 ms_handle_reset con 0x55f109abb400 session 0x55f1068825a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 210 ms_handle_reset con 0x55f109abb800 session 0x55f108163a40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 210 ms_handle_reset con 0x55f107d33800 session 0x55f105b9c1e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b4c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1629485 data_alloc: 234881024 data_used: 12709888
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 210 ms_handle_reset con 0x55f105d2b800 session 0x55f105baa960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:19.649049+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 118398976 unmapped: 41590784 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 210 handle_osd_map epochs [211,211], i have 210, src has [1,211]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 211 ms_handle_reset con 0x55f1081b4c00 session 0x55f1076e6000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 211 ms_handle_reset con 0x55f105d2b800 session 0x55f1081634a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 211 heartbeat osd_stat(store_statfs(0x4f931f000/0x0/0x4ffc00000, data 0x1e14644/0x1f3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:20.649307+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 118489088 unmapped: 41500672 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 211 handle_osd_map epochs [212,212], i have 211, src has [1,212]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 212 ms_handle_reset con 0x55f107d33800 session 0x55f106882960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:21.649502+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 42270720 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abb400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 212 ms_handle_reset con 0x55f109abb400 session 0x55f105bab680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abb800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 212 handle_osd_map epochs [212,213], i have 212, src has [1,213]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 212 handle_osd_map epochs [213,213], i have 213, src has [1,213]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 213 ms_handle_reset con 0x55f109abb800 session 0x55f1076ce3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:22.649660+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 118784000 unmapped: 41205760 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 213 ms_handle_reset con 0x55f1081b5400 session 0x55f107946960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:23.649793+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 213 ms_handle_reset con 0x55f105d2b800 session 0x55f106ae4960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 118800384 unmapped: 41189376 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 213 heartbeat osd_stat(store_statfs(0x4f9319000/0x0/0x4ffc00000, data 0x1e199d5/0x1f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1627254 data_alloc: 234881024 data_used: 12566528
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 213 ms_handle_reset con 0x55f107d33800 session 0x55f105b9b860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:24.649892+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 213 handle_osd_map epochs [213,214], i have 213, src has [1,214]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 118800384 unmapped: 41189376 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abb400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.420568466s of 10.077694893s, submitted: 180
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 214 ms_handle_reset con 0x55f109abb400 session 0x55f106ae4d20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:25.650059+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 118824960 unmapped: 41164800 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abb800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 214 handle_osd_map epochs [214,215], i have 214, src has [1,215]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 215 ms_handle_reset con 0x55f109abb800 session 0x55f106980d20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 215 heartbeat osd_stat(store_statfs(0x4f9313000/0x0/0x4ffc00000, data 0x1e1b526/0x1f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:26.650234+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 41123840 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 215 heartbeat osd_stat(store_statfs(0x4f930f000/0x0/0x4ffc00000, data 0x1e1d121/0x1f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:27.650397+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b7000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 41107456 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 215 ms_handle_reset con 0x55f1081b7000 session 0x55f10799c780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 215 ms_handle_reset con 0x55f1081b5800 session 0x55f106882b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 215 ms_handle_reset con 0x55f105d2b800 session 0x55f10799cf00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:28.650504+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 118898688 unmapped: 41091072 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 215 ms_handle_reset con 0x55f107d33800 session 0x55f108162d20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abb400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1643049 data_alloc: 234881024 data_used: 12587008
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 215 ms_handle_reset con 0x55f109abb400 session 0x55f105baaf00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:29.650653+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 41263104 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abb800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 215 ms_handle_reset con 0x55f109abb800 session 0x55f1063163c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:30.650812+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 118767616 unmapped: 41222144 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 215 handle_osd_map epochs [216,216], i have 215, src has [1,216]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 216 ms_handle_reset con 0x55f105d2b800 session 0x55f109b9b0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:31.650991+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 118775808 unmapped: 41213952 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:32.651185+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 216 ms_handle_reset con 0x55f107d33800 session 0x55f1077090e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 216 heartbeat osd_stat(store_statfs(0x4f930e000/0x0/0x4ffc00000, data 0x1e1eca0/0x1f50000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 41156608 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 216 ms_handle_reset con 0x55f1081b5800 session 0x55f105b9c3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abb400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 216 ms_handle_reset con 0x55f109abb400 session 0x55f10562ab40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abb800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 216 ms_handle_reset con 0x55f109abb800 session 0x55f10562b860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 216 ms_handle_reset con 0x55f105d2b800 session 0x55f10772fe00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:33.651352+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 216 ms_handle_reset con 0x55f107d33800 session 0x55f10772f2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 216 ms_handle_reset con 0x55f1081b5800 session 0x55f10772e3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 40632320 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 216 heartbeat osd_stat(store_statfs(0x4f915a000/0x0/0x4ffc00000, data 0x1fd2ca0/0x2104000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abb400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 216 ms_handle_reset con 0x55f109abb400 session 0x55f10772ed20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1665494 data_alloc: 234881024 data_used: 12599296
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:34.651469+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 40632320 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a6000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 216 handle_osd_map epochs [217,217], i have 216, src has [1,217]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.063012123s of 10.494909286s, submitted: 87
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:35.651588+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 217 handle_osd_map epochs [218,218], i have 217, src has [1,218]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 119406592 unmapped: 40583168 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 218 ms_handle_reset con 0x55f1081a6000 session 0x55f10795e3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:36.651737+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 40558592 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 218 ms_handle_reset con 0x55f107d33800 session 0x55f105bab0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 218 ms_handle_reset con 0x55f105d2b800 session 0x55f107720f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:37.651879+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 218 ms_handle_reset con 0x55f1081b5800 session 0x55f10561e000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 40460288 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abb400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 218 ms_handle_reset con 0x55f109abb400 session 0x55f10561e1e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a7800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:38.652026+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 218 ms_handle_reset con 0x55f1081a7800 session 0x55f1077090e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 40378368 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 218 ms_handle_reset con 0x55f105d2b800 session 0x55f107709a40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 218 ms_handle_reset con 0x55f107d33800 session 0x55f10795e3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 218 ms_handle_reset con 0x55f1081b5800 session 0x55f10772fe00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1677310 data_alloc: 234881024 data_used: 12607488
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:39.652324+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 218 heartbeat osd_stat(store_statfs(0x4f9129000/0x0/0x4ffc00000, data 0x20002b0/0x2135000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 119963648 unmapped: 40026112 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abb400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a7400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a6c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 218 ms_handle_reset con 0x55f1081a7400 session 0x55f105bab4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a6400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:40.652485+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 119947264 unmapped: 40042496 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 218 handle_osd_map epochs [219,219], i have 218, src has [1,219]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 219 ms_handle_reset con 0x55f1081a6400 session 0x55f105b8d2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 219 ms_handle_reset con 0x55f105d2b800 session 0x55f1069901e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 219 ms_handle_reset con 0x55f107d33800 session 0x55f1076cef00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:41.652694+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 119816192 unmapped: 40173568 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:42.653110+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 119816192 unmapped: 40173568 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a7400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 219 ms_handle_reset con 0x55f1081a7400 session 0x55f105b0ad20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 219 heartbeat osd_stat(store_statfs(0x4f9125000/0x0/0x4ffc00000, data 0x2001e81/0x2138000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:43.653250+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 119848960 unmapped: 40140800 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1689188 data_alloc: 234881024 data_used: 13185024
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:44.653386+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 119848960 unmapped: 40140800 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:45.653563+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 119848960 unmapped: 40140800 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:46.653701+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 119848960 unmapped: 40140800 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:47.653896+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 119848960 unmapped: 40140800 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 220 heartbeat osd_stat(store_statfs(0x4f9123000/0x0/0x4ffc00000, data 0x2003882/0x213a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:48.654078+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 119848960 unmapped: 40140800 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1692162 data_alloc: 234881024 data_used: 13185024
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.809909821s of 14.027153969s, submitted: 83
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 220 ms_handle_reset con 0x55f1081b5800 session 0x55f10698fe00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968fc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:49.654245+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 220 ms_handle_reset con 0x55f10968fc00 session 0x55f10698ef00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 119554048 unmapped: 40435712 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 220 heartbeat osd_stat(store_statfs(0x4f9124000/0x0/0x4ffc00000, data 0x2003882/0x213a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:50.654380+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 220 heartbeat osd_stat(store_statfs(0x4f9124000/0x0/0x4ffc00000, data 0x2003882/0x213a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 119554048 unmapped: 40435712 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 220 ms_handle_reset con 0x55f105d2b800 session 0x55f1081625a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:51.654669+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 122462208 unmapped: 37527552 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:52.654835+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 123183104 unmapped: 36806656 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:53.654978+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 35725312 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1741306 data_alloc: 234881024 data_used: 13545472
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:54.655160+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 35725312 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:55.655334+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 35725312 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 220 ms_handle_reset con 0x55f107d33800 session 0x55f1081621e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a7400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 220 heartbeat osd_stat(store_statfs(0x4f8c5e000/0x0/0x4ffc00000, data 0x24c9882/0x2600000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 220 ms_handle_reset con 0x55f1081a7400 session 0x55f105b9f860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:56.655475+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 220 ms_handle_reset con 0x55f1081b5800 session 0x55f106883860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968f800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 220 ms_handle_reset con 0x55f10968f800 session 0x55f10698e5a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 35373056 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:57.655645+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 35356672 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:58.655803+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 220 ms_handle_reset con 0x55f107d33800 session 0x55f10698f860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 35356672 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 220 handle_osd_map epochs [221,221], i have 220, src has [1,221]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a7400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 221 ms_handle_reset con 0x55f1081b5800 session 0x55f105b9a3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1848775 data_alloc: 234881024 data_used: 13557760
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 221 heartbeat osd_stat(store_statfs(0x4f8377000/0x0/0x4ffc00000, data 0x2db0882/0x2ee7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:59.655992+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 35332096 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 221 ms_handle_reset con 0x55f1081a7400 session 0x55f10698fc20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968f400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 221 handle_osd_map epochs [222,222], i have 221, src has [1,222]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.887927055s of 10.618182182s, submitted: 220
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 222 ms_handle_reset con 0x55f10968f400 session 0x55f106980780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 222 ms_handle_reset con 0x55f105d2b800 session 0x55f1079f34a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:00.656124+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 35323904 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:01.656296+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a7400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 222 ms_handle_reset con 0x55f1081a7400 session 0x55f106980b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 35323904 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 222 heartbeat osd_stat(store_statfs(0x4f7f23000/0x0/0x4ffc00000, data 0x31fe5d2/0x3339000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:02.656471+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 35323904 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968f400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:03.656603+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 222 ms_handle_reset con 0x55f10968f400 session 0x55f109b9af00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968f000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 222 ms_handle_reset con 0x55f10968e800 session 0x55f1076cf680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968ec00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 222 ms_handle_reset con 0x55f10968f000 session 0x55f107972000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109aba400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 35282944 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 222 ms_handle_reset con 0x55f109aba400 session 0x55f107973680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 222 heartbeat osd_stat(store_statfs(0x4f7f22000/0x0/0x4ffc00000, data 0x31fe634/0x333a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 222 handle_osd_map epochs [222,223], i have 222, src has [1,223]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 223 ms_handle_reset con 0x55f109abac00 session 0x55f106980d20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a7400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 223 ms_handle_reset con 0x55f1081a7400 session 0x55f106982b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968f000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 223 ms_handle_reset con 0x55f10968e800 session 0x55f105bab680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968f400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 223 ms_handle_reset con 0x55f10968f000 session 0x55f1069825a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1924428 data_alloc: 234881024 data_used: 13578240
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 223 ms_handle_reset con 0x55f10968ec00 session 0x55f105b8d680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:04.656780+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 125427712 unmapped: 34562048 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 223 handle_osd_map epochs [224,224], i have 223, src has [1,224]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 224 ms_handle_reset con 0x55f10968f400 session 0x55f105baaf00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 224 ms_handle_reset con 0x55f1081b5800 session 0x55f1069805a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:05.656937+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 34684928 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968ec00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 224 heartbeat osd_stat(store_statfs(0x4f7720000/0x0/0x4ffc00000, data 0x39fdd90/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:06.657094+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 34684928 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 224 handle_osd_map epochs [224,225], i have 224, src has [1,225]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 225 ms_handle_reset con 0x55f10968ec00 session 0x55f10562b860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:07.657257+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 34684928 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:08.657459+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 34668544 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 225 ms_handle_reset con 0x55f107d33800 session 0x55f109b9ab40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 225 heartbeat osd_stat(store_statfs(0x4f771c000/0x0/0x4ffc00000, data 0x39ff8ff/0x3b3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1930158 data_alloc: 234881024 data_used: 13582336
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:09.657618+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 34668544 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 225 handle_osd_map epochs [226,226], i have 225, src has [1,226]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.106430054s of 10.340330124s, submitted: 59
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:10.657762+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 226 ms_handle_reset con 0x55f109abb400 session 0x55f10772ed20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 226 ms_handle_reset con 0x55f1081a6c00 session 0x55f1079f2f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 34668544 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 226 ms_handle_reset con 0x55f107d33800 session 0x55f105baa3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:11.657989+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 124207104 unmapped: 35782656 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:12.658110+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 124207104 unmapped: 35782656 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:13.658836+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 124207104 unmapped: 35782656 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1862539 data_alloc: 234881024 data_used: 12660736
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:14.659073+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 226 heartbeat osd_stat(store_statfs(0x4f7dc4000/0x0/0x4ffc00000, data 0x335a352/0x349a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 124207104 unmapped: 35782656 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:15.659260+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 124207104 unmapped: 35782656 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:16.659401+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 226 handle_osd_map epochs [227,227], i have 226, src has [1,227]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 227 ms_handle_reset con 0x55f1081b5800 session 0x55f106d7f0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968ec00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 124215296 unmapped: 35774464 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 227 ms_handle_reset con 0x55f10968ec00 session 0x55f10562b680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968f400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 227 ms_handle_reset con 0x55f10968f400 session 0x55f107ed01e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:17.659555+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 227 ms_handle_reset con 0x55f107d33800 session 0x55f106882000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 124239872 unmapped: 35749888 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 227 heartbeat osd_stat(store_statfs(0x4f7dc3000/0x0/0x4ffc00000, data 0x335beb1/0x349b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a6c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 227 ms_handle_reset con 0x55f1081a6c00 session 0x55f1079f3a40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 227 ms_handle_reset con 0x55f1081b5800 session 0x55f107946960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:18.660387+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968ec00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 124510208 unmapped: 35479552 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1873019 data_alloc: 234881024 data_used: 12664832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:19.660562+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a7400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 124510208 unmapped: 35479552 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 227 ms_handle_reset con 0x55f10968ec00 session 0x55f10767fa40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:20.660718+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968f000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.313109398s of 10.449379921s, submitted: 52
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 227 ms_handle_reset con 0x55f10968f000 session 0x55f105baad20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 35495936 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 227 ms_handle_reset con 0x55f107d33800 session 0x55f1079d7860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a6c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 227 ms_handle_reset con 0x55f1081a6c00 session 0x55f105b0b4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 227 ms_handle_reset con 0x55f1081b5800 session 0x55f1079f3680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968ec00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 227 ms_handle_reset con 0x55f10968ec00 session 0x55f10799c3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:21.660876+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 227 ms_handle_reset con 0x55f109abac00 session 0x55f10646f0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 227 ms_handle_reset con 0x55f107d33800 session 0x55f105d03680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a6c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 227 ms_handle_reset con 0x55f1081a6c00 session 0x55f105bab0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 227 ms_handle_reset con 0x55f1081b5800 session 0x55f10772fe00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 33185792 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968ec00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 227 ms_handle_reset con 0x55f10968ec00 session 0x55f107708780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109aba400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 227 ms_handle_reset con 0x55f109aba400 session 0x55f10561f0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:22.661003+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 227 ms_handle_reset con 0x55f107d33800 session 0x55f10561e1e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 227 heartbeat osd_stat(store_statfs(0x4f75f7000/0x0/0x4ffc00000, data 0x3b24ef4/0x3c67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 126787584 unmapped: 33202176 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:23.661183+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 126787584 unmapped: 33202176 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1986302 data_alloc: 234881024 data_used: 19714048
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:24.661343+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 227 handle_osd_map epochs [227,228], i have 227, src has [1,228]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a6c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 126795776 unmapped: 33193984 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 228 ms_handle_reset con 0x55f1081a6c00 session 0x55f107720f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 228 ms_handle_reset con 0x55f1081b5800 session 0x55f1079d6780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968ec00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:25.661523+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 228 ms_handle_reset con 0x55f10968ec00 session 0x55f105b9c780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 228 heartbeat osd_stat(store_statfs(0x4f75f1000/0x0/0x4ffc00000, data 0x3b269c9/0x3c6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abb000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 228 ms_handle_reset con 0x55f109abb000 session 0x55f1076e6780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 126910464 unmapped: 33079296 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 228 ms_handle_reset con 0x55f107d33800 session 0x55f105bb7e00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:26.662010+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a6c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 228 ms_handle_reset con 0x55f1081a6c00 session 0x55f106d385a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 126910464 unmapped: 33079296 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968ec00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 228 ms_handle_reset con 0x55f10968ec00 session 0x55f10698fe00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:27.662192+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abb000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 228 ms_handle_reset con 0x55f109abb000 session 0x55f106882000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 228 handle_osd_map epochs [228,229], i have 228, src has [1,229]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109aba800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 229 ms_handle_reset con 0x55f109aba800 session 0x55f1068830e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 126959616 unmapped: 33030144 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a6c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 229 ms_handle_reset con 0x55f107d33800 session 0x55f106883860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 229 heartbeat osd_stat(store_statfs(0x4f6b95000/0x0/0x4ffc00000, data 0x4583977/0x46c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968ec00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:28.662330+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abb000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 229 handle_osd_map epochs [229,230], i have 229, src has [1,230]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 230 ms_handle_reset con 0x55f1081a6c00 session 0x55f105b9ed20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 127787008 unmapped: 32202752 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 230 heartbeat osd_stat(store_statfs(0x4f6a1b000/0x0/0x4ffc00000, data 0x46fa556/0x4842000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 230 ms_handle_reset con 0x55f1081b5800 session 0x55f106317a40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:29.662477+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2095647 data_alloc: 234881024 data_used: 19738624
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 128376832 unmapped: 31612928 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abbc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 230 ms_handle_reset con 0x55f109abbc00 session 0x55f106983680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:30.662593+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 138649600 unmapped: 21340160 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.909643173s of 10.443037033s, submitted: 139
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:31.662744+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 141287424 unmapped: 18702336 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968f400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abb800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 230 ms_handle_reset con 0x55f109abb800 session 0x55f105b9c1e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 230 ms_handle_reset con 0x55f10968f400 session 0x55f10567da40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a6c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 230 heartbeat osd_stat(store_statfs(0x4f5ded000/0x0/0x4ffc00000, data 0x5703135/0x5058000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [0,0,0,1,1,1])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:32.662887+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 230 ms_handle_reset con 0x55f1081a6c00 session 0x55f10772ed20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 230 handle_osd_map epochs [231,231], i have 230, src has [1,231]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 231 ms_handle_reset con 0x55f1081b5800 session 0x55f10646f0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abb800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 140902400 unmapped: 19087360 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 231 ms_handle_reset con 0x55f109abb800 session 0x55f105b9e3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 231 ms_handle_reset con 0x55f107d33800 session 0x55f108163e00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:33.663085+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 231 heartbeat osd_stat(store_statfs(0x4f5a50000/0x0/0x4ffc00000, data 0x5aa9135/0x53fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 140918784 unmapped: 19070976 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:34.663380+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2330312 data_alloc: 251658240 data_used: 28643328
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 231 handle_osd_map epochs [231,232], i have 231, src has [1,232]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 232 ms_handle_reset con 0x55f107d33800 session 0x55f105b9e1e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 140926976 unmapped: 19062784 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a6c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 232 ms_handle_reset con 0x55f1081a6c00 session 0x55f10646f680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:35.663530+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 232 heartbeat osd_stat(store_statfs(0x4f5df0000/0x0/0x4ffc00000, data 0x5706821/0x505d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 140943360 unmapped: 19046400 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:36.663745+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 232 handle_osd_map epochs [233,233], i have 232, src has [1,233]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 232 handle_osd_map epochs [233,233], i have 233, src has [1,233]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 19030016 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 233 ms_handle_reset con 0x55f1081b5800 session 0x55f105b8de00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:37.663908+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968f400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 233 ms_handle_reset con 0x55f10968f400 session 0x55f1056bd680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 19021824 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 233 ms_handle_reset con 0x55f1081a7400 session 0x55f10562b4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 233 handle_osd_map epochs [234,234], i have 233, src has [1,234]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 234 ms_handle_reset con 0x55f10968e800 session 0x55f1077092c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 234 ms_handle_reset con 0x55f107d33800 session 0x55f1069812c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a6c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:38.664085+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 234 ms_handle_reset con 0x55f1081a6c00 session 0x55f105bb7c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 18956288 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:39.664223+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2263303 data_alloc: 251658240 data_used: 28667904
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 234 heartbeat osd_stat(store_statfs(0x4f6431000/0x0/0x4ffc00000, data 0x50c0f46/0x4a15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 140828672 unmapped: 19161088 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:40.664419+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 141606912 unmapped: 18382848 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 234 ms_handle_reset con 0x55f1081b5800 session 0x55f105baa3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:41.664610+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968f400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.481028557s of 10.300889969s, submitted: 217
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abb800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abbc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 234 handle_osd_map epochs [234,235], i have 234, src has [1,235]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 234 handle_osd_map epochs [235,235], i have 235, src has [1,235]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 235 ms_handle_reset con 0x55f10968f400 session 0x55f1079d65a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 235 ms_handle_reset con 0x55f109abbc00 session 0x55f107720d20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a6c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 19136512 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 235 ms_handle_reset con 0x55f1081b5800 session 0x55f105b8c5a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:42.665020+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 235 ms_handle_reset con 0x55f109abb800 session 0x55f1077081e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 235 ms_handle_reset con 0x55f1081a6c00 session 0x55f1063165a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 235 ms_handle_reset con 0x55f107d33800 session 0x55f105b8d2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 235 ms_handle_reset con 0x55f107d33800 session 0x55f1076cf4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 235 handle_osd_map epochs [235,236], i have 235, src has [1,236]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 236 ms_handle_reset con 0x55f10968e800 session 0x55f106d39c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 138280960 unmapped: 21708800 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a6c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 236 ms_handle_reset con 0x55f1081a6c00 session 0x55f1079721e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 236 ms_handle_reset con 0x55f109192c00 session 0x55f104fb12c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:43.665369+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 137043968 unmapped: 22945792 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:44.665545+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1961124 data_alloc: 234881024 data_used: 23175168
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 236 handle_osd_map epochs [236,237], i have 236, src has [1,237]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 135536640 unmapped: 24453120 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:45.665710+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 237 heartbeat osd_stat(store_statfs(0x4f9180000/0x0/0x4ffc00000, data 0x2ba033f/0x2ced000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abb800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 237 ms_handle_reset con 0x55f109abb800 session 0x55f10795f4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 135536640 unmapped: 24453120 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 237 ms_handle_reset con 0x55f1081b5800 session 0x55f105b8d0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a6c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 237 ms_handle_reset con 0x55f1081a6c00 session 0x55f1068825a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:46.665852+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 237 handle_osd_map epochs [238,238], i have 237, src has [1,238]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 238 ms_handle_reset con 0x55f109192c00 session 0x55f10795e1e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 135544832 unmapped: 24444928 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 238 ms_handle_reset con 0x55f10968e800 session 0x55f106980960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abbc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 238 ms_handle_reset con 0x55f109abbc00 session 0x55f105baaf00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 238 ms_handle_reset con 0x55f107d33800 session 0x55f10795eb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:47.666019+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 135544832 unmapped: 24444928 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a6c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:48.666215+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 238 ms_handle_reset con 0x55f1081a6c00 session 0x55f105b8d680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 135544832 unmapped: 24444928 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 238 handle_osd_map epochs [239,239], i have 238, src has [1,239]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:49.666359+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1947110 data_alloc: 234881024 data_used: 20635648
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 239 heartbeat osd_stat(store_statfs(0x4f9177000/0x0/0x4ffc00000, data 0x2ba6ab9/0x2cf6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 135544832 unmapped: 24444928 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:50.666646+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 239 handle_osd_map epochs [239,240], i have 239, src has [1,240]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 135561216 unmapped: 24428544 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 240 ms_handle_reset con 0x55f1081b5800 session 0x55f106d7f2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 240 ms_handle_reset con 0x55f109192c00 session 0x55f107708780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:51.666867+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.010061264s of 10.475114822s, submitted: 147
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 240 ms_handle_reset con 0x55f109192800 session 0x55f105b0a960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 240 handle_osd_map epochs [241,241], i have 240, src has [1,241]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 135561216 unmapped: 24428544 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 241 heartbeat osd_stat(store_statfs(0x4f9176000/0x0/0x4ffc00000, data 0x2ba810a/0x2cf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:52.667055+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 135561216 unmapped: 24428544 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:53.667249+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 241 ms_handle_reset con 0x55f107d33800 session 0x55f106d7fc20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 24412160 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:54.667539+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 241 handle_osd_map epochs [241,242], i have 241, src has [1,242]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1958712 data_alloc: 234881024 data_used: 20914176
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 135585792 unmapped: 24403968 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 242 heartbeat osd_stat(store_statfs(0x4f9170000/0x0/0x4ffc00000, data 0x2bab7f2/0x2cfd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:55.667728+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 242 ms_handle_reset con 0x55f10968e800 session 0x55f1079f3c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 135585792 unmapped: 24403968 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:56.667946+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 135585792 unmapped: 24403968 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 242 ms_handle_reset con 0x55f10968ec00 session 0x55f105baad20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 242 ms_handle_reset con 0x55f109abb000 session 0x55f105b9a780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 242 heartbeat osd_stat(store_statfs(0x4f9170000/0x0/0x4ffc00000, data 0x2bab7f2/0x2cfd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a6c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:57.668063+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 242 ms_handle_reset con 0x55f1081a6c00 session 0x55f106980b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 130187264 unmapped: 29802496 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:58.668356+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 130187264 unmapped: 29802496 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:59.668475+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1802194 data_alloc: 234881024 data_used: 12992512
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 130187264 unmapped: 29802496 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:00.670652+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 130187264 unmapped: 29802496 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:01.670883+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 242 heartbeat osd_stat(store_statfs(0x4f9e92000/0x0/0x4ffc00000, data 0x1e8b7e2/0x1fdc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 130187264 unmapped: 29802496 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.244455338s of 10.403511047s, submitted: 86
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 242 ms_handle_reset con 0x55f107d33800 session 0x55f10567d0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:02.671056+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 242 ms_handle_reset con 0x55f10968e800 session 0x55f105b9f0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968ec00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 242 ms_handle_reset con 0x55f10968ec00 session 0x55f105baa1e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abb000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 242 ms_handle_reset con 0x55f109abb000 session 0x55f1079734a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b5800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 242 ms_handle_reset con 0x55f1081b5800 session 0x55f10772e000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 242 ms_handle_reset con 0x55f107d33800 session 0x55f106991680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 242 ms_handle_reset con 0x55f10968e800 session 0x55f10562bc20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 29097984 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:03.671295+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 29097984 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:04.671433+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968ec00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1858541 data_alloc: 234881024 data_used: 12730368
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abb000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 242 ms_handle_reset con 0x55f109abb000 session 0x55f1076e7860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 130908160 unmapped: 29081600 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:05.671571+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 242 handle_osd_map epochs [243,243], i have 242, src has [1,243]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b4000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 243 ms_handle_reset con 0x55f109192c00 session 0x55f105b8dc20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 29057024 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:06.671810+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 244 ms_handle_reset con 0x55f1081b4000 session 0x55f109b9be00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 244 ms_handle_reset con 0x55f10968ec00 session 0x55f105b9cd20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b4000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 29048832 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 244 ms_handle_reset con 0x55f1081b4000 session 0x55f10772f4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 244 heartbeat osd_stat(store_statfs(0x4f9889000/0x0/0x4ffc00000, data 0x248f485/0x25e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:07.671992+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 244 ms_handle_reset con 0x55f10968e800 session 0x55f1079f3a40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abb000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 244 ms_handle_reset con 0x55f109abb000 session 0x55f1069821e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 244 ms_handle_reset con 0x55f109192c00 session 0x55f105b0b0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b4000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 139427840 unmapped: 20561920 heap: 159989760 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:08.672224+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 244 ms_handle_reset con 0x55f109192c00 session 0x55f109b9b4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 244 ms_handle_reset con 0x55f10968e800 session 0x55f1079d7860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 139427840 unmapped: 54165504 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:09.672359+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968ec00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2568607 data_alloc: 234881024 data_used: 12742656
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abb000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 133267456 unmapped: 60325888 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:10.672548+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 134684672 unmapped: 58908672 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:11.672738+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 134979584 unmapped: 58613760 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.919960022s of 10.027689934s, submitted: 343
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:12.672853+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 244 heartbeat osd_stat(store_statfs(0x4e66e6000/0x0/0x4ffc00000, data 0x14491064/0x145e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 244 ms_handle_reset con 0x55f1081b4000 session 0x55f109b9bc20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 244 ms_handle_reset con 0x55f107d33800 session 0x55f105bb6000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 244 heartbeat osd_stat(store_statfs(0x4e66e6000/0x0/0x4ffc00000, data 0x14491064/0x145e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 136192000 unmapped: 57401344 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:13.672996+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 244 heartbeat osd_stat(store_statfs(0x4e62e6000/0x0/0x4ffc00000, data 0x14891064/0x149e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081a4800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 136192000 unmapped: 57401344 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:14.673202+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3952182 data_alloc: 234881024 data_used: 18087936
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 245 heartbeat osd_stat(store_statfs(0x4e62e7000/0x0/0x4ffc00000, data 0x14891002/0x149e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 245 ms_handle_reset con 0x55f1081a4800 session 0x55f10767fa40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 136232960 unmapped: 57360384 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:15.673364+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 245 heartbeat osd_stat(store_statfs(0x4e62e4000/0x0/0x4ffc00000, data 0x14892b71/0x149e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 245 ms_handle_reset con 0x55f107d33800 session 0x55f107972b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 136241152 unmapped: 57352192 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:16.673555+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 136241152 unmapped: 57352192 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:17.673702+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b4000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 245 ms_handle_reset con 0x55f109192c00 session 0x55f105b9ef00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 136241152 unmapped: 57352192 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:18.673907+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 245 handle_osd_map epochs [246,246], i have 245, src has [1,246]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 246 ms_handle_reset con 0x55f10968e800 session 0x55f1076cfa40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 57311232 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:19.674069+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3961918 data_alloc: 234881024 data_used: 18100224
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 246 handle_osd_map epochs [246,247], i have 246, src has [1,247]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 247 ms_handle_reset con 0x55f105d2ac00 session 0x55f107946f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107b53000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 247 ms_handle_reset con 0x55f1081b4000 session 0x55f10799de00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 247 ms_handle_reset con 0x55f107b53000 session 0x55f105b9e5a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 136241152 unmapped: 57352192 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:20.674243+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 247 handle_osd_map epochs [247,248], i have 247, src has [1,248]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 248 ms_handle_reset con 0x55f105d2ac00 session 0x55f10767e960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 248 heartbeat osd_stat(store_statfs(0x4e62db000/0x0/0x4ffc00000, data 0x14896325/0x149f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 136716288 unmapped: 56877056 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:21.674408+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 248 ms_handle_reset con 0x55f10968e800 session 0x55f106d39860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 248 handle_osd_map epochs [248,249], i have 248, src has [1,249]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 249 ms_handle_reset con 0x55f107d33800 session 0x55f106316b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10ab97400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a33f400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 249 ms_handle_reset con 0x55f10ab97400 session 0x55f105baa000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 142934016 unmapped: 50659328 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.366314888s of 10.055486679s, submitted: 145
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:22.674588+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 249 handle_osd_map epochs [250,250], i have 249, src has [1,250]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 250 ms_handle_reset con 0x55f10a33f400 session 0x55f10562a960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 250 ms_handle_reset con 0x55f109192c00 session 0x55f1069805a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 250 ms_handle_reset con 0x55f105d2ac00 session 0x55f106980960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 139460608 unmapped: 54132736 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:23.674725+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 250 heartbeat osd_stat(store_statfs(0x4e5a1f000/0x0/0x4ffc00000, data 0x15147c0b/0x152a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 139460608 unmapped: 54132736 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:24.674916+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 250 handle_osd_map epochs [250,251], i have 250, src has [1,251]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4068268 data_alloc: 234881024 data_used: 18952192
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107b53000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107d33800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 251 ms_handle_reset con 0x55f10968e800 session 0x55f105b9b2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 251 ms_handle_reset con 0x55f107d33800 session 0x55f10567c000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 160636928 unmapped: 32956416 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:25.675112+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 251 ms_handle_reset con 0x55f109192c00 session 0x55f105b9c3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 139952128 unmapped: 53641216 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:26.675326+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 251 handle_osd_map epochs [252,252], i have 251, src has [1,252]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a33f400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 252 ms_handle_reset con 0x55f10a33f400 session 0x55f106981e00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 144883712 unmapped: 48709632 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:27.675499+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 252 heartbeat osd_stat(store_statfs(0x4dde21000/0x0/0x4ffc00000, data 0x1cd4b2bd/0x1cead000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [0,0,0,0,1])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 252 ms_handle_reset con 0x55f107bee000 session 0x55f105b9da40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 253 ms_handle_reset con 0x55f10968e800 session 0x55f105b9b2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 149749760 unmapped: 43843584 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:28.675731+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10ac62800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 141901824 unmapped: 51691520 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:29.675919+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6313904 data_alloc: 234881024 data_used: 18968576
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 253 ms_handle_reset con 0x55f105d2ac00 session 0x55f108162f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 253 handle_osd_map epochs [253,254], i have 253, src has [1,254]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 254 ms_handle_reset con 0x55f107b53000 session 0x55f106ae4d20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 254 ms_handle_reset con 0x55f10ac62800 session 0x55f10767e960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 142188544 unmapped: 51404800 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:30.676108+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 255 ms_handle_reset con 0x55f107bee000 session 0x55f107972b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 142221312 unmapped: 51372032 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:31.676410+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 255 ms_handle_reset con 0x55f109192c00 session 0x55f105bb6000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 142221312 unmapped: 51372032 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:32.676577+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 255 heartbeat osd_stat(store_statfs(0x4d1613000/0x0/0x4ffc00000, data 0x29552223/0x296bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.982084274s of 10.766675949s, submitted: 170
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a33f400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 142245888 unmapped: 51347456 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10ac62c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 256 ms_handle_reset con 0x55f10ac62c00 session 0x55f105b0b0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:33.676730+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 256 ms_handle_reset con 0x55f10968e800 session 0x55f1079d7860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 256 handle_osd_map epochs [256,257], i have 256, src has [1,257]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107b53000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 142311424 unmapped: 51281920 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 257 ms_handle_reset con 0x55f107b53000 session 0x55f105b9cd20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:34.676929+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 257 ms_handle_reset con 0x55f107bee000 session 0x55f105b8dc20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 257 handle_osd_map epochs [258,258], i have 257, src has [1,258]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 258 ms_handle_reset con 0x55f10a33f400 session 0x55f106991860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6365129 data_alloc: 234881024 data_used: 18997248
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 258 heartbeat osd_stat(store_statfs(0x4d1607000/0x0/0x4ffc00000, data 0x29557610/0x296c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 142336000 unmapped: 51257344 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:35.677102+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 142336000 unmapped: 51257344 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:36.677299+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 258 ms_handle_reset con 0x55f109192c00 session 0x55f105b8da40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107b53000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 258 ms_handle_reset con 0x55f107b53000 session 0x55f105b8c780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 258 ms_handle_reset con 0x55f10968e800 session 0x55f106d7e960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a33f400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10ac62800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 142835712 unmapped: 50757632 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 259 ms_handle_reset con 0x55f10a33f400 session 0x55f105bb63c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:37.677471+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10ac63000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 259 ms_handle_reset con 0x55f10ac63000 session 0x55f106981a40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 259 heartbeat osd_stat(store_statfs(0x4d1603000/0x0/0x4ffc00000, data 0x295591a9/0x296c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 259 ms_handle_reset con 0x55f10ac62800 session 0x55f1079d7a40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 142860288 unmapped: 50733056 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 260 ms_handle_reset con 0x55f107bee000 session 0x55f105bab4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:38.677683+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 260 heartbeat osd_stat(store_statfs(0x4d1600000/0x0/0x4ffc00000, data 0x2955ada4/0x296cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 142868480 unmapped: 50724864 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:39.677909+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6377853 data_alloc: 234881024 data_used: 19021824
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107b53000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a33f400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 260 ms_handle_reset con 0x55f10a33f400 session 0x55f105b8d680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 260 ms_handle_reset con 0x55f10968e800 session 0x55f105b8c000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10ac63000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 142884864 unmapped: 50708480 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:40.678130+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 260 ms_handle_reset con 0x55f10ac63000 session 0x55f107708780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10ac63400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 260 ms_handle_reset con 0x55f10ac63400 session 0x55f106ae5860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 260 handle_osd_map epochs [261,261], i have 260, src has [1,261]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 143851520 unmapped: 49741824 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 261 ms_handle_reset con 0x55f10968e800 session 0x55f106317c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:41.678348+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 262 ms_handle_reset con 0x55f107bee000 session 0x55f1081630e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 144941056 unmapped: 48652288 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 262 ms_handle_reset con 0x55f107b53000 session 0x55f10567cb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:42.678529+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a33f400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.762645721s of 10.073081970s, submitted: 107
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 262 ms_handle_reset con 0x55f10a33f400 session 0x55f106d385a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 144973824 unmapped: 48619520 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:43.678710+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10ac63000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 262 heartbeat osd_stat(store_statfs(0x4d13d2000/0x0/0x4ffc00000, data 0x29783548/0x298fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 262 ms_handle_reset con 0x55f10939e000 session 0x55f105b8cb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107b53000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 144998400 unmapped: 48594944 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:44.678867+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a33f400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939fc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 263 ms_handle_reset con 0x55f10a33f400 session 0x55f106981c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 263 handle_osd_map epochs [263,264], i have 263, src has [1,264]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 264 ms_handle_reset con 0x55f10968e800 session 0x55f1079f2b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 264 ms_handle_reset con 0x55f10939fc00 session 0x55f1076cef00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939f800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 264 ms_handle_reset con 0x55f10939f800 session 0x55f1079d6780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6501120 data_alloc: 234881024 data_used: 19042304
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 264 ms_handle_reset con 0x55f10ac63000 session 0x55f105bb7e00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 145326080 unmapped: 48267264 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:45.679065+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939f800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 265 ms_handle_reset con 0x55f107bee000 session 0x55f105b0bc20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 265 ms_handle_reset con 0x55f107b53000 session 0x55f105b9d680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 145334272 unmapped: 48259072 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:46.679237+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 265 handle_osd_map epochs [265,266], i have 265, src has [1,266]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 266 ms_handle_reset con 0x55f10939f800 session 0x55f10567c000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939fc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 266 ms_handle_reset con 0x55f10939fc00 session 0x55f10698fc20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 266 heartbeat osd_stat(store_statfs(0x4d0b5f000/0x0/0x4ffc00000, data 0x29ff1446/0x2a16e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 145350656 unmapped: 48242688 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:47.679363+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a33f400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 145342464 unmapped: 48250880 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 266 heartbeat osd_stat(store_statfs(0x4d0b5f000/0x0/0x4ffc00000, data 0x29ff1446/0x2a16e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:48.679549+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 267 ms_handle_reset con 0x55f10a33f400 session 0x55f106980780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 145367040 unmapped: 48226304 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:49.679753+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107b53000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939f800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 267 ms_handle_reset con 0x55f10939f800 session 0x55f105b9ad20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6516805 data_alloc: 234881024 data_used: 19075072
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939fc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10ac63000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 268 ms_handle_reset con 0x55f10ac63000 session 0x55f1081630e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 268 ms_handle_reset con 0x55f107bee000 session 0x55f106983e00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 145440768 unmapped: 48152576 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:50.679973+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939f400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 269 ms_handle_reset con 0x55f10939fc00 session 0x55f105b8c1e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 269 ms_handle_reset con 0x55f107b53000 session 0x55f1079f2b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 269 heartbeat osd_stat(store_statfs(0x4d0b52000/0x0/0x4ffc00000, data 0x29ff69a7/0x2a17a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 145448960 unmapped: 48144384 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:51.680211+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 269 handle_osd_map epochs [269,270], i have 269, src has [1,270]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 270 ms_handle_reset con 0x55f10939f400 session 0x55f109b9a000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 145473536 unmapped: 48119808 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 270 heartbeat osd_stat(store_statfs(0x4d0b4e000/0x0/0x4ffc00000, data 0x29ff8578/0x2a17d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:52.680341+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939f800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.489175797s of 10.038974762s, submitted: 134
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939fc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 270 ms_handle_reset con 0x55f10939fc00 session 0x55f106d7f0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 145481728 unmapped: 48111616 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:53.680524+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 270 handle_osd_map epochs [270,271], i have 270, src has [1,271]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 271 handle_osd_map epochs [271,271], i have 271, src has [1,271]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10ac63000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939f000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 271 ms_handle_reset con 0x55f10ac63000 session 0x55f109b9bc20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 145522688 unmapped: 48070656 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939ec00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:54.680693+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 272 ms_handle_reset con 0x55f10939ec00 session 0x55f105baa780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 272 ms_handle_reset con 0x55f10939f000 session 0x55f106ae4d20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939ec00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 272 ms_handle_reset con 0x55f10939ec00 session 0x55f107972960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939f400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 272 ms_handle_reset con 0x55f10939f400 session 0x55f105b9f4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939fc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 272 ms_handle_reset con 0x55f107bee000 session 0x55f1079d7860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 272 ms_handle_reset con 0x55f10939fc00 session 0x55f10795f2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10ac63000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6590638 data_alloc: 234881024 data_used: 19087360
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 272 ms_handle_reset con 0x55f10ac63000 session 0x55f10698e5a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 47923200 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:55.680885+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 273 heartbeat osd_stat(store_statfs(0x4cff6e000/0x0/0x4ffc00000, data 0x2a7c8a1f/0x2a94f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 47915008 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:56.681073+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 273 handle_osd_map epochs [273,274], i have 273, src has [1,274]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 274 ms_handle_reset con 0x55f107bee000 session 0x55f10698f0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 274 ms_handle_reset con 0x55f10939f800 session 0x55f105b0a5a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939ec00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 274 ms_handle_reset con 0x55f10939ec00 session 0x55f105b9d2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939f400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 274 ms_handle_reset con 0x55f10939f400 session 0x55f10799dc20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:57.683542+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 145874944 unmapped: 47718400 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939fc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10c914000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:58.683686+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 145883136 unmapped: 47710208 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10c914400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 274 ms_handle_reset con 0x55f10c914400 session 0x55f106882b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 274 handle_osd_map epochs [274,275], i have 274, src has [1,275]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 275 ms_handle_reset con 0x55f10939e800 session 0x55f1069905a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939ec00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 275 ms_handle_reset con 0x55f107bee000 session 0x55f106980f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:59.683801+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 148619264 unmapped: 44974080 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 276 ms_handle_reset con 0x55f10939ec00 session 0x55f105b8d860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939f400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 276 ms_handle_reset con 0x55f10939f400 session 0x55f1079f2f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 276 ms_handle_reset con 0x55f10c914000 session 0x55f10799c780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6663710 data_alloc: 234881024 data_used: 27058176
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:00.683972+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 151748608 unmapped: 41844736 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:01.684219+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 151822336 unmapped: 41771008 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 277 handle_osd_map epochs [277,278], i have 277, src has [1,278]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 278 ms_handle_reset con 0x55f107bee000 session 0x55f1079f34a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 278 ms_handle_reset con 0x55f10939e800 session 0x55f105b9b4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 278 heartbeat osd_stat(store_statfs(0x4cff37000/0x0/0x4ffc00000, data 0x2a7fb720/0x2a986000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939ec00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:02.684342+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 151863296 unmapped: 41730048 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 279 ms_handle_reset con 0x55f10939ec00 session 0x55f1076cf4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:03.684588+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 151904256 unmapped: 41689088 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 279 heartbeat osd_stat(store_statfs(0x4cff34000/0x0/0x4ffc00000, data 0x2a7fd2eb/0x2a988000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.080289841s of 10.903662682s, submitted: 212
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 279 ms_handle_reset con 0x55f10968e800 session 0x55f1079d6000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:04.684750+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 151937024 unmapped: 41656320 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6668488 data_alloc: 234881024 data_used: 27054080
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:05.684889+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 151953408 unmapped: 41639936 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 280 ms_handle_reset con 0x55f10968ec00 session 0x55f106983860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 280 ms_handle_reset con 0x55f109abb000 session 0x55f1056bcb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968ec00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 280 ms_handle_reset con 0x55f10968ec00 session 0x55f106882d20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 280 heartbeat osd_stat(store_statfs(0x4cff34000/0x0/0x4ffc00000, data 0x2a7fed90/0x2a989000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:06.685069+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 148832256 unmapped: 44761088 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:07.685245+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 148840448 unmapped: 44752896 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 280 handle_osd_map epochs [280,281], i have 280, src has [1,281]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 281 ms_handle_reset con 0x55f107bee000 session 0x55f10562a1e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:08.685400+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 148840448 unmapped: 44752896 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:09.685577+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 148840448 unmapped: 44752896 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939ec00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6582229 data_alloc: 234881024 data_used: 20897792
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 281 ms_handle_reset con 0x55f10968e800 session 0x55f10767e000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939f400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10c914000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 281 ms_handle_reset con 0x55f10c914000 session 0x55f106d39a40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 281 ms_handle_reset con 0x55f10939f400 session 0x55f106882b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10c914000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 281 ms_handle_reset con 0x55f10c914000 session 0x55f107972960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:10.685731+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 155541504 unmapped: 38051840 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 281 ms_handle_reset con 0x55f107bee000 session 0x55f1079f2b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 281 handle_osd_map epochs [281,282], i have 281, src has [1,282]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 281 handle_osd_map epochs [282,282], i have 282, src has [1,282]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 282 ms_handle_reset con 0x55f10968e800 session 0x55f10567c000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:11.685985+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 154959872 unmapped: 38633472 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 282 heartbeat osd_stat(store_statfs(0x4cecac000/0x0/0x4ffc00000, data 0x2ab02498/0x2aa71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968ec00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:12.743503+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 154992640 unmapped: 38600704 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 284 ms_handle_reset con 0x55f10939ec00 session 0x55f105bab4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 284 ms_handle_reset con 0x55f10968ec00 session 0x55f105b8da40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 284 ms_handle_reset con 0x55f107bee000 session 0x55f105b8cf00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:13.743693+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 155115520 unmapped: 38477824 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 284 heartbeat osd_stat(store_statfs(0x4cebef000/0x0/0x4ffc00000, data 0x2abb4d3e/0x2ab22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:14.743827+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 155115520 unmapped: 38477824 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 284 handle_osd_map epochs [284,285], i have 284, src has [1,285]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.598058701s of 10.418868065s, submitted: 322
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6685649 data_alloc: 234881024 data_used: 22323200
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:15.744005+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 154566656 unmapped: 39026688 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:16.744221+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 154566656 unmapped: 39026688 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:17.744357+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 154705920 unmapped: 38887424 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:18.744477+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 154705920 unmapped: 38887424 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 285 ms_handle_reset con 0x55f10939fc00 session 0x55f10799c5a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 285 ms_handle_reset con 0x55f10939e800 session 0x55f1076cfa40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 285 ms_handle_reset con 0x55f10939e400 session 0x55f1056bd860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:19.744601+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 154705920 unmapped: 38887424 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 285 heartbeat osd_stat(store_statfs(0x4cebd7000/0x0/0x4ffc00000, data 0x2abd8831/0x2ab47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6572697 data_alloc: 234881024 data_used: 14360576
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 285 ms_handle_reset con 0x55f10939e400 session 0x55f106991c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 285 ms_handle_reset con 0x55f107bee000 session 0x55f1079732c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:20.744733+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150110208 unmapped: 43483136 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:21.744914+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150110208 unmapped: 43483136 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:22.745026+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150110208 unmapped: 43483136 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939fc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 285 ms_handle_reset con 0x55f10939fc00 session 0x55f105b0b860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:23.745258+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150110208 unmapped: 43483136 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968ec00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 286 heartbeat osd_stat(store_statfs(0x4cf3c6000/0x0/0x4ffc00000, data 0x2a3e63f6/0x2a357000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939f400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 286 ms_handle_reset con 0x55f10968ec00 session 0x55f10795eb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:24.745427+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150118400 unmapped: 43474944 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 286 ms_handle_reset con 0x55f10939f400 session 0x55f105b0be00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.070068359s of 10.121883392s, submitted: 31
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 287 ms_handle_reset con 0x55f107bee000 session 0x55f10561eb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6574615 data_alloc: 234881024 data_used: 14237696
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 287 ms_handle_reset con 0x55f10939e800 session 0x55f1076e6f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:25.745577+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150118400 unmapped: 43474944 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 287 handle_osd_map epochs [287,288], i have 287, src has [1,288]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939fc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 288 ms_handle_reset con 0x55f10939fc00 session 0x55f10646e3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:26.745739+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968ec00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 288 ms_handle_reset con 0x55f10968ec00 session 0x55f1069830e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 153059328 unmapped: 40534016 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 288 ms_handle_reset con 0x55f10968e800 session 0x55f109b9a1e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 288 ms_handle_reset con 0x55f10939e800 session 0x55f1077210e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939fc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 289 ms_handle_reset con 0x55f10939e400 session 0x55f105bb6b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 289 ms_handle_reset con 0x55f10939fc00 session 0x55f1079730e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10968ec00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 289 ms_handle_reset con 0x55f107bee000 session 0x55f105b9e960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 289 ms_handle_reset con 0x55f10968ec00 session 0x55f105b8d860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:27.745881+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 289 heartbeat osd_stat(store_statfs(0x4ceb39000/0x0/0x4ffc00000, data 0x2b4cebd2/0x2abe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 153157632 unmapped: 40435712 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 290 ms_handle_reset con 0x55f107bee000 session 0x55f10646f680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:28.746085+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 153272320 unmapped: 40321024 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 290 ms_handle_reset con 0x55f10939e800 session 0x55f1079470e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 291 ms_handle_reset con 0x55f10939e400 session 0x55f106316b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:29.746274+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 152141824 unmapped: 41451520 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6727171 data_alloc: 234881024 data_used: 14241792
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:30.746479+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 152141824 unmapped: 41451520 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939fc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 291 ms_handle_reset con 0x55f10939fc00 session 0x55f1063165a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10c914000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 291 ms_handle_reset con 0x55f10c914000 session 0x55f106882000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:31.746693+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 152141824 unmapped: 41451520 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 291 ms_handle_reset con 0x55f107bee000 session 0x55f1068830e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 291 ms_handle_reset con 0x55f10939e800 session 0x55f1079465a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ceb2b000/0x0/0x4ffc00000, data 0x2b4d9ef1/0x2abf3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [0,1])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939fc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abb000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 291 ms_handle_reset con 0x55f10939e400 session 0x55f10562a780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 291 ms_handle_reset con 0x55f10939fc00 session 0x55f10698e960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:32.746838+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 152240128 unmapped: 41353216 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939f800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:33.747053+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 292 ms_handle_reset con 0x55f109abb000 session 0x55f10799de00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 292 ms_handle_reset con 0x55f10939f800 session 0x55f105bab0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150659072 unmapped: 42934272 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 292 ms_handle_reset con 0x55f107bee000 session 0x55f105bb7680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:34.747239+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150659072 unmapped: 42934272 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.635179520s of 10.285308838s, submitted: 216
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 293 ms_handle_reset con 0x55f10939e400 session 0x55f105bb6b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6591615 data_alloc: 234881024 data_used: 14254080
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 293 heartbeat osd_stat(store_statfs(0x4cf5ba000/0x0/0x4ffc00000, data 0x2a1e99ee/0x2a160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 293 ms_handle_reset con 0x55f10939e800 session 0x55f1077210e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:35.747393+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150675456 unmapped: 42917888 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939fc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10c914800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:36.747508+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10c914c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 293 ms_handle_reset con 0x55f10c914c00 session 0x55f1076e72c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150675456 unmapped: 42917888 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 294 ms_handle_reset con 0x55f10939fc00 session 0x55f1069830e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 294 ms_handle_reset con 0x55f10c914800 session 0x55f10698f4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10c914c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 294 ms_handle_reset con 0x55f10c914c00 session 0x55f10646e3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:37.747712+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150691840 unmapped: 42901504 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 294 ms_handle_reset con 0x55f107bee000 session 0x55f10561eb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:38.747869+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150691840 unmapped: 42901504 heap: 193593344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 295 ms_handle_reset con 0x55f10939e400 session 0x55f105b0be00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:39.748042+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150831104 unmapped: 55369728 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939fc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 295 ms_handle_reset con 0x55f10939fc00 session 0x55f109b9a960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6900551 data_alloc: 234881024 data_used: 14262272
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 295 heartbeat osd_stat(store_statfs(0x4cd811000/0x0/0x4ffc00000, data 0x2bf8ed34/0x2bf0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:40.748205+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 155312128 unmapped: 50888704 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10c914800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 295 ms_handle_reset con 0x55f10c914800 session 0x55f109b9ab40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10c914c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:41.748375+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 155688960 unmapped: 50511872 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 296 ms_handle_reset con 0x55f10c914c00 session 0x55f109b9ad20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939f800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:42.748509+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 160301056 unmapped: 45899776 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10c915000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 296 ms_handle_reset con 0x55f10939f800 session 0x55f109b9be00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 296 ms_handle_reset con 0x55f10c915000 session 0x55f1079f2b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939f800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:43.749415+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 164798464 unmapped: 41402368 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 296 ms_handle_reset con 0x55f10939f800 session 0x55f109b9b0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:44.749616+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 296 heartbeat osd_stat(store_statfs(0x4c200f000/0x0/0x4ffc00000, data 0x37790931/0x3770f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [0,0,0,0,0,1])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 156835840 unmapped: 49364992 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.961782932s of 10.032380104s, submitted: 150
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8520897 data_alloc: 234881024 data_used: 14278656
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:45.749793+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 297 ms_handle_reset con 0x55f107bee000 session 0x55f1079732c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 154329088 unmapped: 51871744 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 297 ms_handle_reset con 0x55f10939e800 session 0x55f105b9b860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939fc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 297 ms_handle_reset con 0x55f10939fc00 session 0x55f1056bcb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10c914800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10c914c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10c915400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 297 ms_handle_reset con 0x55f10c914800 session 0x55f1079d6000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 297 ms_handle_reset con 0x55f10c915400 session 0x55f109b9b2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:46.749962+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 154853376 unmapped: 51347456 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 297 ms_handle_reset con 0x55f10c914c00 session 0x55f1076cf860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:47.750208+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 297 ms_handle_reset con 0x55f107bee000 session 0x55f105b9b4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939f800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 297 ms_handle_reset con 0x55f10939e800 session 0x55f106980780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939fc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 154869760 unmapped: 51331072 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 297 ms_handle_reset con 0x55f10939fc00 session 0x55f1079d6f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 298 ms_handle_reset con 0x55f10939f800 session 0x55f1056bcb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 298 ms_handle_reset con 0x55f107bee000 session 0x55f1079732c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:48.750395+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 154886144 unmapped: 51314688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 298 ms_handle_reset con 0x55f10939e800 session 0x55f109b9b0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939fc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:49.750667+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 154902528 unmapped: 51298304 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 298 heartbeat osd_stat(store_statfs(0x4cfc0b000/0x0/0x4ffc00000, data 0x29b93e98/0x29b12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 298 ms_handle_reset con 0x55f10939fc00 session 0x55f109b9ad20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6652322 data_alloc: 234881024 data_used: 14286848
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10c914c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19271 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:50.750845+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 154927104 unmapped: 51273728 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10c915400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 299 ms_handle_reset con 0x55f10c915400 session 0x55f10772f2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 299 ms_handle_reset con 0x55f107bee000 session 0x55f10772f680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:51.751029+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 154927104 unmapped: 51273728 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 299 ms_handle_reset con 0x55f10c914c00 session 0x55f109b9ab40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 299 handle_osd_map epochs [301,301], i have 299, src has [1,301]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 299 handle_osd_map epochs [300,301], i have 299, src has [1,301]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939f800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 301 ms_handle_reset con 0x55f10939e800 session 0x55f109b9a960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 301 ms_handle_reset con 0x55f10939f800 session 0x55f10646f0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939fc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 301 ms_handle_reset con 0x55f10939fc00 session 0x55f10561eb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:52.751190+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 154345472 unmapped: 51855360 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 301 heartbeat osd_stat(store_statfs(0x4cfc09000/0x0/0x4ffc00000, data 0x29b959cf/0x29b14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:53.751359+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 154353664 unmapped: 51847168 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 302 handle_osd_map epochs [302,303], i have 302, src has [1,303]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 303 ms_handle_reset con 0x55f107bee000 session 0x55f10698f4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:54.751557+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 303 ms_handle_reset con 0x55f10939e800 session 0x55f1076e72c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939f800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 152420352 unmapped: 53780480 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 304 ms_handle_reset con 0x55f10939f800 session 0x55f105b8d860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 304 heartbeat osd_stat(store_statfs(0x4e52c4000/0x0/0x4ffc00000, data 0x142b7450/0x14457000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4316270 data_alloc: 234881024 data_used: 12955648
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939fc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.965749741s of 10.295836449s, submitted: 320
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:55.751809+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 152428544 unmapped: 53772288 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10c914c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 304 handle_osd_map epochs [304,305], i have 304, src has [1,305]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 305 ms_handle_reset con 0x55f10939fc00 session 0x55f1079730e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10c915800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 305 ms_handle_reset con 0x55f10c914c00 session 0x55f106ae5860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:56.751953+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 305 ms_handle_reset con 0x55f10c915800 session 0x55f105b9c960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150454272 unmapped: 55746560 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 305 ms_handle_reset con 0x55f107bee000 session 0x55f1056bde00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:57.752120+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150462464 unmapped: 55738368 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:58.752292+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150462464 unmapped: 55738368 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 305 ms_handle_reset con 0x55f10939e800 session 0x55f106980b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:59.752426+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150462464 unmapped: 55738368 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2354644 data_alloc: 234881024 data_used: 12972032
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 306 heartbeat osd_stat(store_statfs(0x4f76c0000/0x0/0x4ffc00000, data 0x1ebaaf2/0x205d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:00.752807+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939f800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 306 handle_osd_map epochs [306,307], i have 306, src has [1,307]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150462464 unmapped: 55738368 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 307 ms_handle_reset con 0x55f10939f800 session 0x55f105b0a3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:01.753035+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150462464 unmapped: 55738368 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939fc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 307 ms_handle_reset con 0x55f10939fc00 session 0x55f106317a40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:02.753236+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 307 heartbeat osd_stat(store_statfs(0x4f76bc000/0x0/0x4ffc00000, data 0x1ebc66f/0x2060000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150462464 unmapped: 55738368 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 307 heartbeat osd_stat(store_statfs(0x4f76bc000/0x0/0x4ffc00000, data 0x1ebc66f/0x2060000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:03.753440+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150462464 unmapped: 55738368 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 307 ms_handle_reset con 0x55f107bee000 session 0x55f105b9ba40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:04.753626+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939f800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150470656 unmapped: 55730176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 308 ms_handle_reset con 0x55f10939e800 session 0x55f1069834a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2362605 data_alloc: 234881024 data_used: 12980224
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:05.753791+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10c915800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 308 ms_handle_reset con 0x55f10c915800 session 0x55f10567c3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10c915c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.879675865s of 10.371082306s, submitted: 163
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150487040 unmapped: 55713792 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 309 ms_handle_reset con 0x55f10939f800 session 0x55f1063161e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 309 ms_handle_reset con 0x55f10c915c00 session 0x55f10698f2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:06.753933+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150495232 unmapped: 55705600 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 309 ms_handle_reset con 0x55f107bee000 session 0x55f109b9a000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:07.754131+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 309 handle_osd_map epochs [309,310], i have 309, src has [1,310]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150511616 unmapped: 55689216 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 310 ms_handle_reset con 0x55f10939e800 session 0x55f106d38b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939f800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 310 ms_handle_reset con 0x55f10939f800 session 0x55f10561e000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10c915800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 310 heartbeat osd_stat(store_statfs(0x4f76b4000/0x0/0x4ffc00000, data 0x1ec19e0/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:08.754281+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 310 ms_handle_reset con 0x55f10c915800 session 0x55f105baa780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 310 ms_handle_reset con 0x55f107c1d800 session 0x55f10561f2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150511616 unmapped: 55689216 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 310 ms_handle_reset con 0x55f107c1d800 session 0x55f1056bc5a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 310 ms_handle_reset con 0x55f107bee000 session 0x55f106980b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bef400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 310 ms_handle_reset con 0x55f107bef400 session 0x55f105b9c960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:09.754504+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150511616 unmapped: 55689216 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2369174 data_alloc: 234881024 data_used: 12988416
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:10.754714+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150511616 unmapped: 55689216 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 310 ms_handle_reset con 0x55f10939e800 session 0x55f1079730e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939f800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 310 ms_handle_reset con 0x55f10939f800 session 0x55f105b8d860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:11.754978+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150519808 unmapped: 55681024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:12.755141+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150519808 unmapped: 55681024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 311 ms_handle_reset con 0x55f107bee000 session 0x55f10561eb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:13.755323+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bef400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 311 ms_handle_reset con 0x55f107bef400 session 0x55f109b9b0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 311 heartbeat osd_stat(store_statfs(0x4f72a6000/0x0/0x4ffc00000, data 0x1ec195b/0x2068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [1])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 150544384 unmapped: 55656448 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 312 ms_handle_reset con 0x55f107c1d800 session 0x55f1056bcb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 312 ms_handle_reset con 0x55f10939e800 session 0x55f105b9cd20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:14.755512+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 151609344 unmapped: 54591488 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10c915800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 313 ms_handle_reset con 0x55f10c915800 session 0x55f1077090e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2384037 data_alloc: 234881024 data_used: 12992512
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:15.755741+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 151642112 unmapped: 54558720 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 313 ms_handle_reset con 0x55f107bee000 session 0x55f105b8cd20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bef400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.238830566s of 10.634375572s, submitted: 121
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 313 ms_handle_reset con 0x55f107c1d800 session 0x55f108163c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 313 ms_handle_reset con 0x55f107bef400 session 0x55f10698f0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:16.755897+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a33e000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 313 ms_handle_reset con 0x55f10939e800 session 0x55f1081632c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 151642112 unmapped: 54558720 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 314 ms_handle_reset con 0x55f10a33e000 session 0x55f10772e000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:17.756054+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 151642112 unmapped: 54558720 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bef400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 314 ms_handle_reset con 0x55f107bee000 session 0x55f10562a780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 314 ms_handle_reset con 0x55f107bef400 session 0x55f1079f3a40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a33f800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 314 ms_handle_reset con 0x55f10a33f800 session 0x55f1076e6780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a33e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 314 ms_handle_reset con 0x55f10a33e800 session 0x55f1076e70e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a33f400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 314 ms_handle_reset con 0x55f10a33f400 session 0x55f106ae52c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a33f400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 314 ms_handle_reset con 0x55f10a33f400 session 0x55f106ae43c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 314 heartbeat osd_stat(store_statfs(0x4f7297000/0x0/0x4ffc00000, data 0x1ec8b63/0x2076000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:18.756260+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 151781376 unmapped: 54419456 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 315 ms_handle_reset con 0x55f107c1d800 session 0x55f1079d7c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 315 ms_handle_reset con 0x55f10939e800 session 0x55f1069905a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:19.756471+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 151789568 unmapped: 54411264 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2440112 data_alloc: 234881024 data_used: 13008896
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:20.756613+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bef400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 315 ms_handle_reset con 0x55f107bee000 session 0x55f1079721e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 315 ms_handle_reset con 0x55f107bef400 session 0x55f106d7e000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 315 ms_handle_reset con 0x55f107bee000 session 0x55f1076ce5a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 151814144 unmapped: 54386688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bef400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 315 ms_handle_reset con 0x55f107bef400 session 0x55f106882d20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:21.756778+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 315 ms_handle_reset con 0x55f107c1d800 session 0x55f10795fa40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 151814144 unmapped: 54386688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 315 ms_handle_reset con 0x55f10939e800 session 0x55f107972960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a33f400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:22.756943+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 152076288 unmapped: 54124544 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 316 ms_handle_reset con 0x55f10a33f400 session 0x55f10698f0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:23.757220+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 316 ms_handle_reset con 0x55f107bee000 session 0x55f10561f2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 152461312 unmapped: 53739520 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bef400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f6d35000/0x0/0x4ffc00000, data 0x2426319/0x25d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:24.757354+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 152461312 unmapped: 53739520 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f6d35000/0x0/0x4ffc00000, data 0x2426319/0x25d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2452220 data_alloc: 234881024 data_used: 13053952
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:25.757473+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 317 ms_handle_reset con 0x55f10939e800 session 0x55f106ae5e00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a33e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a33f800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 152535040 unmapped: 53665792 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 317 ms_handle_reset con 0x55f10a33f800 session 0x55f106317c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 317 ms_handle_reset con 0x55f10a33e800 session 0x55f1069901e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107959400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 317 ms_handle_reset con 0x55f107959400 session 0x55f105b9de00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107959400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 317 ms_handle_reset con 0x55f107959400 session 0x55f106d385a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:26.757608+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 153427968 unmapped: 52772864 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.444011688s of 10.791432381s, submitted: 113
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:27.757729+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 153444352 unmapped: 52756480 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 318 ms_handle_reset con 0x55f107bee000 session 0x55f1076ce1e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 318 ms_handle_reset con 0x55f10939e800 session 0x55f1077090e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a33e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:28.757894+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a33f800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 318 ms_handle_reset con 0x55f107958000 session 0x55f10795e1e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 318 ms_handle_reset con 0x55f10a33f800 session 0x55f106983a40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 153501696 unmapped: 52699136 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 318 handle_osd_map epochs [318,319], i have 318, src has [1,319]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107959400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 319 ms_handle_reset con 0x55f107958000 session 0x55f1069834a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:29.758014+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 319 heartbeat osd_stat(store_statfs(0x4f6d2d000/0x0/0x4ffc00000, data 0x24299d5/0x25e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 153509888 unmapped: 52690944 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 319 ms_handle_reset con 0x55f107bee000 session 0x55f10562b680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 319 handle_osd_map epochs [319,320], i have 319, src has [1,320]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 320 ms_handle_reset con 0x55f10939e800 session 0x55f1069810e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 320 ms_handle_reset con 0x55f107959400 session 0x55f10795ed20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 320 ms_handle_reset con 0x55f10a33e800 session 0x55f1079d7860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2502907 data_alloc: 234881024 data_used: 17391616
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:30.758188+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 153387008 unmapped: 52813824 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a33e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 320 ms_handle_reset con 0x55f10a33e800 session 0x55f1069905a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:31.758397+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107959400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 320 ms_handle_reset con 0x55f107959400 session 0x55f10567cb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 153411584 unmapped: 52789248 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 320 handle_osd_map epochs [321,321], i have 320, src has [1,321]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 321 ms_handle_reset con 0x55f107958000 session 0x55f105b9f4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1094e0800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 321 ms_handle_reset con 0x55f1094e0800 session 0x55f1069825a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1080e0400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:32.758529+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 321 ms_handle_reset con 0x55f1080e0400 session 0x55f10795fe00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 153411584 unmapped: 52789248 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 321 ms_handle_reset con 0x55f107958000 session 0x55f1079721e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107959400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 321 handle_osd_map epochs [322,322], i have 321, src has [1,322]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 322 ms_handle_reset con 0x55f10939e800 session 0x55f105b8da40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1094e0800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 322 ms_handle_reset con 0x55f1094e0800 session 0x55f106980960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 322 ms_handle_reset con 0x55f107bee000 session 0x55f10567da40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:33.758751+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 153452544 unmapped: 52748288 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 323 ms_handle_reset con 0x55f107959400 session 0x55f105baaf00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:34.758913+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 323 ms_handle_reset con 0x55f107bee000 session 0x55f10567d680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 155426816 unmapped: 50774016 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 324 heartbeat osd_stat(store_statfs(0x4f6d12000/0x0/0x4ffc00000, data 0x2434695/0x25f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,1,0,2])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 324 ms_handle_reset con 0x55f107958000 session 0x55f1099543c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 324 ms_handle_reset con 0x55f10939e800 session 0x55f1076b52c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2564290 data_alloc: 234881024 data_used: 18141184
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:35.759048+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1094e0800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 324 ms_handle_reset con 0x55f1094e0800 session 0x55f1076b54a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 157712384 unmapped: 48488448 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a33e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 324 ms_handle_reset con 0x55f10a33e800 session 0x55f1076b4b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a33e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:36.759193+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 324 ms_handle_reset con 0x55f107958000 session 0x55f10772f680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 160440320 unmapped: 45760512 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 324 handle_osd_map epochs [324,325], i have 324, src has [1,325]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 325 ms_handle_reset con 0x55f107bee000 session 0x55f10561eb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:37.759387+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 160514048 unmapped: 45686784 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 325 handle_osd_map epochs [325,326], i have 325, src has [1,326]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.241136551s of 10.960977554s, submitted: 266
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 326 ms_handle_reset con 0x55f10939e800 session 0x55f1076b54a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 326 ms_handle_reset con 0x55f10a33e800 session 0x55f1076b50e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:38.759552+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 159662080 unmapped: 46538752 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1094e0800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1080e1800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:39.759778+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 159662080 unmapped: 46538752 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 326 handle_osd_map epochs [327,327], i have 326, src has [1,327]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 327 ms_handle_reset con 0x55f1080e1800 session 0x55f10698e000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2586276 data_alloc: 234881024 data_used: 18624512
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 327 ms_handle_reset con 0x55f107958000 session 0x55f109b9a5a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 327 ms_handle_reset con 0x55f1094e0800 session 0x55f1076b5a40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:40.759955+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 327 heartbeat osd_stat(store_statfs(0x4f67b2000/0x0/0x4ffc00000, data 0x2998889/0x2b5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 159670272 unmapped: 46530560 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 327 ms_handle_reset con 0x55f107bee000 session 0x55f1076dd4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:41.760261+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 159694848 unmapped: 46505984 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:42.760430+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 159694848 unmapped: 46505984 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1080e1800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 327 ms_handle_reset con 0x55f10939e800 session 0x55f1079f21e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:43.760686+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 327 heartbeat osd_stat(store_statfs(0x4f67ae000/0x0/0x4ffc00000, data 0x299a62c/0x2b5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 159694848 unmapped: 46505984 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 327 handle_osd_map epochs [327,328], i have 327, src has [1,328]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a33e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1080e1c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 328 ms_handle_reset con 0x55f10a33e800 session 0x55f105baa960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 328 ms_handle_reset con 0x55f107958000 session 0x55f10562b4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:44.760867+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 329 ms_handle_reset con 0x55f1080e1c00 session 0x55f1076b52c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1094e0800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 159703040 unmapped: 46497792 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 329 ms_handle_reset con 0x55f10939e800 session 0x55f10567d680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 329 ms_handle_reset con 0x55f1080e1800 session 0x55f1076dde00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2598274 data_alloc: 234881024 data_used: 18644992
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:45.761034+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1080e1000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 329 ms_handle_reset con 0x55f1094e0800 session 0x55f1076dd680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1094e0800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 330 ms_handle_reset con 0x55f1094e0800 session 0x55f1076dd860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 330 ms_handle_reset con 0x55f107bee000 session 0x55f10767f860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 159735808 unmapped: 46465024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 330 ms_handle_reset con 0x55f107958000 session 0x55f1076dc3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:46.761204+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 159735808 unmapped: 46465024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:47.761463+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1080e1800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 159350784 unmapped: 46850048 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1080e1c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 330 ms_handle_reset con 0x55f1080e1c00 session 0x55f105b9ef00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.893532753s of 10.034345627s, submitted: 71
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:48.761611+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 330 ms_handle_reset con 0x55f1080e1800 session 0x55f1076fab40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 330 handle_osd_map epochs [331,331], i have 330, src has [1,331]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 331 ms_handle_reset con 0x55f107bee000 session 0x55f1076e6f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1080e1c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 331 ms_handle_reset con 0x55f1080e1c00 session 0x55f10561e1e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 159424512 unmapped: 46776320 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 331 ms_handle_reset con 0x55f1080e1000 session 0x55f10795ed20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:49.761802+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 331 ms_handle_reset con 0x55f107958000 session 0x55f1076fa1e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 331 heartbeat osd_stat(store_statfs(0x4f659f000/0x0/0x4ffc00000, data 0x2ba2e58/0x2d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 331 handle_osd_map epochs [332,332], i have 331, src has [1,332]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1094e0800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 332 ms_handle_reset con 0x55f1094e0800 session 0x55f1076faf00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 159416320 unmapped: 46784512 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2630600 data_alloc: 234881024 data_used: 18649088
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 332 ms_handle_reset con 0x55f107958000 session 0x55f1056bdc20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:50.762003+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 159432704 unmapped: 46768128 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1080e1000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 332 ms_handle_reset con 0x55f1080e1000 session 0x55f104fb12c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:51.762210+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1080e1c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 332 handle_osd_map epochs [333,333], i have 332, src has [1,333]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 333 ms_handle_reset con 0x55f1080e1c00 session 0x55f107708780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 333 ms_handle_reset con 0x55f107bee000 session 0x55f106981a40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 333 ms_handle_reset con 0x55f10939e800 session 0x55f1081623c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 159432704 unmapped: 46768128 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 333 ms_handle_reset con 0x55f10939e800 session 0x55f106d7fc20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:52.762376+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 333 ms_handle_reset con 0x55f107958000 session 0x55f106d38000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 159432704 unmapped: 46768128 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1080e1000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 333 ms_handle_reset con 0x55f1080e1000 session 0x55f105b0a000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:53.762532+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1080e1c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 333 handle_osd_map epochs [334,334], i have 333, src has [1,334]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 334 ms_handle_reset con 0x55f107bee000 session 0x55f108163860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109193c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109193800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 334 ms_handle_reset con 0x55f109192000 session 0x55f1076cf680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 334 ms_handle_reset con 0x55f109193800 session 0x55f106882f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 334 ms_handle_reset con 0x55f109192800 session 0x55f1079f25a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 334 ms_handle_reset con 0x55f107958000 session 0x55f105b0b4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 170696704 unmapped: 35504128 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 334 ms_handle_reset con 0x55f107bee000 session 0x55f106ae43c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1080e1000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:54.762757+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 334 handle_osd_map epochs [334,335], i have 334, src has [1,335]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 335 ms_handle_reset con 0x55f1080e1000 session 0x55f106990000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 335 ms_handle_reset con 0x55f109193c00 session 0x55f1079465a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 335 ms_handle_reset con 0x55f109192000 session 0x55f1099543c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1080e1000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 335 handle_osd_map epochs [336,336], i have 335, src has [1,336]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 336 ms_handle_reset con 0x55f107958000 session 0x55f105bab680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 336 ms_handle_reset con 0x55f1080e1c00 session 0x55f107721a40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 336 ms_handle_reset con 0x55f1080e1000 session 0x55f105bb6780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 168845312 unmapped: 37355520 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1080e1000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2795671 data_alloc: 234881024 data_used: 24317952
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 336 heartbeat osd_stat(store_statfs(0x4f5655000/0x0/0x4ffc00000, data 0x3ae436c/0x3cb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:55.762888+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 336 ms_handle_reset con 0x55f1080e1000 session 0x55f106ae5860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 336 handle_osd_map epochs [337,337], i have 336, src has [1,337]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 168894464 unmapped: 37306368 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:56.763089+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 337 handle_osd_map epochs [337,338], i have 337, src has [1,338]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1080e1c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 338 ms_handle_reset con 0x55f1080e1c00 session 0x55f10562b2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 338 ms_handle_reset con 0x55f107958000 session 0x55f107708b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 168943616 unmapped: 37257216 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 338 ms_handle_reset con 0x55f109192000 session 0x55f1056bde00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109193c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 338 ms_handle_reset con 0x55f109193c00 session 0x55f106990f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:57.763220+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 168960000 unmapped: 37240832 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1080e1000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 338 ms_handle_reset con 0x55f1080e1000 session 0x55f107709a40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1080e1c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 338 ms_handle_reset con 0x55f107958000 session 0x55f10799cb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:58.763470+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.864970207s of 10.492430687s, submitted: 176
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 338 ms_handle_reset con 0x55f107bee000 session 0x55f10767e960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 338 ms_handle_reset con 0x55f109192000 session 0x55f1069832c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 338 handle_osd_map epochs [339,339], i have 338, src has [1,339]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 339 ms_handle_reset con 0x55f109192800 session 0x55f1079d6f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 339 ms_handle_reset con 0x55f109192800 session 0x55f1079732c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 339 ms_handle_reset con 0x55f1080e1c00 session 0x55f10772ed20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165617664 unmapped: 40583168 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:59.763678+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165617664 unmapped: 40583168 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 339 heartbeat osd_stat(store_statfs(0x4f5647000/0x0/0x4ffc00000, data 0x3aeb9cb/0x3cc6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2798810 data_alloc: 234881024 data_used: 24342528
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:00.763855+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 339 heartbeat osd_stat(store_statfs(0x4f5647000/0x0/0x4ffc00000, data 0x3aeb969/0x3cc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1080e1000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 339 ms_handle_reset con 0x55f1080e1000 session 0x55f106991c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 339 handle_osd_map epochs [340,340], i have 339, src has [1,340]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 340 ms_handle_reset con 0x55f109192000 session 0x55f10562b860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109193800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 340 ms_handle_reset con 0x55f109193800 session 0x55f106d7ef00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 340 ms_handle_reset con 0x55f107958000 session 0x55f109b9ad20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1080e1000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 340 ms_handle_reset con 0x55f1080e1000 session 0x55f109b9a000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1080e1c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 340 ms_handle_reset con 0x55f109192000 session 0x55f10795fc20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165658624 unmapped: 40542208 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 340 ms_handle_reset con 0x55f1080e1c00 session 0x55f105b9eb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109193800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 340 ms_handle_reset con 0x55f10939e800 session 0x55f106d7f2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 340 ms_handle_reset con 0x55f109193800 session 0x55f108163e00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 340 ms_handle_reset con 0x55f10939e800 session 0x55f107947a40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 340 ms_handle_reset con 0x55f107958000 session 0x55f10795f4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1080e1000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1080e1c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 340 ms_handle_reset con 0x55f1080e1000 session 0x55f10646f0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:01.764107+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 340 heartbeat osd_stat(store_statfs(0x4f5646000/0x0/0x4ffc00000, data 0x3aed530/0x3cc8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 340 handle_osd_map epochs [340,341], i have 340, src has [1,341]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 341 ms_handle_reset con 0x55f109192800 session 0x55f10799c780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 341 ms_handle_reset con 0x55f107bee000 session 0x55f109955c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165683200 unmapped: 40517632 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 341 ms_handle_reset con 0x55f1080e1c00 session 0x55f105b9eb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 341 ms_handle_reset con 0x55f109192800 session 0x55f106d7ef00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:02.764252+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 341 handle_osd_map epochs [341,342], i have 341, src has [1,342]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165691392 unmapped: 40509440 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 342 ms_handle_reset con 0x55f107958000 session 0x55f10562b860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:03.764507+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1080e1000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 342 handle_osd_map epochs [342,343], i have 342, src has [1,343]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 343 ms_handle_reset con 0x55f1080e1000 session 0x55f1079d6f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165699584 unmapped: 40501248 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 343 ms_handle_reset con 0x55f107958000 session 0x55f107708b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1080e1c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 343 ms_handle_reset con 0x55f1080e1c00 session 0x55f1079f25a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:04.764670+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 343 handle_osd_map epochs [344,344], i have 343, src has [1,344]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 344 ms_handle_reset con 0x55f109192800 session 0x55f106882f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165748736 unmapped: 40452096 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 344 ms_handle_reset con 0x55f107bee000 session 0x55f1077081e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109193800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 344 ms_handle_reset con 0x55f109193800 session 0x55f1076cf680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2812303 data_alloc: 234881024 data_used: 24354816
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 344 ms_handle_reset con 0x55f107958000 session 0x55f108163860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:05.764818+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 344 ms_handle_reset con 0x55f107bee000 session 0x55f105b0a000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165756928 unmapped: 40443904 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1080e1c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 344 heartbeat osd_stat(store_statfs(0x4f563c000/0x0/0x4ffc00000, data 0x3af410a/0x3cd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:06.764989+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 344 handle_osd_map epochs [344,345], i have 344, src has [1,345]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165765120 unmapped: 40435712 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 345 ms_handle_reset con 0x55f109192000 session 0x55f10698f860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:07.765113+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 345 handle_osd_map epochs [345,346], i have 345, src has [1,346]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 346 ms_handle_reset con 0x55f10939e800 session 0x55f105b9c780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107e5a800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166559744 unmapped: 39641088 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 346 ms_handle_reset con 0x55f107e5a800 session 0x55f109b9b4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 346 ms_handle_reset con 0x55f105d2ac00 session 0x55f105b9f0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:08.765232+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 346 heartbeat osd_stat(store_statfs(0x4f5638000/0x0/0x4ffc00000, data 0x3af781e/0x3cd5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 346 ms_handle_reset con 0x55f105d2b800 session 0x55f105b8cd20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 346 handle_osd_map epochs [347,347], i have 346, src has [1,347]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.779752731s of 10.338211060s, submitted: 191
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 347 ms_handle_reset con 0x55f105d2b000 session 0x55f10772ed20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 168353792 unmapped: 37847040 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:09.765406+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 168353792 unmapped: 37847040 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2850667 data_alloc: 251658240 data_used: 28061696
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:10.765580+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 347 handle_osd_map epochs [348,348], i have 347, src has [1,348]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 348 ms_handle_reset con 0x55f107958000 session 0x55f10562ab40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bee000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 168427520 unmapped: 37773312 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 348 ms_handle_reset con 0x55f107bee000 session 0x55f106d39680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:11.765739+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 348 heartbeat osd_stat(store_statfs(0x4f5636000/0x0/0x4ffc00000, data 0x3af9403/0x3cd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 168460288 unmapped: 37740544 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 348 ms_handle_reset con 0x55f105d2ac00 session 0x55f10799de00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 348 ms_handle_reset con 0x55f105d2b000 session 0x55f10767fa40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:12.765893+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 168493056 unmapped: 37707776 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:13.766069+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 348 ms_handle_reset con 0x55f109192000 session 0x55f1063161e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 348 ms_handle_reset con 0x55f107958000 session 0x55f1079730e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107e5b000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 348 ms_handle_reset con 0x55f10939e800 session 0x55f109b9ba40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 168501248 unmapped: 37699584 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 348 ms_handle_reset con 0x55f10939e800 session 0x55f105b9fc20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:14.766217+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 348 handle_osd_map epochs [349,349], i have 348, src has [1,349]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 349 ms_handle_reset con 0x55f107e5b000 session 0x55f106d381e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 349 ms_handle_reset con 0x55f105d2b000 session 0x55f108162d20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1094a1c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 349 ms_handle_reset con 0x55f107958000 session 0x55f108162960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 349 ms_handle_reset con 0x55f1094a1c00 session 0x55f106ae52c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 349 ms_handle_reset con 0x55f105d2b800 session 0x55f10795f680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 168525824 unmapped: 37675008 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 349 heartbeat osd_stat(store_statfs(0x4f5632000/0x0/0x4ffc00000, data 0x3afb040/0x3cdc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1094a1c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2864206 data_alloc: 251658240 data_used: 28094464
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:15.766380+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 349 handle_osd_map epochs [350,350], i have 349, src has [1,350]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 350 ms_handle_reset con 0x55f1094a1c00 session 0x55f106d39a40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 350 ms_handle_reset con 0x55f109192000 session 0x55f106ae54a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 350 ms_handle_reset con 0x55f105d2ac00 session 0x55f1069834a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 350 ms_handle_reset con 0x55f105d2b000 session 0x55f105bb7c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 168534016 unmapped: 37666816 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 350 heartbeat osd_stat(store_statfs(0x4f562c000/0x0/0x4ffc00000, data 0x3afcd0f/0x3ce0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:16.766532+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 350 handle_osd_map epochs [351,351], i have 350, src has [1,351]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 168542208 unmapped: 37658624 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 351 ms_handle_reset con 0x55f105d2ac00 session 0x55f106d39c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:17.766729+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 351 handle_osd_map epochs [351,352], i have 351, src has [1,352]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 168534016 unmapped: 37666816 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 352 ms_handle_reset con 0x55f105d2b800 session 0x55f1099543c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:18.766863+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 352 handle_osd_map epochs [352,353], i have 352, src has [1,353]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 169377792 unmapped: 36823040 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.808245659s of 10.113677025s, submitted: 104
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 353 ms_handle_reset con 0x55f109192000 session 0x55f105b0a960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1094a1c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:19.767021+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 353 handle_osd_map epochs [354,354], i have 353, src has [1,354]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 354 ms_handle_reset con 0x55f1094a1c00 session 0x55f1079730e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 169418752 unmapped: 36782080 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107e5b000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 354 ms_handle_reset con 0x55f107958000 session 0x55f10767fa40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2895410 data_alloc: 251658240 data_used: 30040064
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:20.767208+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 170475520 unmapped: 35725312 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 355 ms_handle_reset con 0x55f107e5b000 session 0x55f1079f2b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 355 ms_handle_reset con 0x55f105d2ac00 session 0x55f10562ab40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:21.767390+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 355 handle_osd_map epochs [355,356], i have 355, src has [1,356]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 170541056 unmapped: 35659776 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 356 heartbeat osd_stat(store_statfs(0x4f55da000/0x0/0x4ffc00000, data 0x3b4cfd8/0x3d32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:22.767566+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 356 ms_handle_reset con 0x55f105d2b800 session 0x55f10772ed20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 170631168 unmapped: 35569664 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 356 handle_osd_map epochs [357,357], i have 356, src has [1,357]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:23.767746+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 170631168 unmapped: 35569664 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 357 heartbeat osd_stat(store_statfs(0x4f55d5000/0x0/0x4ffc00000, data 0x3b50822/0x3d38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:24.767934+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 170663936 unmapped: 35536896 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2903044 data_alloc: 251658240 data_used: 30048256
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:25.768146+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 170680320 unmapped: 35520512 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:26.768373+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 170680320 unmapped: 35520512 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:27.768597+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 170680320 unmapped: 35520512 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 357 handle_osd_map epochs [358,358], i have 357, src has [1,358]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:28.768890+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 170680320 unmapped: 35520512 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 358 heartbeat osd_stat(store_statfs(0x4f55d2000/0x0/0x4ffc00000, data 0x3b52339/0x3d3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:29.769104+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 170680320 unmapped: 35520512 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2906018 data_alloc: 251658240 data_used: 30048256
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:30.769296+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.477982521s of 11.684353828s, submitted: 78
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 358 ms_handle_reset con 0x55f107958000 session 0x55f105b8cd20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 170688512 unmapped: 35512320 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 358 ms_handle_reset con 0x55f109192000 session 0x55f109b9b4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:31.769489+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 358 handle_osd_map epochs [358,359], i have 358, src has [1,359]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 359 ms_handle_reset con 0x55f107958000 session 0x55f105b9e1e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107e5b000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 170721280 unmapped: 35479552 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:32.769643+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 359 handle_osd_map epochs [359,360], i have 359, src has [1,360]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 360 handle_osd_map epochs [360,360], i have 360, src has [1,360]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 360 ms_handle_reset con 0x55f105d2b800 session 0x55f105b9c1e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 360 ms_handle_reset con 0x55f107e5b000 session 0x55f10767e000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 360 ms_handle_reset con 0x55f105d2ac00 session 0x55f10698f860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 170737664 unmapped: 35463168 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1094a1c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 360 ms_handle_reset con 0x55f1094a1c00 session 0x55f105b0a000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:33.769802+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 360 handle_osd_map epochs [361,361], i have 360, src has [1,361]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 361 ms_handle_reset con 0x55f105d2ac00 session 0x55f106882f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 361 heartbeat osd_stat(store_statfs(0x4f55c8000/0x0/0x4ffc00000, data 0x3b55c3f/0x3d44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 170811392 unmapped: 35389440 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:34.769924+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 361 handle_osd_map epochs [362,362], i have 361, src has [1,362]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 362 ms_handle_reset con 0x55f107958000 session 0x55f107972b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107e5b000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 362 ms_handle_reset con 0x55f105d2b800 session 0x55f105b9eb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 362 ms_handle_reset con 0x55f107e5b000 session 0x55f10767f2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 171950080 unmapped: 34250752 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2922687 data_alloc: 251658240 data_used: 30056448
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:35.770056+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939e800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 362 handle_osd_map epochs [362,363], i have 362, src has [1,363]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 363 ms_handle_reset con 0x55f10939e800 session 0x55f10799c780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 363 ms_handle_reset con 0x55f105d2ac00 session 0x55f10795f4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 171966464 unmapped: 34234368 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:36.770217+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 171966464 unmapped: 34234368 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:37.770420+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 363 ms_handle_reset con 0x55f105d2b800 session 0x55f107947a40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 171999232 unmapped: 34201600 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 363 handle_osd_map epochs [364,364], i have 363, src has [1,364]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:38.770624+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 364 ms_handle_reset con 0x55f107958000 session 0x55f106d7f2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107e5b000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 364 heartbeat osd_stat(store_statfs(0x4f55bf000/0x0/0x4ffc00000, data 0x3b5cb75/0x3d4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 172015616 unmapped: 34185216 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:39.770779+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 364 ms_handle_reset con 0x55f107e5b000 session 0x55f1081632c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 172015616 unmapped: 34185216 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf6000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 364 handle_osd_map epochs [365,365], i have 364, src has [1,365]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 365 ms_handle_reset con 0x55f107bf6000 session 0x55f1099541e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 365 ms_handle_reset con 0x55f105d2ac00 session 0x55f106990960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2931987 data_alloc: 251658240 data_used: 30060544
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:40.770897+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 172032000 unmapped: 34168832 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.264051437s of 10.585229874s, submitted: 126
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 365 ms_handle_reset con 0x55f105d2b800 session 0x55f106980d20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:41.771127+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 365 ms_handle_reset con 0x55f107958000 session 0x55f108163680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 172081152 unmapped: 34119680 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:42.771337+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107e5b000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 172081152 unmapped: 34119680 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf4c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 365 ms_handle_reset con 0x55f107bf4c00 session 0x55f1076b5c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:43.771428+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 365 handle_osd_map epochs [365,366], i have 365, src has [1,366]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 366 ms_handle_reset con 0x55f107e5b000 session 0x55f109954b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 366 ms_handle_reset con 0x55f105d2ac00 session 0x55f10646f680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 172138496 unmapped: 34062336 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 366 heartbeat osd_stat(store_statfs(0x4f55bb000/0x0/0x4ffc00000, data 0x3b5e7e4/0x3d53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 366 ms_handle_reset con 0x55f107958000 session 0x55f10799c3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf4c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:44.771572+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf5c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 366 handle_osd_map epochs [366,367], i have 366, src has [1,367]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 367 handle_osd_map epochs [367,367], i have 367, src has [1,367]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 367 ms_handle_reset con 0x55f107bf4c00 session 0x55f106ae5e00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 367 ms_handle_reset con 0x55f105d2b800 session 0x55f106983860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 367 ms_handle_reset con 0x55f107c1ac00 session 0x55f10799cb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 367 heartbeat osd_stat(store_statfs(0x4f55b7000/0x0/0x4ffc00000, data 0x3b603a9/0x3d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 172179456 unmapped: 34021376 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 367 heartbeat osd_stat(store_statfs(0x4f55b7000/0x0/0x4ffc00000, data 0x3b603a9/0x3d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:45.771730+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2949566 data_alloc: 251658240 data_used: 30085120
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 367 ms_handle_reset con 0x55f105d2ac00 session 0x55f10767f2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 172195840 unmapped: 34004992 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:46.771901+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 175882240 unmapped: 30318592 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 367 ms_handle_reset con 0x55f105d2b800 session 0x55f109b9b4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 367 ms_handle_reset con 0x55f107bf5c00 session 0x55f1076dcd20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:47.772108+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 175882240 unmapped: 30318592 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 367 ms_handle_reset con 0x55f107958000 session 0x55f106980960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:48.772293+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf4c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 367 handle_osd_map epochs [367,368], i have 367, src has [1,368]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 368 handle_osd_map epochs [368,368], i have 368, src has [1,368]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 368 ms_handle_reset con 0x55f107bf4c00 session 0x55f107721a40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 368 ms_handle_reset con 0x55f105d2ac00 session 0x55f1069901e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 176103424 unmapped: 30097408 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:49.772430+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 368 ms_handle_reset con 0x55f1080e1c00 session 0x55f10767e960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 368 ms_handle_reset con 0x55f109192800 session 0x55f105bab0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 368 handle_osd_map epochs [368,369], i have 368, src has [1,369]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf4c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 369 ms_handle_reset con 0x55f105d2b800 session 0x55f10772ed20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 369 ms_handle_reset con 0x55f107bf4c00 session 0x55f109954d20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 369 ms_handle_reset con 0x55f105d2b800 session 0x55f1077203c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 369 ms_handle_reset con 0x55f107958000 session 0x55f106982d20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 176152576 unmapped: 30048256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:50.772598+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2986374 data_alloc: 251658240 data_used: 34557952
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 369 handle_osd_map epochs [369,370], i have 369, src has [1,370]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 370 handle_osd_map epochs [370,370], i have 370, src has [1,370]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 370 ms_handle_reset con 0x55f105d2ac00 session 0x55f10767fa40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 176185344 unmapped: 30015488 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf4c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 370 ms_handle_reset con 0x55f107bf4c00 session 0x55f106d39c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 370 heartbeat osd_stat(store_statfs(0x4f53f1000/0x0/0x4ffc00000, data 0x3d21027/0x3f1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1080e1c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.644184113s of 10.008940697s, submitted: 122
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:51.772753+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 370 ms_handle_reset con 0x55f1080e1c00 session 0x55f106ae54a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 370 handle_osd_map epochs [371,371], i have 370, src has [1,371]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 176226304 unmapped: 29974528 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 371 ms_handle_reset con 0x55f105d2ac00 session 0x55f105b0b860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:52.772893+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 371 ms_handle_reset con 0x55f105d2b800 session 0x55f10698e3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 371 ms_handle_reset con 0x55f107958000 session 0x55f10d6f6780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf4c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 371 ms_handle_reset con 0x55f107bf4c00 session 0x55f107708000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 176234496 unmapped: 29966336 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:53.773054+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109192800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf5c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 176234496 unmapped: 29966336 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:54.773247+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 371 handle_osd_map epochs [371,372], i have 371, src has [1,372]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 372 ms_handle_reset con 0x55f109192800 session 0x55f106990f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 176259072 unmapped: 29941760 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 372 ms_handle_reset con 0x55f107bf5c00 session 0x55f1076b4b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 372 ms_handle_reset con 0x55f105d2ac00 session 0x55f10767e5a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 372 ms_handle_reset con 0x55f105d2b800 session 0x55f1069905a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:55.773332+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2820801 data_alloc: 234881024 data_used: 26578944
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107958000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf4c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 372 ms_handle_reset con 0x55f107bf4c00 session 0x55f1056bd860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bd0800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 372 handle_osd_map epochs [372,373], i have 372, src has [1,373]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 373 ms_handle_reset con 0x55f107958000 session 0x55f108163c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 373 ms_handle_reset con 0x55f107bd0800 session 0x55f106d7f2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 373 ms_handle_reset con 0x55f105d2b800 session 0x55f10799cf00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 171663360 unmapped: 34537472 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:56.773517+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 373 heartbeat osd_stat(store_statfs(0x4f6520000/0x0/0x4ffc00000, data 0x2becf2f/0x2dec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf4c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 373 handle_osd_map epochs [374,374], i have 373, src has [1,374]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 374 ms_handle_reset con 0x55f105d2ac00 session 0x55f105b8da40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 374 ms_handle_reset con 0x55f107bf4c00 session 0x55f1081630e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 171696128 unmapped: 34504704 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:57.773690+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 374 ms_handle_reset con 0x55f107bef400 session 0x55f106d38b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 374 ms_handle_reset con 0x55f107c1d800 session 0x55f1069805a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 171696128 unmapped: 34504704 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:58.773864+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 374 ms_handle_reset con 0x55f105d2ac00 session 0x55f10562a960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 167239680 unmapped: 38961152 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:59.774091+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166051840 unmapped: 40148992 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:00.774234+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2678095 data_alloc: 234881024 data_used: 15384576
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 375 heartbeat osd_stat(store_statfs(0x4f6ba3000/0x0/0x4ffc00000, data 0x213566c/0x2335000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 375 handle_osd_map epochs [376,376], i have 375, src has [1,376]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166051840 unmapped: 40148992 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 376 ms_handle_reset con 0x55f105d2b800 session 0x55f1076b43c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:01.774449+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bd0800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.005718231s of 10.476074219s, submitted: 151
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 376 ms_handle_reset con 0x55f107bd0800 session 0x55f10d6f74a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165060608 unmapped: 41140224 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:02.774618+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bef400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 376 handle_osd_map epochs [377,377], i have 376, src has [1,377]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165044224 unmapped: 41156608 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 377 ms_handle_reset con 0x55f107bef400 session 0x55f10767ef00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 377 ms_handle_reset con 0x55f105d2ac00 session 0x55f106882f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:03.774779+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 377 handle_osd_map epochs [378,378], i have 377, src has [1,378]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166109184 unmapped: 40091648 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 378 ms_handle_reset con 0x55f105d2b800 session 0x55f106990d20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:04.774934+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 378 handle_osd_map epochs [378,379], i have 378, src has [1,379]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bd0800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166117376 unmapped: 40083456 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:05.775096+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2665876 data_alloc: 234881024 data_used: 13287424
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 379 heartbeat osd_stat(store_statfs(0x4f6dc4000/0x0/0x4ffc00000, data 0x1f3ad8b/0x2139000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 379 handle_osd_map epochs [380,380], i have 379, src has [1,380]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 380 ms_handle_reset con 0x55f107bd0800 session 0x55f107709a40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166125568 unmapped: 40075264 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:06.775306+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 380 ms_handle_reset con 0x55f107c1d800 session 0x55f1076fa5a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166125568 unmapped: 40075264 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:07.775455+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166125568 unmapped: 40075264 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:08.775585+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166125568 unmapped: 40075264 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:09.775708+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 380 handle_osd_map epochs [380,381], i have 380, src has [1,381]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166133760 unmapped: 40067072 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:10.775868+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2670449 data_alloc: 234881024 data_used: 13291520
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f6dbf000/0x0/0x4ffc00000, data 0x1f3e403/0x213e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166133760 unmapped: 40067072 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:11.776049+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166133760 unmapped: 40067072 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:12.776221+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166133760 unmapped: 40067072 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:13.776384+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166133760 unmapped: 40067072 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:14.776553+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166133760 unmapped: 40067072 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:15.776645+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2670449 data_alloc: 234881024 data_used: 13291520
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f6dbf000/0x0/0x4ffc00000, data 0x1f3e403/0x213e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166133760 unmapped: 40067072 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:16.776821+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166133760 unmapped: 40067072 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:17.776988+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166133760 unmapped: 40067072 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:18.777138+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f6dbf000/0x0/0x4ffc00000, data 0x1f3e403/0x213e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166133760 unmapped: 40067072 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:19.777346+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166133760 unmapped: 40067072 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:20.777524+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2670449 data_alloc: 234881024 data_used: 13291520
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf4c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.877977371s of 19.190431595s, submitted: 140
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 ms_handle_reset con 0x55f107bf4c00 session 0x55f1079f25a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f6dbf000/0x0/0x4ffc00000, data 0x1f3e403/0x213e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166133760 unmapped: 40067072 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:21.777696+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166133760 unmapped: 40067072 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 ms_handle_reset con 0x55f105d2ac00 session 0x55f1076cf680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:22.777822+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 ms_handle_reset con 0x55f105d2b800 session 0x55f105b9c5a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166150144 unmapped: 40050688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:23.777948+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166150144 unmapped: 40050688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:24.778119+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f6dc0000/0x0/0x4ffc00000, data 0x1f3e403/0x213e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166150144 unmapped: 40050688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:25.778267+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2670474 data_alloc: 234881024 data_used: 13291520
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166150144 unmapped: 40050688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:26.778470+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166150144 unmapped: 40050688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:27.778652+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166150144 unmapped: 40050688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:28.778785+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166150144 unmapped: 40050688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:29.778929+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166150144 unmapped: 40050688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:30.779074+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f6dc0000/0x0/0x4ffc00000, data 0x1f3e403/0x213e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2670474 data_alloc: 234881024 data_used: 13291520
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166150144 unmapped: 40050688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:31.779244+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166150144 unmapped: 40050688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:32.779399+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166150144 unmapped: 40050688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:33.779546+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166150144 unmapped: 40050688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:34.779723+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166150144 unmapped: 40050688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:35.779882+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2670474 data_alloc: 234881024 data_used: 13291520
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166150144 unmapped: 40050688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:36.780001+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f6dc0000/0x0/0x4ffc00000, data 0x1f3e403/0x213e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166150144 unmapped: 40050688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:37.780229+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166150144 unmapped: 40050688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:38.780363+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166150144 unmapped: 40050688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f6dc0000/0x0/0x4ffc00000, data 0x1f3e403/0x213e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:39.780509+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166150144 unmapped: 40050688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:40.780673+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2670474 data_alloc: 234881024 data_used: 13291520
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166150144 unmapped: 40050688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:41.780885+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166150144 unmapped: 40050688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:42.781069+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166150144 unmapped: 40050688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:43.781240+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f6dc0000/0x0/0x4ffc00000, data 0x1f3e403/0x213e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166150144 unmapped: 40050688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:44.781424+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166150144 unmapped: 40050688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:45.781617+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2670474 data_alloc: 234881024 data_used: 13291520
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166150144 unmapped: 40050688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:46.781802+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166150144 unmapped: 40050688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:47.782003+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bd0800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.478801727s of 26.532886505s, submitted: 17
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 ms_handle_reset con 0x55f107bd0800 session 0x55f10698e3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 167198720 unmapped: 39002112 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 ms_handle_reset con 0x55f107c1d800 session 0x55f106ae54a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:48.782179+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f6dbe000/0x0/0x4ffc00000, data 0x1f3e475/0x2140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf5c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 ms_handle_reset con 0x55f107bf5c00 session 0x55f106d39c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 ms_handle_reset con 0x55f105d2ac00 session 0x55f10767fa40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166158336 unmapped: 40042496 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:49.782625+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 ms_handle_reset con 0x55f105d2b800 session 0x55f109954d20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bd0800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 ms_handle_reset con 0x55f107bd0800 session 0x55f105bab0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 ms_handle_reset con 0x55f107c1d800 session 0x55f10767e960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166166528 unmapped: 40034304 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10c914400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:50.782785+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f6dbe000/0x0/0x4ffc00000, data 0x1f3e403/0x213e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 ms_handle_reset con 0x55f10c914400 session 0x55f1069901e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2673730 data_alloc: 234881024 data_used: 13291520
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166166528 unmapped: 40034304 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:51.788254+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 ms_handle_reset con 0x55f105d2b800 session 0x55f107720d20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bd0800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 ms_handle_reset con 0x55f107bd0800 session 0x55f106980f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165240832 unmapped: 40960000 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:52.788447+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165240832 unmapped: 40960000 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:53.788604+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165240832 unmapped: 40960000 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:54.788780+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f6dbf000/0x0/0x4ffc00000, data 0x1f3e403/0x213e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165240832 unmapped: 40960000 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:55.788950+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2672416 data_alloc: 234881024 data_used: 13291520
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165240832 unmapped: 40960000 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:56.789208+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f6dbf000/0x0/0x4ffc00000, data 0x1f3e403/0x213e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165240832 unmapped: 40960000 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 ms_handle_reset con 0x55f105d2ac00 session 0x55f106980960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:57.789416+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 ms_handle_reset con 0x55f107c1d800 session 0x55f10767f2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165240832 unmapped: 40960000 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:58.789570+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165240832 unmapped: 40960000 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:59.789752+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165240832 unmapped: 40960000 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:00.789925+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2672416 data_alloc: 234881024 data_used: 13291520
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165240832 unmapped: 40960000 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:01.790197+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165240832 unmapped: 40960000 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:02.790352+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f6dbf000/0x0/0x4ffc00000, data 0x1f3e403/0x213e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165240832 unmapped: 40960000 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:03.790542+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165240832 unmapped: 40960000 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:04.790735+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105c6bc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.390193939s of 17.496828079s, submitted: 35
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 ms_handle_reset con 0x55f105c6bc00 session 0x55f106ae5e00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 ms_handle_reset con 0x55f105d2ac00 session 0x55f10646f680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165167104 unmapped: 41033728 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:05.790931+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2673939 data_alloc: 234881024 data_used: 13291520
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f6dc0000/0x0/0x4ffc00000, data 0x1f3e403/0x213e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 41582592 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:06.791105+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f6dc0000/0x0/0x4ffc00000, data 0x1f3e403/0x213e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 ms_handle_reset con 0x55f105d2b800 session 0x55f1099541e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 41582592 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:07.791260+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 41582592 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:08.791446+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bd0800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 ms_handle_reset con 0x55f107bd0800 session 0x55f106980f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 41582592 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:09.791605+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 ms_handle_reset con 0x55f107c1d800 session 0x55f107720d20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109423800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 ms_handle_reset con 0x55f109423800 session 0x55f105bab0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 ms_handle_reset con 0x55f105d2ac00 session 0x55f1076cf680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165191680 unmapped: 41009152 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:10.791746+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2726043 data_alloc: 234881024 data_used: 13291520
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f6863000/0x0/0x4ffc00000, data 0x2499475/0x269b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165191680 unmapped: 41009152 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:11.791997+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 381 handle_osd_map epochs [381,382], i have 381, src has [1,382]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 382 ms_handle_reset con 0x55f105d2b800 session 0x55f1076fa5a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165199872 unmapped: 41000960 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:12.792170+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165199872 unmapped: 41000960 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:13.792339+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165199872 unmapped: 41000960 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:14.792506+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165199872 unmapped: 41000960 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:15.792689+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2728345 data_alloc: 234881024 data_used: 13299712
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 382 heartbeat osd_stat(store_statfs(0x4f685f000/0x0/0x4ffc00000, data 0x249aff2/0x269e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165199872 unmapped: 41000960 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:16.792819+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bd0800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.453276634s of 11.628294945s, submitted: 51
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 382 ms_handle_reset con 0x55f107c1d800 session 0x55f10795f860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 382 ms_handle_reset con 0x55f107bd0800 session 0x55f106882f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:17.792963+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165199872 unmapped: 41000960 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:18.793201+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165199872 unmapped: 41000960 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:19.793409+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165199872 unmapped: 41000960 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:20.793555+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165199872 unmapped: 41000960 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2730333 data_alloc: 234881024 data_used: 13303808
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 382 heartbeat osd_stat(store_statfs(0x4f685e000/0x0/0x4ffc00000, data 0x249b002/0x269f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:21.793742+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165199872 unmapped: 41000960 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:22.793912+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165199872 unmapped: 41000960 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 382 heartbeat osd_stat(store_statfs(0x4f685e000/0x0/0x4ffc00000, data 0x249b002/0x269f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10763d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 382 ms_handle_reset con 0x55f10763d800 session 0x55f10d6f74a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:23.794072+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165191680 unmapped: 41009152 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:24.794258+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165191680 unmapped: 41009152 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 382 heartbeat osd_stat(store_statfs(0x4f685e000/0x0/0x4ffc00000, data 0x249b025/0x26a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 382 heartbeat osd_stat(store_statfs(0x4f685e000/0x0/0x4ffc00000, data 0x249b025/0x26a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:25.794474+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165404672 unmapped: 40796160 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2763730 data_alloc: 234881024 data_used: 17891328
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:26.794726+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 166387712 unmapped: 39813120 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 382 ms_handle_reset con 0x55f105d2ac00 session 0x55f1069805a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 382 ms_handle_reset con 0x55f105d2b800 session 0x55f105b8d2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bd0800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.697650909s of 10.716534615s, submitted: 5
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 382 ms_handle_reset con 0x55f107c1d800 session 0x55f107947a40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1f800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:27.794859+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 382 ms_handle_reset con 0x55f107bd0800 session 0x55f106d38b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165601280 unmapped: 40599552 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 382 ms_handle_reset con 0x55f107c1f800 session 0x55f105bb7c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:28.795075+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165601280 unmapped: 40599552 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 382 ms_handle_reset con 0x55f105d2ac00 session 0x55f108163c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 382 ms_handle_reset con 0x55f105d2b800 session 0x55f1056bd860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 382 heartbeat osd_stat(store_statfs(0x4f685f000/0x0/0x4ffc00000, data 0x249aff2/0x269e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:29.795217+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165601280 unmapped: 40599552 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:30.795429+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165601280 unmapped: 40599552 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bd0800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 382 ms_handle_reset con 0x55f107bd0800 session 0x55f1069905a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2765058 data_alloc: 234881024 data_used: 18706432
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:31.795773+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 165609472 unmapped: 40591360 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 382 handle_osd_map epochs [383,383], i have 382, src has [1,383]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 383 ms_handle_reset con 0x55f107c1d800 session 0x55f1076b4b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107be1400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 383 ms_handle_reset con 0x55f107be1400 session 0x55f106d7fc20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 383 ms_handle_reset con 0x55f105d2ac00 session 0x55f1081623c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:32.795953+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 163495936 unmapped: 42704896 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 383 heartbeat osd_stat(store_statfs(0x4f6db8000/0x0/0x4ffc00000, data 0x1f41bb3/0x2145000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:33.796226+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 163495936 unmapped: 42704896 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:34.796416+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 164544512 unmapped: 41656320 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 383 handle_osd_map epochs [383,384], i have 383, src has [1,384]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 384 heartbeat osd_stat(store_statfs(0x4f6db5000/0x0/0x4ffc00000, data 0x1f43617/0x2148000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:35.796576+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 163495936 unmapped: 42704896 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bd0800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 384 ms_handle_reset con 0x55f107bd0800 session 0x55f10799dc20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 384 ms_handle_reset con 0x55f105d2b800 session 0x55f1076faf00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2693133 data_alloc: 234881024 data_used: 13312000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 384 ms_handle_reset con 0x55f107c1d800 session 0x55f105b9cd20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:36.796767+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 163045376 unmapped: 43155456 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:37.796994+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 163045376 unmapped: 43155456 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:38.797222+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 163045376 unmapped: 43155456 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 384 heartbeat osd_stat(store_statfs(0x4f6db5000/0x0/0x4ffc00000, data 0x1f43617/0x2148000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:39.797431+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 163045376 unmapped: 43155456 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bd7000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 384 ms_handle_reset con 0x55f107bd7000 session 0x55f10561e1e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.976881027s of 13.127993584s, submitted: 71
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:40.797597+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 163053568 unmapped: 43147264 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 384 ms_handle_reset con 0x55f105d2ac00 session 0x55f1076e6f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 384 ms_handle_reset con 0x55f105d2b800 session 0x55f1076dd680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2806654 data_alloc: 234881024 data_used: 13312000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:41.797978+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 163143680 unmapped: 43057152 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 384 heartbeat osd_stat(store_statfs(0x4f5eea000/0x0/0x4ffc00000, data 0x2e0f617/0x3014000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:42.798179+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 163143680 unmapped: 43057152 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:43.798392+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 163143680 unmapped: 43057152 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:44.798634+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 163143680 unmapped: 43057152 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:45.798918+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 163143680 unmapped: 43057152 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2806654 data_alloc: 234881024 data_used: 13312000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 384 heartbeat osd_stat(store_statfs(0x4f5eea000/0x0/0x4ffc00000, data 0x2e0f617/0x3014000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:46.799242+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 163143680 unmapped: 43057152 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:47.799594+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 163143680 unmapped: 43057152 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bd0800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 384 ms_handle_reset con 0x55f107bd0800 session 0x55f1079f21e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:48.799778+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 171819008 unmapped: 34381824 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 384 ms_handle_reset con 0x55f107c1d800 session 0x55f109b9a5a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:49.800031+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 163430400 unmapped: 42770432 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:50.800226+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 163430400 unmapped: 42770432 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2962934 data_alloc: 234881024 data_used: 13316096
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:51.800503+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 163430400 unmapped: 42770432 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 384 heartbeat osd_stat(store_statfs(0x4f48a3000/0x0/0x4ffc00000, data 0x44565b4/0x465a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:52.800939+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 163430400 unmapped: 42770432 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:53.801083+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 163430400 unmapped: 42770432 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:54.801239+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 384 heartbeat osd_stat(store_statfs(0x4f48a3000/0x0/0x4ffc00000, data 0x44565b4/0x465a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 163430400 unmapped: 42770432 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b4c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 384 ms_handle_reset con 0x55f1081b4c00 session 0x55f1076b54a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:55.801364+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.889142990s of 15.204251289s, submitted: 37
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 163438592 unmapped: 42762240 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2963066 data_alloc: 234881024 data_used: 13316096
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:56.801485+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 163438592 unmapped: 42762240 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 384 heartbeat osd_stat(store_statfs(0x4f48a3000/0x0/0x4ffc00000, data 0x44565b4/0x465a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:57.801643+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 172433408 unmapped: 33767424 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 384 heartbeat osd_stat(store_statfs(0x4f48a3000/0x0/0x4ffc00000, data 0x44565b4/0x465a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:58.801812+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 172466176 unmapped: 33734656 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 384 heartbeat osd_stat(store_statfs(0x4f48a3000/0x0/0x4ffc00000, data 0x44565b4/0x465a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:59.801923+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bd0800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 384 ms_handle_reset con 0x55f107bd0800 session 0x55f109b9ba40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 172498944 unmapped: 33701888 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b4c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:00.802040+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 172498944 unmapped: 33701888 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3075279 data_alloc: 251658240 data_used: 28835840
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:01.802307+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 172498944 unmapped: 33701888 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:02.802489+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 384 heartbeat osd_stat(store_statfs(0x4f48a3000/0x0/0x4ffc00000, data 0x44565d7/0x465b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 172531712 unmapped: 33669120 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:03.802707+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 384 heartbeat osd_stat(store_statfs(0x4f48a3000/0x0/0x4ffc00000, data 0x44565d7/0x465b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 172564480 unmapped: 33636352 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:04.802851+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 176480256 unmapped: 29720576 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:05.803041+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 176480256 unmapped: 29720576 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 384 heartbeat osd_stat(store_statfs(0x4f48a3000/0x0/0x4ffc00000, data 0x44565d7/0x465b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3113039 data_alloc: 251658240 data_used: 34263040
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:06.803225+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.004094124s of 11.019952774s, submitted: 5
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 176644096 unmapped: 29556736 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:07.803409+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183713792 unmapped: 22487040 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:08.803569+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183820288 unmapped: 22380544 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:09.803712+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183820288 unmapped: 22380544 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:10.803906+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183820288 unmapped: 22380544 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3178711 data_alloc: 251658240 data_used: 34492416
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:11.804186+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183820288 unmapped: 22380544 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 384 heartbeat osd_stat(store_statfs(0x4f3fe8000/0x0/0x4ffc00000, data 0x4d115d7/0x4f16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:12.804356+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183853056 unmapped: 22347776 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:13.804557+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183853056 unmapped: 22347776 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:14.804715+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185950208 unmapped: 20250624 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:15.804871+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 187187200 unmapped: 19013632 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3279047 data_alloc: 251658240 data_used: 34992128
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:16.805056+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 187244544 unmapped: 18956288 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1fc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.880732536s of 10.228637695s, submitted: 90
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 384 heartbeat osd_stat(store_statfs(0x4f31e1000/0x0/0x4ffc00000, data 0x5a185d7/0x5c1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1c800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:17.805219+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185393152 unmapped: 20807680 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 ms_handle_reset con 0x55f107c1fc00 session 0x55f108163c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 ms_handle_reset con 0x55f107c1c800 session 0x55f10562b680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:18.805435+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185409536 unmapped: 20791296 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:19.805665+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185409536 unmapped: 20791296 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:20.805832+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185409536 unmapped: 20791296 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3280371 data_alloc: 251658240 data_used: 35000320
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 heartbeat osd_stat(store_statfs(0x4f32db000/0x0/0x4ffc00000, data 0x5a1a2d4/0x5c22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:21.806066+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185409536 unmapped: 20791296 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 heartbeat osd_stat(store_statfs(0x4f32db000/0x0/0x4ffc00000, data 0x5a1a2d4/0x5c22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:22.806223+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185409536 unmapped: 20791296 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:23.806394+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185409536 unmapped: 20791296 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:24.806934+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185409536 unmapped: 20791296 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 ms_handle_reset con 0x55f105b88800 session 0x55f1076cf680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:25.807550+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185425920 unmapped: 20774912 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3280371 data_alloc: 251658240 data_used: 35000320
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:26.807749+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185425920 unmapped: 20774912 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 heartbeat osd_stat(store_statfs(0x4f32db000/0x0/0x4ffc00000, data 0x5a1a2d4/0x5c22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:27.808208+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185425920 unmapped: 20774912 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 ms_handle_reset con 0x55f107c1d800 session 0x55f10795fc20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 ms_handle_reset con 0x55f1081b4c00 session 0x55f1076dc1e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:28.811268+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.667320251s of 11.712656021s, submitted: 11
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 ms_handle_reset con 0x55f107c1d800 session 0x55f105bab680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185425920 unmapped: 20774912 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:29.811392+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 ms_handle_reset con 0x55f105b88800 session 0x55f1076e7860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185425920 unmapped: 20774912 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bd0800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 ms_handle_reset con 0x55f107bd0800 session 0x55f10567d860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:30.811505+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185425920 unmapped: 20774912 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1c800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 ms_handle_reset con 0x55f107c1c800 session 0x55f106882b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1c800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 ms_handle_reset con 0x55f107c1c800 session 0x55f10567cf00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3280281 data_alloc: 251658240 data_used: 34996224
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bd0800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:31.811672+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 heartbeat osd_stat(store_statfs(0x4f32db000/0x0/0x4ffc00000, data 0x5a1a2c1/0x5c22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185483264 unmapped: 20717568 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 heartbeat osd_stat(store_statfs(0x4f32db000/0x0/0x4ffc00000, data 0x5a1a2c1/0x5c22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:32.811799+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 heartbeat osd_stat(store_statfs(0x4f32db000/0x0/0x4ffc00000, data 0x5a1a2c1/0x5c22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185540608 unmapped: 20660224 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:33.811934+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185581568 unmapped: 20619264 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 heartbeat osd_stat(store_statfs(0x4f32db000/0x0/0x4ffc00000, data 0x5a1a2c1/0x5c22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:34.812131+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185581568 unmapped: 20619264 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:35.812621+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185581568 unmapped: 20619264 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3280601 data_alloc: 251658240 data_used: 35041280
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:36.812847+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185581568 unmapped: 20619264 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:37.813058+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185581568 unmapped: 20619264 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:38.813244+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185581568 unmapped: 20619264 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 heartbeat osd_stat(store_statfs(0x4f32db000/0x0/0x4ffc00000, data 0x5a1a2c1/0x5c22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:39.813456+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185581568 unmapped: 20619264 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:40.813656+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185581568 unmapped: 20619264 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3280601 data_alloc: 251658240 data_used: 35041280
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:41.813837+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185581568 unmapped: 20619264 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 heartbeat osd_stat(store_statfs(0x4f32db000/0x0/0x4ffc00000, data 0x5a1a2c1/0x5c22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.745819092s of 13.771948814s, submitted: 10
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:42.813987+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185581568 unmapped: 20619264 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:43.814278+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185532416 unmapped: 20668416 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:44.814430+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185589760 unmapped: 20611072 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 ms_handle_reset con 0x55f107c1d800 session 0x55f1099541e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:45.814583+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185589760 unmapped: 20611072 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 heartbeat osd_stat(store_statfs(0x4f3226000/0x0/0x4ffc00000, data 0x5ad02c1/0x5cd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3296731 data_alloc: 251658240 data_used: 36110336
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:46.814753+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b4c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 ms_handle_reset con 0x55f1081b4c00 session 0x55f106ae5e00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185589760 unmapped: 20611072 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1fc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 ms_handle_reset con 0x55f107c1fc00 session 0x55f10767f2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1094e0c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 ms_handle_reset con 0x55f1094e0c00 session 0x55f106980960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:47.814923+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185630720 unmapped: 20570112 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1c800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 heartbeat osd_stat(store_statfs(0x4f3224000/0x0/0x4ffc00000, data 0x5ad02f4/0x5cda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:48.815050+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185843712 unmapped: 20357120 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:49.815204+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185843712 unmapped: 20357120 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:50.815406+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185843712 unmapped: 20357120 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3301262 data_alloc: 251658240 data_used: 36114432
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:51.815608+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185860096 unmapped: 20340736 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:52.815797+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185860096 unmapped: 20340736 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:53.815939+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 heartbeat osd_stat(store_statfs(0x4f3224000/0x0/0x4ffc00000, data 0x5ad02f4/0x5cda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185892864 unmapped: 20307968 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.791725159s of 11.837285995s, submitted: 12
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:54.816085+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185892864 unmapped: 20307968 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:55.816244+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 186056704 unmapped: 20144128 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3301166 data_alloc: 251658240 data_used: 36184064
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:56.816383+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 186056704 unmapped: 20144128 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:57.816538+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 heartbeat osd_stat(store_statfs(0x4f3224000/0x0/0x4ffc00000, data 0x5ad02f4/0x5cda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 186064896 unmapped: 20135936 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:58.816738+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 186064896 unmapped: 20135936 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:59.816913+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 heartbeat osd_stat(store_statfs(0x4f3224000/0x0/0x4ffc00000, data 0x5ad02f4/0x5cda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 186064896 unmapped: 20135936 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:00.817102+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 186064896 unmapped: 20135936 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3301166 data_alloc: 251658240 data_used: 36184064
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:01.817278+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 186114048 unmapped: 20086784 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 heartbeat osd_stat(store_statfs(0x4f3224000/0x0/0x4ffc00000, data 0x5ad02f4/0x5cda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:02.817396+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 186204160 unmapped: 19996672 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:03.817542+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 186204160 unmapped: 19996672 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:04.817677+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 heartbeat osd_stat(store_statfs(0x4f3224000/0x0/0x4ffc00000, data 0x5ad02f4/0x5cda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 186204160 unmapped: 19996672 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:05.818246+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 186204160 unmapped: 19996672 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3305310 data_alloc: 251658240 data_used: 36519936
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:06.818445+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 186204160 unmapped: 19996672 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:07.818738+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 186204160 unmapped: 19996672 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 ms_handle_reset con 0x55f105b88800 session 0x55f107947a40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 ms_handle_reset con 0x55f107bd0800 session 0x55f105b9b4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1fc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:08.818881+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.257530212s of 14.278488159s, submitted: 5
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 ms_handle_reset con 0x55f107c1fc00 session 0x55f10d6f6b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 186204160 unmapped: 19996672 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 heartbeat osd_stat(store_statfs(0x4f3225000/0x0/0x4ffc00000, data 0x5ad02e4/0x5cd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:09.819040+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 186204160 unmapped: 19996672 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:10.819214+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 186204160 unmapped: 19996672 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b4c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 ms_handle_reset con 0x55f1081b4c00 session 0x55f106991860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3304585 data_alloc: 251658240 data_used: 36519936
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:11.819393+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1094e0c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 ms_handle_reset con 0x55f1094e0c00 session 0x55f109954b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 186204160 unmapped: 19996672 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:12.819706+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 186204160 unmapped: 19996672 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 ms_handle_reset con 0x55f105b88800 session 0x55f1079472c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bd0800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:13.819882+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 386 ms_handle_reset con 0x55f107bd0800 session 0x55f1099552c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 186261504 unmapped: 19939328 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 386 heartbeat osd_stat(store_statfs(0x4f32db000/0x0/0x4ffc00000, data 0x5a1a2e4/0x5c23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 386 ms_handle_reset con 0x55f105d2ac00 session 0x55f10561eb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:14.820044+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 386 ms_handle_reset con 0x55f105d2b800 session 0x55f1079d7860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1fc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 386 ms_handle_reset con 0x55f107c1fc00 session 0x55f109b9b680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 186261504 unmapped: 19939328 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:15.820325+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 186261504 unmapped: 19939328 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 386 ms_handle_reset con 0x55f105b88800 session 0x55f106980b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3300077 data_alloc: 251658240 data_used: 36524032
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:16.820442+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 386 ms_handle_reset con 0x55f105d2ac00 session 0x55f1079d65a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 176332800 unmapped: 29868032 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:17.820595+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 176332800 unmapped: 29868032 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 386 heartbeat osd_stat(store_statfs(0x4f4a5f000/0x0/0x4ffc00000, data 0x4294eb5/0x449f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:18.820734+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.236898422s of 10.383208275s, submitted: 47
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 176332800 unmapped: 29868032 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:19.820942+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 386 handle_osd_map epochs [387,387], i have 386, src has [1,387]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 387 handle_osd_map epochs [388,388], i have 387, src has [1,388]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 388 ms_handle_reset con 0x55f105d2b800 session 0x55f105bab4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 176332800 unmapped: 29868032 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:20.821102+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 176332800 unmapped: 29868032 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 388 heartbeat osd_stat(store_statfs(0x4f4a59000/0x0/0x4ffc00000, data 0x42984f3/0x44a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3021814 data_alloc: 234881024 data_used: 19681280
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:21.821393+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 176332800 unmapped: 29868032 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:22.821560+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 176332800 unmapped: 29868032 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:23.821840+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 176332800 unmapped: 29868032 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:24.822114+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 388 handle_osd_map epochs [388,389], i have 388, src has [1,389]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 176381952 unmapped: 29818880 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:25.822434+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 176381952 unmapped: 29818880 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3024116 data_alloc: 234881024 data_used: 19681280
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f4a57000/0x0/0x4ffc00000, data 0x4299f56/0x44a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:26.822723+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bd0800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 ms_handle_reset con 0x55f107bd0800 session 0x55f1063174a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1fc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 176390144 unmapped: 29810688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 ms_handle_reset con 0x55f107c1fc00 session 0x55f1056bc5a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:27.823014+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 ms_handle_reset con 0x55f107c1c800 session 0x55f106980780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 ms_handle_reset con 0x55f107c1d800 session 0x55f105b8cf00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 175472640 unmapped: 30728192 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 ms_handle_reset con 0x55f105b88800 session 0x55f10772ed20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f4a59000/0x0/0x4ffc00000, data 0x4299f23/0x44a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:28.823247+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 175472640 unmapped: 30728192 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:29.823511+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 175472640 unmapped: 30728192 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:30.823759+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 175472640 unmapped: 30728192 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019895 data_alloc: 234881024 data_used: 19783680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:31.824006+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.704990387s of 12.812119484s, submitted: 57
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 ms_handle_reset con 0x55f105d2ac00 session 0x55f105b9a780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 175562752 unmapped: 30638080 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f3ff2000/0x0/0x4ffc00000, data 0x4d01f23/0x4f0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:32.824197+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 ms_handle_reset con 0x55f105d2b800 session 0x55f105b0a960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 170647552 unmapped: 35553280 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:33.824350+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 170647552 unmapped: 35553280 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:34.824504+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 170647552 unmapped: 35553280 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:35.824716+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 170647552 unmapped: 35553280 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2811257 data_alloc: 234881024 data_used: 13340672
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:36.824816+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 170647552 unmapped: 35553280 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:37.824975+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 170647552 unmapped: 35553280 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f6340000/0x0/0x4ffc00000, data 0x29b3f23/0x2bbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:38.825116+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 170647552 unmapped: 35553280 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:39.825249+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 ms_handle_reset con 0x55f105b88800 session 0x55f1079f2b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 170647552 unmapped: 35553280 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 ms_handle_reset con 0x55f105d2ac00 session 0x55f105bb6b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1c800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:40.825372+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 ms_handle_reset con 0x55f107c1c800 session 0x55f106980780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 ms_handle_reset con 0x55f107c1d800 session 0x55f105bab4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 170385408 unmapped: 35815424 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2813817 data_alloc: 234881024 data_used: 13340672
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:41.825554+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bd0800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 ms_handle_reset con 0x55f107bd0800 session 0x55f106980b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 ms_handle_reset con 0x55f105b88800 session 0x55f109b9b680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 170385408 unmapped: 35815424 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:42.825667+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f6340000/0x0/0x4ffc00000, data 0x29b3f23/0x2bbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 170393600 unmapped: 35807232 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f6340000/0x0/0x4ffc00000, data 0x29b3f23/0x2bbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1c800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:43.825842+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 172843008 unmapped: 33357824 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:44.826018+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 173793280 unmapped: 32407552 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:45.826222+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 173793280 unmapped: 32407552 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.534890175s of 14.663038254s, submitted: 23
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2892518 data_alloc: 234881024 data_used: 24162304
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:46.826343+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f6340000/0x0/0x4ffc00000, data 0x29b3f23/0x2bbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 173793280 unmapped: 32407552 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:47.826468+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 ms_handle_reset con 0x55f107c1d800 session 0x55f105b9ba40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 173793280 unmapped: 32407552 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:48.826585+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 173793280 unmapped: 32407552 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:49.826709+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 173793280 unmapped: 32407552 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f633f000/0x0/0x4ffc00000, data 0x29b3f86/0x2bbf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:50.826823+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 173793280 unmapped: 32407552 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f633f000/0x0/0x4ffc00000, data 0x29b3f86/0x2bbf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:51.826972+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2892446 data_alloc: 234881024 data_used: 24162304
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 173793280 unmapped: 32407552 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:52.827097+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179838976 unmapped: 26361856 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:53.827240+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f5973000/0x0/0x4ffc00000, data 0x3377f86/0x3583000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179093504 unmapped: 27107328 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:54.827398+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179101696 unmapped: 27099136 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:55.827579+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179101696 unmapped: 27099136 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:56.827766+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2969762 data_alloc: 234881024 data_used: 24158208
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 20K writes, 85K keys, 20K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 20K writes, 7367 syncs, 2.85 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 26.41 MB, 0.04 MB/s
                                           Interval WAL: 10K writes, 4263 syncs, 2.38 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179101696 unmapped: 27099136 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:57.827967+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179101696 unmapped: 27099136 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:58.828139+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179101696 unmapped: 27099136 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:59.828320+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b4c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.991976738s of 13.187226295s, submitted: 73
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f5955000/0x0/0x4ffc00000, data 0x339df86/0x35a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [1])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 ms_handle_reset con 0x55f1081b4c00 session 0x55f105b9cd20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1a800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 ms_handle_reset con 0x55f107c1a800 session 0x55f1069830e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177414144 unmapped: 28786688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107e5b400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 ms_handle_reset con 0x55f107e5b400 session 0x55f105b8d2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:00.828510+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177414144 unmapped: 28786688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:01.828721+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3225007 data_alloc: 234881024 data_used: 24154112
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177414144 unmapped: 28786688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:02.829565+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f3513000/0x0/0x4ffc00000, data 0x57dff85/0x59eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177414144 unmapped: 28786688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:03.831237+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f3513000/0x0/0x4ffc00000, data 0x57dff85/0x59eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 ms_handle_reset con 0x55f105d2ac00 session 0x55f1079d7860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 ms_handle_reset con 0x55f107c1c800 session 0x55f105bb7e00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f3513000/0x0/0x4ffc00000, data 0x57dff85/0x59eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177414144 unmapped: 28786688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107e5b400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:04.831440+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 ms_handle_reset con 0x55f107e5b400 session 0x55f1079472c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177414144 unmapped: 28786688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:05.831692+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177414144 unmapped: 28786688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:06.831842+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3225243 data_alloc: 234881024 data_used: 24154112
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177414144 unmapped: 28786688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:07.832012+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f3511000/0x0/0x4ffc00000, data 0x57e1f85/0x59ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177414144 unmapped: 28786688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 ms_handle_reset con 0x55f105b88800 session 0x55f106991860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:08.832217+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177414144 unmapped: 28786688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:09.832337+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1a800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 ms_handle_reset con 0x55f107c1a800 session 0x55f10d6f6b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177414144 unmapped: 28786688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 ms_handle_reset con 0x55f105b88800 session 0x55f105b9b4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:10.832493+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.671446800s of 11.062356949s, submitted: 61
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 ms_handle_reset con 0x55f105d2ac00 session 0x55f107947a40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f3511000/0x0/0x4ffc00000, data 0x57e1f85/0x59ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177438720 unmapped: 28762112 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1a800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1c800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:11.832648+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3230459 data_alloc: 234881024 data_used: 24158208
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f350f000/0x0/0x4ffc00000, data 0x57e1fb8/0x59ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177438720 unmapped: 28762112 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:12.832778+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177438720 unmapped: 28762112 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:13.832933+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f350f000/0x0/0x4ffc00000, data 0x57e1fb8/0x59ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177438720 unmapped: 28762112 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:14.833114+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107e5b400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 ms_handle_reset con 0x55f107e5b400 session 0x55f109b9bc20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178036736 unmapped: 28164096 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:15.834227+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178593792 unmapped: 27607040 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:16.834423+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3258299 data_alloc: 251658240 data_used: 28270592
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178593792 unmapped: 27607040 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:17.834714+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f350f000/0x0/0x4ffc00000, data 0x57e1fb8/0x59ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178593792 unmapped: 27607040 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:18.834876+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b4c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178642944 unmapped: 27557888 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:19.835082+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f350f000/0x0/0x4ffc00000, data 0x57e1fb8/0x59ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f350f000/0x0/0x4ffc00000, data 0x57e1fb8/0x59ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178642944 unmapped: 27557888 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:20.835255+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f350f000/0x0/0x4ffc00000, data 0x57e1fb8/0x59ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178642944 unmapped: 27557888 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:21.835472+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3258939 data_alloc: 251658240 data_used: 28327936
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178642944 unmapped: 27557888 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:22.835652+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178642944 unmapped: 27557888 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:23.835850+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f350f000/0x0/0x4ffc00000, data 0x57e1fb8/0x59ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178642944 unmapped: 27557888 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:24.836005+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.380146027s of 14.414755821s, submitted: 11
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 186662912 unmapped: 19537920 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:25.836192+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 188907520 unmapped: 17293312 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:26.836315+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3336027 data_alloc: 251658240 data_used: 29671424
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183803904 unmapped: 22396928 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:27.836478+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f269d000/0x0/0x4ffc00000, data 0x6623fb8/0x6831000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183836672 unmapped: 22364160 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:28.836588+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 184467456 unmapped: 21733376 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f269d000/0x0/0x4ffc00000, data 0x6623fb8/0x6831000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:29.836789+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 184467456 unmapped: 21733376 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:30.836963+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 184467456 unmapped: 21733376 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:31.837197+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3391675 data_alloc: 251658240 data_used: 31346688
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 184467456 unmapped: 21733376 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:32.837328+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 184467456 unmapped: 21733376 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:33.837489+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f269d000/0x0/0x4ffc00000, data 0x6623fb8/0x6831000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 184631296 unmapped: 21569536 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:34.837783+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 184631296 unmapped: 21569536 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:35.838032+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 184631296 unmapped: 21569536 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:36.838250+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3386443 data_alloc: 251658240 data_used: 31346688
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f26cd000/0x0/0x4ffc00000, data 0x6623fb8/0x6831000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 184631296 unmapped: 21569536 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:37.838708+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 184631296 unmapped: 21569536 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:38.839027+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.095819473s of 13.516435623s, submitted: 123
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f26cd000/0x0/0x4ffc00000, data 0x6623fb8/0x6831000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 ms_handle_reset con 0x55f107c1a800 session 0x55f106882b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 ms_handle_reset con 0x55f107c1c800 session 0x55f1076dd860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 184762368 unmapped: 21438464 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:39.839246+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 ms_handle_reset con 0x55f105b88800 session 0x55f10567d860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f26ce000/0x0/0x4ffc00000, data 0x6623f85/0x682f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185122816 unmapped: 21078016 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:40.839563+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185122816 unmapped: 21078016 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:41.839799+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3391234 data_alloc: 251658240 data_used: 32665600
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 389 handle_osd_map epochs [390,390], i have 389, src has [1,390]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185122816 unmapped: 21078016 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:42.839956+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185139200 unmapped: 21061632 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:43.840089+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185139200 unmapped: 21061632 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:44.840209+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:45.840352+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185139200 unmapped: 21061632 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 heartbeat osd_stat(store_statfs(0x4f26ca000/0x0/0x4ffc00000, data 0x6625b02/0x6832000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:46.840512+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185196544 unmapped: 21004288 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3402972 data_alloc: 251658240 data_used: 32673792
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:47.840632+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185196544 unmapped: 21004288 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:48.840855+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185196544 unmapped: 21004288 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.119164467s of 10.280247688s, submitted: 51
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:49.841050+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185196544 unmapped: 21004288 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 heartbeat osd_stat(store_statfs(0x4f26c7000/0x0/0x4ffc00000, data 0x6907b02/0x6837000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:50.841253+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185204736 unmapped: 20996096 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 ms_handle_reset con 0x55f105d2ac00 session 0x55f1076ce1e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:51.841504+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185204736 unmapped: 20996096 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3421502 data_alloc: 251658240 data_used: 32673792
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1a800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 ms_handle_reset con 0x55f107c1a800 session 0x55f105b9da40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:52.841646+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185237504 unmapped: 20963328 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 heartbeat osd_stat(store_statfs(0x4f26c7000/0x0/0x4ffc00000, data 0x6907b02/0x6837000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107e5b400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 ms_handle_reset con 0x55f107e5b400 session 0x55f1081625a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939fc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 ms_handle_reset con 0x55f10939fc00 session 0x55f1069914a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:53.841819+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185311232 unmapped: 20889600 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:54.841992+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185319424 unmapped: 20881408 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:55.842127+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185344000 unmapped: 20856832 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:56.842351+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185352192 unmapped: 20848640 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3424601 data_alloc: 251658240 data_used: 32681984
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1a800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 ms_handle_reset con 0x55f107c1a800 session 0x55f10772f2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 heartbeat osd_stat(store_statfs(0x4f26a1000/0x0/0x4ffc00000, data 0x692bb35/0x685d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:57.842550+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185532416 unmapped: 20668416 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107e5b400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939fc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:58.842684+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185532416 unmapped: 20668416 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:59.842854+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185532416 unmapped: 20668416 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:00.843042+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185532416 unmapped: 20668416 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:01.843222+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185540608 unmapped: 20660224 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 heartbeat osd_stat(store_statfs(0x4f26a1000/0x0/0x4ffc00000, data 0x692bb35/0x685d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [1])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3430610 data_alloc: 251658240 data_used: 32800768
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:02.843374+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185540608 unmapped: 20660224 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:03.843584+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185540608 unmapped: 20660224 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:04.843731+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185540608 unmapped: 20660224 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.078866959s of 16.462472916s, submitted: 103
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:05.843851+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 190021632 unmapped: 16179200 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 heartbeat osd_stat(store_statfs(0x4f21d1000/0x0/0x4ffc00000, data 0x6dfbb35/0x6d2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:06.844046+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 190062592 unmapped: 16138240 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3480666 data_alloc: 251658240 data_used: 33574912
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 heartbeat osd_stat(store_statfs(0x4f21d1000/0x0/0x4ffc00000, data 0x6dfbb35/0x6d2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:07.844219+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 190193664 unmapped: 16007168 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:08.844373+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 190193664 unmapped: 16007168 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:09.844504+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 190193664 unmapped: 16007168 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 heartbeat osd_stat(store_statfs(0x4f21d1000/0x0/0x4ffc00000, data 0x6dfbb35/0x6d2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 heartbeat osd_stat(store_statfs(0x4f21d1000/0x0/0x4ffc00000, data 0x6dfbb35/0x6d2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:10.844663+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 190193664 unmapped: 16007168 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:11.844846+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 187097088 unmapped: 19103744 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3484754 data_alloc: 251658240 data_used: 34365440
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 heartbeat osd_stat(store_statfs(0x4f21d1000/0x0/0x4ffc00000, data 0x6dfbb35/0x6d2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:12.844997+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 187187200 unmapped: 19013632 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 heartbeat osd_stat(store_statfs(0x4f21d1000/0x0/0x4ffc00000, data 0x6dfbb35/0x6d2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:13.845182+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 187187200 unmapped: 19013632 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:14.845326+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 187187200 unmapped: 19013632 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:15.845701+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 187187200 unmapped: 19013632 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:16.845873+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.278578758s of 11.329904556s, submitted: 16
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 187310080 unmapped: 18890752 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3483842 data_alloc: 251658240 data_used: 34365440
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:17.846007+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 187310080 unmapped: 18890752 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 heartbeat osd_stat(store_statfs(0x4f21d1000/0x0/0x4ffc00000, data 0x6dfbb35/0x6d2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:18.846140+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 187310080 unmapped: 18890752 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:19.846319+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 187310080 unmapped: 18890752 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:20.846448+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 187310080 unmapped: 18890752 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 heartbeat osd_stat(store_statfs(0x4f21d1000/0x0/0x4ffc00000, data 0x6dfbb35/0x6d2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:21.846650+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 187310080 unmapped: 18890752 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3483842 data_alloc: 251658240 data_used: 34365440
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:22.846785+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 187310080 unmapped: 18890752 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 heartbeat osd_stat(store_statfs(0x4f21d1000/0x0/0x4ffc00000, data 0x6dfbb35/0x6d2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:23.846924+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 187351040 unmapped: 18849792 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:24.847090+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 187351040 unmapped: 18849792 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 heartbeat osd_stat(store_statfs(0x4f21d1000/0x0/0x4ffc00000, data 0x6dfbb35/0x6d2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:25.847269+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 188399616 unmapped: 17801216 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:26.847446+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 188399616 unmapped: 17801216 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3484722 data_alloc: 251658240 data_used: 34357248
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:27.847727+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 188399616 unmapped: 17801216 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:28.848013+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.980001450s of 12.032711983s, submitted: 17
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 heartbeat osd_stat(store_statfs(0x4f21d1000/0x0/0x4ffc00000, data 0x6dfbb35/0x6d2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 188555264 unmapped: 17645568 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 heartbeat osd_stat(store_statfs(0x4f21d1000/0x0/0x4ffc00000, data 0x6dfbb35/0x6d2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:29.848192+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 188555264 unmapped: 17645568 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 heartbeat osd_stat(store_statfs(0x4f21d1000/0x0/0x4ffc00000, data 0x6dfbb35/0x6d2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:30.848335+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 188555264 unmapped: 17645568 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:31.848567+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 188555264 unmapped: 17645568 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3484578 data_alloc: 251658240 data_used: 34394112
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 ms_handle_reset con 0x55f105b88800 session 0x55f1076cf860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 ms_handle_reset con 0x55f105d2ac00 session 0x55f1076b52c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:32.848725+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 188555264 unmapped: 17645568 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 ms_handle_reset con 0x55f107c1b800 session 0x55f107721a40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 heartbeat osd_stat(store_statfs(0x4f21d1000/0x0/0x4ffc00000, data 0x6dfbb35/0x6d2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:33.848894+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 188555264 unmapped: 17645568 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:34.849046+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 188555264 unmapped: 17645568 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:35.849279+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 188555264 unmapped: 17645568 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10ac62800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 ms_handle_reset con 0x55f10ac62800 session 0x55f10698f0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 ms_handle_reset con 0x55f107e5b400 session 0x55f105b9eb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 ms_handle_reset con 0x55f10939fc00 session 0x55f105b9c1e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:36.849406+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 189136896 unmapped: 17063936 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 ms_handle_reset con 0x55f105b88800 session 0x55f10698e1e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3446085 data_alloc: 251658240 data_used: 35737600
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 heartbeat osd_stat(store_statfs(0x4f26a2000/0x0/0x4ffc00000, data 0x692bb25/0x685c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 ms_handle_reset con 0x55f105d2ac00 session 0x55f108162000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:37.849581+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 189136896 unmapped: 17063936 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:38.849740+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 189136896 unmapped: 17063936 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 ms_handle_reset con 0x55f107c1d800 session 0x55f1069814a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 ms_handle_reset con 0x55f1081b4c00 session 0x55f1076dc1e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 390 handle_osd_map epochs [391,391], i have 390, src has [1,391]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.716113091s of 10.827631950s, submitted: 31
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:39.849910+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 391 ms_handle_reset con 0x55f107c1d800 session 0x55f108162d20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 189194240 unmapped: 17006592 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:40.850070+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 16990208 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:41.850257+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 16990208 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 391 ms_handle_reset con 0x55f105b88800 session 0x55f106983680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3037969 data_alloc: 251658240 data_used: 27963392
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:42.850391+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 187891712 unmapped: 18309120 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 391 heartbeat osd_stat(store_statfs(0x4f4d4b000/0x0/0x4ffc00000, data 0x33a5671/0x35b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:43.850514+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 187891712 unmapped: 18309120 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 391 ms_handle_reset con 0x55f105d2ac00 session 0x55f10698e3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:44.850625+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178003968 unmapped: 28196864 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:45.850805+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178003968 unmapped: 28196864 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 391 handle_osd_map epochs [392,392], i have 391, src has [1,392]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:46.850945+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178003968 unmapped: 28196864 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2783873 data_alloc: 234881024 data_used: 13365248
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:47.851104+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178003968 unmapped: 28196864 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:48.851230+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178003968 unmapped: 28196864 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 392 heartbeat osd_stat(store_statfs(0x4f6d9e000/0x0/0x4ffc00000, data 0x1f510d4/0x215f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:49.851343+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178003968 unmapped: 28196864 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:50.851506+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178003968 unmapped: 28196864 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:51.851685+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178003968 unmapped: 28196864 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 392 heartbeat osd_stat(store_statfs(0x4f6d9e000/0x0/0x4ffc00000, data 0x1f510d4/0x215f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2783873 data_alloc: 234881024 data_used: 13365248
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:52.851791+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178003968 unmapped: 28196864 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:53.851953+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178003968 unmapped: 28196864 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:54.852102+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178003968 unmapped: 28196864 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 392 heartbeat osd_stat(store_statfs(0x4f6d9e000/0x0/0x4ffc00000, data 0x1f510d4/0x215f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:55.852253+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178003968 unmapped: 28196864 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:56.852417+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178003968 unmapped: 28196864 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2783873 data_alloc: 234881024 data_used: 13365248
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107e5b400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 392 ms_handle_reset con 0x55f107e5b400 session 0x55f106d7e5a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:57.852549+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178003968 unmapped: 28196864 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 392 heartbeat osd_stat(store_statfs(0x4f6d9e000/0x0/0x4ffc00000, data 0x1f510d4/0x215f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:58.852738+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 392 handle_osd_map epochs [392,393], i have 392, src has [1,393]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.146833420s of 19.325439453s, submitted: 58
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178003968 unmapped: 28196864 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 393 ms_handle_reset con 0x55f105b88800 session 0x55f1076faf00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:59.852876+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178003968 unmapped: 28196864 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 393 ms_handle_reset con 0x55f105d2ac00 session 0x55f10799c780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 393 ms_handle_reset con 0x55f107c1d800 session 0x55f106981680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:00.852994+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b4c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 393 ms_handle_reset con 0x55f1081b4c00 session 0x55f108163680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178003968 unmapped: 28196864 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10939fc00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 393 ms_handle_reset con 0x55f10939fc00 session 0x55f1081621e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 393 heartbeat osd_stat(store_statfs(0x4f6d99000/0x0/0x4ffc00000, data 0x1f52cc3/0x2164000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:01.853212+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178028544 unmapped: 28172288 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2794167 data_alloc: 234881024 data_used: 13365248
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:02.853369+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178028544 unmapped: 28172288 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 393 ms_handle_reset con 0x55f105b88800 session 0x55f10567cb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 393 ms_handle_reset con 0x55f105d2ac00 session 0x55f1076e7860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:03.853524+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177217536 unmapped: 28983296 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 393 ms_handle_reset con 0x55f107c1d800 session 0x55f106991680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b4c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 393 ms_handle_reset con 0x55f1081b4c00 session 0x55f108163c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:04.853658+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177217536 unmapped: 28983296 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 393 heartbeat osd_stat(store_statfs(0x4f6d99000/0x0/0x4ffc00000, data 0x1f52cc3/0x2164000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:05.853815+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177217536 unmapped: 28983296 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:06.853995+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1a800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177217536 unmapped: 28983296 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 393 ms_handle_reset con 0x55f107c1a800 session 0x55f106981c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2790376 data_alloc: 234881024 data_used: 13365248
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 393 ms_handle_reset con 0x55f105b88800 session 0x55f106882000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:07.854231+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177217536 unmapped: 28983296 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 393 ms_handle_reset con 0x55f105d2ac00 session 0x55f1079d6000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 393 heartbeat osd_stat(store_statfs(0x4f6d9b000/0x0/0x4ffc00000, data 0x1f52cb3/0x2163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:08.854406+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177217536 unmapped: 28983296 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 393 ms_handle_reset con 0x55f107c1d800 session 0x55f10795eb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:09.854567+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177217536 unmapped: 28983296 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 393 handle_osd_map epochs [394,394], i have 393, src has [1,394]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.169502258s of 11.276976585s, submitted: 32
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:10.854718+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177217536 unmapped: 28983296 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:11.854900+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177217536 unmapped: 28983296 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2793836 data_alloc: 234881024 data_used: 13373440
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:12.855067+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177217536 unmapped: 28983296 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 394 heartbeat osd_stat(store_statfs(0x4f6d98000/0x0/0x4ffc00000, data 0x1f54822/0x2165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:13.855233+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177217536 unmapped: 28983296 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:14.855390+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 394 heartbeat osd_stat(store_statfs(0x4f6d98000/0x0/0x4ffc00000, data 0x1f54822/0x2165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 394 handle_osd_map epochs [394,395], i have 394, src has [1,395]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177225728 unmapped: 28975104 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b4c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 395 ms_handle_reset con 0x55f1081b4c00 session 0x55f106d39c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:15.855518+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177225728 unmapped: 28975104 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 395 ms_handle_reset con 0x55f107c1b800 session 0x55f1099552c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:16.855639+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 395 ms_handle_reset con 0x55f105b88800 session 0x55f106d7f860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177225728 unmapped: 28975104 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 395 ms_handle_reset con 0x55f105d2ac00 session 0x55f10698e000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 395 ms_handle_reset con 0x55f107c1b800 session 0x55f10698fe00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2805830 data_alloc: 234881024 data_used: 13373440
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 395 ms_handle_reset con 0x55f107c1d800 session 0x55f10698e5a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:17.855823+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1081b4c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177225728 unmapped: 28975104 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 395 ms_handle_reset con 0x55f1081b4c00 session 0x55f1076fad20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 395 heartbeat osd_stat(store_statfs(0x4f6d91000/0x0/0x4ffc00000, data 0x1f5646f/0x216d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:18.856008+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 395 heartbeat osd_stat(store_statfs(0x4f6d91000/0x0/0x4ffc00000, data 0x1f5640d/0x216c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177225728 unmapped: 28975104 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 395 ms_handle_reset con 0x55f105b88800 session 0x55f10799c5a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 395 ms_handle_reset con 0x55f105d2ac00 session 0x55f1069805a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:19.856226+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177242112 unmapped: 28958720 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.276031494s of 10.440155983s, submitted: 60
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:20.856402+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 395 ms_handle_reset con 0x55f107c1b800 session 0x55f1076b4b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177250304 unmapped: 28950528 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:21.856534+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 395 ms_handle_reset con 0x55f107c1d800 session 0x55f10567cf00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10763d000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177258496 unmapped: 28942336 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 395 ms_handle_reset con 0x55f10763d000 session 0x55f106ae4960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2802073 data_alloc: 234881024 data_used: 13373440
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:22.856703+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177266688 unmapped: 28934144 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:23.856949+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177266688 unmapped: 28934144 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 395 heartbeat osd_stat(store_statfs(0x4f6d95000/0x0/0x4ffc00000, data 0x1f562e7/0x2169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:24.857074+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 395 ms_handle_reset con 0x55f105b88800 session 0x55f106981c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177266688 unmapped: 28934144 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:25.857228+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 395 heartbeat osd_stat(store_statfs(0x4f6d96000/0x0/0x4ffc00000, data 0x1f56285/0x2168000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177266688 unmapped: 28934144 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:26.857418+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177266688 unmapped: 28934144 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2801376 data_alloc: 234881024 data_used: 13373440
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 395 heartbeat osd_stat(store_statfs(0x4f6d96000/0x0/0x4ffc00000, data 0x1f56285/0x2168000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:27.857614+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177266688 unmapped: 28934144 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:28.857771+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177266688 unmapped: 28934144 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 395 heartbeat osd_stat(store_statfs(0x4f6d96000/0x0/0x4ffc00000, data 0x1f56285/0x2168000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:29.857936+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177266688 unmapped: 28934144 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 395 ms_handle_reset con 0x55f105d2ac00 session 0x55f1076e7860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:30.858124+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177266688 unmapped: 28934144 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.802103043s of 10.907313347s, submitted: 38
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:31.858889+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177266688 unmapped: 28934144 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 395 heartbeat osd_stat(store_statfs(0x4f6d95000/0x0/0x4ffc00000, data 0x1f562e7/0x2169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 396 ms_handle_reset con 0x55f107c1b800 session 0x55f10567cb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2807331 data_alloc: 234881024 data_used: 13385728
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:32.859016+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177266688 unmapped: 28934144 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:33.859184+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177266688 unmapped: 28934144 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:34.859304+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177266688 unmapped: 28934144 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:35.859460+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177266688 unmapped: 28934144 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 396 handle_osd_map epochs [397,397], i have 396, src has [1,397]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 397 ms_handle_reset con 0x55f107c1d800 session 0x55f108163680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:36.859635+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 397 heartbeat osd_stat(store_statfs(0x4f6d8e000/0x0/0x4ffc00000, data 0x1f599e1/0x216f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177266688 unmapped: 28934144 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2811779 data_alloc: 234881024 data_used: 13385728
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107be1c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 397 handle_osd_map epochs [398,398], i have 398, src has [1,398]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:37.859814+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 398 ms_handle_reset con 0x55f107be1c00 session 0x55f106d7e5a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177307648 unmapped: 28893184 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 398 ms_handle_reset con 0x55f105b88800 session 0x55f106982b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:38.859968+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 398 ms_handle_reset con 0x55f105d2ac00 session 0x55f106983860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177324032 unmapped: 28876800 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:39.860189+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 398 ms_handle_reset con 0x55f107c1b800 session 0x55f106983e00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177324032 unmapped: 28876800 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:40.860352+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177324032 unmapped: 28876800 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 398 ms_handle_reset con 0x55f107c1d800 session 0x55f109955e00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105187c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 398 ms_handle_reset con 0x55f105187c00 session 0x55f109954780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 398 heartbeat osd_stat(store_statfs(0x4f697a000/0x0/0x4ffc00000, data 0x1f5b5c0/0x2173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:41.860551+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177324032 unmapped: 28876800 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2817330 data_alloc: 234881024 data_used: 13393920
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.911602974s of 11.068705559s, submitted: 33
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:42.860698+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 398 ms_handle_reset con 0x55f105d2ac00 session 0x55f108163e00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177332224 unmapped: 28868608 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:43.860865+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 399 ms_handle_reset con 0x55f107c1b800 session 0x55f105b9f860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177332224 unmapped: 28868608 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 399 ms_handle_reset con 0x55f105b88800 session 0x55f109954f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:44.860999+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177332224 unmapped: 28868608 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 399 ms_handle_reset con 0x55f107c1d800 session 0x55f10767f860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109422400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:45.861251+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 400 ms_handle_reset con 0x55f109422400 session 0x55f10767fa40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177332224 unmapped: 28868608 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:46.861386+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 400 heartbeat osd_stat(store_statfs(0x4f6974000/0x0/0x4ffc00000, data 0x1f5ed00/0x2178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177332224 unmapped: 28868608 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2824278 data_alloc: 234881024 data_used: 13398016
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 400 ms_handle_reset con 0x55f105b88800 session 0x55f1076ced20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:47.861568+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 400 ms_handle_reset con 0x55f105d2ac00 session 0x55f106d7f2c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 400 heartbeat osd_stat(store_statfs(0x4f6976000/0x0/0x4ffc00000, data 0x1f5ed00/0x2178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177348608 unmapped: 28852224 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:48.861771+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 400 ms_handle_reset con 0x55f107c1b800 session 0x55f106317860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177348608 unmapped: 28852224 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 400 heartbeat osd_stat(store_statfs(0x4f6976000/0x0/0x4ffc00000, data 0x1f5ed00/0x2178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:49.861983+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177348608 unmapped: 28852224 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:50.862231+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 401 ms_handle_reset con 0x55f107c1d800 session 0x55f106317c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177356800 unmapped: 28844032 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 401 heartbeat osd_stat(store_statfs(0x4f6975000/0x0/0x4ffc00000, data 0x1f5ed62/0x2179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:51.862473+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2b000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 401 ms_handle_reset con 0x55f105d2b000 session 0x55f105bab680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177364992 unmapped: 28835840 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2831684 data_alloc: 234881024 data_used: 13418496
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:52.862657+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.050302505s of 10.410983086s, submitted: 65
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177364992 unmapped: 28835840 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:53.862858+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 402 ms_handle_reset con 0x55f105b88800 session 0x55f105baaf00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177373184 unmapped: 28827648 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:54.863045+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177373184 unmapped: 28827648 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:55.863241+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 403 ms_handle_reset con 0x55f107c1b800 session 0x55f107708780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 403 ms_handle_reset con 0x55f105d2ac00 session 0x55f1056bd860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177373184 unmapped: 28827648 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:56.863453+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 403 ms_handle_reset con 0x55f107c1d800 session 0x55f1056bcb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf6800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 403 ms_handle_reset con 0x55f107bf6800 session 0x55f109b9ab40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177389568 unmapped: 28811264 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f6969000/0x0/0x4ffc00000, data 0x1f63fbb/0x2184000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2839756 data_alloc: 234881024 data_used: 13430784
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:57.863606+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177389568 unmapped: 28811264 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 403 ms_handle_reset con 0x55f105b88800 session 0x55f109b9a780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 403 ms_handle_reset con 0x55f105d2ac00 session 0x55f109b9a1e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:58.863764+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177405952 unmapped: 28794880 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1b800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 403 ms_handle_reset con 0x55f107c1d800 session 0x55f1079f3e00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10bb17000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:59.863922+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 404 ms_handle_reset con 0x55f10bb17000 session 0x55f1076b43c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177414144 unmapped: 28786688 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 404 ms_handle_reset con 0x55f107c1b800 session 0x55f1079721e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:00.864112+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 404 ms_handle_reset con 0x55f105b88800 session 0x55f10d6f6960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 404 ms_handle_reset con 0x55f107c1d800 session 0x55f105b8d0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10bb17000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177463296 unmapped: 28737536 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f6967000/0x0/0x4ffc00000, data 0x1f65b2a/0x2186000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:01.864344+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 405 ms_handle_reset con 0x55f10bb17000 session 0x55f105b9c780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177487872 unmapped: 28712960 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 405 ms_handle_reset con 0x55f105d2ac00 session 0x55f10d6f70e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2843849 data_alloc: 234881024 data_used: 13438976
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:02.864535+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177504256 unmapped: 28696576 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f106da1400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.778827667s of 10.166868210s, submitted: 147
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 405 ms_handle_reset con 0x55f106da1400 session 0x55f10d6f7860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:03.864757+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 177512448 unmapped: 28688384 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:04.864915+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 405 ms_handle_reset con 0x55f105b88800 session 0x55f10d6f7c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178626560 unmapped: 27574272 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:05.865053+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 407 ms_handle_reset con 0x55f105d2ac00 session 0x55f108163c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178634752 unmapped: 27566080 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 407 heartbeat osd_stat(store_statfs(0x4f79a5000/0x0/0x4ffc00000, data 0x1f676a9/0x2189000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:06.865209+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178634752 unmapped: 27566080 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10bb17000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2856694 data_alloc: 234881024 data_used: 13447168
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 407 ms_handle_reset con 0x55f10bb17000 session 0x55f1056bde00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 407 ms_handle_reset con 0x55f107c1d800 session 0x55f105b0b0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:07.865386+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178642944 unmapped: 27557888 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109abb400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 407 ms_handle_reset con 0x55f109abb400 session 0x55f10795eb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 407 ms_handle_reset con 0x55f105b88800 session 0x55f106d383c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:08.865564+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 407 heartbeat osd_stat(store_statfs(0x4f799d000/0x0/0x4ffc00000, data 0x1f6ad4d/0x2191000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178642944 unmapped: 27557888 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:09.865733+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 407 ms_handle_reset con 0x55f105d2ac00 session 0x55f1063165a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10bb17000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 407 ms_handle_reset con 0x55f10bb17000 session 0x55f1056bc780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a1cb000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178642944 unmapped: 27557888 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 408 ms_handle_reset con 0x55f10a1cb000 session 0x55f10d6f6d20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 408 ms_handle_reset con 0x55f107c1d800 session 0x55f10767e5a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:10.865941+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178651136 unmapped: 27549696 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 408 ms_handle_reset con 0x55f105b88800 session 0x55f10799cf00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:11.866132+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178651136 unmapped: 27549696 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2859646 data_alloc: 234881024 data_used: 13459456
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:12.866340+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 408 heartbeat osd_stat(store_statfs(0x4f799c000/0x0/0x4ffc00000, data 0x1f6c8ac/0x2192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.777158737s of 10.004230499s, submitted: 66
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178659328 unmapped: 27541504 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 409 ms_handle_reset con 0x55f105d2ac00 session 0x55f10698e000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:13.866536+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a1cb000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 409 ms_handle_reset con 0x55f10a1cb000 session 0x55f105b9c000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178659328 unmapped: 27541504 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:14.866693+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178659328 unmapped: 27541504 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:15.866871+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 409 heartbeat osd_stat(store_statfs(0x4f7999000/0x0/0x4ffc00000, data 0x1f6e445/0x2195000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 409 handle_osd_map epochs [410,410], i have 410, src has [1,410]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178659328 unmapped: 27541504 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10bb17000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 410 ms_handle_reset con 0x55f10bb17000 session 0x55f105b9d4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1c400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 410 ms_handle_reset con 0x55f107c1c400 session 0x55f10767ef00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:16.867024+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178683904 unmapped: 27516928 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865089 data_alloc: 234881024 data_used: 13484032
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 410 heartbeat osd_stat(store_statfs(0x4f7997000/0x0/0x4ffc00000, data 0x1f6fe46/0x2197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:17.867251+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178683904 unmapped: 27516928 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:18.867421+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178683904 unmapped: 27516928 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:19.867550+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178683904 unmapped: 27516928 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:20.867754+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178683904 unmapped: 27516928 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:21.867964+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178683904 unmapped: 27516928 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 410 heartbeat osd_stat(store_statfs(0x4f7997000/0x0/0x4ffc00000, data 0x1f6fe46/0x2197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865409 data_alloc: 234881024 data_used: 13492224
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:22.868094+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178683904 unmapped: 27516928 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:23.868292+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178683904 unmapped: 27516928 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:24.868535+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178683904 unmapped: 27516928 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:25.868725+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178683904 unmapped: 27516928 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.241363525s of 13.341303825s, submitted: 37
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 410 ms_handle_reset con 0x55f105b88800 session 0x55f109b9a5a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:26.868889+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a1cb000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 411 ms_handle_reset con 0x55f10a1cb000 session 0x55f1079470e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178692096 unmapped: 27508736 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2871499 data_alloc: 234881024 data_used: 13500416
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:27.869048+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 412 ms_handle_reset con 0x55f105d2ac00 session 0x55f105b8d4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 412 heartbeat osd_stat(store_statfs(0x4f7992000/0x0/0x4ffc00000, data 0x1f719d3/0x219b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178700288 unmapped: 27500544 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:28.869220+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10bb17000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178716672 unmapped: 27484160 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:29.869367+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 413 ms_handle_reset con 0x55f10bb17000 session 0x55f105b8d680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178716672 unmapped: 27484160 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:30.869519+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178716672 unmapped: 27484160 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:31.869866+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178724864 unmapped: 27475968 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 414 heartbeat osd_stat(store_statfs(0x4f798d000/0x0/0x4ffc00000, data 0x1f75111/0x21a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2879481 data_alloc: 234881024 data_used: 13500416
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:32.870023+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178724864 unmapped: 27475968 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:33.870159+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178724864 unmapped: 27475968 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:34.870348+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 414 heartbeat osd_stat(store_statfs(0x4f798a000/0x0/0x4ffc00000, data 0x1f76cfe/0x21a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178733056 unmapped: 27467776 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:35.870508+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178733056 unmapped: 27467776 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:36.870742+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178733056 unmapped: 27467776 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2882615 data_alloc: 234881024 data_used: 13504512
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 415 heartbeat osd_stat(store_statfs(0x4f67e7000/0x0/0x4ffc00000, data 0x1f78799/0x21a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:37.870905+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 415 heartbeat osd_stat(store_statfs(0x4f67e7000/0x0/0x4ffc00000, data 0x1f78799/0x21a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178733056 unmapped: 27467776 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:38.871048+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178733056 unmapped: 27467776 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:39.871216+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178733056 unmapped: 27467776 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:40.871402+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 415 heartbeat osd_stat(store_statfs(0x4f67e7000/0x0/0x4ffc00000, data 0x1f78799/0x21a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178733056 unmapped: 27467776 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:41.871566+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178733056 unmapped: 27467776 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2882615 data_alloc: 234881024 data_used: 13504512
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:42.871737+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178733056 unmapped: 27467776 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:43.871925+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178733056 unmapped: 27467776 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 415 heartbeat osd_stat(store_statfs(0x4f67e7000/0x0/0x4ffc00000, data 0x1f78799/0x21a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:44.872094+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178741248 unmapped: 27459584 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:45.872250+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178741248 unmapped: 27459584 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:46.872430+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178741248 unmapped: 27459584 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2882615 data_alloc: 234881024 data_used: 13504512
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:47.872587+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 415 heartbeat osd_stat(store_statfs(0x4f67e7000/0x0/0x4ffc00000, data 0x1f78799/0x21a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178749440 unmapped: 27451392 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:48.872771+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178749440 unmapped: 27451392 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:49.872946+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178749440 unmapped: 27451392 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:50.873194+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178749440 unmapped: 27451392 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:51.873426+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 415 heartbeat osd_stat(store_statfs(0x4f67e7000/0x0/0x4ffc00000, data 0x1f78799/0x21a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178749440 unmapped: 27451392 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2882615 data_alloc: 234881024 data_used: 13504512
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:52.873561+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.694824219s of 26.818597794s, submitted: 50
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178757632 unmapped: 27443200 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:53.873727+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf4800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 416 ms_handle_reset con 0x55f107bf4800 session 0x55f1094525a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178757632 unmapped: 27443200 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:54.873887+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178757632 unmapped: 27443200 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:55.874025+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 417 ms_handle_reset con 0x55f105b88800 session 0x55f109452780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178782208 unmapped: 27418624 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:56.874227+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178782208 unmapped: 27418624 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2888563 data_alloc: 234881024 data_used: 13504512
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:57.874397+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 417 heartbeat osd_stat(store_statfs(0x4f67e1000/0x0/0x4ffc00000, data 0x1f7bee7/0x21ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 418 ms_handle_reset con 0x55f105d2ac00 session 0x55f109452b40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178790400 unmapped: 27410432 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:58.874578+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf4800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 418 ms_handle_reset con 0x55f107bf4800 session 0x55f109452d20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a1cb000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178790400 unmapped: 27410432 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 418 heartbeat osd_stat(store_statfs(0x4f67dc000/0x0/0x4ffc00000, data 0x1f7dae2/0x21b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:59.874798+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 418 ms_handle_reset con 0x55f10a1cb000 session 0x55f1094530e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178790400 unmapped: 27410432 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:00.874956+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10bb17000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 419 ms_handle_reset con 0x55f10bb17000 session 0x55f1094532c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178806784 unmapped: 27394048 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:01.875219+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 419 heartbeat osd_stat(store_statfs(0x4f67db000/0x0/0x4ffc00000, data 0x1f7f661/0x21b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178806784 unmapped: 27394048 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2894671 data_alloc: 234881024 data_used: 13508608
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:02.875382+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178806784 unmapped: 27394048 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:03.875591+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178806784 unmapped: 27394048 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105b88800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.079965591s of 11.157909393s, submitted: 26
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 420 ms_handle_reset con 0x55f105b88800 session 0x55f109453680
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:04.875737+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178814976 unmapped: 27385856 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:05.875998+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178839552 unmapped: 27361280 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 421 heartbeat osd_stat(store_statfs(0x4f67d4000/0x0/0x4ffc00000, data 0x1f82e33/0x21b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:06.876200+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 421 ms_handle_reset con 0x55f105d2ac00 session 0x55f109453860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178839552 unmapped: 27361280 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901291 data_alloc: 234881024 data_used: 13508608
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:07.876424+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 421 heartbeat osd_stat(store_statfs(0x4f67d4000/0x0/0x4ffc00000, data 0x1f82e33/0x21b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178839552 unmapped: 27361280 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:08.876686+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf4800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 422 ms_handle_reset con 0x55f107bf4800 session 0x55f109453c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f67d1000/0x0/0x4ffc00000, data 0x1f84a2e/0x21bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178864128 unmapped: 27336704 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a1cb000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:09.876856+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 422 ms_handle_reset con 0x55f10a1cb000 session 0x55f109453e00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f109eaf400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 422 ms_handle_reset con 0x55f109eaf400 session 0x55f109452d20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178864128 unmapped: 27336704 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:10.877033+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 422 handle_osd_map epochs [423,424], i have 422, src has [1,424]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178896896 unmapped: 27303936 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:11.877293+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178896896 unmapped: 27303936 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910229 data_alloc: 234881024 data_used: 13512704
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:12.877443+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178896896 unmapped: 27303936 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 424 heartbeat osd_stat(store_statfs(0x4f67cc000/0x0/0x4ffc00000, data 0x1f8802c/0x21c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:13.877636+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 424 heartbeat osd_stat(store_statfs(0x4f67cc000/0x0/0x4ffc00000, data 0x1f8802c/0x21c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 424 handle_osd_map epochs [425,425], i have 425, src has [1,425]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178905088 unmapped: 27295744 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:14.877785+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.376149178s of 10.470042229s, submitted: 46
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 426 ms_handle_reset con 0x55f105d2ac00 session 0x55f109452780
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178921472 unmapped: 27279360 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:15.877997+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf4800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 427 ms_handle_reset con 0x55f107bf4800 session 0x55f1094525a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178937856 unmapped: 27262976 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:16.878236+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178946048 unmapped: 27254784 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a1cb000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 427 ms_handle_reset con 0x55f10a1cb000 session 0x55f105b8d4a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2919967 data_alloc: 234881024 data_used: 13512704
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:17.878430+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 427 heartbeat osd_stat(store_statfs(0x4f67c1000/0x0/0x4ffc00000, data 0x1f8d1f9/0x21ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178946048 unmapped: 27254784 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:18.878624+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178946048 unmapped: 27254784 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:19.879722+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf3400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 428 ms_handle_reset con 0x55f107bf3400 session 0x55f1079470e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178995200 unmapped: 27205632 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:20.880078+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107f00c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 429 ms_handle_reset con 0x55f107f00c00 session 0x55f10767ef00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178995200 unmapped: 27205632 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:21.882671+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178995200 unmapped: 27205632 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925243 data_alloc: 234881024 data_used: 13512704
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:22.883668+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 429 ms_handle_reset con 0x55f105d2ac00 session 0x55f105b9c000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 178995200 unmapped: 27205632 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:23.884942+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 429 heartbeat osd_stat(store_statfs(0x4f67bc000/0x0/0x4ffc00000, data 0x1f90973/0x21d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179003392 unmapped: 27197440 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:24.886094+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf3400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.259327888s of 10.350454330s, submitted: 32
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 430 ms_handle_reset con 0x55f107bf3400 session 0x55f10698e000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179003392 unmapped: 27197440 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:25.886504+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179003392 unmapped: 27197440 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:26.886658+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180084736 unmapped: 26116096 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2930679 data_alloc: 234881024 data_used: 13516800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:27.887520+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf4800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 431 ms_handle_reset con 0x55f107bf4800 session 0x55f10767e5a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180084736 unmapped: 26116096 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:28.887850+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107f00c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180084736 unmapped: 26116096 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:29.888911+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 431 heartbeat osd_stat(store_statfs(0x4f67b6000/0x0/0x4ffc00000, data 0x1f94129/0x21d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 431 handle_osd_map epochs [431,432], i have 431, src has [1,432]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 432 ms_handle_reset con 0x55f107f00c00 session 0x55f10d6f6d20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a1cb000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 432 ms_handle_reset con 0x55f10a1cb000 session 0x55f1063165a0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180092928 unmapped: 26107904 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:30.889356+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 432 handle_osd_map epochs [432,433], i have 432, src has [1,433]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 433 ms_handle_reset con 0x55f105d2ac00 session 0x55f10795eb40
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf3400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180101120 unmapped: 26099712 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 433 ms_handle_reset con 0x55f107bf3400 session 0x55f105b0b0e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:31.889583+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf4800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 433 ms_handle_reset con 0x55f107bf4800 session 0x55f1056bde00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180101120 unmapped: 26099712 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:32.890116+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2938231 data_alloc: 234881024 data_used: 13516800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107f00c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 433 ms_handle_reset con 0x55f107f00c00 session 0x55f10d6f7c20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180101120 unmapped: 26099712 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:33.890353+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a1cb400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180101120 unmapped: 26099712 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:34.890704+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 433 handle_osd_map epochs [434,434], i have 433, src has [1,434]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 434 ms_handle_reset con 0x55f10a1cb400 session 0x55f10d6f70e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180101120 unmapped: 26099712 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:35.891069+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.356181145s of 10.506200790s, submitted: 88
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 434 heartbeat osd_stat(store_statfs(0x4f67ac000/0x0/0x4ffc00000, data 0x1f994da/0x21e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [1])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 434 ms_handle_reset con 0x55f105d2ac00 session 0x55f10bf34000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf3400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 435 ms_handle_reset con 0x55f107bf3400 session 0x55f10bf34960
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180133888 unmapped: 26066944 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:36.891402+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 435 heartbeat osd_stat(store_statfs(0x4f67ac000/0x0/0x4ffc00000, data 0x1f994da/0x21e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf4800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 435 ms_handle_reset con 0x55f107bf4800 session 0x55f10bf352c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107f00c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 435 ms_handle_reset con 0x55f107f00c00 session 0x55f10bf35860
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179929088 unmapped: 26271744 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:37.891782+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2947245 data_alloc: 234881024 data_used: 13524992
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:38.892058+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179929088 unmapped: 26271744 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:39.892302+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179929088 unmapped: 26271744 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:40.892557+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179929088 unmapped: 26271744 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:41.892782+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179929088 unmapped: 26271744 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:42.892986+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179929088 unmapped: 26271744 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2947245 data_alloc: 234881024 data_used: 13524992
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 435 heartbeat osd_stat(store_statfs(0x4f67aa000/0x0/0x4ffc00000, data 0x1f9b039/0x21e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:43.893230+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179929088 unmapped: 26271744 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:44.893395+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179929088 unmapped: 26271744 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 435 handle_osd_map epochs [436,436], i have 435, src has [1,436]
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:45.893601+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179929088 unmapped: 26271744 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:46.893763+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179929088 unmapped: 26271744 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f67a8000/0x0/0x4ffc00000, data 0x1f9ca9c/0x21e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:47.893895+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179929088 unmapped: 26271744 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2949547 data_alloc: 234881024 data_used: 13524992
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f67a8000/0x0/0x4ffc00000, data 0x1f9ca9c/0x21e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:48.894184+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179929088 unmapped: 26271744 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:49.894374+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179929088 unmapped: 26271744 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:50.894499+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179929088 unmapped: 26271744 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f67a8000/0x0/0x4ffc00000, data 0x1f9ca9c/0x21e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:51.894703+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179929088 unmapped: 26271744 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:52.894896+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179929088 unmapped: 26271744 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2949547 data_alloc: 234881024 data_used: 13524992
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:53.895142+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179929088 unmapped: 26271744 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:54.895362+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179929088 unmapped: 26271744 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f67a8000/0x0/0x4ffc00000, data 0x1f9ca9c/0x21e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f67a8000/0x0/0x4ffc00000, data 0x1f9ca9c/0x21e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:55.895528+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179929088 unmapped: 26271744 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:56.895691+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179929088 unmapped: 26271744 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets getting new tickets!
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:57.896008+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _finish_auth 0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:57.897047+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 26255360 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2949547 data_alloc: 234881024 data_used: 13524992
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:58.896235+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 26255360 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f67a8000/0x0/0x4ffc00000, data 0x1f9ca9c/0x21e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:59.896399+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 26255360 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:00.896623+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 26255360 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:01.896853+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 26255360 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:02.897062+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 26255360 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2949547 data_alloc: 234881024 data_used: 13524992
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:03.897216+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 26255360 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:04.897379+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 26255360 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f67a8000/0x0/0x4ffc00000, data 0x1f9ca9c/0x21e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f67a8000/0x0/0x4ffc00000, data 0x1f9ca9c/0x21e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:05.897486+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 26255360 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f67a8000/0x0/0x4ffc00000, data 0x1f9ca9c/0x21e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:06.897681+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 26255360 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:07.897833+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 26255360 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2949547 data_alloc: 234881024 data_used: 13524992
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:08.898021+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 26255360 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:09.898224+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 26255360 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:10.898424+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 26255360 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f67a8000/0x0/0x4ffc00000, data 0x1f9ca9c/0x21e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:11.898675+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 26255360 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:12.898866+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 26255360 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2949547 data_alloc: 234881024 data_used: 13524992
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:13.899058+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 26255360 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:14.899232+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f67a8000/0x0/0x4ffc00000, data 0x1f9ca9c/0x21e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 26255360 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:15.899400+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 26255360 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:16.899588+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 26255360 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:17.899759+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 26255360 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2949547 data_alloc: 234881024 data_used: 13524992
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:18.899923+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179961856 unmapped: 26238976 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10763d400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 43.459426880s of 43.662136078s, submitted: 60
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 436 ms_handle_reset con 0x55f10763d400 session 0x55f1079f2f00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:19.900093+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 436 ms_handle_reset con 0x55f105d2ac00 session 0x55f1056bcf00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf3400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 436 ms_handle_reset con 0x55f107bf3400 session 0x55f106d7e000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf4800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179978240 unmapped: 26222592 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 436 ms_handle_reset con 0x55f107bf4800 session 0x55f105b8dc20
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107f00c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 436 ms_handle_reset con 0x55f107f00c00 session 0x55f105b0a3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f67a8000/0x0/0x4ffc00000, data 0x1f9ca9c/0x21e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:20.900267+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179978240 unmapped: 26222592 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:21.900519+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179978240 unmapped: 26222592 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:22.900683+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179978240 unmapped: 26222592 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2982994 data_alloc: 234881024 data_used: 13524992
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10a245400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 436 ms_handle_reset con 0x55f10a245400 session 0x55f1076dc1e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:23.900856+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 179978240 unmapped: 26222592 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 436 ms_handle_reset con 0x55f105d2ac00 session 0x55f10c07a000
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf3400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 436 ms_handle_reset con 0x55f107bf3400 session 0x55f10c07a1e0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf4800
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:24.900980+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 436 ms_handle_reset con 0x55f107bf4800 session 0x55f10c07a3c0
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180002816 unmapped: 26198016 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107f00c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107b52c00
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:25.901101+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180002816 unmapped: 26198016 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f635e000/0x0/0x4ffc00000, data 0x23e6aac/0x2630000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:26.901319+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180011008 unmapped: 26189824 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:27.901560+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180051968 unmapped: 26148864 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3017252 data_alloc: 234881024 data_used: 18022400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:28.901779+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180051968 unmapped: 26148864 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:29.901912+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180051968 unmapped: 26148864 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f635e000/0x0/0x4ffc00000, data 0x23e6aac/0x2630000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:30.902034+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180051968 unmapped: 26148864 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:31.902217+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180051968 unmapped: 26148864 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:32.902337+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180051968 unmapped: 26148864 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:01 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3017252 data_alloc: 234881024 data_used: 18022400
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f635e000/0x0/0x4ffc00000, data 0x23e6aac/0x2630000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:33.902464+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180051968 unmapped: 26148864 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:34.902602+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180051968 unmapped: 26148864 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:01 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:35.902754+0000)
Oct 11 04:30:01 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.228633881s of 16.302944183s, submitted: 16
Oct 11 04:30:01 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180133888 unmapped: 26066944 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:01 compute-0 ceph-osd[89722]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f5ebf000/0x0/0x4ffc00000, data 0x2885aac/0x2acf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:36.902918+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183336960 unmapped: 22863872 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:37.903200+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183427072 unmapped: 22773760 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3061484 data_alloc: 234881024 data_used: 18022400
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:38.903366+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183443456 unmapped: 22757376 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:39.903590+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183443456 unmapped: 22757376 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:40.903795+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183443456 unmapped: 22757376 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:41.903993+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183443456 unmapped: 22757376 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f5e95000/0x0/0x4ffc00000, data 0x28afaac/0x2af9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:42.904204+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183246848 unmapped: 22953984 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3059764 data_alloc: 234881024 data_used: 18022400
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:43.904388+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183246848 unmapped: 22953984 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:44.904548+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183246848 unmapped: 22953984 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:45.904717+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183246848 unmapped: 22953984 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:46.904943+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183246848 unmapped: 22953984 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f5e92000/0x0/0x4ffc00000, data 0x28b2aac/0x2afc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:47.905127+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183246848 unmapped: 22953984 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3059764 data_alloc: 234881024 data_used: 18022400
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:48.905370+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183246848 unmapped: 22953984 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:49.905561+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183246848 unmapped: 22953984 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f5e92000/0x0/0x4ffc00000, data 0x28b2aac/0x2afc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:50.905731+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183246848 unmapped: 22953984 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:51.905882+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183246848 unmapped: 22953984 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107e5b800
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.051126480s of 16.244737625s, submitted: 42
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:52.906045+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183517184 unmapped: 22683648 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3067096 data_alloc: 234881024 data_used: 18022400
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f5e65000/0x0/0x4ffc00000, data 0x28dbaac/0x2b25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:53.906194+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183533568 unmapped: 22667264 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:54.906366+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183549952 unmapped: 22650880 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:55.906495+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183549952 unmapped: 22650880 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:56.906668+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 437 handle_osd_map epochs [437,438], i have 437, src has [1,438]
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183418880 unmapped: 22781952 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:57.906849+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183427072 unmapped: 22773760 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075628 data_alloc: 234881024 data_used: 18034688
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:58.906992+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 438 heartbeat osd_stat(store_statfs(0x4f5e57000/0x0/0x4ffc00000, data 0x28ed1a6/0x2b36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 438 handle_osd_map epochs [438,439], i have 438, src has [1,439]
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183451648 unmapped: 22749184 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:59.907125+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183451648 unmapped: 22749184 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 439 ms_handle_reset con 0x55f107e5b800 session 0x55f10795e1e0
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:00.907309+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183451648 unmapped: 22749184 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1d000
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 439 ms_handle_reset con 0x55f107c1d000 session 0x55f105b9b860
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:01.907503+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183484416 unmapped: 22716416 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 439 heartbeat osd_stat(store_statfs(0x4f5e47000/0x0/0x4ffc00000, data 0x28fad33/0x2b43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:02.907671+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.576444626s of 10.702510834s, submitted: 28
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 22675456 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3081994 data_alloc: 234881024 data_used: 18034688
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:03.907842+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 22675456 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:04.908077+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 22675456 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 439 heartbeat osd_stat(store_statfs(0x4f5e4a000/0x0/0x4ffc00000, data 0x28fbd33/0x2b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:05.908197+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183951360 unmapped: 22249472 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:06.908353+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183959552 unmapped: 22241280 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 439 ms_handle_reset con 0x55f105d2ac00 session 0x55f10646e3c0
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:07.908513+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183967744 unmapped: 22233088 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3093064 data_alloc: 234881024 data_used: 18186240
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:08.908697+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf3400
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 439 ms_handle_reset con 0x55f107bf3400 session 0x55f107720780
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183967744 unmapped: 22233088 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 439 heartbeat osd_stat(store_statfs(0x4f5dbe000/0x0/0x4ffc00000, data 0x2983d95/0x2bcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:09.908877+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183967744 unmapped: 22233088 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:10.909074+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183967744 unmapped: 22233088 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:11.909361+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183967744 unmapped: 22233088 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:12.909493+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183967744 unmapped: 22233088 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3094846 data_alloc: 234881024 data_used: 18186240
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf4800
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.726171494s of 10.773788452s, submitted: 10
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:13.909667+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183296000 unmapped: 22904832 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:14.909848+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 439 ms_handle_reset con 0x55f107bf4800 session 0x55f1079f30e0
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183296000 unmapped: 22904832 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 439 heartbeat osd_stat(store_statfs(0x4f5dc1000/0x0/0x4ffc00000, data 0x2983d95/0x2bcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:15.910038+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183296000 unmapped: 22904832 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:16.910227+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183296000 unmapped: 22904832 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:17.910388+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 439 heartbeat osd_stat(store_statfs(0x4f5dc0000/0x0/0x4ffc00000, data 0x2984d95/0x2bce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183296000 unmapped: 22904832 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3092513 data_alloc: 234881024 data_used: 18223104
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:18.910559+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 184344576 unmapped: 21856256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:19.910733+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 184344576 unmapped: 21856256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:20.910910+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107e5b800
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 184344576 unmapped: 21856256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 439 heartbeat osd_stat(store_statfs(0x4f5dbd000/0x0/0x4ffc00000, data 0x2985d95/0x2bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:21.911114+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 184344576 unmapped: 21856256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 439 ms_handle_reset con 0x55f107e5b800 session 0x55f109452f00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:22.911326+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 184344576 unmapped: 21856256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3092905 data_alloc: 234881024 data_used: 18259968
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:23.911509+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 184344576 unmapped: 21856256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:24.911697+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 184344576 unmapped: 21856256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10938d400
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.664058685s of 11.698184013s, submitted: 8
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 439 ms_handle_reset con 0x55f10938d400 session 0x55f10bf34d20
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:25.911841+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 184344576 unmapped: 21856256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10938d400
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 439 ms_handle_reset con 0x55f10938d400 session 0x55f1079d7e00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:26.911984+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 439 ms_handle_reset con 0x55f105d2ac00 session 0x55f1076dcd20
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 439 heartbeat osd_stat(store_statfs(0x4f5dbe000/0x0/0x4ffc00000, data 0x2986d33/0x2bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 184344576 unmapped: 21856256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf3400
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 439 ms_handle_reset con 0x55f107bf3400 session 0x55f105b9e960
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:27.912164+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 439 heartbeat osd_stat(store_statfs(0x4f5e47000/0x0/0x4ffc00000, data 0x28ffd23/0x2b47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 184344576 unmapped: 21856256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3084815 data_alloc: 234881024 data_used: 18255872
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:28.912366+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf4800
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 439 handle_osd_map epochs [440,440], i have 439, src has [1,440]
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 440 ms_handle_reset con 0x55f107bf4800 session 0x55f1076fab40
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 184352768 unmapped: 21848064 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:29.912504+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 440 handle_osd_map epochs [441,441], i have 440, src has [1,441]
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 184401920 unmapped: 21798912 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107e5b800
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 441 ms_handle_reset con 0x55f107e5b800 session 0x55f10799cf00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:30.912658+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 184410112 unmapped: 21790720 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:31.912855+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 442 heartbeat osd_stat(store_statfs(0x4f5e49000/0x0/0x4ffc00000, data 0x28f74c5/0x2b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 184410112 unmapped: 21790720 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:32.913008+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 442 ms_handle_reset con 0x55f105d2ac00 session 0x55f105b9c780
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 184410112 unmapped: 21790720 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3087485 data_alloc: 234881024 data_used: 18038784
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:33.913252+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 442 ms_handle_reset con 0x55f107f00c00 session 0x55f10c07a780
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 442 ms_handle_reset con 0x55f107b52c00 session 0x55f10799c960
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf3400
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 184418304 unmapped: 21782528 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 442 ms_handle_reset con 0x55f107bf3400 session 0x55f1076b41e0
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:34.913485+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 442 handle_osd_map epochs [442,443], i have 442, src has [1,443]
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:35.913691+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:36.913867+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:37.914040+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f6793000/0x0/0x4ffc00000, data 0x1fa8ae9/0x21fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988120 data_alloc: 234881024 data_used: 13549568
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:38.914307+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:39.914524+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f6793000/0x0/0x4ffc00000, data 0x1fa8ae9/0x21fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:40.914701+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:41.914909+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f6793000/0x0/0x4ffc00000, data 0x1fa8ae9/0x21fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:42.915070+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988120 data_alloc: 234881024 data_used: 13549568
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:43.915277+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:44.915473+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:45.915684+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:46.915858+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f6793000/0x0/0x4ffc00000, data 0x1fa8ae9/0x21fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:47.916004+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988120 data_alloc: 234881024 data_used: 13549568
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f6793000/0x0/0x4ffc00000, data 0x1fa8ae9/0x21fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:48.916233+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:49.916391+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:50.916604+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f6793000/0x0/0x4ffc00000, data 0x1fa8ae9/0x21fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:51.916813+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:52.916948+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988120 data_alloc: 234881024 data_used: 13549568
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:53.917228+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:54.917419+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:55.917582+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f6793000/0x0/0x4ffc00000, data 0x1fa8ae9/0x21fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:56.917817+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:57.918018+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988120 data_alloc: 234881024 data_used: 13549568
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:58.918275+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f6793000/0x0/0x4ffc00000, data 0x1fa8ae9/0x21fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:59.918504+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:00.918673+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f6793000/0x0/0x4ffc00000, data 0x1fa8ae9/0x21fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:01.918874+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:02.919083+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988120 data_alloc: 234881024 data_used: 13549568
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:03.919292+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:04.919440+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:05.919604+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f6793000/0x0/0x4ffc00000, data 0x1fa8ae9/0x21fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:06.919754+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f6793000/0x0/0x4ffc00000, data 0x1fa8ae9/0x21fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:07.919950+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988120 data_alloc: 234881024 data_used: 13549568
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:08.920127+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:09.920364+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 25985024 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:10.920548+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf4800
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 443 handle_osd_map epochs [444,444], i have 443, src has [1,444]
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 45.309783936s of 45.571777344s, submitted: 80
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 444 ms_handle_reset con 0x55f107bf4800 session 0x55f107721a40
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180224000 unmapped: 25976832 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:11.920748+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 444 ms_handle_reset con 0x55f105d2ac00 session 0x55f10562a960
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180224000 unmapped: 25976832 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:12.920908+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180224000 unmapped: 25976832 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2992876 data_alloc: 234881024 data_used: 13549568
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107b52c00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 444 ms_handle_reset con 0x55f107b52c00 session 0x55f10562a1e0
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f678f000/0x0/0x4ffc00000, data 0x1faa6c8/0x21fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf3400
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:13.921089+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 444 handle_osd_map epochs [445,445], i have 444, src has [1,445]
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 445 ms_handle_reset con 0x55f107bf3400 session 0x55f10562a5a0
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180224000 unmapped: 25976832 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:14.921277+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180224000 unmapped: 25976832 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107f00c00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:15.921471+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 445 heartbeat osd_stat(store_statfs(0x4f678d000/0x0/0x4ffc00000, data 0x1fac299/0x2201000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [1])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 445 ms_handle_reset con 0x55f107f00c00 session 0x55f105baa960
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181272576 unmapped: 24928256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:16.921607+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181272576 unmapped: 24928256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:17.921802+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 445 heartbeat osd_stat(store_statfs(0x4f678e000/0x0/0x4ffc00000, data 0x1fac237/0x2200000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181215232 unmapped: 24985600 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2994097 data_alloc: 234881024 data_used: 13549568
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:18.921993+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181215232 unmapped: 24985600 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:19.922265+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181215232 unmapped: 24985600 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:20.922538+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181215232 unmapped: 24985600 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:21.922826+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 445 heartbeat osd_stat(store_statfs(0x4f678e000/0x0/0x4ffc00000, data 0x1fac237/0x2200000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181215232 unmapped: 24985600 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:22.923022+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181215232 unmapped: 24985600 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2994097 data_alloc: 234881024 data_used: 13549568
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:23.923255+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 445 heartbeat osd_stat(store_statfs(0x4f678e000/0x0/0x4ffc00000, data 0x1fac237/0x2200000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181215232 unmapped: 24985600 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:24.923388+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 445 handle_osd_map epochs [445,446], i have 445, src has [1,446]
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.046374321s of 14.137036324s, submitted: 21
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f678a000/0x0/0x4ffc00000, data 0x1fadc9a/0x2203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:25.923579+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f678a000/0x0/0x4ffc00000, data 0x1fadc9a/0x2203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:26.923756+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f678a000/0x0/0x4ffc00000, data 0x1fadc9a/0x2203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:27.923895+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2998271 data_alloc: 234881024 data_used: 13557760
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:28.924059+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:29.924209+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:30.924368+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:31.924533+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:32.924725+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f678a000/0x0/0x4ffc00000, data 0x1fadc9a/0x2203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2998271 data_alloc: 234881024 data_used: 13557760
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:33.924924+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:34.925119+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:35.925273+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:36.925445+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:37.925619+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2998271 data_alloc: 234881024 data_used: 13557760
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:38.925808+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f678a000/0x0/0x4ffc00000, data 0x1fadc9a/0x2203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:39.925973+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:40.926241+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:41.926484+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f678a000/0x0/0x4ffc00000, data 0x1fadc9a/0x2203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:42.926610+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2998271 data_alloc: 234881024 data_used: 13557760
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:43.926768+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:44.926951+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:45.927118+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f678a000/0x0/0x4ffc00000, data 0x1fadc9a/0x2203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:46.927288+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f678a000/0x0/0x4ffc00000, data 0x1fadc9a/0x2203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:47.927477+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f678a000/0x0/0x4ffc00000, data 0x1fadc9a/0x2203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2998271 data_alloc: 234881024 data_used: 13557760
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:48.927700+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f678a000/0x0/0x4ffc00000, data 0x1fadc9a/0x2203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:49.927859+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f678a000/0x0/0x4ffc00000, data 0x1fadc9a/0x2203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:50.928007+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:51.928226+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f678a000/0x0/0x4ffc00000, data 0x1fadc9a/0x2203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:52.928374+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2998271 data_alloc: 234881024 data_used: 13557760
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:53.928601+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 26034176 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:54.928773+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10938d400
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 446 handle_osd_map epochs [447,447], i have 446, src has [1,447]
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 30.086074829s of 30.096666336s, submitted: 41
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 ms_handle_reset con 0x55f10938d400 session 0x55f10561e3c0
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180174848 unmapped: 26025984 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:55.929030+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f6786000/0x0/0x4ffc00000, data 0x1faf87a/0x2207000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180174848 unmapped: 26025984 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 ms_handle_reset con 0x55f105d2ac00 session 0x55f1077092c0
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:56.929204+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180174848 unmapped: 26025984 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:57.929364+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180174848 unmapped: 26025984 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3001986 data_alloc: 234881024 data_used: 13557760
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:58.929517+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180174848 unmapped: 26025984 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:59.929676+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180174848 unmapped: 26025984 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:00.929895+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180174848 unmapped: 26025984 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:01.930139+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f6787000/0x0/0x4ffc00000, data 0x1faf87a/0x2207000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180174848 unmapped: 26025984 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:02.930416+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180174848 unmapped: 26025984 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3001986 data_alloc: 234881024 data_used: 13557760
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:03.930574+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180174848 unmapped: 26025984 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:04.930744+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 180174848 unmapped: 26025984 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f6787000/0x0/0x4ffc00000, data 0x1faf87a/0x2207000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:05.930922+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107b52c00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.955959320s of 10.973167419s, submitted: 4
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 ms_handle_reset con 0x55f107b52c00 session 0x55f10c07ba40
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf3400
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 ms_handle_reset con 0x55f107bf3400 session 0x55f10d6f7680
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107f00c00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 ms_handle_reset con 0x55f107f00c00 session 0x55f1099545a0
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107c1e000
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 ms_handle_reset con 0x55f107c1e000 session 0x55f106ae4b40
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 ms_handle_reset con 0x55f105d2ac00 session 0x55f10bf35e00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181272576 unmapped: 24928256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:06.931053+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181272576 unmapped: 24928256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:07.931245+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181272576 unmapped: 24928256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3011013 data_alloc: 234881024 data_used: 13557760
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:08.931390+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181272576 unmapped: 24928256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f6704000/0x0/0x4ffc00000, data 0x20318dc/0x228a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:09.937783+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181272576 unmapped: 24928256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107b52c00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 ms_handle_reset con 0x55f107b52c00 session 0x55f10bf350e0
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:10.937909+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf3400
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107f00c00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181272576 unmapped: 24928256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:11.938053+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181272576 unmapped: 24928256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:12.938199+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181272576 unmapped: 24928256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3015013 data_alloc: 234881024 data_used: 14086144
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:13.938331+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f6704000/0x0/0x4ffc00000, data 0x20318dc/0x228a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181272576 unmapped: 24928256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:14.938483+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181272576 unmapped: 24928256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:15.938649+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181272576 unmapped: 24928256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:16.938768+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181272576 unmapped: 24928256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:17.938893+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181272576 unmapped: 24928256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3015013 data_alloc: 234881024 data_used: 14086144
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:18.939081+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f6704000/0x0/0x4ffc00000, data 0x20318dc/0x228a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181272576 unmapped: 24928256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:19.939241+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f6704000/0x0/0x4ffc00000, data 0x20318dc/0x228a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181272576 unmapped: 24928256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:20.939440+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181272576 unmapped: 24928256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:21.939618+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f6704000/0x0/0x4ffc00000, data 0x20318dc/0x228a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181272576 unmapped: 24928256 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.484191895s of 16.575172424s, submitted: 22
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:22.939767+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183943168 unmapped: 22257664 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3073423 data_alloc: 234881024 data_used: 14282752
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:23.939920+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183951360 unmapped: 22249472 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:24.940074+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183951360 unmapped: 22249472 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:25.940255+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183951360 unmapped: 22249472 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:26.940371+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183951360 unmapped: 22249472 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:27.940540+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f5f30000/0x0/0x4ffc00000, data 0x28058dc/0x2a5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183951360 unmapped: 22249472 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3076013 data_alloc: 234881024 data_used: 14282752
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:28.940710+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183902208 unmapped: 22298624 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:29.940893+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183902208 unmapped: 22298624 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f5f2d000/0x0/0x4ffc00000, data 0x28088dc/0x2a61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:30.941071+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183902208 unmapped: 22298624 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:31.941289+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183902208 unmapped: 22298624 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:32.941418+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183902208 unmapped: 22298624 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075301 data_alloc: 234881024 data_used: 14286848
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:33.941591+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183902208 unmapped: 22298624 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:34.941758+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183902208 unmapped: 22298624 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:35.941944+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f5f2d000/0x0/0x4ffc00000, data 0x28088dc/0x2a61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183902208 unmapped: 22298624 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:36.942098+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.695049286s of 14.940235138s, submitted: 69
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 ms_handle_reset con 0x55f107bf3400 session 0x55f10d6f6960
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 ms_handle_reset con 0x55f107f00c00 session 0x55f10567c3c0
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 183902208 unmapped: 22298624 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:37.942317+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10c914000
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 ms_handle_reset con 0x55f10c914000 session 0x55f10c07ad20
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181854208 unmapped: 24346624 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3007071 data_alloc: 234881024 data_used: 13557760
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:38.942620+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181854208 unmapped: 24346624 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:39.942783+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181854208 unmapped: 24346624 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:40.943414+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181854208 unmapped: 24346624 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 ms_handle_reset con 0x55f105d2ac00 session 0x55f1076e65a0
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:41.943996+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f6788000/0x0/0x4ffc00000, data 0x1faf817/0x2206000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181854208 unmapped: 24346624 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:42.944489+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181854208 unmapped: 24346624 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3006183 data_alloc: 234881024 data_used: 13557760
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:43.944684+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181854208 unmapped: 24346624 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:44.945204+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181854208 unmapped: 24346624 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:45.945599+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181854208 unmapped: 24346624 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:46.945952+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f6788000/0x0/0x4ffc00000, data 0x1faf817/0x2206000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181854208 unmapped: 24346624 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:47.946129+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f6788000/0x0/0x4ffc00000, data 0x1faf817/0x2206000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181854208 unmapped: 24346624 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3006183 data_alloc: 234881024 data_used: 13557760
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:48.946336+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181854208 unmapped: 24346624 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:49.946492+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 181854208 unmapped: 24346624 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:50.946695+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107b52c00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.950961113s of 14.040390968s, submitted: 21
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 ms_handle_reset con 0x55f107b52c00 session 0x55f105b9fe00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf3400
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182198272 unmapped: 24002560 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:51.947001+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 ms_handle_reset con 0x55f107bf3400 session 0x55f10561f2c0
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182198272 unmapped: 24002560 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:52.947364+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f6788000/0x0/0x4ffc00000, data 0x1faf817/0x2206000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182198272 unmapped: 24002560 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3003623 data_alloc: 234881024 data_used: 13557760
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:53.947537+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182198272 unmapped: 24002560 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:54.947873+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182198272 unmapped: 24002560 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:55.948236+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f6788000/0x0/0x4ffc00000, data 0x1faf817/0x2206000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182198272 unmapped: 24002560 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:56.948394+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:57.948663+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182198272 unmapped: 24002560 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f6788000/0x0/0x4ffc00000, data 0x1faf817/0x2206000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:58.948814+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182198272 unmapped: 24002560 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3003623 data_alloc: 234881024 data_used: 13557760
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f6788000/0x0/0x4ffc00000, data 0x1faf817/0x2206000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:59.949026+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182198272 unmapped: 24002560 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:00.949246+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182198272 unmapped: 24002560 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:01.949543+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182198272 unmapped: 24002560 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f6788000/0x0/0x4ffc00000, data 0x1faf817/0x2206000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:02.949717+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182206464 unmapped: 23994368 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f6788000/0x0/0x4ffc00000, data 0x1faf817/0x2206000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:03.949868+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182206464 unmapped: 23994368 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3003623 data_alloc: 234881024 data_used: 13557760
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:04.950016+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182206464 unmapped: 23994368 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:05.950224+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182206464 unmapped: 23994368 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f6788000/0x0/0x4ffc00000, data 0x1faf817/0x2206000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:06.950414+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182206464 unmapped: 23994368 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f6788000/0x0/0x4ffc00000, data 0x1faf817/0x2206000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:07.950593+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182206464 unmapped: 23994368 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:08.950735+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182206464 unmapped: 23994368 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3003623 data_alloc: 234881024 data_used: 13557760
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:09.950913+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182206464 unmapped: 23994368 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:10.951116+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f6788000/0x0/0x4ffc00000, data 0x1faf817/0x2206000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182206464 unmapped: 23994368 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:11.951356+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182206464 unmapped: 23994368 heap: 206200832 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107f00c00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.742687225s of 20.757467270s, submitted: 5
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:12.951535+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 ms_handle_reset con 0x55f107f00c00 session 0x55f106316b40
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182231040 unmapped: 40763392 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f6788000/0x0/0x4ffc00000, data 0x1faf817/0x2206000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:13.951721+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3200503 data_alloc: 234881024 data_used: 13557760
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182231040 unmapped: 40763392 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:14.951885+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182231040 unmapped: 40763392 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f4b88000/0x0/0x4ffc00000, data 0x3baf817/0x3e06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:15.952803+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182231040 unmapped: 40763392 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f4b88000/0x0/0x4ffc00000, data 0x3baf817/0x3e06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:16.952956+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182231040 unmapped: 40763392 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f4b88000/0x0/0x4ffc00000, data 0x3baf817/0x3e06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:17.953188+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182231040 unmapped: 40763392 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:18.953346+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3200503 data_alloc: 234881024 data_used: 13557760
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182231040 unmapped: 40763392 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:19.953491+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f4b88000/0x0/0x4ffc00000, data 0x3baf817/0x3e06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182231040 unmapped: 40763392 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:20.953649+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182231040 unmapped: 40763392 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:21.953804+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182231040 unmapped: 40763392 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f1094e0800
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 ms_handle_reset con 0x55f1094e0800 session 0x55f1069832c0
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:22.954004+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182231040 unmapped: 40763392 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 ms_handle_reset con 0x55f105d2ac00 session 0x55f106981680
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:23.954144+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3200503 data_alloc: 234881024 data_used: 13557760
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182231040 unmapped: 40763392 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f4b88000/0x0/0x4ffc00000, data 0x3baf817/0x3e06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107b52c00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 ms_handle_reset con 0x55f107b52c00 session 0x55f10c07a960
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf3400
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.987442970s of 12.373683929s, submitted: 7
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 ms_handle_reset con 0x55f107bf3400 session 0x55f10c07be00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:24.954307+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182534144 unmapped: 40460288 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107f00c00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:25.954438+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182534144 unmapped: 40460288 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:26.954599+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182534144 unmapped: 40460288 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:27.954835+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 182534144 unmapped: 40460288 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bd6400
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:28.954976+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3252583 data_alloc: 234881024 data_used: 20209664
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 185434112 unmapped: 37560320 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:29.955116+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f4b63000/0x0/0x4ffc00000, data 0x3bd3826/0x3e2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 188129280 unmapped: 34865152 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f4b63000/0x0/0x4ffc00000, data 0x3bd3826/0x3e2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:30.955261+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 188129280 unmapped: 34865152 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f4b63000/0x0/0x4ffc00000, data 0x3bd3826/0x3e2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:31.955518+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 188129280 unmapped: 34865152 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f4b63000/0x0/0x4ffc00000, data 0x3bd3826/0x3e2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:32.955719+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 188129280 unmapped: 34865152 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f4b63000/0x0/0x4ffc00000, data 0x3bd3826/0x3e2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:33.955893+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3301703 data_alloc: 234881024 data_used: 27193344
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 188129280 unmapped: 34865152 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:34.956065+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 188129280 unmapped: 34865152 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:35.956239+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 188129280 unmapped: 34865152 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:36.956393+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 188129280 unmapped: 34865152 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:37.956597+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 188129280 unmapped: 34865152 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.556703568s of 13.566667557s, submitted: 2
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:38.956703+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f45e3000/0x0/0x4ffc00000, data 0x3bd3826/0x3e2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3330855 data_alloc: 234881024 data_used: 27275264
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 200802304 unmapped: 22192128 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:39.956844+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193798144 unmapped: 29196288 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:40.957065+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193167360 unmapped: 29827072 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:41.957294+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193167360 unmapped: 29827072 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:42.957489+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193167360 unmapped: 29827072 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:43.957695+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3450847 data_alloc: 234881024 data_used: 27594752
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193167360 unmapped: 29827072 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f37d9000/0x0/0x4ffc00000, data 0x4f5d826/0x51b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:44.957831+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193167360 unmapped: 29827072 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:45.958093+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193167360 unmapped: 29827072 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:46.958253+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193167360 unmapped: 29827072 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 ms_handle_reset con 0x55f107f00c00 session 0x55f109b9a000
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 ms_handle_reset con 0x55f107bd6400 session 0x55f106883c20
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:47.958417+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 ms_handle_reset con 0x55f105d2ac00 session 0x55f105b9d4a0
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193167360 unmapped: 29827072 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:48.958638+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3441871 data_alloc: 234881024 data_used: 27484160
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193167360 unmapped: 29827072 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f37fe000/0x0/0x4ffc00000, data 0x4f39817/0x5190000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:49.958829+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193167360 unmapped: 29827072 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f37fe000/0x0/0x4ffc00000, data 0x4f39817/0x5190000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:50.958966+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193167360 unmapped: 29827072 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:51.959202+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193167360 unmapped: 29827072 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:52.959341+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193167360 unmapped: 29827072 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:53.959533+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3441871 data_alloc: 234881024 data_used: 27484160
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193167360 unmapped: 29827072 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:54.959682+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193167360 unmapped: 29827072 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107b52c00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _renew_subs
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 447 handle_osd_map epochs [448,448], i have 447, src has [1,448]
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.550647736s of 16.881116867s, submitted: 69
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 ms_handle_reset con 0x55f107b52c00 session 0x55f10c07b860
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:55.959874+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf3400
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 ms_handle_reset con 0x55f107bf3400 session 0x55f105babc20
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37f9000/0x0/0x4ffc00000, data 0x4f3b3a4/0x5194000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193167360 unmapped: 29827072 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:56.960062+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 24K writes, 95K keys, 24K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 24K writes, 8676 syncs, 2.77 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3007 writes, 9551 keys, 3007 commit groups, 1.0 writes per commit group, ingest: 9.74 MB, 0.02 MB/s
                                           Interval WAL: 3007 writes, 1309 syncs, 2.30 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37f9000/0x0/0x4ffc00000, data 0x4f3b3a4/0x5194000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193167360 unmapped: 29827072 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:57.960252+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193183744 unmapped: 29810688 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:58.960445+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3447873 data_alloc: 234881024 data_used: 27492352
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193183744 unmapped: 29810688 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:59.960630+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193183744 unmapped: 29810688 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:00.960773+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193183744 unmapped: 29810688 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37f9000/0x0/0x4ffc00000, data 0x4f3b3a4/0x5194000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:01.960928+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193183744 unmapped: 29810688 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:02.961115+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193183744 unmapped: 29810688 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:03.961514+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3447873 data_alloc: 234881024 data_used: 27492352
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193183744 unmapped: 29810688 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:04.961772+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193183744 unmapped: 29810688 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37f9000/0x0/0x4ffc00000, data 0x4f3b3a4/0x5194000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:05.961946+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193183744 unmapped: 29810688 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:06.962088+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193183744 unmapped: 29810688 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107f00c00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 ms_handle_reset con 0x55f107f00c00 session 0x55f108162000
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:07.962361+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193183744 unmapped: 29810688 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107959400
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f10c0ef400
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:08.962512+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37f9000/0x0/0x4ffc00000, data 0x4f3b3a4/0x5194000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3447873 data_alloc: 234881024 data_used: 27492352
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193183744 unmapped: 29810688 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:09.962708+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193183744 unmapped: 29810688 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:10.962941+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193183744 unmapped: 29810688 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37f9000/0x0/0x4ffc00000, data 0x4f3b3a4/0x5194000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:11.963209+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193183744 unmapped: 29810688 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:12.963410+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193191936 unmapped: 29802496 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:13.963618+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3448033 data_alloc: 234881024 data_used: 27516928
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193191936 unmapped: 29802496 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:14.963813+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193191936 unmapped: 29802496 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:15.964024+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193191936 unmapped: 29802496 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37f9000/0x0/0x4ffc00000, data 0x4f3b3a4/0x5194000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:16.964215+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193191936 unmapped: 29802496 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:17.964386+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193208320 unmapped: 29786112 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37f9000/0x0/0x4ffc00000, data 0x4f3b3a4/0x5194000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:18.964582+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3448033 data_alloc: 234881024 data_used: 27516928
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193208320 unmapped: 29786112 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:19.964772+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193208320 unmapped: 29786112 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:20.964974+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193208320 unmapped: 29786112 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:21.965185+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193208320 unmapped: 29786112 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:22.965368+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193265664 unmapped: 29728768 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:23.965606+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3453473 data_alloc: 234881024 data_used: 28049408
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193265664 unmapped: 29728768 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37f9000/0x0/0x4ffc00000, data 0x4f3b3a4/0x5194000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:24.965777+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193265664 unmapped: 29728768 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37f9000/0x0/0x4ffc00000, data 0x4f3b3a4/0x5194000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:25.965962+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37f9000/0x0/0x4ffc00000, data 0x4f3b3a4/0x5194000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193265664 unmapped: 29728768 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:26.966141+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 197468160 unmapped: 25526272 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 32.095371246s of 32.103050232s, submitted: 2
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37f9000/0x0/0x4ffc00000, data 0x4f3b3a4/0x5194000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:27.966343+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f33fa000/0x0/0x4ffc00000, data 0x533b3a4/0x5594000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193740800 unmapped: 29253632 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:28.966520+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3481345 data_alloc: 234881024 data_used: 28049408
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193740800 unmapped: 29253632 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:29.966673+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f33fa000/0x0/0x4ffc00000, data 0x533b3a4/0x5594000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193740800 unmapped: 29253632 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:30.967456+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193740800 unmapped: 29253632 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:31.967635+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193740800 unmapped: 29253632 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:32.967806+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193740800 unmapped: 29253632 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:33.967952+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3481345 data_alloc: 234881024 data_used: 28049408
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193740800 unmapped: 29253632 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:34.968118+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193740800 unmapped: 29253632 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:35.968278+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f33fa000/0x0/0x4ffc00000, data 0x533b3a4/0x5594000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193740800 unmapped: 29253632 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:36.968460+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193757184 unmapped: 29237248 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:37.968684+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193773568 unmapped: 29220864 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f2ef3000/0x0/0x4ffc00000, data 0x58423a4/0x5a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:38.968875+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3517249 data_alloc: 234881024 data_used: 28049408
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193773568 unmapped: 29220864 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:39.969056+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193773568 unmapped: 29220864 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:40.969224+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193773568 unmapped: 29220864 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:41.969425+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193773568 unmapped: 29220864 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:42.969612+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193773568 unmapped: 29220864 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f2ef3000/0x0/0x4ffc00000, data 0x58423a4/0x5a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:43.969804+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f2ef3000/0x0/0x4ffc00000, data 0x58423a4/0x5a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3517249 data_alloc: 234881024 data_used: 28049408
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193773568 unmapped: 29220864 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:44.969980+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193773568 unmapped: 29220864 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:45.970142+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193773568 unmapped: 29220864 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f2ef3000/0x0/0x4ffc00000, data 0x58423a4/0x5a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:46.970340+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 193970176 unmapped: 29024256 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 ms_handle_reset con 0x55f107959400 session 0x55f10767f2c0
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 ms_handle_reset con 0x55f10c0ef400 session 0x55f10799dc20
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:47.970472+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f105d2ac00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 ms_handle_reset con 0x55f105d2ac00 session 0x55f1076b43c0
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194043904 unmapped: 28950528 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:48.970622+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3526689 data_alloc: 251658240 data_used: 29802496
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f2ef3000/0x0/0x4ffc00000, data 0x58423a4/0x5a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194043904 unmapped: 28950528 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:49.970774+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194043904 unmapped: 28950528 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:50.970931+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194043904 unmapped: 28950528 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:51.971213+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194043904 unmapped: 28950528 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.395416260s of 25.449554443s, submitted: 9
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:52.971380+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107b52c00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 ms_handle_reset con 0x55f107b52c00 session 0x55f109452f00
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194125824 unmapped: 28868608 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: handle_auth_request added challenge on 0x55f107bf3400
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f2ef3000/0x0/0x4ffc00000, data 0x58423a4/0x5a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 ms_handle_reset con 0x55f107bf3400 session 0x55f105b9a780
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:53.971506+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3458537 data_alloc: 251658240 data_used: 28651520
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:54.971669+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:55.971849+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:56.971958+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37fb000/0x0/0x4ffc00000, data 0x4f3b394/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:57.972217+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:58.972441+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3458537 data_alloc: 251658240 data_used: 28651520
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:59.972619+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:00.972779+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:01.972954+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:02.973110+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37fb000/0x0/0x4ffc00000, data 0x4f3b394/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:03.973304+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3458537 data_alloc: 251658240 data_used: 28651520
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37fb000/0x0/0x4ffc00000, data 0x4f3b394/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:04.973468+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:05.973547+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:06.973716+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:07.973833+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:08.974051+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3458537 data_alloc: 251658240 data_used: 28651520
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:09.974243+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37fb000/0x0/0x4ffc00000, data 0x4f3b394/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:10.974439+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:11.974652+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37fb000/0x0/0x4ffc00000, data 0x4f3b394/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:12.974877+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:13.975062+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3458537 data_alloc: 251658240 data_used: 28651520
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:14.975249+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:15.975413+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37fb000/0x0/0x4ffc00000, data 0x4f3b394/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:16.975597+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:17.975746+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:18.975893+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3458537 data_alloc: 251658240 data_used: 28651520
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:19.976099+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:20.976258+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:21.976441+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37fb000/0x0/0x4ffc00000, data 0x4f3b394/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:22.976610+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:23.976789+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3458537 data_alloc: 251658240 data_used: 28651520
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:24.977044+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:25.977247+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:26.977432+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37fb000/0x0/0x4ffc00000, data 0x4f3b394/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:27.977723+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:28.977936+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3458537 data_alloc: 251658240 data_used: 28651520
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:29.978134+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37fb000/0x0/0x4ffc00000, data 0x4f3b394/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:30.978367+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37fb000/0x0/0x4ffc00000, data 0x4f3b394/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:31.978592+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:32.978786+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:33.979000+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3458537 data_alloc: 251658240 data_used: 28651520
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:34.979180+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37fb000/0x0/0x4ffc00000, data 0x4f3b394/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:35.979352+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:36.979486+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37fb000/0x0/0x4ffc00000, data 0x4f3b394/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37fb000/0x0/0x4ffc00000, data 0x4f3b394/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:37.979650+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:38.979836+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3458537 data_alloc: 251658240 data_used: 28651520
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:39.980019+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:40.980335+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:41.981420+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37fb000/0x0/0x4ffc00000, data 0x4f3b394/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:42.981544+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37fb000/0x0/0x4ffc00000, data 0x4f3b394/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:43.981675+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3458537 data_alloc: 251658240 data_used: 28651520
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:44.981836+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:45.982044+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:46.982277+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37fb000/0x0/0x4ffc00000, data 0x4f3b394/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:47.982554+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:48.982779+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3458537 data_alloc: 251658240 data_used: 28651520
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:49.983025+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:50.983192+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:51.983415+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 28860416 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:52.983725+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37fb000/0x0/0x4ffc00000, data 0x4f3b394/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194142208 unmapped: 28852224 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:53.984784+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3458537 data_alloc: 251658240 data_used: 28651520
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194142208 unmapped: 28852224 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:54.985302+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194142208 unmapped: 28852224 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:55.985608+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194142208 unmapped: 28852224 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:56.985851+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194142208 unmapped: 28852224 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:57.986571+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37fb000/0x0/0x4ffc00000, data 0x4f3b394/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194142208 unmapped: 28852224 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:58.986832+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3458537 data_alloc: 251658240 data_used: 28651520
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194142208 unmapped: 28852224 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:59.987496+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194142208 unmapped: 28852224 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:00.987731+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37fb000/0x0/0x4ffc00000, data 0x4f3b394/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194142208 unmapped: 28852224 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:01.987963+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194142208 unmapped: 28852224 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:02.988137+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194142208 unmapped: 28852224 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:03.988584+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3458537 data_alloc: 251658240 data_used: 28651520
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37fb000/0x0/0x4ffc00000, data 0x4f3b394/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194142208 unmapped: 28852224 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:04.988799+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37fb000/0x0/0x4ffc00000, data 0x4f3b394/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 28844032 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:05.989186+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37fb000/0x0/0x4ffc00000, data 0x4f3b394/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 28844032 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:06.989425+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 28844032 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:07.989797+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37fb000/0x0/0x4ffc00000, data 0x4f3b394/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 28844032 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:08.990643+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3458537 data_alloc: 251658240 data_used: 28651520
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 28844032 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:09.990881+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 28844032 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:10.991195+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37fb000/0x0/0x4ffc00000, data 0x4f3b394/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 28844032 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:11.991461+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 28844032 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:12.991725+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 28844032 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:13.992008+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3458537 data_alloc: 251658240 data_used: 28651520
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 28844032 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:14.992145+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 28844032 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:15.992457+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37fb000/0x0/0x4ffc00000, data 0x4f3b394/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 28844032 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:16.992630+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 28844032 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:17.992753+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194166784 unmapped: 28827648 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:18.992930+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3458537 data_alloc: 251658240 data_used: 28651520
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194166784 unmapped: 28827648 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:19.993230+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194166784 unmapped: 28827648 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:20.993447+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37fb000/0x0/0x4ffc00000, data 0x4f3b394/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194166784 unmapped: 28827648 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:21.993706+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37fb000/0x0/0x4ffc00000, data 0x4f3b394/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194166784 unmapped: 28827648 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:22.993982+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194174976 unmapped: 28819456 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:23.994176+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3458537 data_alloc: 251658240 data_used: 28651520
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194174976 unmapped: 28819456 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:24.994378+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194174976 unmapped: 28819456 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:25.994497+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37fb000/0x0/0x4ffc00000, data 0x4f3b394/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194174976 unmapped: 28819456 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:26.994619+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194174976 unmapped: 28819456 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:27.994735+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194347008 unmapped: 28647424 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:28.994875+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: do_command 'config diff' '{prefix=config diff}'
Oct 11 04:30:02 compute-0 ceph-osd[89722]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 11 04:30:02 compute-0 ceph-osd[89722]: do_command 'config show' '{prefix=config show}'
Oct 11 04:30:02 compute-0 ceph-osd[89722]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 11 04:30:02 compute-0 ceph-osd[89722]: do_command 'counter dump' '{prefix=counter dump}'
Oct 11 04:30:02 compute-0 ceph-osd[89722]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 11 04:30:02 compute-0 ceph-osd[89722]: do_command 'counter schema' '{prefix=counter schema}'
Oct 11 04:30:02 compute-0 ceph-osd[89722]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:02 compute-0 ceph-osd[89722]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:02 compute-0 ceph-osd[89722]: bluestore.MempoolThread(0x55f1042abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3458537 data_alloc: 251658240 data_used: 28651520
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194494464 unmapped: 28499968 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:29.995041+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f37fb000/0x0/0x4ffc00000, data 0x4f3b394/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 04:30:02 compute-0 ceph-osd[89722]: prioritycache tune_memory target: 4294967296 mapped: 194740224 unmapped: 28254208 heap: 222994432 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: tick
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_tickets
Oct 11 04:30:02 compute-0 ceph-osd[89722]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:30.995198+0000)
Oct 11 04:30:02 compute-0 ceph-osd[89722]: do_command 'log dump' '{prefix=log dump}'
Oct 11 04:30:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 11 04:30:02 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/235063687' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 11 04:30:02 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:30:02 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2049: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:30:02 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19275 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:02 compute-0 ceph-mon[74273]: from='client.19267 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:02 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4159813985' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 11 04:30:02 compute-0 ceph-mon[74273]: from='client.19271 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:02 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/235063687' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 11 04:30:02 compute-0 ceph-mon[74273]: pgmap v2049: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:30:02 compute-0 ceph-mon[74273]: from='client.19275 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 11 04:30:02 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2349242509' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 11 04:30:02 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19279 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 11 04:30:02 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3292272968' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 04:30:02 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19283 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 11 04:30:03 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2667943571' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 11 04:30:03 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19287 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:30:03 compute-0 nova_compute[259850]: 2025-10-11 04:30:03.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:30:03 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2349242509' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 11 04:30:03 compute-0 ceph-mon[74273]: from='client.19279 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:03 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3292272968' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 04:30:03 compute-0 ceph-mon[74273]: from='client.19283 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:03 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2667943571' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 11 04:30:03 compute-0 ceph-mon[74273]: from='client.19287 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:30:03 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19291 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:30:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct 11 04:30:03 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1215609068' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 11 04:30:04 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:30:04 compute-0 podman[314186]: 2025-10-11 04:30:04.390068149 +0000 UTC m=+0.087399453 container health_status 8f03625dda308e9a3fe3b3f00360811eab2a53db90537fed5552a11820542f07 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 04:30:04 compute-0 podman[314185]: 2025-10-11 04:30:04.390202912 +0000 UTC m=+0.087665100 container health_status 83ff5079e26f0f00bbfd2512db4ebca61e431e3b7e362664573e34fff179574c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 04:30:04 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19297 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:30:04 compute-0 ceph-23b68101-59a9-532f-ab6b-9acf78fb2162-mgr-compute-0-jhqlii[74559]: 2025-10-11T04:30:04.523+0000 7f0cd7a88640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 11 04:30:04 compute-0 ceph-mgr[74563]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 11 04:30:04 compute-0 ceph-mon[74273]: from='client.19291 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:30:04 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1215609068' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 11 04:30:04 compute-0 ceph-mon[74273]: pgmap v2050: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:30:04 compute-0 nova_compute[259850]: 2025-10-11 04:30:04.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:30:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct 11 04:30:04 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/756975833' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 11 04:30:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct 11 04:30:04 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2739225188' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 11 04:30:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct 11 04:30:05 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2081537800' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 11 04:30:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:30:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct 11 04:30:05 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2319998201' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 11 04:30:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct 11 04:30:05 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/705655437' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 11 04:30:05 compute-0 ceph-mon[74273]: from='client.19297 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:30:05 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/756975833' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 11 04:30:05 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2739225188' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 11 04:30:05 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2081537800' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 11 04:30:05 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2319998201' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 11 04:30:05 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/705655437' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 11 04:30:05 compute-0 crontab[314454]: (root) LIST (root)
Oct 11 04:30:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct 11 04:30:05 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/174990331' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 11 04:30:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct 11 04:30:06 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/99125495' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 11 04:30:06 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2051: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:30:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct 11 04:30:06 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3226386313' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 11 04:30:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct 11 04:30:06 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/363482278' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:50.457768+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 184 heartbeat osd_stat(store_statfs(0x4f988b000/0x0/0x4ffc00000, data 0x18c240a/0x19d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c78b400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 47996928 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:51.457896+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 184 ms_handle_reset con 0x56493ef04000 session 0x56493be1a780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f650400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 184 ms_handle_reset con 0x56493f650400 session 0x56493d8672c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 100515840 unmapped: 47915008 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:52.458003+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 184 ms_handle_reset con 0x56493edf4000 session 0x56493d4b8d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 44400640 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 184 heartbeat osd_stat(store_statfs(0x4f988b000/0x0/0x4ffc00000, data 0x18c23cb/0x19d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:53.458212+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.126933098s of 10.622414589s, submitted: 126
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 184 ms_handle_reset con 0x56493b9ad800 session 0x56493bf7c000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1599665 data_alloc: 234881024 data_used: 10829824
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 104062976 unmapped: 44367872 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 184 ms_handle_reset con 0x56493b9ad800 session 0x56493d864960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c6efc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 184 ms_handle_reset con 0x56493c6efc00 session 0x56493caf9860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:54.458350+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 44351488 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:55.458489+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 184 heartbeat osd_stat(store_statfs(0x4f988a000/0x0/0x4ffc00000, data 0x18c242d/0x19d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 104112128 unmapped: 44318720 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 184 ms_handle_reset con 0x56493edf4000 session 0x56493caf94a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 184 ms_handle_reset con 0x56493ef04000 session 0x56493e20ba40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:56.458715+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 104112128 unmapped: 44318720 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:57.458868+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 104112128 unmapped: 44318720 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:58.459004+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1598223 data_alloc: 234881024 data_used: 10829824
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 104112128 unmapped: 44318720 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 184 heartbeat osd_stat(store_statfs(0x4f988b000/0x0/0x4ffc00000, data 0x18c23cb/0x19d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:59.459216+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 104144896 unmapped: 44285952 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:00.459325+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 104144896 unmapped: 44285952 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:01.459468+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 184 heartbeat osd_stat(store_statfs(0x4f988b000/0x0/0x4ffc00000, data 0x18c23cb/0x19d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 107888640 unmapped: 40542208 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 184 heartbeat osd_stat(store_statfs(0x4f988b000/0x0/0x4ffc00000, data 0x18c23cb/0x19d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,0,3])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:02.459603+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 184 heartbeat osd_stat(store_statfs(0x4f988b000/0x0/0x4ffc00000, data 0x18c23cb/0x19d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 35880960 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:03.459756+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1671705 data_alloc: 234881024 data_used: 12271616
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 35405824 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:04.459965+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 35405824 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:05.460100+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 35405824 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 184 heartbeat osd_stat(store_statfs(0x4f90cf000/0x0/0x4ffc00000, data 0x20763cb/0x2187000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:06.460411+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 184 heartbeat osd_stat(store_statfs(0x4f90cf000/0x0/0x4ffc00000, data 0x20763cb/0x2187000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 35405824 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:07.460565+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 184 heartbeat osd_stat(store_statfs(0x4f90cf000/0x0/0x4ffc00000, data 0x20763cb/0x2187000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 113057792 unmapped: 35373056 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:08.460787+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.802837372s of 15.120976448s, submitted: 105
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1671721 data_alloc: 234881024 data_used: 12271616
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 113057792 unmapped: 35373056 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:09.460926+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 113057792 unmapped: 35373056 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f650400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 184 ms_handle_reset con 0x56493f650400 session 0x56493c532f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:10.461114+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 111452160 unmapped: 36978688 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:11.461284+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 184 ms_handle_reset con 0x56493b9ad800 session 0x56493c532000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 111452160 unmapped: 36978688 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c6efc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:12.461446+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 184 ms_handle_reset con 0x56493edf4000 session 0x56493d8612c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 36962304 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 184 ms_handle_reset con 0x56493ef04000 session 0x56493c533680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:13.461664+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 184 heartbeat osd_stat(store_statfs(0x4f90d4000/0x0/0x4ffc00000, data 0x207644d/0x218a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1674145 data_alloc: 234881024 data_used: 12288000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cf800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 36962304 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:14.461846+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 184 handle_osd_map epochs [185,185], i have 184, src has [1,185]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 185 ms_handle_reset con 0x56493e3cf800 session 0x56493da28960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 111124480 unmapped: 37306368 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3d2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 185 ms_handle_reset con 0x56493e3d2c00 session 0x56493da2eb40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:15.462038+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 111124480 unmapped: 37306368 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cf800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 185 ms_handle_reset con 0x56493b9ad800 session 0x56493c532d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3d2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:16.462214+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 185 ms_handle_reset con 0x56493edf4000 session 0x56493eb5bc20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 185 ms_handle_reset con 0x56493e3d2c00 session 0x56493d868d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 185 ms_handle_reset con 0x56493ef04000 session 0x56493c2f8960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 185 handle_osd_map epochs [185,186], i have 185, src has [1,186]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 186 ms_handle_reset con 0x56493e3cf800 session 0x56493d8605a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 186 heartbeat osd_stat(store_statfs(0x4f90ce000/0x0/0x4ffc00000, data 0x207808e/0x218f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 37257216 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:17.462432+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 186 heartbeat osd_stat(store_statfs(0x4f90ce000/0x0/0x4ffc00000, data 0x207808e/0x218f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cf800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 186 ms_handle_reset con 0x56493e3cf800 session 0x56493d860f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3d2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 186 handle_osd_map epochs [187,187], i have 186, src has [1,187]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 187 ms_handle_reset con 0x56493b9ad800 session 0x56493ddf6960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 187 heartbeat osd_stat(store_statfs(0x4f90ca000/0x0/0x4ffc00000, data 0x2079ba9/0x2191000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 111165440 unmapped: 37265408 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 187 ms_handle_reset con 0x56493e3d2c00 session 0x56493d864780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:18.462681+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1685194 data_alloc: 234881024 data_used: 12296192
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 187 ms_handle_reset con 0x56493edf4000 session 0x56493b314960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 111165440 unmapped: 37265408 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.292782784s of 10.462227821s, submitted: 60
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd7800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd6800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 187 ms_handle_reset con 0x56493edd7800 session 0x56493e3792c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 187 ms_handle_reset con 0x56493edd6800 session 0x56493e20ad20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 187 ms_handle_reset con 0x56493b9ad800 session 0x56493bf7c5a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:19.462816+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cf800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 187 ms_handle_reset con 0x56493e3cf800 session 0x56493d8cb860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 187 handle_osd_map epochs [187,188], i have 187, src has [1,188]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 188 ms_handle_reset con 0x56493ef04000 session 0x56493cba65a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 35569664 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3d2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 188 ms_handle_reset con 0x56493e3d2c00 session 0x56493cba7680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:20.463215+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 188 handle_osd_map epochs [188,189], i have 188, src has [1,189]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 189 ms_handle_reset con 0x56493b9ad800 session 0x56493e267860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 35569664 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:21.463430+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cf800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 189 ms_handle_reset con 0x56493e3cf800 session 0x56493e2661e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd6800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 189 ms_handle_reset con 0x56493edd6800 session 0x56493bcb8960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 35569664 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:22.463614+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 189 heartbeat osd_stat(store_statfs(0x4f866e000/0x0/0x4ffc00000, data 0x2ad5e66/0x2bef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 189 handle_osd_map epochs [190,190], i have 189, src has [1,190]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 189 handle_osd_map epochs [190,190], i have 190, src has [1,190]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 190 ms_handle_reset con 0x56493ef04000 session 0x56493bcb8f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 112869376 unmapped: 35561472 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 190 ms_handle_reset con 0x56493edf4000 session 0x56493b964960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:23.463950+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1778889 data_alloc: 234881024 data_used: 12308480
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 190 ms_handle_reset con 0x56493b9ad800 session 0x56493b964b40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 190 ms_handle_reset con 0x56493c6efc00 session 0x56493d8672c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cf800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 190 ms_handle_reset con 0x56493e3cf800 session 0x56493b934960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 112877568 unmapped: 35553280 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 190 ms_handle_reset con 0x56493edf4800 session 0x56493d866780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 190 ms_handle_reset con 0x56493c78b400 session 0x56493bcb83c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:24.464169+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c6efc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cf800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 190 handle_osd_map epochs [191,191], i have 190, src has [1,191]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 191 ms_handle_reset con 0x56493b9ad800 session 0x56493b9645a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 103686144 unmapped: 44744704 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:25.464306+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 191 heartbeat osd_stat(store_statfs(0x4f866a000/0x0/0x4ffc00000, data 0x2ad7a6a/0x2bf4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 103546880 unmapped: 44883968 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:26.464482+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 191 ms_handle_reset con 0x56493edf4800 session 0x56493eb5b2c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd6800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 191 ms_handle_reset con 0x56493edd6800 session 0x56493d867680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 106520576 unmapped: 41910272 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:27.464729+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 191 heartbeat osd_stat(store_statfs(0x4f920b000/0x0/0x4ffc00000, data 0x192549a/0x1a41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 191 ms_handle_reset con 0x56493ef04000 session 0x56493d866000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 41893888 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:28.464875+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1643439 data_alloc: 234881024 data_used: 10117120
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 41893888 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd7000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 191 ms_handle_reset con 0x56493edd7000 session 0x56493d867a40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.942967415s of 10.328780174s, submitted: 151
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:29.465019+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 191 ms_handle_reset con 0x56493b9ad800 session 0x56493da2e960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 41893888 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd6800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 191 ms_handle_reset con 0x56493edd6800 session 0x56493b12d4a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd7000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 191 heartbeat osd_stat(store_statfs(0x4f9fb9000/0x0/0x4ffc00000, data 0x11884fc/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 191 ms_handle_reset con 0x56493edd7000 session 0x56493c532960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:30.465309+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 191 ms_handle_reset con 0x56493edf4800 session 0x56493de4f860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd7c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 191 ms_handle_reset con 0x56493edd7c00 session 0x56493de4f2c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 191 handle_osd_map epochs [191,192], i have 191, src has [1,192]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 192 ms_handle_reset con 0x56493ef04000 session 0x56493de4e960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 107102208 unmapped: 41328640 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:31.465606+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f964f000/0x0/0x4ffc00000, data 0x1af1017/0x1c0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 107102208 unmapped: 41328640 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 192 ms_handle_reset con 0x56493b9ad800 session 0x56493da150e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd6800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:32.465983+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd7000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 192 ms_handle_reset con 0x56493edd7000 session 0x56493da14d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 192 ms_handle_reset con 0x56493edf4800 session 0x56493da143c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 192 ms_handle_reset con 0x56493edd6800 session 0x56493da141e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 41304064 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 192 handle_osd_map epochs [193,193], i have 192, src has [1,193]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:33.466123+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597834 data_alloc: 234881024 data_used: 10121216
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 193 handle_osd_map epochs [194,194], i have 193, src has [1,194]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 41304064 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 194 ms_handle_reset con 0x56493b9ad800 session 0x56493da14000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:34.466321+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 41304064 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 194 heartbeat osd_stat(store_statfs(0x4f9fae000/0x0/0x4ffc00000, data 0x118d7e3/0x12ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:35.466520+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 113508352 unmapped: 34922496 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:36.466769+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 114909184 unmapped: 33521664 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:37.466909+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 34611200 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:38.467129+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1674220 data_alloc: 234881024 data_used: 10756096
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 194 heartbeat osd_stat(store_statfs(0x4f97d5000/0x0/0x4ffc00000, data 0x19687e3/0x1a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 34611200 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:39.467337+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 34611200 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:40.467488+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 34611200 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 194 heartbeat osd_stat(store_statfs(0x4f97d5000/0x0/0x4ffc00000, data 0x19687e3/0x1a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:41.467611+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 194 heartbeat osd_stat(store_statfs(0x4f97d5000/0x0/0x4ffc00000, data 0x19687e3/0x1a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd7000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 194 ms_handle_reset con 0x56493edd7000 session 0x56493b9330e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 194 ms_handle_reset con 0x56493edf4800 session 0x56493b933680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 194 ms_handle_reset con 0x56493ef04000 session 0x56493b932f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 194 ms_handle_reset con 0x56493f61c000 session 0x56493b932780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.114677429s of 12.580265999s, submitted: 168
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 115310592 unmapped: 33120256 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 194 ms_handle_reset con 0x56493b9ad800 session 0x56493b9323c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd7000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 194 ms_handle_reset con 0x56493edd7000 session 0x56493b92dc20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 194 ms_handle_reset con 0x56493edf4800 session 0x56493b92c3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:42.467776+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 114794496 unmapped: 33636352 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:43.467941+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1714391 data_alloc: 234881024 data_used: 10756096
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 194 handle_osd_map epochs [195,195], i have 194, src has [1,195]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 33619968 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 195 ms_handle_reset con 0x56493ef04000 session 0x56493b92c000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:44.468081+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 195 handle_osd_map epochs [195,196], i have 195, src has [1,196]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 196 ms_handle_reset con 0x56493f61c400 session 0x56493e72e5a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 114819072 unmapped: 33611776 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 196 ms_handle_reset con 0x56493b9ad800 session 0x56493caf9c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:45.468235+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 196 handle_osd_map epochs [197,197], i have 196, src has [1,197]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd7000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 197 ms_handle_reset con 0x56493f61c800 session 0x56493c533c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 197 ms_handle_reset con 0x56493edf4800 session 0x56493d8683c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 114819072 unmapped: 33611776 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 197 ms_handle_reset con 0x56493ef04000 session 0x56493e20a5a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:46.469752+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61cc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 197 heartbeat osd_stat(store_statfs(0x4f9350000/0x0/0x4ffc00000, data 0x1de3a92/0x1f0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 114819072 unmapped: 33611776 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61d000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 197 ms_handle_reset con 0x56493f61d000 session 0x56493de4ed20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 197 ms_handle_reset con 0x56493edf4800 session 0x56493da2d860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:47.469943+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 197 handle_osd_map epochs [197,198], i have 197, src has [1,198]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 116015104 unmapped: 32415744 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 198 ms_handle_reset con 0x56493b9ad800 session 0x56493bf7da40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 198 ms_handle_reset con 0x56493f61c800 session 0x56493da2e1e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61d400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:48.470085+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1761547 data_alloc: 234881024 data_used: 15257600
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 118218752 unmapped: 30212096 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 198 ms_handle_reset con 0x56493f61d400 session 0x56493d8663c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f2b7400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 198 handle_osd_map epochs [199,199], i have 198, src has [1,199]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 199 ms_handle_reset con 0x56493f2b7400 session 0x56493d865680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:49.470235+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 199 ms_handle_reset con 0x56493edd7000 session 0x56493e0f8780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 199 ms_handle_reset con 0x56493b9ad800 session 0x56493d864f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 118235136 unmapped: 30195712 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:50.470419+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 199 ms_handle_reset con 0x56493c6efc00 session 0x56493b12c960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 199 ms_handle_reset con 0x56493e3cf800 session 0x56493d9045a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 36184064 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f2b7400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 199 ms_handle_reset con 0x56493f2b7400 session 0x56493ca7f2c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 199 ms_handle_reset con 0x56493edf4800 session 0x56493edd0b40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:51.470611+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 12K writes, 51K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 12K writes, 3675 syncs, 3.40 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5507 writes, 22K keys, 5507 commit groups, 1.0 writes per commit group, ingest: 11.05 MB, 0.02 MB/s
                                           Interval WAL: 5507 writes, 2396 syncs, 2.30 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 112263168 unmapped: 36167680 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa46b000/0x0/0x4ffc00000, data 0xbb20eb/0xcd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:52.470744+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.246682167s of 10.622044563s, submitted: 134
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 112271360 unmapped: 36159488 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:53.470998+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 199 handle_osd_map epochs [199,200], i have 199, src has [1,200]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 200 ms_handle_reset con 0x56493b9ad800 session 0x56493eccdc20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1559503 data_alloc: 218103808 data_used: 5103616
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c6efc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cf800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 112304128 unmapped: 36126720 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 200 ms_handle_reset con 0x56493e3cf800 session 0x56493b965c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 200 ms_handle_reset con 0x56493c6efc00 session 0x56493b92c780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:54.471247+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 200 handle_osd_map epochs [201,201], i have 200, src has [1,201]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd7000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 201 ms_handle_reset con 0x56493edd7000 session 0x56493e267860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 112295936 unmapped: 36134912 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 201 ms_handle_reset con 0x56493b9ad800 session 0x56493e267c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c6efc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:55.471470+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 201 ms_handle_reset con 0x56493c6efc00 session 0x56493be1bc20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cf800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 201 ms_handle_reset con 0x56493e3cf800 session 0x56493e72f0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd7000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 201 ms_handle_reset con 0x56493edd7000 session 0x56493da2c5a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 201 ms_handle_reset con 0x56493f61c800 session 0x56493e266000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 112361472 unmapped: 36069376 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:56.471767+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 201 ms_handle_reset con 0x56493bbaac00 session 0x56493b12cf00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c6efc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 201 handle_osd_map epochs [201,202], i have 201, src has [1,202]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 201 handle_osd_map epochs [202,202], i have 202, src has [1,202]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 202 ms_handle_reset con 0x56493b9ad800 session 0x56493de4e3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 202 ms_handle_reset con 0x56493edf4800 session 0x56493b935860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 36003840 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:57.471999+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: mgrc ms_handle_reset ms_handle_reset con 0x56493b199c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3360631616
Oct 11 04:30:06 compute-0 ceph-osd[88594]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3360631616,v1:192.168.122.100:6801/3360631616]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: get_auth_request con 0x56493f2b7400 auth_method 0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: mgrc handle_mgr_configure stats_period=5
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 202 ms_handle_reset con 0x56493c594c00 session 0x56493bf7c3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e5400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 202 ms_handle_reset con 0x56493c6eec00 session 0x56493b92de00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c6ef000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 117530624 unmapped: 30900224 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 202 heartbeat osd_stat(store_statfs(0x4fa57b000/0x0/0x4ffc00000, data 0xbb730a/0xce2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,1,8])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 202 ms_handle_reset con 0x56493f61c800 session 0x56493be1ab40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61d400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:58.472180+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 202 ms_handle_reset con 0x56493f61d400 session 0x56493eb5af00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1665146 data_alloc: 218103808 data_used: 6139904
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 119603200 unmapped: 28827648 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:59.472316+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 120897536 unmapped: 27533312 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:00.472529+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 120897536 unmapped: 27533312 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:01.472756+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 120897536 unmapped: 27533312 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:02.472947+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 120897536 unmapped: 27533312 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:03.473143+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 202 heartbeat osd_stat(store_statfs(0x4f9414000/0x0/0x4ffc00000, data 0x1911298/0x1a3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1672326 data_alloc: 218103808 data_used: 6336512
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 202 heartbeat osd_stat(store_statfs(0x4f9414000/0x0/0x4ffc00000, data 0x1911298/0x1a3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 120897536 unmapped: 27533312 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:04.473379+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.419804573s of 12.137329102s, submitted: 246
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 202 handle_osd_map epochs [203,203], i have 202, src has [1,203]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 120897536 unmapped: 27533312 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:05.473565+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cf800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 203 ms_handle_reset con 0x56493e3cf800 session 0x56493caf9e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 120897536 unmapped: 27533312 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:06.473799+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 120897536 unmapped: 27533312 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:07.473951+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 203 heartbeat osd_stat(store_statfs(0x4f940e000/0x0/0x4ffc00000, data 0x1915cfb/0x1a40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 120897536 unmapped: 27533312 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cf800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 203 ms_handle_reset con 0x56493e3cf800 session 0x56493caf8780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 203 ms_handle_reset con 0x56493b9ad800 session 0x56493cba6000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:08.474120+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 203 heartbeat osd_stat(store_statfs(0x4f940e000/0x0/0x4ffc00000, data 0x1915cfb/0x1a40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1672725 data_alloc: 218103808 data_used: 6348800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 120922112 unmapped: 27508736 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:09.474349+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 203 heartbeat osd_stat(store_statfs(0x4f940e000/0x0/0x4ffc00000, data 0x1915cfb/0x1a40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 203 handle_osd_map epochs [204,204], i have 203, src has [1,204]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61d400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 204 ms_handle_reset con 0x56493f61d400 session 0x56493e3790e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 204 ms_handle_reset con 0x56493f61c800 session 0x56493d861e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 120922112 unmapped: 27508736 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:10.474553+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd7000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 204 ms_handle_reset con 0x56493edd7000 session 0x56493d860780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 120922112 unmapped: 27508736 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:11.474741+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cf800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 204 ms_handle_reset con 0x56493e3cf800 session 0x56493cba7e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61d400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 204 ms_handle_reset con 0x56493f61c800 session 0x56493d8610e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 204 handle_osd_map epochs [205,205], i have 204, src has [1,205]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 205 ms_handle_reset con 0x56493b9ad800 session 0x56493be1b2c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 120938496 unmapped: 27492352 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:12.474871+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 205 handle_osd_map epochs [206,206], i have 205, src has [1,206]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 206 ms_handle_reset con 0x56493f61d400 session 0x56493be1a000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ac800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 120938496 unmapped: 27492352 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 206 ms_handle_reset con 0x56493b9ac800 session 0x56493ddf61e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ac800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:13.475063+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1684573 data_alloc: 218103808 data_used: 6356992
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 206 ms_handle_reset con 0x56493b9ac800 session 0x56493ddf74a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 206 handle_osd_map epochs [207,207], i have 206, src has [1,207]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 120954880 unmapped: 27475968 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:14.475291+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cf800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 207 ms_handle_reset con 0x56493e3cf800 session 0x56493b9341e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 207 handle_osd_map epochs [208,208], i have 207, src has [1,208]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.011216164s of 10.187602043s, submitted: 58
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 208 ms_handle_reset con 0x56493f61c800 session 0x56493c532b40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 208 heartbeat osd_stat(store_statfs(0x4f9402000/0x0/0x4ffc00000, data 0x191cb43/0x1a4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 208 ms_handle_reset con 0x56493b9ad800 session 0x56493e20b2c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 119521280 unmapped: 28909568 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:15.475442+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61d400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 208 handle_osd_map epochs [209,209], i have 208, src has [1,209]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec08000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 209 ms_handle_reset con 0x56493f61d400 session 0x56493e20b860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 209 ms_handle_reset con 0x56493ec08000 session 0x56493e72fa40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ac800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 119521280 unmapped: 28909568 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:16.475629+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 209 handle_osd_map epochs [209,210], i have 209, src has [1,210]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 210 ms_handle_reset con 0x56493b9ac800 session 0x56493d865860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 119521280 unmapped: 28909568 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:17.475805+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 119521280 unmapped: 28909568 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 210 ms_handle_reset con 0x56493edf4800 session 0x56493eb5a000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:18.475956+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 210 ms_handle_reset con 0x56493ef04000 session 0x56493da2fa40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 210 ms_handle_reset con 0x56493f61cc00 session 0x56493d4b8960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1699573 data_alloc: 218103808 data_used: 6373376
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ac800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec08000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 119545856 unmapped: 28884992 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 210 ms_handle_reset con 0x56493ec08000 session 0x56493d866960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:19.476095+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 210 ms_handle_reset con 0x56493b9ac800 session 0x56493d8645a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 210 handle_osd_map epochs [210,211], i have 210, src has [1,211]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 211 ms_handle_reset con 0x56493edf4800 session 0x56493d4b8d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 33210368 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 211 heartbeat osd_stat(store_statfs(0x4fa2bf000/0x0/0x4ffc00000, data 0x751e3c/0x883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:20.476282+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 211 ms_handle_reset con 0x56493ef04000 session 0x56493b94a780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 211 handle_osd_map epochs [211,212], i have 211, src has [1,212]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 212 ms_handle_reset con 0x56493b9ad800 session 0x56493e20a1e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 33193984 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:21.476438+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 212 ms_handle_reset con 0x56493b9ad800 session 0x56493e20b860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ac800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 212 ms_handle_reset con 0x56493b9ac800 session 0x56493b9341e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 33193984 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:22.476564+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 212 heartbeat osd_stat(store_statfs(0x4fa5c5000/0x0/0x4ffc00000, data 0x7555cc/0x888000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 212 handle_osd_map epochs [213,213], i have 212, src has [1,213]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 33193984 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec08000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 213 ms_handle_reset con 0x56493ec08000 session 0x56493ddf61e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:23.476686+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 213 ms_handle_reset con 0x56493edf4800 session 0x56493ddf74a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1535100 data_alloc: 218103808 data_used: 667648
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 33193984 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:24.476812+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 213 ms_handle_reset con 0x56493ef04000 session 0x56493be1b2c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 213 handle_osd_map epochs [214,214], i have 213, src has [1,214]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.714889526s of 10.045491219s, submitted: 102
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 214 ms_handle_reset con 0x56493ef04000 session 0x56493d861a40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 33185792 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:25.476973+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ac800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 214 handle_osd_map epochs [215,215], i have 214, src has [1,215]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 215 ms_handle_reset con 0x56493b9ac800 session 0x56493d861e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 33177600 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:26.477118+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 33177600 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:27.477264+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec08000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 215 ms_handle_reset con 0x56493ec08000 session 0x56493bf6af00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 215 ms_handle_reset con 0x56493b9ad800 session 0x56493d8610e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 215 ms_handle_reset con 0x56493edf4800 session 0x56493cba6000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 115269632 unmapped: 33161216 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:28.477448+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1544655 data_alloc: 218103808 data_used: 675840
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 215 heartbeat osd_stat(store_statfs(0x4fa5bc000/0x0/0x4ffc00000, data 0x75a8e7/0x892000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ac800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 215 ms_handle_reset con 0x56493b9ac800 session 0x56493b92c3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 115269632 unmapped: 33161216 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 215 ms_handle_reset con 0x56493b9ad800 session 0x56493b92c000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:29.477584+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec08000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 215 ms_handle_reset con 0x56493ec08000 session 0x56493bf7cf00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 33759232 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:30.477704+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 215 handle_osd_map epochs [215,216], i have 215, src has [1,216]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 216 ms_handle_reset con 0x56493ef04000 session 0x56493bf7c3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 33734656 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:31.477855+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 216 heartbeat osd_stat(store_statfs(0x4fa5b9000/0x0/0x4ffc00000, data 0x75c426/0x894000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 33734656 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:32.478008+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 216 heartbeat osd_stat(store_statfs(0x4fa5b9000/0x0/0x4ffc00000, data 0x75c426/0x894000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cf800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 216 ms_handle_reset con 0x56493e3cf800 session 0x56493e2674a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cf800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 216 ms_handle_reset con 0x56493e3cf800 session 0x56493bf7c5a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ac800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 114704384 unmapped: 33726464 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:33.478193+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 216 ms_handle_reset con 0x56493b9ac800 session 0x56493eb5a780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 216 ms_handle_reset con 0x56493b9ad800 session 0x56493d8654a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec08000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 216 ms_handle_reset con 0x56493ec08000 session 0x56493b92c960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 216 ms_handle_reset con 0x56493ef04000 session 0x56493be1ba40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 216 ms_handle_reset con 0x56493ef04000 session 0x56493e266f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1660310 data_alloc: 218103808 data_used: 684032
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ac800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 216 ms_handle_reset con 0x56493b9ac800 session 0x56493de4fa40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 114475008 unmapped: 33955840 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:34.478346+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 216 handle_osd_map epochs [216,217], i have 216, src has [1,217]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 114475008 unmapped: 33955840 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:35.478505+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 217 handle_osd_map epochs [217,218], i have 217, src has [1,218]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.295490265s of 10.799622536s, submitted: 85
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 218 ms_handle_reset con 0x56493b9ad800 session 0x56493ce5a780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 114475008 unmapped: 33955840 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:36.478701+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 218 ms_handle_reset con 0x56493be27800 session 0x56493bf7da40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 218 ms_handle_reset con 0x56493c9e4800 session 0x56493d8694a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 114475008 unmapped: 33955840 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:37.478840+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 218 heartbeat osd_stat(store_statfs(0x4f9739000/0x0/0x4ffc00000, data 0x15d6aa8/0x1714000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 114475008 unmapped: 33955840 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:38.478988+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 218 ms_handle_reset con 0x56493c9e4800 session 0x56493b94b0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ac800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 218 ms_handle_reset con 0x56493b9ac800 session 0x56493bf7c3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1669486 data_alloc: 218103808 data_used: 700416
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 33923072 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 218 ms_handle_reset con 0x56493b9ad800 session 0x56493bf7de00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:39.479189+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 218 ms_handle_reset con 0x56493ef04000 session 0x56493be1a3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be26400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 218 heartbeat osd_stat(store_statfs(0x4fa75a000/0x0/0x4ffc00000, data 0x15d6abb/0x1714000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 114515968 unmapped: 33914880 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:40.479331+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 218 ms_handle_reset con 0x56493be26400 session 0x56493da2fe00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 218 handle_osd_map epochs [219,219], i have 218, src has [1,219]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 114491392 unmapped: 33939456 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ac800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 219 ms_handle_reset con 0x56493b9ac800 session 0x56493bcb92c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:41.479477+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 219 ms_handle_reset con 0x56493b9ad800 session 0x56493d861680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 121618432 unmapped: 26812416 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:42.479587+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 219 ms_handle_reset con 0x56493c9e4800 session 0x56493d4b85a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 219 ms_handle_reset con 0x56493ef04000 session 0x56493d4b9680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 121618432 unmapped: 26812416 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:43.479695+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1784363 data_alloc: 234881024 data_used: 13455360
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 121618432 unmapped: 26812416 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:44.479841+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 219 heartbeat osd_stat(store_statfs(0x4fa759000/0x0/0x4ffc00000, data 0x15d861a/0x1715000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 26787840 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:45.480011+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 26787840 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:46.480233+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 26787840 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:47.480381+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 220 heartbeat osd_stat(store_statfs(0x4fa755000/0x0/0x4ffc00000, data 0x15da07d/0x1718000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 121659392 unmapped: 26771456 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:48.480501+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1788537 data_alloc: 234881024 data_used: 13463552
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 121659392 unmapped: 26771456 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:49.480634+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df05000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 220 ms_handle_reset con 0x56493df05000 session 0x56493c5aa960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ac800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 220 ms_handle_reset con 0x56493b9ac800 session 0x56493bf7dc20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 121659392 unmapped: 26771456 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:50.480746+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.964366913s of 15.072676659s, submitted: 40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 127352832 unmapped: 21078016 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:51.480956+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 220 ms_handle_reset con 0x56493b9ad800 session 0x56493c533680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 127483904 unmapped: 20946944 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 220 heartbeat osd_stat(store_statfs(0x4f9fbe000/0x0/0x4ffc00000, data 0x1d700ef/0x1eb0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [1])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:52.481087+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 127008768 unmapped: 21422080 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:53.481221+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1862941 data_alloc: 234881024 data_used: 14143488
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 125599744 unmapped: 22831104 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:54.481337+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 220 heartbeat osd_stat(store_statfs(0x4f9f2a000/0x0/0x4ffc00000, data 0x1e040ef/0x1f44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 125599744 unmapped: 22831104 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:55.481577+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 220 ms_handle_reset con 0x56493c9e4800 session 0x56493bf7cb40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 220 ms_handle_reset con 0x56493ef04000 session 0x56493ddf6b40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61d800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 125607936 unmapped: 22822912 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:56.481772+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 220 ms_handle_reset con 0x56493f61d800 session 0x56493ee1a3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ac800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 220 ms_handle_reset con 0x56493b9ac800 session 0x56493d861a40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 126451712 unmapped: 21979136 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:57.482136+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 220 heartbeat osd_stat(store_statfs(0x4f9c84000/0x0/0x4ffc00000, data 0x20ab08d/0x21ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 126590976 unmapped: 21839872 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:58.482309+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 220 ms_handle_reset con 0x56493c9e4800 session 0x56493cba7860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1889462 data_alloc: 234881024 data_used: 14143488
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 220 handle_osd_map epochs [221,221], i have 220, src has [1,221]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 221 ms_handle_reset con 0x56493ef04000 session 0x56493da15a40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 129482752 unmapped: 18948096 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:59.482441+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 221 handle_osd_map epochs [222,222], i have 221, src has [1,222]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 222 ms_handle_reset con 0x56493eee3000 session 0x56493e0f8d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 222 ms_handle_reset con 0x56493b9ad800 session 0x56493d867680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 129482752 unmapped: 18948096 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:00.482570+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ac800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 129482752 unmapped: 18948096 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:01.482699+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.177031517s of 10.933658600s, submitted: 248
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 222 ms_handle_reset con 0x56493b9ad800 session 0x56493c533c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 129032192 unmapped: 19398656 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:02.482929+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 129056768 unmapped: 19374080 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:03.483262+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 222 ms_handle_reset con 0x56493eee3000 session 0x56493eb5be00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cf800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 222 heartbeat osd_stat(store_statfs(0x4f953c000/0x0/0x4ffc00000, data 0x27ef7e9/0x2932000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [0,0,0,0,0,1,1])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 222 ms_handle_reset con 0x56493ec09400 session 0x56493b12c960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2011706 data_alloc: 234881024 data_used: 14155776
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 222 handle_osd_map epochs [223,223], i have 222, src has [1,223]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 223 ms_handle_reset con 0x56493e3cf800 session 0x56493de4e3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec08000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 223 ms_handle_reset con 0x56493ec08000 session 0x56493b92c780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 129736704 unmapped: 18694144 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:04.483391+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 223 ms_handle_reset con 0x56493ef04000 session 0x56493eb5a3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 223 handle_osd_map epochs [224,224], i have 223, src has [1,224]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 224 ms_handle_reset con 0x56493f61c800 session 0x56493b92d680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 224 ms_handle_reset con 0x56493c9e4800 session 0x56493c532d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 129753088 unmapped: 18677760 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:05.483499+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 128090112 unmapped: 20340736 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:06.483648+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 224 handle_osd_map epochs [225,225], i have 224, src has [1,225]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 128090112 unmapped: 20340736 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:07.483884+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 225 heartbeat osd_stat(store_statfs(0x4f9046000/0x0/0x4ffc00000, data 0x2cdffb7/0x2e28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 225 ms_handle_reset con 0x56493b9ad800 session 0x56493ddf7680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 128090112 unmapped: 20340736 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:08.484041+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2019061 data_alloc: 234881024 data_used: 14172160
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 225 ms_handle_reset con 0x56493b9ac800 session 0x56493e72f680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 128090112 unmapped: 20340736 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:09.484232+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 225 handle_osd_map epochs [226,226], i have 225, src has [1,226]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 128090112 unmapped: 20340736 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:10.484385+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 226 heartbeat osd_stat(store_statfs(0x4f903e000/0x0/0x4ffc00000, data 0x2ce6579/0x2e2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 226 ms_handle_reset con 0x56493be27800 session 0x56493cba6000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 226 ms_handle_reset con 0x56493b9ad800 session 0x56493c532b40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 121823232 unmapped: 26607616 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:11.484598+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:12.485237+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 121823232 unmapped: 26607616 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:13.485376+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 121823232 unmapped: 26607616 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1744081 data_alloc: 218103808 data_used: 745472
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:14.485531+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 121823232 unmapped: 26607616 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 226 heartbeat osd_stat(store_statfs(0x4fa548000/0x0/0x4ffc00000, data 0x161a556/0x1762000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:15.485691+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 121823232 unmapped: 26607616 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.690373421s of 14.054018021s, submitted: 123
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 226 heartbeat osd_stat(store_statfs(0x4fa548000/0x0/0x4ffc00000, data 0x161a556/0x1762000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:16.485894+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 26599424 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 226 handle_osd_map epochs [227,227], i have 226, src has [1,227]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 227 ms_handle_reset con 0x56493c9e4800 session 0x56493e20b2c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 227 ms_handle_reset con 0x56493ef04000 session 0x56493b94bc20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 227 ms_handle_reset con 0x56493f61c800 session 0x56493c5332c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:17.486101+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 26591232 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 227 ms_handle_reset con 0x56493b9ad800 session 0x56493e2665a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 227 ms_handle_reset con 0x56493be27800 session 0x56493e72f680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:18.486274+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 26583040 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 227 ms_handle_reset con 0x56493c9e4800 session 0x56493ddf7680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1747484 data_alloc: 218103808 data_used: 753664
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:19.486432+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 121864192 unmapped: 26566656 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cf800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 227 heartbeat osd_stat(store_statfs(0x4fa708000/0x0/0x4ffc00000, data 0x161c18a/0x1766000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:20.486637+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 121864192 unmapped: 26566656 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 227 ms_handle_reset con 0x56493ef04000 session 0x56493c532d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 227 ms_handle_reset con 0x56493ec09400 session 0x56493b92c780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 227 ms_handle_reset con 0x56493ec09400 session 0x56493b12c960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:21.486757+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 227 heartbeat osd_stat(store_statfs(0x4fa708000/0x0/0x4ffc00000, data 0x161c18a/0x1766000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 121872384 unmapped: 26558464 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 227 ms_handle_reset con 0x56493b9ad800 session 0x56493c533c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:22.486897+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 25460736 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 227 heartbeat osd_stat(store_statfs(0x4fa17f000/0x0/0x4ffc00000, data 0x1ba518a/0x1cef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 227 ms_handle_reset con 0x56493be27800 session 0x56493da15a40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:23.487047+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 122978304 unmapped: 25452544 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1834018 data_alloc: 218103808 data_used: 5820416
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:24.487251+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 122978304 unmapped: 25452544 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 227 handle_osd_map epochs [228,228], i have 227, src has [1,228]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 228 ms_handle_reset con 0x56493c9e4800 session 0x56493d861a40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 228 ms_handle_reset con 0x56493ef04000 session 0x56493ee1a3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:25.487382+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 123191296 unmapped: 25239552 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 228 ms_handle_reset con 0x56493ef04000 session 0x56493c533680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.731298447s of 10.001713753s, submitted: 87
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 228 ms_handle_reset con 0x56493b9ad800 session 0x56493bcb92c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 228 heartbeat osd_stat(store_statfs(0x4fa17b000/0x0/0x4ffc00000, data 0x1ba6bed/0x1cf2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 228 ms_handle_reset con 0x56493be27800 session 0x56493b94b0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:26.487619+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 123568128 unmapped: 24862720 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 228 ms_handle_reset con 0x56493c9e4800 session 0x56493d8694a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 228 heartbeat osd_stat(store_statfs(0x4f9b9a000/0x0/0x4ffc00000, data 0x2187bed/0x22d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 228 ms_handle_reset con 0x56493eee3000 session 0x56493eb5bc20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:27.487761+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 123568128 unmapped: 24862720 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 228 ms_handle_reset con 0x56493eee3000 session 0x56493d8641e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 228 handle_osd_map epochs [229,229], i have 228, src has [1,229]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 229 ms_handle_reset con 0x56493b9ad800 session 0x56493caf9680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 229 ms_handle_reset con 0x56493be27800 session 0x56493be1ab40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:28.487895+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 25518080 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61dc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 229 handle_osd_map epochs [230,230], i have 229, src has [1,230]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 230 ms_handle_reset con 0x56493c9e4800 session 0x56493ddf61e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e4a5000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 230 ms_handle_reset con 0x56493e4a5000 session 0x56493cba72c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 230 ms_handle_reset con 0x56493ec09400 session 0x56493bf7da40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1948543 data_alloc: 218103808 data_used: 5840896
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:29.488019+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 25493504 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:30.488141+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 230 ms_handle_reset con 0x56493b9ad800 session 0x56493da2f2c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 124747776 unmapped: 23683072 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 230 heartbeat osd_stat(store_statfs(0x4f8dba000/0x0/0x4ffc00000, data 0x30699cc/0x30b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:31.488273+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 127025152 unmapped: 21405696 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 230 ms_handle_reset con 0x56493c9e4800 session 0x56493caf94a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e4a5000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:32.488371+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 17965056 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 230 ms_handle_reset con 0x56493e4a5000 session 0x56493d867a40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 230 handle_osd_map epochs [230,231], i have 230, src has [1,231]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 230 handle_osd_map epochs [231,231], i have 231, src has [1,231]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 231 ms_handle_reset con 0x56493eee3000 session 0x56493da2eb40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 231 ms_handle_reset con 0x56493eee3000 session 0x56493e2661e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 231 ms_handle_reset con 0x56493b9ad800 session 0x56493e72f0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 231 ms_handle_reset con 0x56493be27800 session 0x56493eb5a000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:33.488635+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 129130496 unmapped: 19300352 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 231 heartbeat osd_stat(store_statfs(0x4f85f4000/0x0/0x4ffc00000, data 0x382f5ab/0x3879000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2148936 data_alloc: 234881024 data_used: 10571776
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:34.488819+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 129146880 unmapped: 19283968 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 231 handle_osd_map epochs [232,232], i have 231, src has [1,232]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 232 ms_handle_reset con 0x56493c9e4800 session 0x56493e72f0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e4a5000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 232 ms_handle_reset con 0x56493e4a5000 session 0x56493ddf61e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:35.489093+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 129155072 unmapped: 19275776 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:36.489539+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.998631477s of 10.687541008s, submitted: 203
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 129171456 unmapped: 19259392 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 232 handle_osd_map epochs [232,233], i have 232, src has [1,233]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 233 heartbeat osd_stat(store_statfs(0x4f8d88000/0x0/0x4ffc00000, data 0x309acf7/0x30e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 233 ms_handle_reset con 0x56493b9ad800 session 0x56493b12c960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 233 ms_handle_reset con 0x56493be27800 session 0x56493de4fc20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:37.489658+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 129204224 unmapped: 19226624 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 233 handle_osd_map epochs [234,234], i have 233, src has [1,234]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 234 ms_handle_reset con 0x56493c9e4800 session 0x56493c5aa960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 234 ms_handle_reset con 0x56493e3cf800 session 0x56493e72ef00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 234 ms_handle_reset con 0x56493eee3000 session 0x56493b92c3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:38.489769+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 129245184 unmapped: 19185664 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 234 ms_handle_reset con 0x56493eee3000 session 0x56493be1a780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 234 heartbeat osd_stat(store_statfs(0x4f9022000/0x0/0x4ffc00000, data 0x2b6f655/0x2bb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2003532 data_alloc: 234881024 data_used: 10571776
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 234 ms_handle_reset con 0x56493b9ad800 session 0x56493d864f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:39.489926+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 129581056 unmapped: 18849792 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:40.490075+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 130449408 unmapped: 17981440 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 234 ms_handle_reset con 0x56493c9e4800 session 0x56493be1b860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:41.490216+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 130572288 unmapped: 17858560 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cf800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e5800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 234 handle_osd_map epochs [235,235], i have 234, src has [1,235]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 235 ms_handle_reset con 0x56493e3cf800 session 0x56493be1a3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 235 ms_handle_reset con 0x56493c9e5800 session 0x56493d4b92c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e5800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cf800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 235 ms_handle_reset con 0x56493c9e4800 session 0x56493d4b9680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:42.490397+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 130678784 unmapped: 17752064 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 235 ms_handle_reset con 0x56493ec09400 session 0x56493d861680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 235 ms_handle_reset con 0x56493c9e5800 session 0x56493b9330e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 235 ms_handle_reset con 0x56493b9ad800 session 0x56493b9334a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 235 heartbeat osd_stat(store_statfs(0x4f8f62000/0x0/0x4ffc00000, data 0x2ec218e/0x2f0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 235 handle_osd_map epochs [236,236], i have 235, src has [1,236]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 236 ms_handle_reset con 0x56493e3cf800 session 0x56493e2672c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 236 ms_handle_reset con 0x56493be27800 session 0x56493bf7de00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:43.490583+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 127410176 unmapped: 21020672 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1862397 data_alloc: 218103808 data_used: 6963200
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 236 ms_handle_reset con 0x56493b9ad800 session 0x56493b94b2c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:44.490778+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 127344640 unmapped: 21086208 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:45.490911+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 127344640 unmapped: 21086208 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 236 handle_osd_map epochs [237,237], i have 236, src has [1,237]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e5800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 237 ms_handle_reset con 0x56493c9e5800 session 0x56493da2a780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 237 heartbeat osd_stat(store_statfs(0x4fa6de000/0x0/0x4ffc00000, data 0x1639b8c/0x1790000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cf800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:46.491046+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 237 ms_handle_reset con 0x56493e3cf800 session 0x56493d861e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 21053440 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 237 handle_osd_map epochs [237,238], i have 237, src has [1,238]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.586620331s of 10.581748962s, submitted: 315
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 237 handle_osd_map epochs [238,238], i have 238, src has [1,238]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 238 ms_handle_reset con 0x56493ec09400 session 0x56493ce5a780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 238 ms_handle_reset con 0x56493b9ad800 session 0x56493e20a3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 238 ms_handle_reset con 0x56493c9e4800 session 0x56493ca9dc20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:47.491232+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 127385600 unmapped: 21045248 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:48.491367+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 127385600 unmapped: 21045248 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 238 handle_osd_map epochs [239,239], i have 238, src has [1,239]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 239 ms_handle_reset con 0x56493be27800 session 0x56493ce5ab40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e5800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1870889 data_alloc: 218103808 data_used: 6979584
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:49.491471+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 239 ms_handle_reset con 0x56493c9e5800 session 0x56493da28780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 128548864 unmapped: 19881984 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:50.491612+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 128548864 unmapped: 19881984 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cf800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 239 handle_osd_map epochs [240,240], i have 239, src has [1,240]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:51.491785+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 240 ms_handle_reset con 0x56493e3cf800 session 0x56493da292c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 128720896 unmapped: 19709952 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 240 handle_osd_map epochs [241,241], i have 240, src has [1,241]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 241 ms_handle_reset con 0x56493be27800 session 0x56493ca9de00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 241 heartbeat osd_stat(store_statfs(0x4fa2c2000/0x0/0x4ffc00000, data 0x1640998/0x179b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:52.492007+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 241 ms_handle_reset con 0x56493c9e4800 session 0x56493de4e3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 19693568 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 241 heartbeat osd_stat(store_statfs(0x4fa2bf000/0x0/0x4ffc00000, data 0x1642595/0x179d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:53.492212+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 19693568 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e5800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 241 ms_handle_reset con 0x56493c9e5800 session 0x56493ce5ad20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1873913 data_alloc: 218103808 data_used: 6979584
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:54.492372+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 19693568 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 241 handle_osd_map epochs [242,242], i have 241, src has [1,242]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:55.492607+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 242 heartbeat osd_stat(store_statfs(0x4fa2bf000/0x0/0x4ffc00000, data 0x1642595/0x179d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 128745472 unmapped: 19685376 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 242 ms_handle_reset con 0x56493b9ad800 session 0x56493caf8d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:56.492881+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 128745472 unmapped: 19685376 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 242 ms_handle_reset con 0x56493ef04000 session 0x56493c2f9a40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.861132622s of 10.229597092s, submitted: 157
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 242 ms_handle_reset con 0x56493f61dc00 session 0x56493eb5b680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 242 heartbeat osd_stat(store_statfs(0x4fa2bd000/0x0/0x4ffc00000, data 0x1644070/0x17a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:57.493011+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 128753664 unmapped: 19677184 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 242 ms_handle_reset con 0x56493ef04000 session 0x56493be1b2c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:58.493235+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 125681664 unmapped: 22749184 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1717632 data_alloc: 218103808 data_used: 815104
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:59.493517+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 125681664 unmapped: 22749184 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:00.493698+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 125681664 unmapped: 22749184 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:01.493869+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 125681664 unmapped: 22749184 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 242 heartbeat osd_stat(store_statfs(0x4fb17a000/0x0/0x4ffc00000, data 0x78903d/0x8e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 242 ms_handle_reset con 0x56493b9ad800 session 0x56493b9645a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:02.494033+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 242 ms_handle_reset con 0x56493be27800 session 0x56493d4b90e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 242 ms_handle_reset con 0x56493c9e4800 session 0x56493b92cf00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 242 ms_handle_reset con 0x56493c9e4800 session 0x56493b932780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 242 ms_handle_reset con 0x56493b9ad800 session 0x56493e20b680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 125681664 unmapped: 22749184 heap: 148430848 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 242 ms_handle_reset con 0x56493be27800 session 0x56493cba72c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 242 ms_handle_reset con 0x56493ef04000 session 0x56493c2f9a40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61dc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 242 ms_handle_reset con 0x56493f61dc00 session 0x56493caf8d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 242 heartbeat osd_stat(store_statfs(0x4fb17b000/0x0/0x4ffc00000, data 0x788fda/0x8e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:03.494236+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 125714432 unmapped: 26918912 heap: 152633344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1751730 data_alloc: 218103808 data_used: 811008
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:04.494403+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 125714432 unmapped: 26918912 heap: 152633344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61dc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 242 ms_handle_reset con 0x56493b9ad800 session 0x56493da2e3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:05.494570+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 125714432 unmapped: 26918912 heap: 152633344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 242 handle_osd_map epochs [242,243], i have 242, src has [1,243]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 243 ms_handle_reset con 0x56493be27800 session 0x56493da2f860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:06.494769+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 125730816 unmapped: 26902528 heap: 152633344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 243 ms_handle_reset con 0x56493c9e4800 session 0x56493ca9de00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 244 ms_handle_reset con 0x56493ef04000 session 0x56493da15a40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 244 ms_handle_reset con 0x56493f61dc00 session 0x56493de4e3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61dc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 244 ms_handle_reset con 0x56493f61dc00 session 0x56493da28780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:07.494924+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 125730816 unmapped: 26902528 heap: 152633344 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 244 heartbeat osd_stat(store_statfs(0x4facef000/0x0/0x4ffc00000, data 0xc0fbd6/0xd6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 244 ms_handle_reset con 0x56493be27800 session 0x56493da15e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 244 ms_handle_reset con 0x56493c9e4800 session 0x56493b9352c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.790140152s of 11.040293694s, submitted: 67
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:08.496143+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180314112 unmapped: 22724608 heap: 203038720 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e5800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 244 ms_handle_reset con 0x56493c9e5800 session 0x56493d861e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 244 ms_handle_reset con 0x56493ec09400 session 0x56493da2a780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2352344 data_alloc: 218103808 data_used: 819200
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:09.496324+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 77225984 heap: 203038720 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:10.496453+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 125943808 unmapped: 77094912 heap: 203038720 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:11.496615+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 72753152 heap: 203038720 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 244 heartbeat osd_stat(store_statfs(0x4eacf0000/0x0/0x4ffc00000, data 0x10c0fbe6/0x10d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:12.496754+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 130555904 unmapped: 72482816 heap: 203038720 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 244 ms_handle_reset con 0x56493ef04000 session 0x56493b935680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 244 ms_handle_reset con 0x56493c9e4800 session 0x56493b94bc20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 244 ms_handle_reset con 0x56493b9ad800 session 0x56493ca9dc20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:13.496872+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 244 heartbeat osd_stat(store_statfs(0x4e2350000/0x0/0x4ffc00000, data 0x1840fbe6/0x1856e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 126599168 unmapped: 76439552 heap: 203038720 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e5800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4433480 data_alloc: 218103808 data_used: 5545984
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:14.497027+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 126599168 unmapped: 76439552 heap: 203038720 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 245 ms_handle_reset con 0x56493c9e5800 session 0x56493ddf61e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61dc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 245 ms_handle_reset con 0x56493f61dc00 session 0x56493e24c000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:15.497221+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 76423168 heap: 203038720 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61dc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 245 ms_handle_reset con 0x56493f61dc00 session 0x56493e24d4a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:16.497429+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 76423168 heap: 203038720 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:17.497577+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 76423168 heap: 203038720 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 245 ms_handle_reset con 0x56493b9ad800 session 0x56493d8690e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 245 heartbeat osd_stat(store_statfs(0x4e234e000/0x0/0x4ffc00000, data 0x184112b5/0x1856f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e5800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.153848171s of 10.069036484s, submitted: 115
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 245 ms_handle_reset con 0x56493c9e5800 session 0x56493d8612c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:18.497752+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 126623744 unmapped: 76414976 heap: 203038720 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 245 handle_osd_map epochs [245,246], i have 245, src has [1,246]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 246 ms_handle_reset con 0x56493ef04000 session 0x56493e72fe00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4445888 data_alloc: 218103808 data_used: 5554176
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:19.497970+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 126672896 unmapped: 76365824 heap: 203038720 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 246 handle_osd_map epochs [247,247], i have 246, src has [1,247]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 247 ms_handle_reset con 0x56493eee3000 session 0x56493da152c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 247 ms_handle_reset con 0x56493c9e4800 session 0x56493d8683c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 247 ms_handle_reset con 0x56493b9ad800 session 0x56493caf8780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:20.498187+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 76333056 heap: 203038720 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 247 handle_osd_map epochs [248,248], i have 247, src has [1,248]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e5800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 248 ms_handle_reset con 0x56493c9e5800 session 0x56493d869a40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:21.498333+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61dc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 248 ms_handle_reset con 0x56493b4ae400 session 0x56493ce5af00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 73154560 heap: 203038720 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 248 handle_osd_map epochs [249,249], i have 248, src has [1,249]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 249 ms_handle_reset con 0x56493ef04000 session 0x56493b92c960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 249 ms_handle_reset con 0x56493b4ae400 session 0x56493ca9d0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 249 ms_handle_reset con 0x56493bbaa800 session 0x56493d8614a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 249 ms_handle_reset con 0x56493b9ad800 session 0x56493de4f2c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:22.498610+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 130400256 unmapped: 72638464 heap: 203038720 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 249 handle_osd_map epochs [250,250], i have 249, src has [1,250]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 250 ms_handle_reset con 0x56493c9e4800 session 0x56493da152c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 250 ms_handle_reset con 0x56493f61dc00 session 0x56493e72f680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 250 ms_handle_reset con 0x56493bde4000 session 0x56493b965c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 250 ms_handle_reset con 0x56493b4ae400 session 0x56493b9341e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:23.498820+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 250 heartbeat osd_stat(store_statfs(0x4e20d4000/0x0/0x4ffc00000, data 0x1868276c/0x187e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 130433024 unmapped: 72605696 heap: 203038720 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 250 ms_handle_reset con 0x56493b9ad800 session 0x56493edd0960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4488678 data_alloc: 218103808 data_used: 5570560
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:24.498950+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 130457600 unmapped: 72581120 heap: 203038720 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 250 handle_osd_map epochs [251,251], i have 250, src has [1,251]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 251 ms_handle_reset con 0x56493c9e4800 session 0x56493b9352c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:25.499136+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 130981888 unmapped: 93061120 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 251 ms_handle_reset con 0x56493eee3800 session 0x56493b94a000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:26.499383+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 131383296 unmapped: 92659712 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 251 handle_osd_map epochs [252,252], i have 251, src has [1,252]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 252 heartbeat osd_stat(store_statfs(0x4dbcc4000/0x0/0x4ffc00000, data 0x1ea8f9bf/0x1ebfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,2,1])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 252 ms_handle_reset con 0x56493b4ae400 session 0x56493d8690e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:27.499544+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 92160000 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 252 handle_osd_map epochs [252,253], i have 252, src has [1,253]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 252 handle_osd_map epochs [253,253], i have 253, src has [1,253]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 253 ms_handle_reset con 0x56493b9ad800 session 0x56493ca9de00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.919482708s of 10.022765160s, submitted: 194
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:28.499713+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 253 ms_handle_reset con 0x56493eee2c00 session 0x56493e72e780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 132857856 unmapped: 91185152 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:29.499870+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6693385 data_alloc: 218103808 data_used: 5599232
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 86286336 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 253 ms_handle_reset con 0x56493eee3400 session 0x56493da28780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 253 handle_osd_map epochs [254,254], i have 253, src has [1,254]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 254 ms_handle_reset con 0x56493bbaa800 session 0x56493bf7c3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 254 ms_handle_reset con 0x56493bde4000 session 0x56493d8681e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:30.500006+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 133758976 unmapped: 90284032 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 254 handle_osd_map epochs [254,255], i have 254, src has [1,255]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 255 ms_handle_reset con 0x56493bbaa800 session 0x56493c532000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:31.500176+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 255 ms_handle_reset con 0x56493b4ae400 session 0x56493b12c780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 90169344 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:32.500319+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 255 heartbeat osd_stat(store_statfs(0x4cc0b6000/0x0/0x4ffc00000, data 0x2e698841/0x2e806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,1])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 133914624 unmapped: 90128384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 255 handle_osd_map epochs [255,256], i have 255, src has [1,256]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 255 handle_osd_map epochs [256,256], i have 256, src has [1,256]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:33.500442+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 256 ms_handle_reset con 0x56493eee3400 session 0x56493b964960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 256 ms_handle_reset con 0x56493b9ad800 session 0x56493e72f860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 134070272 unmapped: 89972736 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:34.500587+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6989883 data_alloc: 218103808 data_used: 5615616
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 257 ms_handle_reset con 0x56493b4ae400 session 0x56493d8612c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 257 handle_osd_map epochs [257,258], i have 257, src has [1,258]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 258 handle_osd_map epochs [258,258], i have 258, src has [1,258]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 258 ms_handle_reset con 0x56493bbaa800 session 0x56493e20ba40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 132767744 unmapped: 91275264 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 258 ms_handle_reset con 0x56493eee2c00 session 0x56493b932780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 258 heartbeat osd_stat(store_statfs(0x4cc0af000/0x0/0x4ffc00000, data 0x2e69c02d/0x2e80d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:35.500749+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 258 heartbeat osd_stat(store_statfs(0x4cc0ac000/0x0/0x4ffc00000, data 0x2e69ec80/0x2e812000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 132784128 unmapped: 91258880 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:36.500923+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 258 ms_handle_reset con 0x56493bde4000 session 0x56493e2d5c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 132784128 unmapped: 91258880 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 258 ms_handle_reset con 0x56493eee3400 session 0x56493d867a40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 258 ms_handle_reset con 0x56493eee3400 session 0x56493caf9e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:37.501135+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 259 ms_handle_reset con 0x56493b4ae400 session 0x56493da2ef00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 132890624 unmapped: 91152384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 259 ms_handle_reset con 0x56493bde4000 session 0x56493da28d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 259 handle_osd_map epochs [259,260], i have 259, src has [1,260]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.366749763s of 10.004246712s, submitted: 179
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:38.501371+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 260 ms_handle_reset con 0x56493bbaa800 session 0x56493be1b2c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 260 ms_handle_reset con 0x56493eee2c00 session 0x56493da285a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 260 ms_handle_reset con 0x56493c9e4800 session 0x56493d86c3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 132972544 unmapped: 91070464 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:39.501538+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7008741 data_alloc: 218103808 data_used: 5636096
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 132972544 unmapped: 91070464 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:40.501723+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 260 ms_handle_reset con 0x56493b4ae400 session 0x56493d86d860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 260 ms_handle_reset con 0x56493bbaa800 session 0x56493ddf74a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 260 heartbeat osd_stat(store_statfs(0x4cc0a0000/0x0/0x4ffc00000, data 0x2e6a29da/0x2e81d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 260 ms_handle_reset con 0x56493bde4000 session 0x56493e24c960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 142573568 unmapped: 81469440 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 260 ms_handle_reset con 0x56493eee3400 session 0x56493e24da40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 260 ms_handle_reset con 0x56493b4ae400 session 0x56493c533680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 260 handle_osd_map epochs [260,261], i have 260, src has [1,261]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 260 handle_osd_map epochs [261,261], i have 261, src has [1,261]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:41.501862+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 261 ms_handle_reset con 0x56493bbaa800 session 0x56493bcb85a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 90734592 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:42.502017+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 262 ms_handle_reset con 0x56493bde4000 session 0x56493b94b860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 262 ms_handle_reset con 0x56493eee2c00 session 0x56493da29680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 133382144 unmapped: 90660864 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:43.502187+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 262 ms_handle_reset con 0x56493eee3400 session 0x56493d861680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 90652672 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 262 ms_handle_reset con 0x56493bbaa800 session 0x56493da2a3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:44.502352+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 262 handle_osd_map epochs [262,263], i have 262, src has [1,263]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 263 handle_osd_map epochs [263,263], i have 263, src has [1,263]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7125595 data_alloc: 218103808 data_used: 5660672
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 263 ms_handle_reset con 0x56493eee2c00 session 0x56493c532000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e5800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 263 ms_handle_reset con 0x56493bde4000 session 0x56493bf7cb40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 264 ms_handle_reset con 0x56493eee3400 session 0x56493d8650e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 90685440 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 264 ms_handle_reset con 0x56493b4ae400 session 0x56493da2f2c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:45.502571+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 264 heartbeat osd_stat(store_statfs(0x4cabff000/0x0/0x4ffc00000, data 0x2fb3b91e/0x2fcbd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 265 ms_handle_reset con 0x56493c9e5800 session 0x56493b92c960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 90652672 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 265 ms_handle_reset con 0x56493bde4000 session 0x56493da143c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 265 ms_handle_reset con 0x56493c9e4800 session 0x56493b932780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:46.502769+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 266 ms_handle_reset con 0x56493bbaa800 session 0x56493bf7c780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 90636288 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:47.502971+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 266 heartbeat osd_stat(store_statfs(0x4cabf6000/0x0/0x4ffc00000, data 0x2fb3f64b/0x2fcc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 133439488 unmapped: 90603520 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:48.503194+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 266 handle_osd_map epochs [266,267], i have 266, src has [1,267]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.835658073s of 10.425185204s, submitted: 137
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 267 ms_handle_reset con 0x56493eee3400 session 0x56493c7aa5a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 90529792 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 267 heartbeat osd_stat(store_statfs(0x4cabfb000/0x0/0x4ffc00000, data 0x2fb3f64b/0x2fcc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:49.503451+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7184881 data_alloc: 218103808 data_used: 5668864
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 133529600 unmapped: 90513408 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 268 ms_handle_reset con 0x56493c9e4800 session 0x56493d4b9c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 268 ms_handle_reset con 0x56493bde4000 session 0x56493b92dc20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e5800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 268 ms_handle_reset con 0x56493c9e5800 session 0x56493e24de00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:50.503575+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f69c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 269 heartbeat osd_stat(store_statfs(0x4cabf9000/0x0/0x4ffc00000, data 0x2fb42979/0x2fcc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 133562368 unmapped: 90480640 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 269 ms_handle_reset con 0x56493bbaa800 session 0x56493e72f4a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:51.503786+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 133595136 unmapped: 90447872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 270 ms_handle_reset con 0x56493f69c800 session 0x56493d4b90e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:52.504008+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 133619712 unmapped: 90423296 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:53.504223+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 133767168 unmapped: 90275840 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 271 ms_handle_reset con 0x56493c9e4800 session 0x56493d860d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e5800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 271 ms_handle_reset con 0x56493c9e5800 session 0x56493e266d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 271 ms_handle_reset con 0x56493ef04000 session 0x56493da28960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e8ee400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 271 ms_handle_reset con 0x56493e8ee400 session 0x56493d8645a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:54.504367+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f261c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 271 ms_handle_reset con 0x56493f261c00 session 0x56493eb5ba40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7193832 data_alloc: 218103808 data_used: 5681152
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 271 handle_osd_map epochs [271,272], i have 271, src has [1,272]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 271 handle_osd_map epochs [272,272], i have 272, src has [1,272]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 272 ms_handle_reset con 0x56493c9e4800 session 0x56493d864960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 272 ms_handle_reset con 0x56493bbaa800 session 0x56493d8614a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 134332416 unmapped: 89710592 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:55.504525+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 134365184 unmapped: 89677824 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 273 heartbeat osd_stat(store_statfs(0x4ca4e4000/0x0/0x4ffc00000, data 0x30252967/0x303d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:56.504744+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 274 ms_handle_reset con 0x56493bde4000 session 0x56493e20be00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e5800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 134430720 unmapped: 89612288 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 274 ms_handle_reset con 0x56493c9e5800 session 0x56493cbd1860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e8ee400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 274 ms_handle_reset con 0x56493e8ee400 session 0x56493b935680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:57.504932+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 134471680 unmapped: 89571328 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:58.505097+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e5800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.283307076s of 10.001691818s, submitted: 212
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 275 ms_handle_reset con 0x56493bbaa800 session 0x56493b94b0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x564940962c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 275 ms_handle_reset con 0x564940962c00 session 0x56493cba7860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 134529024 unmapped: 89513984 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 275 ms_handle_reset con 0x56493ef04000 session 0x56493eb5a780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:59.505216+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7266348 data_alloc: 218103808 data_used: 6258688
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x564940962400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 276 ms_handle_reset con 0x564940962400 session 0x56493e72fc20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 276 ms_handle_reset con 0x56493c9e5800 session 0x56493caf9680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 137027584 unmapped: 87015424 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:00.505418+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 277 ms_handle_reset con 0x56493bbaa800 session 0x56493d86da40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x564940962400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 277 heartbeat osd_stat(store_statfs(0x4ca4d9000/0x0/0x4ffc00000, data 0x30259a94/0x303e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [1])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 137101312 unmapped: 86941696 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 277 ms_handle_reset con 0x564940962400 session 0x56493eb5ba40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:01.505563+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 277 ms_handle_reset con 0x56493ef04000 session 0x56493d86de00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 137150464 unmapped: 86892544 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 278 heartbeat osd_stat(store_statfs(0x4ca4d6000/0x0/0x4ffc00000, data 0x3025b62f/0x303e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:02.505704+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 86876160 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x564940962c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 279 ms_handle_reset con 0x564940962c00 session 0x56493d860f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:03.505819+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 137183232 unmapped: 86859776 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 279 ms_handle_reset con 0x56493eee2c00 session 0x56493ca7e5a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:04.505934+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7318132 data_alloc: 234881024 data_used: 11862016
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 137183232 unmapped: 86859776 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:05.506058+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 279 ms_handle_reset con 0x56493be27800 session 0x56493caf83c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 279 ms_handle_reset con 0x56493ec09400 session 0x56493b9334a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 138264576 unmapped: 85778432 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 280 ms_handle_reset con 0x56493eee2c00 session 0x56493bcb83c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:06.506311+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 280 heartbeat osd_stat(store_statfs(0x4cabca000/0x0/0x4ffc00000, data 0x2fb66980/0x2fcf3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 86687744 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:07.506476+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 281 ms_handle_reset con 0x56493bbaa800 session 0x56493d865680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 86671360 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:08.506646+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 86671360 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:09.506812+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 281 heartbeat osd_stat(store_statfs(0x4cabc5000/0x0/0x4ffc00000, data 0x2fb68919/0x2fcf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7239718 data_alloc: 218103808 data_used: 7147520
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.673897743s of 11.086964607s, submitted: 145
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 141705216 unmapped: 82337792 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x564940962400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 281 ms_handle_reset con 0x564940962400 session 0x56493caf9e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x564940962400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 281 ms_handle_reset con 0x56493bbaa800 session 0x56493d860d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 281 ms_handle_reset con 0x564940962400 session 0x56493d8610e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:10.506931+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 281 ms_handle_reset con 0x56493be27800 session 0x56493ce5ba40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 281 ms_handle_reset con 0x56493ec09400 session 0x56493da2fe00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 282 ms_handle_reset con 0x56493eee2c00 session 0x56493d86c960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 83714048 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:11.507096+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 140615680 unmapped: 83427328 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:12.507260+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 283 handle_osd_map epochs [283,284], i have 283, src has [1,284]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 284 ms_handle_reset con 0x56493ef04000 session 0x56493b964b40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 140623872 unmapped: 83419136 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 284 ms_handle_reset con 0x56493bbaa800 session 0x56493d4b90e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 284 ms_handle_reset con 0x56493be27800 session 0x56493bf7cb40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:13.507411+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 140648448 unmapped: 83394560 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:14.507568+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7365482 data_alloc: 218103808 data_used: 8085504
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ca096000/0x0/0x4ffc00000, data 0x3091fa54/0x30416000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 140673024 unmapped: 83369984 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:15.507732+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 140681216 unmapped: 83361792 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:16.507921+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 140681216 unmapped: 83361792 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:17.508127+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 140681216 unmapped: 83361792 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:18.508722+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 140681216 unmapped: 83361792 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:19.508927+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 285 ms_handle_reset con 0x56493bde4000 session 0x56493bf7cd20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 285 ms_handle_reset con 0x56493c9e4800 session 0x56493be1a1e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7365648 data_alloc: 218103808 data_used: 8097792
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.437903404s of 10.000994682s, submitted: 212
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 136208384 unmapped: 87834624 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 285 ms_handle_reset con 0x56493c9e4800 session 0x56493ce5b0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:20.509077+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 285 ms_handle_reset con 0x56493bbaa800 session 0x56493da150e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 285 heartbeat osd_stat(store_statfs(0x4ca79e000/0x0/0x4ffc00000, data 0x30218524/0x2fd0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 136208384 unmapped: 87834624 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:21.509269+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 136208384 unmapped: 87834624 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:22.509408+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 136208384 unmapped: 87834624 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:23.509566+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 285 ms_handle_reset con 0x56493be27800 session 0x56493b92dc20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 285 heartbeat osd_stat(store_statfs(0x4ca79e000/0x0/0x4ffc00000, data 0x30218524/0x2fd0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 136216576 unmapped: 87826432 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 286 heartbeat osd_stat(store_statfs(0x4ca79e000/0x0/0x4ffc00000, data 0x30218524/0x2fd0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 286 ms_handle_reset con 0x56493ec09400 session 0x56493be1a3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:24.509897+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x564940962400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7276553 data_alloc: 218103808 data_used: 1904640
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 287 ms_handle_reset con 0x56493ef04000 session 0x56493e266d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 136265728 unmapped: 87777280 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 287 ms_handle_reset con 0x56493bde4000 session 0x56493da29860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:25.510079+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 86597632 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 288 ms_handle_reset con 0x56493bbaa800 session 0x56493b934f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:26.510254+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 288 ms_handle_reset con 0x56493c9e4800 session 0x56493e266000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 288 ms_handle_reset con 0x56493ec09400 session 0x56493d866000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x564940962c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 288 ms_handle_reset con 0x564940962c00 session 0x56493da2ab40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 288 ms_handle_reset con 0x56493bde4000 session 0x56493eb5b4a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 139157504 unmapped: 84885504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 288 ms_handle_reset con 0x56493be27800 session 0x56493d8683c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 289 ms_handle_reset con 0x56493c9e4800 session 0x56493c2f85a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:27.510402+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 289 ms_handle_reset con 0x564940962400 session 0x56493d8603c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 289 ms_handle_reset con 0x56493bbaa800 session 0x56493cba6d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 289 ms_handle_reset con 0x56493ec09400 session 0x56493bf7cd20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 289 handle_osd_map epochs [289,290], i have 289, src has [1,290]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 139280384 unmapped: 84762624 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 290 ms_handle_reset con 0x56493bbaa800 session 0x56493d4b90e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:28.510556+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ca0a0000/0x0/0x4ffc00000, data 0x30b23003/0x3040c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 290 ms_handle_reset con 0x56493bde4000 session 0x56493ce5ba40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 290 handle_osd_map epochs [290,291], i have 290, src has [1,291]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 139296768 unmapped: 84746240 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 291 ms_handle_reset con 0x56493be27800 session 0x56493d860d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:29.510692+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 291 ms_handle_reset con 0x56493c9e4800 session 0x56493bcb83c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7376798 data_alloc: 218103808 data_used: 1912832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 138452992 unmapped: 85590016 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:30.510861+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.231670380s of 10.854344368s, submitted: 188
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 291 ms_handle_reset con 0x56493c9e4800 session 0x56493b94b0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 291 ms_handle_reset con 0x56493bbaa800 session 0x56493e20be00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 136740864 unmapped: 87302144 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:31.511009+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 291 ms_handle_reset con 0x56493bde4000 session 0x56493d8690e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 291 ms_handle_reset con 0x56493be27800 session 0x56493ddf6b40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 137060352 unmapped: 86982656 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x564940962400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 291 ms_handle_reset con 0x564940962400 session 0x56493b9341e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 291 ms_handle_reset con 0x56493ec09400 session 0x56493caf9c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:32.511185+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 291 ms_handle_reset con 0x56493bde4000 session 0x56493d4b9c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 291 ms_handle_reset con 0x56493bbaa800 session 0x56493c533680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 137175040 unmapped: 86867968 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:33.511367+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 292 heartbeat osd_stat(store_statfs(0x4c93bc000/0x0/0x4ffc00000, data 0x31806be4/0x310f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [0,0,1])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c9e4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x564940962c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 292 ms_handle_reset con 0x56493be27800 session 0x56493ddf74a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 292 ms_handle_reset con 0x56493c9e4800 session 0x56493cba7860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 292 ms_handle_reset con 0x56493bde4000 session 0x56493b935a40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 292 ms_handle_reset con 0x564940962c00 session 0x56493bcb92c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 292 ms_handle_reset con 0x56493bbaa800 session 0x56493da150e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 292 heartbeat osd_stat(store_statfs(0x4c93bc000/0x0/0x4ffc00000, data 0x31806be4/0x310f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 137216000 unmapped: 86827008 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:34.511583+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7367281 data_alloc: 218103808 data_used: 1933312
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 293 ms_handle_reset con 0x56493be27800 session 0x56493d864960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x564940962800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 137248768 unmapped: 86794240 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 293 ms_handle_reset con 0x564940962800 session 0x56493da15c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 293 ms_handle_reset con 0x56493ec09400 session 0x56493e3785a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 293 ms_handle_reset con 0x56493bbaa800 session 0x56493da2e3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:35.511740+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 293 ms_handle_reset con 0x56493bde4000 session 0x56493c2f8d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 137256960 unmapped: 86786048 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x564940962c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:36.511912+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f3e3800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 293 ms_handle_reset con 0x56493f3e3800 session 0x56493ca9d0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 294 ms_handle_reset con 0x56493be27800 session 0x56493b12c960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 294 ms_handle_reset con 0x564940962c00 session 0x56493be1b2c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 294 ms_handle_reset con 0x56493be27800 session 0x56493e2d41e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 137281536 unmapped: 86761472 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:37.512245+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 294 ms_handle_reset con 0x56493bde4000 session 0x56493da2b4a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 294 ms_handle_reset con 0x56493bbaa800 session 0x56493d8605a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 137281536 unmapped: 86761472 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec08000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:38.512516+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 295 ms_handle_reset con 0x56493ec08000 session 0x56493ce5a5a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec08000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 137330688 unmapped: 86712320 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:39.512670+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 295 heartbeat osd_stat(store_statfs(0x4ca9ac000/0x0/0x4ffc00000, data 0x2fffb3a5/0x2fb01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [0,0,0,1])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7486643 data_alloc: 218103808 data_used: 1953792
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 295 ms_handle_reset con 0x56493bbaa800 session 0x56493da14b40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154320896 unmapped: 69722112 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:40.512816+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.523839951s of 10.036724091s, submitted: 188
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 295 ms_handle_reset con 0x56493bde4000 session 0x56493ca9c960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 142155776 unmapped: 81887232 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:41.513084+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 296 ms_handle_reset con 0x56493be27400 session 0x56493b92dc20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 83623936 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:42.513272+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 296 ms_handle_reset con 0x56493be27800 session 0x56493da15860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x564940962400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 296 ms_handle_reset con 0x564940962400 session 0x56493e72f680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x564940962400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 296 ms_handle_reset con 0x564940962400 session 0x56493b94a000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 296 ms_handle_reset con 0x56493bbaa800 session 0x56493bcb8f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 74596352 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 296 ms_handle_reset con 0x56493bde4000 session 0x56493bf7c960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 296 ms_handle_reset con 0x56493be27400 session 0x56493ca9cf00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:43.513496+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 296 ms_handle_reset con 0x56493be27800 session 0x56493d86cf00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 141443072 unmapped: 82599936 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:44.513653+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 296 heartbeat osd_stat(store_statfs(0x4bada7000/0x0/0x4ffc00000, data 0x3fbfd013/0x3f706000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [0,0,0,0,0,0,3,1])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 9452067 data_alloc: 218103808 data_used: 1961984
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 296 ms_handle_reset con 0x56493be27800 session 0x56493c532000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 143081472 unmapped: 80961536 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 297 ms_handle_reset con 0x56493bbaa800 session 0x56493e2665a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:45.513876+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 297 ms_handle_reset con 0x56493ec08000 session 0x56493da2f860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 297 ms_handle_reset con 0x56493ec09400 session 0x56493d4b9860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 297 ms_handle_reset con 0x56493bde4000 session 0x56493caf9c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 143253504 unmapped: 80789504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 297 ms_handle_reset con 0x56493bbaa800 session 0x56493cbd1860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec08000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:46.514069+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 297 ms_handle_reset con 0x56493bde4000 session 0x56493b94b0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 297 ms_handle_reset con 0x56493ec08000 session 0x56493d86de00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 297 ms_handle_reset con 0x56493be27800 session 0x56493c2f85a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 142196736 unmapped: 81846272 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:47.514287+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 297 ms_handle_reset con 0x56493ec09400 session 0x56493d86cb40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 297 ms_handle_reset con 0x56493be27400 session 0x56493e3790e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 297 ms_handle_reset con 0x56493bde4000 session 0x56493de4f860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 298 handle_osd_map epochs [298,298], i have 298, src has [1,298]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 298 ms_handle_reset con 0x56493bbaa800 session 0x56493c533a40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 298 ms_handle_reset con 0x56493be27800 session 0x56493b935680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 142229504 unmapped: 81813504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:48.514437+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec08000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 298 ms_handle_reset con 0x56493ec08000 session 0x56493e3790e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec08000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 298 ms_handle_reset con 0x56493bbaa800 session 0x56493c5334a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 298 ms_handle_reset con 0x56493ec08000 session 0x56493d86de00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 298 ms_handle_reset con 0x56493bde4000 session 0x56493da28780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 142237696 unmapped: 81805312 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:49.514597+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7394570 data_alloc: 218103808 data_used: 1974272
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 299 ms_handle_reset con 0x56493be27400 session 0x56493b94a000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 142254080 unmapped: 81788928 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x564940962400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:50.514704+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 299 ms_handle_reset con 0x564940962400 session 0x56493b92c5a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 299 heartbeat osd_stat(store_statfs(0x4cad9f000/0x0/0x4ffc00000, data 0x2fc02258/0x2f70d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [0,1])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 299 ms_handle_reset con 0x56493bbaa800 session 0x56493b935a40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 299 ms_handle_reset con 0x56493bde4000 session 0x56493de4e960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.545291424s of 10.171452522s, submitted: 364
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 300 ms_handle_reset con 0x56493be27400 session 0x56493e3792c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 142286848 unmapped: 81756160 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:51.514808+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 300 handle_osd_map epochs [300,301], i have 300, src has [1,301]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 301 ms_handle_reset con 0x56493be27800 session 0x56493da15860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec08000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 142327808 unmapped: 81715200 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x564940962400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 301 ms_handle_reset con 0x564940962400 session 0x56493da29680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:52.514941+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 301 ms_handle_reset con 0x56493ec08000 session 0x56493c2f8d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 301 ms_handle_reset con 0x56493bbaa800 session 0x56493d4b9a40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 142483456 unmapped: 81559552 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:53.515096+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 303 ms_handle_reset con 0x56493bde4000 session 0x56493e24d4a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 142491648 unmapped: 81551360 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:54.515247+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 303 ms_handle_reset con 0x56493be27400 session 0x56493e20af00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4832874 data_alloc: 218103808 data_used: 1114112
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 304 ms_handle_reset con 0x56493be27800 session 0x56493c533e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 86302720 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:55.515402+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e2310000/0x0/0x4ffc00000, data 0x17ff4be6/0x1819c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 305 ms_handle_reset con 0x56493bbaa800 session 0x56493d8672c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 305 ms_handle_reset con 0x56493bde4000 session 0x56493da292c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 138788864 unmapped: 85254144 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:56.515621+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e230e000/0x0/0x4ffc00000, data 0x17ff6859/0x181a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [0,2,1,0,1])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 305 ms_handle_reset con 0x56493be27400 session 0x56493b9341e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 137879552 unmapped: 86163456 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:57.515839+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec08000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 305 ms_handle_reset con 0x56493ec08000 session 0x56493da294a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 137879552 unmapped: 86163456 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:58.516079+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 137879552 unmapped: 86163456 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x564940962c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 305 ms_handle_reset con 0x564940962c00 session 0x56493ca9cb40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:59.516212+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2306024 data_alloc: 218103808 data_used: 1122304
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 305 heartbeat osd_stat(store_statfs(0x4f9b0e000/0x0/0x4ffc00000, data 0x7f6859/0x9a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 137887744 unmapped: 86155264 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:00.516384+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 307 ms_handle_reset con 0x56493bbaa800 session 0x56493cbd1e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 138936320 unmapped: 85106688 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:01.516567+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.267196655s of 11.282325745s, submitted: 382
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 138944512 unmapped: 85098496 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 307 ms_handle_reset con 0x56493bde4000 session 0x56493bf7c780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:02.516711+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 138895360 unmapped: 85147648 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:03.516921+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 307 heartbeat osd_stat(store_statfs(0x4f9b06000/0x0/0x4ffc00000, data 0x7f9fbd/0x9a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 307 ms_handle_reset con 0x56493be27400 session 0x56493da15c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec08000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 138903552 unmapped: 85139456 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:04.517200+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f3e3800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2323336 data_alloc: 218103808 data_used: 1146880
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 308 ms_handle_reset con 0x56493ec08000 session 0x56493ce5ba40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 139001856 unmapped: 85041152 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:05.517410+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cf800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 308 ms_handle_reset con 0x56493e3cf800 session 0x56493eccc960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 308 heartbeat osd_stat(store_statfs(0x4f9b01000/0x0/0x4ffc00000, data 0x7fbf3c/0x9ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 308 handle_osd_map epochs [308,309], i have 308, src has [1,309]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 309 ms_handle_reset con 0x56493f3e3800 session 0x56493e24d0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 309 ms_handle_reset con 0x56493bbaa800 session 0x56493ecccb40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 139026432 unmapped: 85016576 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:06.517628+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 309 ms_handle_reset con 0x56493bde4000 session 0x56493e2acb40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 139026432 unmapped: 85016576 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:07.517821+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 309 ms_handle_reset con 0x56493be27400 session 0x56493e2ad0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 309 heartbeat osd_stat(store_statfs(0x4f9b00000/0x0/0x4ffc00000, data 0x7fd9d1/0x9ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 139042816 unmapped: 85000192 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec08000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 310 ms_handle_reset con 0x56493ec08000 session 0x56493bf6ba40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:08.518086+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 310 ms_handle_reset con 0x56493bbaa800 session 0x56493bf6be00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 310 ms_handle_reset con 0x56493bde4000 session 0x56493d86c000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f3e3800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 310 ms_handle_reset con 0x56493f3e3800 session 0x56493bf6b680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 310 ms_handle_reset con 0x56493be27400 session 0x56493bf6b860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3ce000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 310 ms_handle_reset con 0x56493e3ce000 session 0x56493e2ad0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 139108352 unmapped: 84934656 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:09.518279+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2329000 data_alloc: 218103808 data_used: 1150976
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 139108352 unmapped: 84934656 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:10.518453+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 310 ms_handle_reset con 0x56493bbaa800 session 0x56493bf7c780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 310 ms_handle_reset con 0x56493bde4000 session 0x56493da292c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 139280384 unmapped: 84762624 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:11.518658+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 139280384 unmapped: 84762624 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.741507530s of 10.147704124s, submitted: 119
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:12.518783+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 311 ms_handle_reset con 0x56493be27400 session 0x56493b94a000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 139403264 unmapped: 84639744 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:13.518961+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f3e3800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 311 ms_handle_reset con 0x56493f3e3800 session 0x56493b9352c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 311 heartbeat osd_stat(store_statfs(0x4f9afc000/0x0/0x4ffc00000, data 0x800d7f/0x9b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 312 ms_handle_reset con 0x56493ef04000 session 0x56493d4b8d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 312 ms_handle_reset con 0x56493bbaa800 session 0x56493e2665a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 139509760 unmapped: 84533248 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:14.519198+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2342942 data_alloc: 218103808 data_used: 1159168
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 313 ms_handle_reset con 0x56493bde4000 session 0x56493e72e000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 139567104 unmapped: 84475904 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:15.519387+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 313 ms_handle_reset con 0x56493be27400 session 0x56493e20a1e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 139567104 unmapped: 84475904 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f3e3800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:16.519583+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 313 ms_handle_reset con 0x56493eee3800 session 0x56493da2e960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 313 ms_handle_reset con 0x56493f3e3800 session 0x56493d86cf00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f3e3800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 313 ms_handle_reset con 0x56493f3e3800 session 0x56493c532000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 313 heartbeat osd_stat(store_statfs(0x4f9af0000/0x0/0x4ffc00000, data 0x8044ca/0x9bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 313 handle_osd_map epochs [313,314], i have 313, src has [1,314]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 314 ms_handle_reset con 0x56493bbaa800 session 0x56493e20b860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 139599872 unmapped: 84443136 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:17.519779+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 314 ms_handle_reset con 0x56493bde4000 session 0x56493b968960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 314 ms_handle_reset con 0x56493be27400 session 0x56493eccdc20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 314 ms_handle_reset con 0x56493eee3800 session 0x56493eccd2c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 314 ms_handle_reset con 0x56493eee3800 session 0x56493eccc960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 314 ms_handle_reset con 0x56493bbaa800 session 0x56493eccd860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be27400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 314 ms_handle_reset con 0x56493bde4000 session 0x56493e267e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f3e3800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 314 ms_handle_reset con 0x56493eee2c00 session 0x56493eb5b4a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e8ee400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 140001280 unmapped: 84041728 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 314 ms_handle_reset con 0x56493e8ee400 session 0x56493be1a780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e8ee400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:18.519945+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 314 ms_handle_reset con 0x56493e8ee400 session 0x56493de4fc20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 314 ms_handle_reset con 0x56493bbaa800 session 0x56493bf6a1e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 315 ms_handle_reset con 0x56493be27400 session 0x56493b935c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 315 ms_handle_reset con 0x56493f3e3800 session 0x56493e379e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 140099584 unmapped: 83943424 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:19.520143+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2418456 data_alloc: 218103808 data_used: 1183744
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 83910656 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:20.520327+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 315 ms_handle_reset con 0x56493bde4000 session 0x56493da2e3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ac800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 315 ms_handle_reset con 0x56493bbaa800 session 0x56493b92d860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 315 ms_handle_reset con 0x56493b9ac800 session 0x56493ce5b2c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 315 ms_handle_reset con 0x56493b9ad800 session 0x56493ce5ad20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 83877888 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:21.520482+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61d400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 315 ms_handle_reset con 0x56493f61d400 session 0x56493e2ade00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61cc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 315 ms_handle_reset con 0x56493f61cc00 session 0x56493c532f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ac800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 315 ms_handle_reset con 0x56493b9ac800 session 0x56493bcb85a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 83877888 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:22.520623+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 315 ms_handle_reset con 0x56493b9ad800 session 0x56493c532780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.484973907s of 10.158519745s, submitted: 175
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 315 heartbeat osd_stat(store_statfs(0x4f8f73000/0x0/0x4ffc00000, data 0xf6fc30/0x112b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b5f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61d400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 316 ms_handle_reset con 0x56493f61d400 session 0x56493e3785a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 316 ms_handle_reset con 0x56493bbaa800 session 0x56493da28960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 83861504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:23.520760+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61d800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 316 ms_handle_reset con 0x56493f61d800 session 0x56493e72ef00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61d800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 316 ms_handle_reset con 0x56493f61d800 session 0x56493d868f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ac800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 140189696 unmapped: 83853312 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:24.520888+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 316 handle_osd_map epochs [316,317], i have 316, src has [1,317]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2428501 data_alloc: 218103808 data_used: 1204224
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 140197888 unmapped: 83845120 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:25.521040+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 317 ms_handle_reset con 0x56493bbaa800 session 0x56493bcb8f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61d400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 317 ms_handle_reset con 0x56493f61c800 session 0x56493ca9d0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 317 ms_handle_reset con 0x56493f61d400 session 0x56493e266d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61dc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 317 ms_handle_reset con 0x56493f61dc00 session 0x56493bf6a1e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 317 ms_handle_reset con 0x56493bbaa800 session 0x56493b968960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 143360000 unmapped: 80683008 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:26.521235+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 143360000 unmapped: 80683008 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:27.521406+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 317 heartbeat osd_stat(store_statfs(0x4f8f6a000/0x0/0x4ffc00000, data 0xf736a3/0x1133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b5f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61d400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 318 ms_handle_reset con 0x56493f61c800 session 0x56493c532000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61d800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 318 ms_handle_reset con 0x56493f61d800 session 0x56493da2e960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 143368192 unmapped: 80674816 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:28.521587+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e4a5000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 318 ms_handle_reset con 0x56493edf4800 session 0x56493eb5be00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 318 ms_handle_reset con 0x56493e4a5000 session 0x56493e72e000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 318 handle_osd_map epochs [319,319], i have 318, src has [1,319]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 319 ms_handle_reset con 0x56493bbaa800 session 0x56493d4b8d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 80642048 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:29.522816+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 319 heartbeat osd_stat(store_statfs(0x4f8f60000/0x0/0x4ffc00000, data 0xf76e1f/0x113c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b5f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2497574 data_alloc: 218103808 data_used: 8884224
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 319 ms_handle_reset con 0x56493f61c800 session 0x56493d866f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61d800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 319 handle_osd_map epochs [320,320], i have 319, src has [1,320]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 320 ms_handle_reset con 0x56493f61d800 session 0x56493caf92c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 320 ms_handle_reset con 0x56493edf4800 session 0x56493da15e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 320 ms_handle_reset con 0x56493f61d400 session 0x56493da28000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 80625664 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:30.523005+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 80625664 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 320 ms_handle_reset con 0x56493bbaa800 session 0x56493d4b9c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:31.523112+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 320 handle_osd_map epochs [320,321], i have 320, src has [1,321]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 321 ms_handle_reset con 0x56493edf4800 session 0x56493bf7cf00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61d400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 321 ms_handle_reset con 0x56493f61d400 session 0x56493b94b860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 80609280 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:32.523288+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61d800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.819468498s of 10.022996902s, submitted: 86
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 321 ms_handle_reset con 0x56493f61d800 session 0x56493da28d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd7000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 321 ms_handle_reset con 0x56493edd7000 session 0x56493e2ac960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd7000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 321 handle_osd_map epochs [321,322], i have 321, src has [1,322]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 322 ms_handle_reset con 0x56493f61c800 session 0x56493da290e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 143474688 unmapped: 80568320 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:33.523469+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 323 ms_handle_reset con 0x56493edd7000 session 0x56493e267e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 143491072 unmapped: 80551936 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 323 heartbeat osd_stat(store_statfs(0x4f8f55000/0x0/0x4ffc00000, data 0xf7dce3/0x1147000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b5f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:34.523591+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 323 ms_handle_reset con 0x56493bbaa800 session 0x56493e2665a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2517471 data_alloc: 218103808 data_used: 8908800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 78069760 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:35.523735+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 324 ms_handle_reset con 0x56493edf4800 session 0x56493de4fc20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61d400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 324 ms_handle_reset con 0x56493f61d400 session 0x56493eccc000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 145424384 unmapped: 78618624 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:36.523948+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61d400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 324 ms_handle_reset con 0x56493bbaa800 session 0x56493d8690e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd7000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 145424384 unmapped: 78618624 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 325 ms_handle_reset con 0x56493edd7000 session 0x56493d868d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:37.524126+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 325 handle_osd_map epochs [326,326], i have 325, src has [1,326]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 326 ms_handle_reset con 0x56493edf4800 session 0x56493e24c960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 326 ms_handle_reset con 0x56493f61c800 session 0x56493cba7c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 326 ms_handle_reset con 0x56493f61d400 session 0x56493e2ac960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 145440768 unmapped: 78602240 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:38.524370+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 326 heartbeat osd_stat(store_statfs(0x4f8bdd000/0x0/0x4ffc00000, data 0x12f10f4/0x14bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b5f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 145440768 unmapped: 78602240 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:39.524566+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd7000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2553869 data_alloc: 218103808 data_used: 8982528
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 326 handle_osd_map epochs [327,327], i have 326, src has [1,327]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 327 ms_handle_reset con 0x56493edd7000 session 0x56493bcb8960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 145522688 unmapped: 78520320 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:40.524723+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 327 ms_handle_reset con 0x56493edf4800 session 0x56493d869c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 327 ms_handle_reset con 0x56493bbaa800 session 0x56493d866f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 327 ms_handle_reset con 0x56493f61c800 session 0x56493bcb85a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61d800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 327 ms_handle_reset con 0x56493f61d800 session 0x56493e2ade00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 145547264 unmapped: 78495744 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 327 heartbeat osd_stat(store_statfs(0x4f8bdd000/0x0/0x4ffc00000, data 0x12f2dd3/0x14c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b5f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:41.524844+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 145547264 unmapped: 78495744 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:42.524981+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 327 ms_handle_reset con 0x56493bbaa800 session 0x56493ce5ad20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd7000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.949919701s of 10.737586975s, submitted: 248
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 327 ms_handle_reset con 0x56493edf4800 session 0x56493caf9860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 145571840 unmapped: 78471168 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:43.525104+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 327 handle_osd_map epochs [328,328], i have 327, src has [1,328]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:44.525220+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f7adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 145571840 unmapped: 78471168 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f7ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 328 ms_handle_reset con 0x56493f7adc00 session 0x56493ca9de00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f7ad400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 328 ms_handle_reset con 0x56493f7ad400 session 0x56493de4e3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 329 ms_handle_reset con 0x56493f7ad800 session 0x56493e24d4a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f7ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 329 ms_handle_reset con 0x56493f7ad800 session 0x56493caf9680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 329 ms_handle_reset con 0x56493edd7000 session 0x56493da28000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2570925 data_alloc: 218103808 data_used: 9007104
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:45.525387+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 145612800 unmapped: 78430208 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 329 heartbeat osd_stat(store_statfs(0x4f8bd3000/0x0/0x4ffc00000, data 0x12f68d3/0x14ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b5f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 330 ms_handle_reset con 0x56493bbaa800 session 0x56493b92d680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 330 ms_handle_reset con 0x56493f61c800 session 0x56493d4b8d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:46.525599+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 145711104 unmapped: 78331904 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f7ad400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 330 ms_handle_reset con 0x56493f7ad400 session 0x56493d8cb860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:47.525770+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 78307328 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd7000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 330 ms_handle_reset con 0x56493f61c800 session 0x56493d8672c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:48.525948+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 145760256 unmapped: 78282752 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 330 handle_osd_map epochs [330,331], i have 330, src has [1,331]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 331 ms_handle_reset con 0x56493bbaa800 session 0x56493bf7cd20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f7ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f7adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 331 ms_handle_reset con 0x56493f7adc00 session 0x56493e379e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 331 ms_handle_reset con 0x56493f7ad800 session 0x56493e72f4a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 331 heartbeat osd_stat(store_statfs(0x4f8bd0000/0x0/0x4ffc00000, data 0x12f84b2/0x14ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b5f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:49.526132+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 146866176 unmapped: 77176832 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 331 ms_handle_reset con 0x56493edf4800 session 0x56493e72e780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 331 ms_handle_reset con 0x56493edd7000 session 0x56493da150e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2580788 data_alloc: 218103808 data_used: 9023488
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:50.526363+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 146874368 unmapped: 77168640 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 331 handle_osd_map epochs [332,332], i have 331, src has [1,332]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 332 ms_handle_reset con 0x56493bbaa800 session 0x56493bf6ba40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f7ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 332 ms_handle_reset con 0x56493f7ad800 session 0x56493da15860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:51.526559+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 146948096 unmapped: 77094912 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 332 handle_osd_map epochs [332,333], i have 332, src has [1,333]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 332 handle_osd_map epochs [333,333], i have 333, src has [1,333]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 333 ms_handle_reset con 0x56493f61c800 session 0x56493b94a1e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 333 heartbeat osd_stat(store_statfs(0x4f8bc6000/0x0/0x4ffc00000, data 0x12fcce0/0x14d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b5f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:52.526781+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 146964480 unmapped: 77078528 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f7adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f7ad000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 333 ms_handle_reset con 0x56493f7ad000 session 0x56493b932f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 333 ms_handle_reset con 0x56493bbaa800 session 0x56493d4b81e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd7000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f7ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 333 ms_handle_reset con 0x56493f7ad800 session 0x56493eccd680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.628168106s of 10.002744675s, submitted: 132
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:53.526985+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 146997248 unmapped: 77045760 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 333 heartbeat osd_stat(store_statfs(0x4f8bc1000/0x0/0x4ffc00000, data 0x12fea2f/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b5f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 333 handle_osd_map epochs [333,334], i have 333, src has [1,334]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 334 handle_osd_map epochs [334,334], i have 334, src has [1,334]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 334 ms_handle_reset con 0x56493f61c800 session 0x56493da2d860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c78b400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e38b400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cec00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 334 ms_handle_reset con 0x56493c78b400 session 0x56493da2d680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cf400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 334 ms_handle_reset con 0x56493e38b400 session 0x56493b92de00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 334 ms_handle_reset con 0x56493e3cf400 session 0x56493e2ad0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 334 ms_handle_reset con 0x56493bbaa800 session 0x56493be1ad20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c78b400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:54.527232+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 157130752 unmapped: 66912256 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 334 handle_osd_map epochs [335,335], i have 334, src has [1,335]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 335 ms_handle_reset con 0x56493c78b400 session 0x56493c5abe00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 335 heartbeat osd_stat(store_statfs(0x4f7e42000/0x0/0x4ffc00000, data 0x207c7a8/0x225c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b5f9c6), peers [0,2] op hist [2,1])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 335 ms_handle_reset con 0x56493edf4000 session 0x56493da29860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 335 ms_handle_reset con 0x56493e3cec00 session 0x56493c5ab680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 335 handle_osd_map epochs [335,336], i have 335, src has [1,336]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 336 ms_handle_reset con 0x56493edd7000 session 0x56493da292c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 336 ms_handle_reset con 0x56493f7adc00 session 0x56493c532b40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 336 ms_handle_reset con 0x56493bbaa800 session 0x56493da2eb40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2743863 data_alloc: 234881024 data_used: 12685312
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c78b400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:55.527425+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 157171712 unmapped: 66871296 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cec00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 336 ms_handle_reset con 0x56493e3cec00 session 0x56493da2cb40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cf400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 336 ms_handle_reset con 0x56493e3cf400 session 0x56493da2d2c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 336 handle_osd_map epochs [337,337], i have 336, src has [1,337]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 337 ms_handle_reset con 0x56493c78b400 session 0x56493eb5af00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cec00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 337 ms_handle_reset con 0x56493bbaa800 session 0x56493d866000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cf400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd7000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 337 ms_handle_reset con 0x56493e3cec00 session 0x56493caf9e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:56.527646+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 157196288 unmapped: 66846720 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 337 handle_osd_map epochs [338,338], i have 337, src has [1,338]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f7adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 338 ms_handle_reset con 0x56493f7adc00 session 0x56493b935860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 338 ms_handle_reset con 0x56493edd7000 session 0x56493e20a3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 338 ms_handle_reset con 0x56493e3cf400 session 0x56493bf6b860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 338 ms_handle_reset con 0x56493bbaa800 session 0x56493da2b0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c78b400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:57.527790+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cec00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 157483008 unmapped: 66560000 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 338 ms_handle_reset con 0x56493e3cec00 session 0x56493b94b0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 338 ms_handle_reset con 0x56493c78b400 session 0x56493b92d680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd7000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:58.527957+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 151191552 unmapped: 72851456 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f7adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 338 ms_handle_reset con 0x56493f7adc00 session 0x56493ce5a3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c78b400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cec00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 338 ms_handle_reset con 0x56493e3cec00 session 0x56493b92c000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 338 ms_handle_reset con 0x56493bbaa800 session 0x56493bf6a1e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 338 handle_osd_map epochs [338,339], i have 338, src has [1,339]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 339 ms_handle_reset con 0x56493c78b400 session 0x56493eb5b2c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cf400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 339 ms_handle_reset con 0x56493e3cf400 session 0x56493eccd2c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 339 ms_handle_reset con 0x56493edd7000 session 0x56493caf9860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:59.528125+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 151240704 unmapped: 72802304 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 339 heartbeat osd_stat(store_statfs(0x4f7e32000/0x0/0x4ffc00000, data 0x208565d/0x226b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b5f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2736885 data_alloc: 234881024 data_used: 12689408
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd7000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:00.528301+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 151240704 unmapped: 72802304 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c78b400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 339 ms_handle_reset con 0x56493c78b400 session 0x56493caf9c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 339 handle_osd_map epochs [339,340], i have 339, src has [1,340]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 340 ms_handle_reset con 0x56493edd7000 session 0x56493de4f4a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cec00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cf400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 340 ms_handle_reset con 0x56493e3cf400 session 0x56493e3785a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 340 ms_handle_reset con 0x56493e3cec00 session 0x56493eccc960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 340 ms_handle_reset con 0x56493f61c800 session 0x56493ce5a3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c78b400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 340 ms_handle_reset con 0x56493c78b400 session 0x56493b92d680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cec00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 340 ms_handle_reset con 0x56493e3cec00 session 0x56493b94b0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cf400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 340 ms_handle_reset con 0x56493e3cf400 session 0x56493da2b0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:01.528430+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd7000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 340 ms_handle_reset con 0x56493edd7000 session 0x56493bf6b860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 151240704 unmapped: 72802304 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 340 handle_osd_map epochs [341,341], i have 340, src has [1,341]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 341 ms_handle_reset con 0x56493edf4000 session 0x56493c532d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 341 ms_handle_reset con 0x56493edf4000 session 0x56493da2ab40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 341 ms_handle_reset con 0x56493bbaa800 session 0x56493ce5ba40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 341 ms_handle_reset con 0x56493f61c800 session 0x56493e20a3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:02.528595+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 151289856 unmapped: 72753152 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c78b400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 341 handle_osd_map epochs [342,342], i have 341, src has [1,342]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 342 ms_handle_reset con 0x56493c78b400 session 0x56493b9645a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:03.528766+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 151347200 unmapped: 72695808 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cec00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.519982338s of 10.429484367s, submitted: 273
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 342 handle_osd_map epochs [343,343], i have 342, src has [1,343]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 343 ms_handle_reset con 0x56493e3cec00 session 0x56493da2eb40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c78b400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 343 ms_handle_reset con 0x56493bbaa800 session 0x56493d866780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 343 ms_handle_reset con 0x56493edf4000 session 0x56493d4b85a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:04.528937+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 151371776 unmapped: 72671232 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 343 heartbeat osd_stat(store_statfs(0x4f7e28000/0x0/0x4ffc00000, data 0x208cb12/0x2275000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b5f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 343 handle_osd_map epochs [344,344], i have 343, src has [1,344]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 344 ms_handle_reset con 0x56493f61c800 session 0x56493cba7c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2751100 data_alloc: 234881024 data_used: 12693504
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 344 ms_handle_reset con 0x56493c78b400 session 0x56493da2ba40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3cf400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 344 ms_handle_reset con 0x56493e3cf400 session 0x56493bcb85a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:05.529079+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 151535616 unmapped: 72507392 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 344 ms_handle_reset con 0x56493bbaa800 session 0x56493b92d0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c78b400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 344 ms_handle_reset con 0x56493c78b400 session 0x56493d4b8d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 344 heartbeat osd_stat(store_statfs(0x4f7e00000/0x0/0x4ffc00000, data 0x20b2798/0x229c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b5f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:06.529231+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd7000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 151863296 unmapped: 72179712 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 344 handle_osd_map epochs [345,345], i have 344, src has [1,345]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f7ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:07.529351+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 345 ms_handle_reset con 0x56493b4b0c00 session 0x56493b94af00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 152985600 unmapped: 71057408 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 345 handle_osd_map epochs [346,346], i have 345, src has [1,346]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 346 ms_handle_reset con 0x56493edd7000 session 0x56493da2d680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493dfa0400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43b800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 346 ms_handle_reset con 0x56493f43b800 session 0x56493bf6a780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 346 ms_handle_reset con 0x56493dfa0400 session 0x56493eccd680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:08.529469+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 158220288 unmapped: 65822720 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 346 handle_osd_map epochs [347,347], i have 346, src has [1,347]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 347 ms_handle_reset con 0x56493b4b0c00 session 0x56493d861680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 347 ms_handle_reset con 0x56493f7ad800 session 0x56493b969680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:09.529615+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 158220288 unmapped: 65822720 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2815121 data_alloc: 234881024 data_used: 18788352
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:10.529764+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 158253056 unmapped: 65789952 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 347 handle_osd_map epochs [347,348], i have 347, src has [1,348]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 348 handle_osd_map epochs [348,348], i have 348, src has [1,348]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 348 ms_handle_reset con 0x56493bbaa800 session 0x56493d4b81e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c78b400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 348 ms_handle_reset con 0x56493c78b400 session 0x56493d8cba40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:11.529919+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 158277632 unmapped: 65765376 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 348 heartbeat osd_stat(store_statfs(0x4f7df3000/0x0/0x4ffc00000, data 0x20b9898/0x22aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b5f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 348 ms_handle_reset con 0x56493b4b0c00 session 0x56493b92c000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 348 ms_handle_reset con 0x56493bbaa800 session 0x56493b92c5a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:12.530124+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 158351360 unmapped: 65691648 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:13.530329+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 158351360 unmapped: 65691648 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493dfa0400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f7ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 348 ms_handle_reset con 0x56493f7ad800 session 0x56493ddf6f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd7000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:14.530449+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43b800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.233964920s of 10.702957153s, submitted: 192
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 348 ms_handle_reset con 0x56493eee3400 session 0x56493cbd1860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 158359552 unmapped: 65683456 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 348 handle_osd_map epochs [349,349], i have 348, src has [1,349]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 349 ms_handle_reset con 0x56493edd7000 session 0x56493d8654a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd7000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 349 ms_handle_reset con 0x56493edd7000 session 0x56493b12c780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 349 ms_handle_reset con 0x56493dfa0400 session 0x56493ecccf00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2825869 data_alloc: 234881024 data_used: 18808832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:15.530605+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 158359552 unmapped: 65683456 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 349 heartbeat osd_stat(store_statfs(0x4f7def000/0x0/0x4ffc00000, data 0x20bb515/0x22ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b5f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 349 handle_osd_map epochs [349,350], i have 349, src has [1,350]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 350 handle_osd_map epochs [350,350], i have 350, src has [1,350]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 350 ms_handle_reset con 0x56493bbaa800 session 0x56493d4b8d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 350 ms_handle_reset con 0x56493b4b0c00 session 0x56493cba6d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 350 ms_handle_reset con 0x56493f43b800 session 0x56493caf85a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 350 ms_handle_reset con 0x56493eee3400 session 0x56493b92d0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:16.530788+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 158343168 unmapped: 65699840 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 350 handle_osd_map epochs [350,351], i have 350, src has [1,351]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 351 ms_handle_reset con 0x56493b4b0c00 session 0x56493d8694a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:17.530925+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 351 heartbeat osd_stat(store_statfs(0x4f7de9000/0x0/0x4ffc00000, data 0x20bedc3/0x22b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b5f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 158351360 unmapped: 65691648 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493dfa0400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 351 handle_osd_map epochs [352,352], i have 351, src has [1,352]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 352 ms_handle_reset con 0x56493bbaa800 session 0x56493da2ba40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:18.531036+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 165773312 unmapped: 58269696 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 352 handle_osd_map epochs [353,353], i have 352, src has [1,353]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 353 ms_handle_reset con 0x56493dfa0400 session 0x56493e24da40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd7000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 353 ms_handle_reset con 0x56493edd7000 session 0x56493d86c960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:19.531223+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 163053568 unmapped: 60989440 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 353 heartbeat osd_stat(store_statfs(0x4f693b000/0x0/0x4ffc00000, data 0x23cc93e/0x25c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 353 handle_osd_map epochs [353,354], i have 353, src has [1,354]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 354 handle_osd_map epochs [354,354], i have 354, src has [1,354]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 354 ms_handle_reset con 0x56493b4b0c00 session 0x56493e2ac960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2880148 data_alloc: 234881024 data_used: 22056960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493dfa0400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 354 ms_handle_reset con 0x56493bbaa800 session 0x56493e267e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:20.531384+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 163135488 unmapped: 60907520 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 355 ms_handle_reset con 0x56493dfa0400 session 0x56493b935860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:21.531556+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 163151872 unmapped: 60891136 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 355 handle_osd_map epochs [356,356], i have 355, src has [1,356]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:22.531739+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 163274752 unmapped: 60768256 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 356 handle_osd_map epochs [356,357], i have 356, src has [1,357]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 357 ms_handle_reset con 0x56493eee3400 session 0x56493e20a3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f7ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 357 heartbeat osd_stat(store_statfs(0x4f6938000/0x0/0x4ffc00000, data 0x23d32c5/0x25c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [1])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 357 ms_handle_reset con 0x56493f7ad800 session 0x56493d8605a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:23.531883+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 163405824 unmapped: 60637184 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:24.532086+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 163405824 unmapped: 60637184 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2884091 data_alloc: 234881024 data_used: 22056960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:25.532281+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 163405824 unmapped: 60637184 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:26.532491+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 163405824 unmapped: 60637184 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:27.532680+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 163405824 unmapped: 60637184 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 357 heartbeat osd_stat(store_statfs(0x4f6935000/0x0/0x4ffc00000, data 0x23d4e90/0x25c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 357 handle_osd_map epochs [358,358], i have 357, src has [1,358]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.780139923s of 13.576430321s, submitted: 252
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:28.532856+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 163414016 unmapped: 60628992 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 358 heartbeat osd_stat(store_statfs(0x4f6933000/0x0/0x4ffc00000, data 0x23d69a7/0x25ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 358 ms_handle_reset con 0x56493b4b0c00 session 0x56493e0f81e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:29.533036+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 163414016 unmapped: 60628992 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2888175 data_alloc: 234881024 data_used: 22056960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:30.533178+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 163414016 unmapped: 60628992 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 358 ms_handle_reset con 0x56493bbaa800 session 0x56493d8672c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493dfa0400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:31.533312+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 358 ms_handle_reset con 0x56493eee3400 session 0x56493be1a1e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 163414016 unmapped: 60628992 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 358 handle_osd_map epochs [359,359], i have 358, src has [1,359]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f651c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 359 ms_handle_reset con 0x56493f651c00 session 0x56493bf7d860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493defe800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:32.533472+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 163446784 unmapped: 60596224 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 359 heartbeat osd_stat(store_statfs(0x4f692d000/0x0/0x4ffc00000, data 0x23d8666/0x25d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 359 handle_osd_map epochs [360,360], i have 359, src has [1,360]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 360 ms_handle_reset con 0x56493ec09800 session 0x56493da294a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 360 ms_handle_reset con 0x56493defe800 session 0x56493ddf6f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 360 ms_handle_reset con 0x56493dfa0400 session 0x56493d867c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:33.533657+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 360 ms_handle_reset con 0x56493b4b0c00 session 0x56493eb5b0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 163446784 unmapped: 60596224 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 360 handle_osd_map epochs [360,361], i have 360, src has [1,361]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 361 ms_handle_reset con 0x56493bbaa800 session 0x56493e266b40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:34.533827+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 163512320 unmapped: 60530688 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 361 handle_osd_map epochs [362,362], i have 361, src has [1,362]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f651c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 362 ms_handle_reset con 0x56493f651c00 session 0x56493bcb81e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 362 ms_handle_reset con 0x56493eee3400 session 0x56493ce5a3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 362 ms_handle_reset con 0x56493b4b0c00 session 0x56493d8663c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904286 data_alloc: 234881024 data_used: 22081536
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:35.534008+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 362 heartbeat osd_stat(store_statfs(0x4f6923000/0x0/0x4ffc00000, data 0x23ddb57/0x25d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 162537472 unmapped: 61505536 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 362 handle_osd_map epochs [363,363], i have 362, src has [1,363]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 363 ms_handle_reset con 0x56493bbaa800 session 0x56493ddf6960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493defe800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 363 heartbeat osd_stat(store_statfs(0x4f6515000/0x0/0x4ffc00000, data 0x23dda93/0x25d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 363 ms_handle_reset con 0x56493defe800 session 0x56493d866780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493dfa0400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 363 ms_handle_reset con 0x56493dfa0400 session 0x56493e266000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:36.534264+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 162537472 unmapped: 61505536 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:37.534398+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 162553856 unmapped: 61489152 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 363 handle_osd_map epochs [363,364], i have 363, src has [1,364]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 364 ms_handle_reset con 0x56493b4b0c00 session 0x56493b934960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 364 heartbeat osd_stat(store_statfs(0x4f6512000/0x0/0x4ffc00000, data 0x23df61e/0x25d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:38.534541+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 162553856 unmapped: 61489152 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 364 heartbeat osd_stat(store_statfs(0x4f6512000/0x0/0x4ffc00000, data 0x23df61e/0x25d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 364 ms_handle_reset con 0x56493bbaa800 session 0x56493da2e960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493defe800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:39.534796+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 162586624 unmapped: 61456384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 364 handle_osd_map epochs [364,365], i have 364, src has [1,365]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.552629471s of 11.933937073s, submitted: 156
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 365 ms_handle_reset con 0x56493defe800 session 0x56493d4b81e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2911664 data_alloc: 234881024 data_used: 22077440
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 365 ms_handle_reset con 0x56493eee3400 session 0x56493b92c000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c797400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:40.535038+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 365 ms_handle_reset con 0x56493c797400 session 0x56493b92c5a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 162594816 unmapped: 61448192 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 365 heartbeat osd_stat(store_statfs(0x4f650e000/0x0/0x4ffc00000, data 0x23e2dd0/0x25df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:41.535235+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 365 ms_handle_reset con 0x56493b4b0c00 session 0x56493b934960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 162594816 unmapped: 61448192 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 365 ms_handle_reset con 0x56493bbaa800 session 0x56493e266000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:42.535415+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 162594816 unmapped: 61448192 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c797400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:43.535597+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493defe800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 162594816 unmapped: 61448192 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 365 handle_osd_map epochs [366,366], i have 365, src has [1,366]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 366 heartbeat osd_stat(store_statfs(0x4f650e000/0x0/0x4ffc00000, data 0x23e2e32/0x25e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 366 ms_handle_reset con 0x56493c797400 session 0x56493ddf6960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 366 ms_handle_reset con 0x56493eee3400 session 0x56493da294a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:44.535763+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43cc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 366 ms_handle_reset con 0x56493f43cc00 session 0x56493cba6d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 162611200 unmapped: 61431808 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 366 handle_osd_map epochs [367,367], i have 366, src has [1,367]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 367 ms_handle_reset con 0x56493b4b0c00 session 0x56493d8654a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c797400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 367 ms_handle_reset con 0x56493c797400 session 0x56493b92dc20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 367 ms_handle_reset con 0x56493defe800 session 0x56493bf7cf00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925000 data_alloc: 234881024 data_used: 22089728
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:45.535950+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 162635776 unmapped: 61407232 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 367 ms_handle_reset con 0x56493eee3400 session 0x56493ca9d860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493cb95400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 367 ms_handle_reset con 0x56493cb95400 session 0x56493e2d4f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:46.536139+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 367 ms_handle_reset con 0x56493b4b0c00 session 0x56493c5ab680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 162668544 unmapped: 61374464 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 367 heartbeat osd_stat(store_statfs(0x4f6507000/0x0/0x4ffc00000, data 0x23e6c9e/0x25e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c797400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 367 ms_handle_reset con 0x56493c797400 session 0x56493d8cb0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:47.536313+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 367 ms_handle_reset con 0x56493bbaa800 session 0x56493b92cf00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 162668544 unmapped: 61374464 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:48.536481+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493defe800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 162676736 unmapped: 61366272 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 367 handle_osd_map epochs [368,368], i have 367, src has [1,368]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 368 ms_handle_reset con 0x56493eee3400 session 0x56493b9650e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f3e2000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 368 ms_handle_reset con 0x56493f3e2000 session 0x56493caf9680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 368 heartbeat osd_stat(store_statfs(0x4f6505000/0x0/0x4ffc00000, data 0x23e6cbd/0x25e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:49.536636+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 368 ms_handle_reset con 0x56493edf4000 session 0x56493e72e780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 368 ms_handle_reset con 0x56493f61c800 session 0x56493c532f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 162701312 unmapped: 61341696 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 368 handle_osd_map epochs [369,369], i have 368, src has [1,369]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.800995827s of 10.075917244s, submitted: 104
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 369 ms_handle_reset con 0x56493bbaa800 session 0x56493de03e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c797400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 369 ms_handle_reset con 0x56493defe800 session 0x56493d86d860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 369 ms_handle_reset con 0x56493b4b0c00 session 0x56493c5ab2c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2907494 data_alloc: 234881024 data_used: 21913600
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:50.536776+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 162734080 unmapped: 61308928 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 369 ms_handle_reset con 0x56493c797400 session 0x56493c5ab0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 369 handle_osd_map epochs [370,370], i have 369, src has [1,370]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:51.536935+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 370 ms_handle_reset con 0x56493b4b0c00 session 0x56493d867c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493defe800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 370 ms_handle_reset con 0x56493bbaa800 session 0x56493be1a1e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 162783232 unmapped: 61259776 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 370 handle_osd_map epochs [371,371], i have 370, src has [1,371]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 371 ms_handle_reset con 0x56493defe800 session 0x56493d4b9860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 371 ms_handle_reset con 0x56493edf4000 session 0x56493b12c780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:52.537073+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 371 heartbeat osd_stat(store_statfs(0x4f6828000/0x0/0x4ffc00000, data 0x20bbf9e/0x22c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 162783232 unmapped: 61259776 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 371 ms_handle_reset con 0x56493b4b0c00 session 0x56493d8670e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c797400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 371 ms_handle_reset con 0x56493bbaa800 session 0x56493d8610e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493defe800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 371 ms_handle_reset con 0x56493defe800 session 0x56493b964000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 371 ms_handle_reset con 0x56493c797400 session 0x56493e24cf00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:53.537252+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 162783232 unmapped: 61259776 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:54.537479+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 162783232 unmapped: 61259776 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 371 handle_osd_map epochs [372,372], i have 371, src has [1,372]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 372 ms_handle_reset con 0x56493f61c800 session 0x56493da29a40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2912039 data_alloc: 234881024 data_used: 21909504
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:55.537637+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 372 ms_handle_reset con 0x56493eee3400 session 0x56493d8603c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 372 ms_handle_reset con 0x56493b4b0c00 session 0x56493d8610e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 372 ms_handle_reset con 0x56493bbaa800 session 0x56493d8670e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c797400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 161284096 unmapped: 62758912 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493defe800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 372 ms_handle_reset con 0x56493defe800 session 0x56493caf8f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e38a400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 372 handle_osd_map epochs [373,373], i have 372, src has [1,373]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 373 ms_handle_reset con 0x56493c797400 session 0x56493d4b9860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 373 ms_handle_reset con 0x56493e38a400 session 0x56493b935860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 373 heartbeat osd_stat(store_statfs(0x4f85e4000/0x0/0x4ffc00000, data 0x1343898/0x1549000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 373 ms_handle_reset con 0x56493bbaa800 session 0x56493ca9cb40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:56.537864+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 161308672 unmapped: 62734336 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493defe800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 373 ms_handle_reset con 0x56493defe800 session 0x56493caf9860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 373 handle_osd_map epochs [374,374], i have 373, src has [1,374]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 374 ms_handle_reset con 0x56493b4b0c00 session 0x56493d867c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 374 ms_handle_reset con 0x56493eee3400 session 0x56493ca9c960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 374 ms_handle_reset con 0x56493f61c800 session 0x56493da2ef00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 374 heartbeat osd_stat(store_statfs(0x4f85df000/0x0/0x4ffc00000, data 0x1345517/0x154d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:57.538000+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 161316864 unmapped: 62726144 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 374 ms_handle_reset con 0x56493b9ac800 session 0x56493e20a960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 374 ms_handle_reset con 0x56493b9ad800 session 0x56493d869a40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 374 heartbeat osd_stat(store_statfs(0x4f85dc000/0x0/0x4ffc00000, data 0x13470cc/0x154f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:58.538142+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 374 ms_handle_reset con 0x56493bbaa800 session 0x56493de03e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 155009024 unmapped: 69033984 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 375 ms_handle_reset con 0x56493b4b0c00 session 0x56493cba6b40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:59.538314+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 375 ms_handle_reset con 0x56493b4b0c00 session 0x56493b92c960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 155058176 unmapped: 68984832 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ac800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.480748177s of 10.086325645s, submitted: 204
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2615885 data_alloc: 218103808 data_used: 1413120
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:00.538476+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 155058176 unmapped: 68984832 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 375 handle_osd_map epochs [375,376], i have 375, src has [1,376]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:01.538619+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 376 ms_handle_reset con 0x56493b9ac800 session 0x56493bcb8f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 376 ms_handle_reset con 0x56493b9ad800 session 0x56493c2f9a40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153935872 unmapped: 70107136 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 376 ms_handle_reset con 0x56493bbaa800 session 0x56493cba7860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:02.538761+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153952256 unmapped: 70090752 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:03.538866+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 376 handle_osd_map epochs [377,377], i have 376, src has [1,377]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 377 ms_handle_reset con 0x56493f61c800 session 0x56493e2d5c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153952256 unmapped: 70090752 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 377 handle_osd_map epochs [377,378], i have 377, src has [1,378]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 378 heartbeat osd_stat(store_statfs(0x4f90ae000/0x0/0x4ffc00000, data 0x875417/0xa7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:04.539026+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 378 ms_handle_reset con 0x56493f61c800 session 0x56493d868000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153952256 unmapped: 70090752 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 378 handle_osd_map epochs [379,379], i have 378, src has [1,379]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2624808 data_alloc: 218103808 data_used: 1409024
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 379 heartbeat osd_stat(store_statfs(0x4f90ae000/0x0/0x4ffc00000, data 0x876b3c/0xa7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:05.539202+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153952256 unmapped: 70090752 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 379 handle_osd_map epochs [379,380], i have 379, src has [1,380]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 380 ms_handle_reset con 0x56493b4b0c00 session 0x56493d8612c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ac800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:06.539399+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 380 heartbeat osd_stat(store_statfs(0x4f90aa000/0x0/0x4ffc00000, data 0x87864b/0xa82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 380 ms_handle_reset con 0x56493b9ac800 session 0x56493ce5a5a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153968640 unmapped: 70074368 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:07.539542+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153968640 unmapped: 70074368 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:08.539715+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153968640 unmapped: 70074368 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:09.539839+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153968640 unmapped: 70074368 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2627670 data_alloc: 218103808 data_used: 1409024
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:10.539993+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153968640 unmapped: 70074368 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 380 handle_osd_map epochs [381,381], i have 380, src has [1,381]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.378544807s of 10.756080627s, submitted: 147
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 heartbeat osd_stat(store_statfs(0x4f90a6000/0x0/0x4ffc00000, data 0x87bc71/0xa87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:11.540138+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153968640 unmapped: 70074368 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:12.540348+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153968640 unmapped: 70074368 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:13.540559+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153968640 unmapped: 70074368 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 heartbeat osd_stat(store_statfs(0x4f90a6000/0x0/0x4ffc00000, data 0x87bc71/0xa87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:14.540734+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153968640 unmapped: 70074368 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2629300 data_alloc: 218103808 data_used: 1409024
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:15.540934+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153968640 unmapped: 70074368 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 heartbeat osd_stat(store_statfs(0x4f90a6000/0x0/0x4ffc00000, data 0x87bc71/0xa87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:16.541118+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153968640 unmapped: 70074368 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:17.541261+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153968640 unmapped: 70074368 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:18.541417+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153968640 unmapped: 70074368 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 heartbeat osd_stat(store_statfs(0x4f90a6000/0x0/0x4ffc00000, data 0x87bc71/0xa87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:19.541560+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153968640 unmapped: 70074368 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:20.541731+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2629300 data_alloc: 218103808 data_used: 1409024
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153968640 unmapped: 70074368 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.259254456s of 10.270411491s, submitted: 23
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493b9ad800 session 0x56493d869c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:21.541893+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 heartbeat osd_stat(store_statfs(0x4f90a6000/0x0/0x4ffc00000, data 0x87bc71/0xa87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153968640 unmapped: 70074368 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:22.542025+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bbaa800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493bbaa800 session 0x56493c2f85a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493b4b0c00 session 0x56493b3150e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153976832 unmapped: 70066176 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:23.542263+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153976832 unmapped: 70066176 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:24.542452+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 heartbeat osd_stat(store_statfs(0x4f90a7000/0x0/0x4ffc00000, data 0x87bc71/0xa87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ac800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493b9ac800 session 0x56493b964000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153976832 unmapped: 70066176 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:25.542613+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2631722 data_alloc: 218103808 data_used: 1409024
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153976832 unmapped: 70066176 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493b9ad800 session 0x56493da2f2c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493f61c800 session 0x56493e24da40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:26.542817+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154001408 unmapped: 70041600 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:27.542982+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154001408 unmapped: 70041600 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:28.543104+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 heartbeat osd_stat(store_statfs(0x4f90a7000/0x0/0x4ffc00000, data 0x87bc71/0xa87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154001408 unmapped: 70041600 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:29.543302+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154001408 unmapped: 70041600 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:30.543454+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2630257 data_alloc: 218103808 data_used: 1409024
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154001408 unmapped: 70041600 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:31.543566+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154001408 unmapped: 70041600 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:32.543731+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154001408 unmapped: 70041600 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:33.543903+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154001408 unmapped: 70041600 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 heartbeat osd_stat(store_statfs(0x4f90a7000/0x0/0x4ffc00000, data 0x87bc71/0xa87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:34.544061+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154001408 unmapped: 70041600 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:35.544223+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2630257 data_alloc: 218103808 data_used: 1409024
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154001408 unmapped: 70041600 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:36.544433+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154001408 unmapped: 70041600 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 heartbeat osd_stat(store_statfs(0x4f90a7000/0x0/0x4ffc00000, data 0x87bc71/0xa87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:37.544599+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154001408 unmapped: 70041600 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:38.544758+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493defe800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.565946579s of 17.662443161s, submitted: 25
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154001408 unmapped: 70041600 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:39.544941+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154001408 unmapped: 70041600 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493defe800 session 0x56493b92d680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:40.545361+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2630257 data_alloc: 218103808 data_used: 1409024
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154001408 unmapped: 70041600 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:41.545535+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 heartbeat osd_stat(store_statfs(0x4f90a7000/0x0/0x4ffc00000, data 0x87bc71/0xa87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154001408 unmapped: 70041600 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493defe800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493defe800 session 0x56493ce5ba40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:42.545677+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154001408 unmapped: 70041600 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 heartbeat osd_stat(store_statfs(0x4f90a6000/0x0/0x4ffc00000, data 0x87bcd3/0xa88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493b4b0c00 session 0x56493e24da40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ac800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:43.545831+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493b9ac800 session 0x56493b3150e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154009600 unmapped: 70033408 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:44.545991+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493b9ad800 session 0x56493e20a960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 heartbeat osd_stat(store_statfs(0x4f90a7000/0x0/0x4ffc00000, data 0x87bc71/0xa87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154009600 unmapped: 70033408 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:45.546207+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2635373 data_alloc: 218103808 data_used: 1409024
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154009600 unmapped: 70033408 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f61c800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493f61c800 session 0x56493ca9c960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:46.546450+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493b4b0c00 session 0x56493ca9cb40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154025984 unmapped: 70017024 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 heartbeat osd_stat(store_statfs(0x4f90a6000/0x0/0x4ffc00000, data 0x87bc81/0xa88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:47.546599+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ac800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493b9ac800 session 0x56493bf6a780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154025984 unmapped: 70017024 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 heartbeat osd_stat(store_statfs(0x4f90a7000/0x0/0x4ffc00000, data 0x87bc71/0xa87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:48.547212+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154025984 unmapped: 70017024 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493b9ad800 session 0x56493ee1a3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493defe800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.196995735s of 10.367760658s, submitted: 50
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493defe800 session 0x56493da14960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 heartbeat osd_stat(store_statfs(0x4f90a7000/0x0/0x4ffc00000, data 0x87bc71/0xa87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:49.547332+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154025984 unmapped: 70017024 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:50.547837+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e38a400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493e38a400 session 0x56493c532d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 heartbeat osd_stat(store_statfs(0x4f90a7000/0x0/0x4ffc00000, data 0x87bc71/0xa87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2635736 data_alloc: 218103808 data_used: 1409024
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493b4b0c00 session 0x56493d86cb40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154058752 unmapped: 69984256 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ac800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:51.548557+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493defe800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493defe800 session 0x56493c5323c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43a800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154058752 unmapped: 69984256 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493f43a800 session 0x56493c5ab2c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:52.548974+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154058752 unmapped: 69984256 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:53.549119+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154058752 unmapped: 69984256 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:54.549381+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154058752 unmapped: 69984256 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 heartbeat osd_stat(store_statfs(0x4f90a6000/0x0/0x4ffc00000, data 0x87bc94/0xa88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:55.549714+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2637318 data_alloc: 218103808 data_used: 1413120
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154058752 unmapped: 69984256 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 heartbeat osd_stat(store_statfs(0x4f90a6000/0x0/0x4ffc00000, data 0x87bc94/0xa88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:56.550277+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154058752 unmapped: 69984256 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:57.550620+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493b9ac800 session 0x56493cba6d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493b9ad800 session 0x56493b934960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154058752 unmapped: 69984256 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493b4b0c00 session 0x56493caf85a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:58.550773+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154066944 unmapped: 69976064 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:59.551141+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154066944 unmapped: 69976064 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 heartbeat osd_stat(store_statfs(0x4f90a6000/0x0/0x4ffc00000, data 0x87bc71/0xa87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:00.551341+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2635898 data_alloc: 218103808 data_used: 1409024
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ac800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493b9ac800 session 0x56493e267860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154066944 unmapped: 69976064 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.815820694s of 11.946370125s, submitted: 44
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493b9ad800 session 0x56493e20ba40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:01.551503+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154075136 unmapped: 69967872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:02.551726+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154075136 unmapped: 69967872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:03.551878+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154075136 unmapped: 69967872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:04.552035+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154075136 unmapped: 69967872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 heartbeat osd_stat(store_statfs(0x4f90a6000/0x0/0x4ffc00000, data 0x87bc71/0xa87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493defe800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493defe800 session 0x56493b92de00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43a800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:05.552287+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493f43a800 session 0x56493cbd03c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2635898 data_alloc: 218103808 data_used: 1409024
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154075136 unmapped: 69967872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:06.552561+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43a800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493f43a800 session 0x56493bf7de00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154075136 unmapped: 69967872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:07.552753+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154075136 unmapped: 69967872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:08.552912+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154075136 unmapped: 69967872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:09.553082+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493b4b0c00 session 0x56493bf7d4a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ac800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493b9ac800 session 0x56493da2cb40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 157376512 unmapped: 66666496 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493b9ad800 session 0x56493e267680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493defe800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 ms_handle_reset con 0x56493defe800 session 0x56493eb5a960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 heartbeat osd_stat(store_statfs(0x4f88ea000/0x0/0x4ffc00000, data 0x1038c71/0x1244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:10.553264+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2697907 data_alloc: 218103808 data_used: 1409024
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493defe800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153731072 unmapped: 70311936 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:11.553469+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 heartbeat osd_stat(store_statfs(0x4f88ea000/0x0/0x4ffc00000, data 0x1038c71/0x1244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 381 handle_osd_map epochs [382,382], i have 381, src has [1,382]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.432409286s of 10.572415352s, submitted: 28
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 382 ms_handle_reset con 0x56493defe800 session 0x56493da29680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153755648 unmapped: 70287360 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:12.553682+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153755648 unmapped: 70287360 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 382 heartbeat osd_stat(store_statfs(0x4f88e5000/0x0/0x4ffc00000, data 0x103a850/0x1248000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:13.553893+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153755648 unmapped: 70287360 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:14.554059+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153755648 unmapped: 70287360 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:15.554245+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2703897 data_alloc: 218103808 data_used: 1417216
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153755648 unmapped: 70287360 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:16.554464+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 382 heartbeat osd_stat(store_statfs(0x4f88e5000/0x0/0x4ffc00000, data 0x103a850/0x1248000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ac800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 382 ms_handle_reset con 0x56493b9ac800 session 0x56493b92c000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 382 ms_handle_reset con 0x56493b4b0c00 session 0x56493da2a000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153755648 unmapped: 70287360 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:17.554698+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153755648 unmapped: 70287360 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:18.554924+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153755648 unmapped: 70287360 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:19.555294+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153755648 unmapped: 70287360 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:20.555457+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2702841 data_alloc: 218103808 data_used: 1417216
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153755648 unmapped: 70287360 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:21.555812+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153755648 unmapped: 70287360 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:22.555996+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 382 heartbeat osd_stat(store_statfs(0x4f88e6000/0x0/0x4ffc00000, data 0x103a850/0x1248000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 153755648 unmapped: 70287360 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.501034737s of 11.533232689s, submitted: 8
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 382 ms_handle_reset con 0x56493b9ad800 session 0x56493de03e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:23.556133+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43a800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493fca4c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154058752 unmapped: 69984256 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 382 heartbeat osd_stat(store_statfs(0x4f88c2000/0x0/0x4ffc00000, data 0x105e850/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:24.556304+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 382 heartbeat osd_stat(store_statfs(0x4f88c2000/0x0/0x4ffc00000, data 0x105e850/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154058752 unmapped: 69984256 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:25.556404+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2731965 data_alloc: 218103808 data_used: 4902912
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 155688960 unmapped: 68354048 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:26.556612+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 382 heartbeat osd_stat(store_statfs(0x4f88c2000/0x0/0x4ffc00000, data 0x105e850/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 155688960 unmapped: 68354048 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 382 ms_handle_reset con 0x56493f43a800 session 0x56493d861680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 382 ms_handle_reset con 0x56493fca4c00 session 0x56493da2e960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:27.556828+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43a800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 382 ms_handle_reset con 0x56493b4b0c00 session 0x56493d8641e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ac800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 382 ms_handle_reset con 0x56493f43a800 session 0x56493b9352c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 382 ms_handle_reset con 0x56493b9ac800 session 0x56493da14d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 155688960 unmapped: 68354048 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 382 heartbeat osd_stat(store_statfs(0x4f88c2000/0x0/0x4ffc00000, data 0x105e850/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:28.556999+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 382 ms_handle_reset con 0x56493b9ad800 session 0x56493eb5a000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ad800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 382 ms_handle_reset con 0x56493b9ad800 session 0x56493be1a000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 155688960 unmapped: 68354048 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 382 heartbeat osd_stat(store_statfs(0x4f88e6000/0x0/0x4ffc00000, data 0x103a850/0x1248000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:29.557206+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 155688960 unmapped: 68354048 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:30.557383+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2761009 data_alloc: 218103808 data_used: 9359360
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4b0c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 382 ms_handle_reset con 0x56493b4b0c00 session 0x56493da29e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9ac800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 155688960 unmapped: 68354048 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:31.557591+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 382 handle_osd_map epochs [383,383], i have 382, src has [1,383]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 383 ms_handle_reset con 0x56493b9ac800 session 0x56493c532780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 155688960 unmapped: 68354048 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43a800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 383 ms_handle_reset con 0x56493f43a800 session 0x56493ce5ad20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493fca4c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:32.557757+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 383 ms_handle_reset con 0x56493fca4c00 session 0x56493be1b0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154337280 unmapped: 69705728 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:33.557926+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493fca4c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.782000542s of 10.944031715s, submitted: 53
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154337280 unmapped: 69705728 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:34.558094+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154337280 unmapped: 69705728 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 383 heartbeat osd_stat(store_statfs(0x4f909f000/0x0/0x4ffc00000, data 0x87f3cf/0xa8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 383 handle_osd_map epochs [384,384], i have 383, src has [1,384]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:35.558371+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2654631 data_alloc: 218103808 data_used: 1425408
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 ms_handle_reset con 0x56493b4afc00 session 0x56493e72f4a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df01c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 ms_handle_reset con 0x56493fca4c00 session 0x56493b3154a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154337280 unmapped: 69705728 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 ms_handle_reset con 0x56493df01c00 session 0x56493da2e3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:36.558621+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154337280 unmapped: 69705728 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:37.558767+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f3e2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 ms_handle_reset con 0x56493f3e2c00 session 0x56493da2a1e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154337280 unmapped: 69705728 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:38.558918+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 heartbeat osd_stat(store_statfs(0x4f909d000/0x0/0x4ffc00000, data 0x880e32/0xa91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154337280 unmapped: 69705728 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:39.559192+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154337280 unmapped: 69705728 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be26400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 ms_handle_reset con 0x56493be26400 session 0x56493c5ab680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be26400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:40.559367+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 ms_handle_reset con 0x56493be26400 session 0x56493e72e780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2679124 data_alloc: 218103808 data_used: 1425408
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 ms_handle_reset con 0x56493b4afc00 session 0x56493bf6b0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df01c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 ms_handle_reset con 0x56493df01c00 session 0x56493d8caf00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154157056 unmapped: 69885952 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:41.559546+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154157056 unmapped: 69885952 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:42.559724+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 heartbeat osd_stat(store_statfs(0x4f8c2e000/0x0/0x4ffc00000, data 0xcefe32/0xf00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154157056 unmapped: 69885952 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:43.559922+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154157056 unmapped: 69885952 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:44.560083+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 heartbeat osd_stat(store_statfs(0x4f8c2e000/0x0/0x4ffc00000, data 0xcefe32/0xf00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154157056 unmapped: 69885952 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:45.560225+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2690804 data_alloc: 218103808 data_used: 1425408
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154157056 unmapped: 69885952 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:46.560406+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154157056 unmapped: 69885952 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:47.560585+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f3e2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 ms_handle_reset con 0x56493f3e2c00 session 0x56493b935680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493fca4c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.814949036s of 13.937349319s, submitted: 52
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 ms_handle_reset con 0x56493fca4c00 session 0x56493cbd0780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 154165248 unmapped: 69877760 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493fca4c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:48.560762+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 heartbeat osd_stat(store_statfs(0x4f842d000/0x0/0x4ffc00000, data 0x14efe5c/0x1701000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [0,0,0,0,1,1,2,2])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 ms_handle_reset con 0x56493fca4c00 session 0x56493d865860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 ms_handle_reset con 0x56493b4afc00 session 0x56493b92c780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 155410432 unmapped: 68632576 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:49.560940+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 155410432 unmapped: 68632576 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:50.561083+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3001129 data_alloc: 218103808 data_used: 1425408
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 155410432 unmapped: 68632576 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:51.561451+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 heartbeat osd_stat(store_statfs(0x4f602d000/0x0/0x4ffc00000, data 0x38efe94/0x3b01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 155410432 unmapped: 68632576 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:52.561608+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 155410432 unmapped: 68632576 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be26400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 ms_handle_reset con 0x56493be26400 session 0x56493ecccf00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:53.561731+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 heartbeat osd_stat(store_statfs(0x4f602d000/0x0/0x4ffc00000, data 0x38efe94/0x3b01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df01c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 ms_handle_reset con 0x56493df01c00 session 0x56493b965c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 155410432 unmapped: 68632576 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:54.561911+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f3e2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 ms_handle_reset con 0x56493f3e2c00 session 0x56493caf81e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f3e2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 ms_handle_reset con 0x56493f3e2c00 session 0x56493bf6a3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 155426816 unmapped: 68616192 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:55.562034+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be26400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3006249 data_alloc: 218103808 data_used: 1425408
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 155426816 unmapped: 68616192 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:56.562189+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 heartbeat osd_stat(store_statfs(0x4f602b000/0x0/0x4ffc00000, data 0x38efec7/0x3b03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 155426816 unmapped: 68616192 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:57.562338+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df01c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 ms_handle_reset con 0x56493df01c00 session 0x56493e2adc20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 heartbeat osd_stat(store_statfs(0x4f602b000/0x0/0x4ffc00000, data 0x38efec7/0x3b03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 156319744 unmapped: 67723264 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:58.562532+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493fca4c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 ms_handle_reset con 0x56493fca4c00 session 0x56493eccc780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 156319744 unmapped: 67723264 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:59.562685+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43e800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 ms_handle_reset con 0x56493f43e800 session 0x56493c533e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df01000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.188836098s of 11.668465614s, submitted: 61
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 ms_handle_reset con 0x56493df01000 session 0x56493b9332c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 156622848 unmapped: 67420160 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:00.562790+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df01000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df01c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 heartbeat osd_stat(store_statfs(0x4f6006000/0x0/0x4ffc00000, data 0x3913ed7/0x3b28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045368 data_alloc: 218103808 data_used: 5996544
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 156622848 unmapped: 67420160 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:01.562968+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 156622848 unmapped: 67420160 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 heartbeat osd_stat(store_statfs(0x4f6006000/0x0/0x4ffc00000, data 0x3913ed7/0x3b28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:02.563130+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 156622848 unmapped: 67420160 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:03.563380+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 156745728 unmapped: 67297280 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:04.563487+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 164061184 unmapped: 59981824 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:05.563699+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3124728 data_alloc: 234881024 data_used: 17240064
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 heartbeat osd_stat(store_statfs(0x4f6006000/0x0/0x4ffc00000, data 0x3913ed7/0x3b28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 59949056 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:06.563958+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 172163072 unmapped: 51879936 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:07.564194+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 171859968 unmapped: 52183040 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:08.564392+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 171933696 unmapped: 52109312 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:09.564563+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 171933696 unmapped: 52109312 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:10.564794+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3212732 data_alloc: 234881024 data_used: 19124224
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 171933696 unmapped: 52109312 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:11.564958+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 heartbeat osd_stat(store_statfs(0x4f56ff000/0x0/0x4ffc00000, data 0x421aed7/0x442f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 171966464 unmapped: 52076544 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:12.565198+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 heartbeat osd_stat(store_statfs(0x4f56ff000/0x0/0x4ffc00000, data 0x421aed7/0x442f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 171999232 unmapped: 52043776 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:13.565369+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 heartbeat osd_stat(store_statfs(0x4f56ff000/0x0/0x4ffc00000, data 0x421aed7/0x442f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.130527496s of 14.432677269s, submitted: 88
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 175579136 unmapped: 48463872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:14.565541+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178593792 unmapped: 45449216 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:15.565732+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3273774 data_alloc: 234881024 data_used: 19603456
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178388992 unmapped: 45654016 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:16.565979+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f3e2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178388992 unmapped: 45654016 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:17.566181+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 heartbeat osd_stat(store_statfs(0x4f4a07000/0x0/0x4ffc00000, data 0x4ee2ed7/0x50f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43e800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 ms_handle_reset con 0x56493f3e2c00 session 0x56493d8cb680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 ms_handle_reset con 0x56493f43e800 session 0x56493ca9d860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178388992 unmapped: 45654016 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:18.566362+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178388992 unmapped: 45654016 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:19.566571+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178388992 unmapped: 45654016 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:20.566785+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3321187 data_alloc: 234881024 data_used: 19591168
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178388992 unmapped: 45654016 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:21.566976+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178429952 unmapped: 45613056 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:22.567233+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178429952 unmapped: 45613056 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 heartbeat osd_stat(store_statfs(0x4f4a34000/0x0/0x4ffc00000, data 0x4ee4ba6/0x50fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:23.567425+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178429952 unmapped: 45613056 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:24.568792+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 heartbeat osd_stat(store_statfs(0x4f4a34000/0x0/0x4ffc00000, data 0x4ee4ba6/0x50fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493fca4c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3ce400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.588084221s of 11.078954697s, submitted: 153
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 ms_handle_reset con 0x56493e3ce400 session 0x56493b9334a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 ms_handle_reset con 0x56493fca4c00 session 0x56493b968960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178429952 unmapped: 45613056 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:25.568995+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3317221 data_alloc: 234881024 data_used: 19591168
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178429952 unmapped: 45613056 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:26.570028+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178429952 unmapped: 45613056 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:27.570681+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178446336 unmapped: 45596672 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:28.572768+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 ms_handle_reset con 0x56493df01000 session 0x56493ddf6b40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 ms_handle_reset con 0x56493df01c00 session 0x56493e72e5a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df01000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 ms_handle_reset con 0x56493df01000 session 0x56493be1a780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 heartbeat osd_stat(store_statfs(0x4f4a34000/0x0/0x4ffc00000, data 0x4ee4ba6/0x50fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178454528 unmapped: 45588480 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:29.573856+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178454528 unmapped: 45588480 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:30.574006+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 heartbeat osd_stat(store_statfs(0x4f4a59000/0x0/0x4ffc00000, data 0x4ec0b96/0x50d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3309384 data_alloc: 234881024 data_used: 19480576
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3ce400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 ms_handle_reset con 0x56493e3ce400 session 0x56493c7aa5a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178806784 unmapped: 45236224 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:31.574208+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f3e2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43e800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:32.574336+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178806784 unmapped: 45236224 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:33.574509+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178814976 unmapped: 45228032 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:34.574818+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178823168 unmapped: 45219840 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:35.575056+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178823168 unmapped: 45219840 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 heartbeat osd_stat(store_statfs(0x4f4a2f000/0x0/0x4ffc00000, data 0x4eeab96/0x50ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3315299 data_alloc: 234881024 data_used: 19550208
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:36.575334+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178823168 unmapped: 45219840 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:37.575825+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178823168 unmapped: 45219840 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 heartbeat osd_stat(store_statfs(0x4f4a2f000/0x0/0x4ffc00000, data 0x4eeab96/0x50ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:38.576014+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178823168 unmapped: 45219840 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.644024849s of 13.791284561s, submitted: 55
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:39.576524+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178831360 unmapped: 45211648 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:40.576824+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178831360 unmapped: 45211648 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/174990331' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338429 data_alloc: 234881024 data_used: 19550208
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:41.577369+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178831360 unmapped: 45211648 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:42.577536+0000)
Oct 11 04:30:06 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/99125495' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178831360 unmapped: 45211648 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 heartbeat osd_stat(store_statfs(0x4f4a2d000/0x0/0x4ffc00000, data 0x51f5b96/0x5101000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-mon[74273]: pgmap v2051: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:43.577681+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 184688640 unmapped: 39354368 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 heartbeat osd_stat(store_statfs(0x4f4831000/0x0/0x4ffc00000, data 0x53f1b96/0x52fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3226386313' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:44.578131+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 184901632 unmapped: 39141376 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 heartbeat osd_stat(store_statfs(0x4f4521000/0x0/0x4ffc00000, data 0x5701b96/0x560d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/363482278' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:45.578402+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 184901632 unmapped: 39141376 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3407646 data_alloc: 234881024 data_used: 21934080
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:46.578660+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 184901632 unmapped: 39141376 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:47.578943+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 184901632 unmapped: 39141376 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493fca4c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:48.579094+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 184901632 unmapped: 39141376 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:49.579258+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 181026816 unmapped: 43016192 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 heartbeat osd_stat(store_statfs(0x4f44b4000/0x0/0x4ffc00000, data 0x576eb96/0x567a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:50.579440+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 181026816 unmapped: 43016192 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3408186 data_alloc: 234881024 data_used: 21934080
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 heartbeat osd_stat(store_statfs(0x4f44b4000/0x0/0x4ffc00000, data 0x576eb96/0x567a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:51.579575+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 181059584 unmapped: 42983424 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:52.579810+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 181059584 unmapped: 42983424 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:53.580017+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 181059584 unmapped: 42983424 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 heartbeat osd_stat(store_statfs(0x4f44b4000/0x0/0x4ffc00000, data 0x576eb96/0x567a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 heartbeat osd_stat(store_statfs(0x4f44b4000/0x0/0x4ffc00000, data 0x576eb96/0x567a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:54.580187+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 181059584 unmapped: 42983424 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:55.580352+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 181059584 unmapped: 42983424 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.584291458s of 16.654275894s, submitted: 16
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3407410 data_alloc: 234881024 data_used: 22028288
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:56.580527+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180658176 unmapped: 43384832 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 heartbeat osd_stat(store_statfs(0x4f44b2000/0x0/0x4ffc00000, data 0x5770b96/0x567c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:57.580764+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180658176 unmapped: 43384832 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:58.580922+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180658176 unmapped: 43384832 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 heartbeat osd_stat(store_statfs(0x4f44b2000/0x0/0x4ffc00000, data 0x5770b96/0x567c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:59.581094+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180658176 unmapped: 43384832 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:00.581249+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180658176 unmapped: 43384832 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3407410 data_alloc: 234881024 data_used: 22028288
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:01.581420+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180658176 unmapped: 43384832 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:02.581540+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180772864 unmapped: 43270144 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 heartbeat osd_stat(store_statfs(0x4f44b2000/0x0/0x4ffc00000, data 0x5770b96/0x567c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:03.581663+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 181051392 unmapped: 42991616 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:04.581805+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 181051392 unmapped: 42991616 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:05.581970+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 181051392 unmapped: 42991616 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3419658 data_alloc: 234881024 data_used: 23015424
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:06.582899+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 181051392 unmapped: 42991616 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.375150681s of 11.402550697s, submitted: 16
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 heartbeat osd_stat(store_statfs(0x4f44b2000/0x0/0x4ffc00000, data 0x5770b96/0x567c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:07.583195+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 181370880 unmapped: 42672128 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 heartbeat osd_stat(store_statfs(0x4f44b2000/0x0/0x4ffc00000, data 0x5770b96/0x567c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 ms_handle_reset con 0x56493f3e2c00 session 0x56493ca9d860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 ms_handle_reset con 0x56493f43e800 session 0x56493d86cb40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:08.583871+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df01000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 181239808 unmapped: 42803200 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 ms_handle_reset con 0x56493df01000 session 0x56493eb5a000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:09.584260+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 182304768 unmapped: 41738240 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:10.584417+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 182304768 unmapped: 41738240 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 heartbeat osd_stat(store_statfs(0x4f44dc000/0x0/0x4ffc00000, data 0x5746b96/0x5652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3408868 data_alloc: 234881024 data_used: 22876160
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df01c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 ms_handle_reset con 0x56493df01c00 session 0x56493caf81e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3ce400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:11.584602+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 182304768 unmapped: 41738240 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 ms_handle_reset con 0x56493e3ce400 session 0x56493e2acb40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:12.584762+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 182321152 unmapped: 41721856 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f3e2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 ms_handle_reset con 0x56493f3e2c00 session 0x56493b12c3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493dfa0000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:13.584918+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 182321152 unmapped: 41721856 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 385 handle_osd_map epochs [385,386], i have 385, src has [1,386]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 386 ms_handle_reset con 0x56493dfa0000 session 0x56493b92d860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 386 heartbeat osd_stat(store_statfs(0x4f4645000/0x0/0x4ffc00000, data 0x4ec2767/0x50d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:14.585222+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 182378496 unmapped: 41664512 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 386 ms_handle_reset con 0x56493b4afc00 session 0x56493b968b40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 386 ms_handle_reset con 0x56493be26400 session 0x56493cba7680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493dfa0000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 386 ms_handle_reset con 0x56493dfa0000 session 0x56493c5323c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:15.585438+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 182411264 unmapped: 41631744 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3350181 data_alloc: 234881024 data_used: 22708224
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df01000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 386 ms_handle_reset con 0x56493df01000 session 0x56493ce5a3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df01c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:16.585744+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 182427648 unmapped: 41615360 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 386 ms_handle_reset con 0x56493df01c00 session 0x56493ce5a000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:17.585953+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 179183616 unmapped: 44859392 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:18.586101+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 179183616 unmapped: 44859392 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.595194817s of 11.914410591s, submitted: 112
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 386 heartbeat osd_stat(store_statfs(0x4f53be000/0x0/0x4ffc00000, data 0x414c734/0x4360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [0,0,1])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:19.586349+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 179183616 unmapped: 44859392 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 386 handle_osd_map epochs [386,387], i have 386, src has [1,387]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 387 handle_osd_map epochs [387,388], i have 387, src has [1,388]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 388 ms_handle_reset con 0x56493b4afc00 session 0x56493e20af00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:20.586539+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 179191808 unmapped: 44851200 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 388 heartbeat osd_stat(store_statfs(0x4f53b6000/0x0/0x4ffc00000, data 0x414fda0/0x4366000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3190929 data_alloc: 234881024 data_used: 14106624
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:21.586730+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 179191808 unmapped: 44851200 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:22.586977+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 179191808 unmapped: 44851200 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:23.587188+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 179191808 unmapped: 44851200 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:24.587389+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 179191808 unmapped: 44851200 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 388 heartbeat osd_stat(store_statfs(0x4f53b6000/0x0/0x4ffc00000, data 0x414fda0/0x4366000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 388 handle_osd_map epochs [389,389], i have 388, src has [1,389]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 388 handle_osd_map epochs [389,389], i have 389, src has [1,389]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:25.587555+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 179200000 unmapped: 44843008 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f53b6000/0x0/0x4ffc00000, data 0x414fda0/0x4366000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3192879 data_alloc: 234881024 data_used: 14106624
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:26.587778+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 179200000 unmapped: 44843008 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be26400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493be26400 session 0x56493e72f860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df01000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:27.588004+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493df01000 session 0x56493e72fe00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 179216384 unmapped: 44826624 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493fca4c00 session 0x56493e0f9e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493dfa0000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493dfa0000 session 0x56493e72f0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:28.588183+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 179216384 unmapped: 44826624 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.957301140s of 10.111134529s, submitted: 68
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493b4afc00 session 0x56493b968000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:29.588370+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 179224576 unmapped: 44818432 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:30.588517+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f53b4000/0x0/0x4ffc00000, data 0x4151865/0x436a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 179224576 unmapped: 44818432 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3204623 data_alloc: 234881024 data_used: 15859712
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:31.588664+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be26400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493be26400 session 0x56493b969c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df01000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493df01000 session 0x56493d864d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 179232768 unmapped: 44810240 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f53b4000/0x0/0x4ffc00000, data 0x4151865/0x436a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493fca4c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493fca4c00 session 0x56493d86c960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3ce400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493e3ce400 session 0x56493da29680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:32.588831+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3ce400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493e3ce400 session 0x56493cba6b40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170401792 unmapped: 53641216 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493b4afc00 session 0x56493ce5a960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:33.589008+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170401792 unmapped: 53641216 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:34.589225+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170401792 unmapped: 53641216 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:35.589377+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170401792 unmapped: 53641216 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f847e000/0x0/0x4ffc00000, data 0x889791/0xa9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2713192 data_alloc: 218103808 data_used: 1449984
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:36.589544+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170401792 unmapped: 53641216 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:37.589713+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170401792 unmapped: 53641216 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:38.589888+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170401792 unmapped: 53641216 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f847e000/0x0/0x4ffc00000, data 0x889791/0xa9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:39.590055+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170401792 unmapped: 53641216 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be26400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493be26400 session 0x56493da29e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df01000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:40.590268+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493df01000 session 0x56493d8ca960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493fca4c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170409984 unmapped: 53633024 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493fca4c00 session 0x56493e20b860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493fca4c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493fca4c00 session 0x56493ce5a5a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2713192 data_alloc: 218103808 data_used: 1449984
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:41.590437+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f847e000/0x0/0x4ffc00000, data 0x889791/0xa9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170409984 unmapped: 53633024 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493b4afc00 session 0x56493d86de00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be26400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.627221107s of 12.817614555s, submitted: 56
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493be26400 session 0x56493b12c780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df01000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:42.590583+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3ce400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170409984 unmapped: 53633024 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:43.590742+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170409984 unmapped: 53633024 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:44.590894+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170409984 unmapped: 53633024 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:45.591051+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f847d000/0x0/0x4ffc00000, data 0x8897a1/0xaa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170409984 unmapped: 53633024 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2715158 data_alloc: 218103808 data_used: 1449984
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:46.591222+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f3e2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 169967616 unmapped: 54075392 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f8c7d000/0x0/0x4ffc00000, data 0x8897b1/0xaa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:47.591343+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 169967616 unmapped: 54075392 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493f3e2c00 session 0x56493d866000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:48.591466+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 169967616 unmapped: 54075392 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:49.591604+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 169967616 unmapped: 54075392 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:50.591745+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 169967616 unmapped: 54075392 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2717955 data_alloc: 218103808 data_used: 1449984
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:51.591874+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 169967616 unmapped: 54075392 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 24K writes, 98K keys, 24K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 24K writes, 8978 syncs, 2.77 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 12K writes, 46K keys, 12K commit groups, 1.0 writes per commit group, ingest: 27.82 MB, 0.05 MB/s
                                           Interval WAL: 12K writes, 5303 syncs, 2.33 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f8c7d000/0x0/0x4ffc00000, data 0x8897b1/0xaa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:52.591995+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 169967616 unmapped: 54075392 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.361540794s of 11.398303032s, submitted: 8
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:53.592108+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 169967616 unmapped: 54075392 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:54.592239+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 169967616 unmapped: 54075392 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:55.592385+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 169967616 unmapped: 54075392 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2727607 data_alloc: 218103808 data_used: 1449984
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:56.592592+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 169967616 unmapped: 54075392 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:57.592775+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 169959424 unmapped: 54083584 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f8c13000/0x0/0x4ffc00000, data 0x8f37b1/0xb0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:58.592932+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 169959424 unmapped: 54083584 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:59.593082+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493b9adc00 session 0x56493d8654a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 169975808 unmapped: 54067200 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493b9adc00 session 0x56493d8ca960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:00.593221+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493b4afc00 session 0x56493e72f860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be26400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493be26400 session 0x56493b92d860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170008576 unmapped: 54034432 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2980416 data_alloc: 218103808 data_used: 1449984
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:01.593364+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170008576 unmapped: 54034432 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:02.594585+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170008576 unmapped: 54034432 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:03.596555+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170008576 unmapped: 54034432 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f6813000/0x0/0x4ffc00000, data 0x2cf37b1/0x2f0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493df01000 session 0x56493da2f860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493e3ce400 session 0x56493eccc780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:04.596694+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.945714951s of 11.254220009s, submitted: 44
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493b4afc00 session 0x56493d86cb40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170008576 unmapped: 54034432 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:05.597559+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170008576 unmapped: 54034432 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:06.597740+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979515 data_alloc: 218103808 data_used: 1449984
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170008576 unmapped: 54034432 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f6814000/0x0/0x4ffc00000, data 0x2cf37a1/0x2f0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:07.597942+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170008576 unmapped: 54034432 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f6814000/0x0/0x4ffc00000, data 0x2cf37a1/0x2f0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:08.599222+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170008576 unmapped: 54034432 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f6814000/0x0/0x4ffc00000, data 0x2cf37a1/0x2f0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:09.599386+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170008576 unmapped: 54034432 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f6814000/0x0/0x4ffc00000, data 0x2cf37a1/0x2f0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:10.599535+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493b9adc00 session 0x56493e72e5a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170164224 unmapped: 53878784 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:11.599704+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2982431 data_alloc: 218103808 data_used: 1449984
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be26400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df01000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170164224 unmapped: 53878784 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:12.599837+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170164224 unmapped: 53878784 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f67f0000/0x0/0x4ffc00000, data 0x2d177a1/0x2f2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:13.599980+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170164224 unmapped: 53878784 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:14.600130+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170164224 unmapped: 53878784 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:15.600336+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3ce400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.853548050s of 10.881413460s, submitted: 8
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493e3ce400 session 0x56493c533e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170180608 unmapped: 53862400 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f3e2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493fca4c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:16.600510+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3018910 data_alloc: 218103808 data_used: 6258688
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170196992 unmapped: 53846016 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:17.600764+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170196992 unmapped: 53846016 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f67f0000/0x0/0x4ffc00000, data 0x2d177a1/0x2f2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:18.600900+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f67f0000/0x0/0x4ffc00000, data 0x2d177a1/0x2f2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170196992 unmapped: 53846016 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:19.601233+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170196992 unmapped: 53846016 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:20.601429+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170196992 unmapped: 53846016 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f67f0000/0x0/0x4ffc00000, data 0x2d177a1/0x2f2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:21.601644+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3018910 data_alloc: 218103808 data_used: 6258688
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170196992 unmapped: 53846016 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:22.601811+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170196992 unmapped: 53846016 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:23.601971+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170196992 unmapped: 53846016 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:24.602219+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 170196992 unmapped: 53846016 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:25.602340+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 181403648 unmapped: 42639360 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:26.602543+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3048858 data_alloc: 218103808 data_used: 6967296
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.859937668s of 11.076708794s, submitted: 74
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 181944320 unmapped: 42098688 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f6500000/0x0/0x4ffc00000, data 0x2d177a1/0x2f2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:27.602729+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 181977088 unmapped: 42065920 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:28.602993+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 181977088 unmapped: 42065920 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:29.603135+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 181977088 unmapped: 42065920 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:30.603356+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 181977088 unmapped: 42065920 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f57e8000/0x0/0x4ffc00000, data 0x3a2f7a1/0x3c46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:31.603520+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3154106 data_alloc: 218103808 data_used: 7118848
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 181977088 unmapped: 42065920 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:32.603688+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 181977088 unmapped: 42065920 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:33.604075+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178020352 unmapped: 46022656 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:34.604594+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f5acc000/0x0/0x4ffc00000, data 0x3a3b7a1/0x3c52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178020352 unmapped: 46022656 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:35.604798+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178020352 unmapped: 46022656 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:36.605013+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3138416 data_alloc: 218103808 data_used: 7118848
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178020352 unmapped: 46022656 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:37.605545+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178020352 unmapped: 46022656 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:38.605720+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178020352 unmapped: 46022656 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f5acc000/0x0/0x4ffc00000, data 0x3a3b7a1/0x3c52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493be26400 session 0x56493ddf6b40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.731762886s of 12.894522667s, submitted: 36
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:39.605928+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493df01000 session 0x56493eccd680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 ms_handle_reset con 0x56493b4afc00 session 0x56493bf6a780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 177954816 unmapped: 46088192 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:40.606391+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 389 handle_osd_map epochs [389,390], i have 389, src has [1,390]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 177963008 unmapped: 46080000 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 ms_handle_reset con 0x56493b9adc00 session 0x56493c2f9a40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:41.606587+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3139006 data_alloc: 218103808 data_used: 7290880
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be26400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 177963008 unmapped: 46080000 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df01000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 ms_handle_reset con 0x56493df01000 session 0x56493d4b85a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 ms_handle_reset con 0x56493be26400 session 0x56493eb5b2c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:42.606846+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 177979392 unmapped: 46063616 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 heartbeat osd_stat(store_statfs(0x4f5aea000/0x0/0x4ffc00000, data 0x3a1a390/0x3c34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:43.606996+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 177979392 unmapped: 46063616 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:44.607184+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 177979392 unmapped: 46063616 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 heartbeat osd_stat(store_statfs(0x4f5aea000/0x0/0x4ffc00000, data 0x3a1a390/0x3c34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:45.607329+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 177979392 unmapped: 46063616 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:46.607564+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3139602 data_alloc: 218103808 data_used: 7290880
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 177979392 unmapped: 46063616 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:47.607725+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 177979392 unmapped: 46063616 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:48.607933+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 177979392 unmapped: 46063616 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:49.608066+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 177979392 unmapped: 46063616 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 heartbeat osd_stat(store_statfs(0x4f5ae9000/0x0/0x4ffc00000, data 0x3a89390/0x3c35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:50.608229+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 177979392 unmapped: 46063616 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:51.608413+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3147624 data_alloc: 218103808 data_used: 7290880
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 177979392 unmapped: 46063616 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:52.608574+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.985089302s of 13.053511620s, submitted: 15
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 179036160 unmapped: 45006848 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3ce400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 ms_handle_reset con 0x56493e3ce400 session 0x56493da29a40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:53.609283+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178413568 unmapped: 45629440 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:54.609395+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178413568 unmapped: 45629440 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be26400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 ms_handle_reset con 0x56493be26400 session 0x56493b92d0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:55.609512+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178421760 unmapped: 45621248 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 heartbeat osd_stat(store_statfs(0x4f5ac5000/0x0/0x4ffc00000, data 0x3aad390/0x3c59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:56.609693+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df01000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 ms_handle_reset con 0x56493df01000 session 0x56493da2da40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3150668 data_alloc: 218103808 data_used: 7303168
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178438144 unmapped: 45604864 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43c400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 ms_handle_reset con 0x56493f43c400 session 0x56493c532d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43bc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 ms_handle_reset con 0x56493f43bc00 session 0x56493eb5b4a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:57.609868+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3d2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178454528 unmapped: 45588480 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:58.610592+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178454528 unmapped: 45588480 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:59.610747+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178454528 unmapped: 45588480 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:00.610883+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 heartbeat osd_stat(store_statfs(0x4f5ac5000/0x0/0x4ffc00000, data 0x3aad390/0x3c59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178454528 unmapped: 45588480 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:01.611033+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3152749 data_alloc: 218103808 data_used: 7352320
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178454528 unmapped: 45588480 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:02.611188+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178454528 unmapped: 45588480 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:03.611341+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178454528 unmapped: 45588480 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:04.611529+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 heartbeat osd_stat(store_statfs(0x4f5ac5000/0x0/0x4ffc00000, data 0x3aad390/0x3c59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178454528 unmapped: 45588480 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.579654694s of 12.940676689s, submitted: 99
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:05.611686+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178069504 unmapped: 45973504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:06.611912+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3171587 data_alloc: 218103808 data_used: 7606272
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178069504 unmapped: 45973504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 heartbeat osd_stat(store_statfs(0x4f59c9000/0x0/0x4ffc00000, data 0x3ba9390/0x3d55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:07.612051+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178069504 unmapped: 45973504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:08.612200+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178069504 unmapped: 45973504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:09.612387+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178069504 unmapped: 45973504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:10.612557+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 heartbeat osd_stat(store_statfs(0x4f59c9000/0x0/0x4ffc00000, data 0x3ba9390/0x3d55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178282496 unmapped: 45760512 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:11.612724+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3179151 data_alloc: 218103808 data_used: 8474624
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178323456 unmapped: 45719552 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:12.612950+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178323456 unmapped: 45719552 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 heartbeat osd_stat(store_statfs(0x4f59c9000/0x0/0x4ffc00000, data 0x3ba9390/0x3d55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:13.613107+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178323456 unmapped: 45719552 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:14.613206+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 heartbeat osd_stat(store_statfs(0x4f59c9000/0x0/0x4ffc00000, data 0x3ba9390/0x3d55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178323456 unmapped: 45719552 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:15.613368+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178323456 unmapped: 45719552 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:16.613527+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3187151 data_alloc: 218103808 data_used: 9400320
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.292166710s of 11.353351593s, submitted: 11
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178372608 unmapped: 45670400 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:17.613704+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178372608 unmapped: 45670400 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:18.613884+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178372608 unmapped: 45670400 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:19.614034+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 heartbeat osd_stat(store_statfs(0x4f59c9000/0x0/0x4ffc00000, data 0x3ba9390/0x3d55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178372608 unmapped: 45670400 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:20.614208+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178372608 unmapped: 45670400 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:21.614334+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188911 data_alloc: 218103808 data_used: 9400320
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178372608 unmapped: 45670400 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:22.614466+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 heartbeat osd_stat(store_statfs(0x4f59c9000/0x0/0x4ffc00000, data 0x3ba9390/0x3d55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178372608 unmapped: 45670400 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:23.614605+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178372608 unmapped: 45670400 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:24.614739+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178372608 unmapped: 45670400 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 heartbeat osd_stat(store_statfs(0x4f59c9000/0x0/0x4ffc00000, data 0x3ba9390/0x3d55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:25.614931+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178839552 unmapped: 45203456 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:26.615110+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3194989 data_alloc: 218103808 data_used: 9568256
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178462720 unmapped: 45580288 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:27.615270+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178462720 unmapped: 45580288 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:28.615403+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178462720 unmapped: 45580288 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:29.615568+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178462720 unmapped: 45580288 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:30.615727+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 heartbeat osd_stat(store_statfs(0x4f5952000/0x0/0x4ffc00000, data 0x3c18390/0x3dc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178462720 unmapped: 45580288 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:31.615945+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3196109 data_alloc: 234881024 data_used: 9818112
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178462720 unmapped: 45580288 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:32.616092+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 ms_handle_reset con 0x56493b4afc00 session 0x56493e72f680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 ms_handle_reset con 0x56493b9adc00 session 0x56493c533680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.155632019s of 16.214744568s, submitted: 20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 ms_handle_reset con 0x56493b4afc00 session 0x56493b968000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178462720 unmapped: 45580288 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:33.616221+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178462720 unmapped: 45580288 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 heartbeat osd_stat(store_statfs(0x4f597e000/0x0/0x4ffc00000, data 0x3bf4390/0x3da0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:34.616376+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178462720 unmapped: 45580288 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:35.616551+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178462720 unmapped: 45580288 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be26400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 ms_handle_reset con 0x56493be26400 session 0x56493da29e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df01000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:36.616738+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 ms_handle_reset con 0x56493df01000 session 0x56493e0f8780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 ms_handle_reset con 0x56493e3d2c00 session 0x56493d86cf00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 ms_handle_reset con 0x56493b4ae800 session 0x56493d869860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3175905 data_alloc: 234881024 data_used: 10268672
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 heartbeat osd_stat(store_statfs(0x4f5aea000/0x0/0x4ffc00000, data 0x3a89380/0x3c34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [0,0,1])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 ms_handle_reset con 0x56493b4afc00 session 0x56493ddf7680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178634752 unmapped: 45408256 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:37.616896+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 ms_handle_reset con 0x56493b9adc00 session 0x56493e0f8d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be26400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 390 handle_osd_map epochs [390,391], i have 390, src has [1,391]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 391 ms_handle_reset con 0x56493be26400 session 0x56493e0f9a40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178634752 unmapped: 45408256 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:38.617053+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178634752 unmapped: 45408256 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 391 ms_handle_reset con 0x56493f3e2c00 session 0x56493da14960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 391 ms_handle_reset con 0x56493fca4c00 session 0x56493d8672c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:39.617220+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f3e2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 391 ms_handle_reset con 0x56493f3e2c00 session 0x56493b314b40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 391 heartbeat osd_stat(store_statfs(0x4f5ae9000/0x0/0x4ffc00000, data 0x3a1beef/0x3c35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178642944 unmapped: 45400064 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:40.617385+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178642944 unmapped: 45400064 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:41.617517+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3172390 data_alloc: 234881024 data_used: 10272768
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 391 ms_handle_reset con 0x56493b4ae800 session 0x56493e0f8780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 391 ms_handle_reset con 0x56493b4afc00 session 0x56493e72f680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 175677440 unmapped: 48365568 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:42.617691+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 175677440 unmapped: 48365568 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:43.617862+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 391 ms_handle_reset con 0x56493b9adc00 session 0x56493eccc780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.650178909s of 10.976945877s, submitted: 103
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 391 ms_handle_reset con 0x56493b9adc00 session 0x56493da2f860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 391 heartbeat osd_stat(store_statfs(0x4f8c02000/0x0/0x4ffc00000, data 0x903edf/0xb1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 175677440 unmapped: 48365568 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:44.618031+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 391 handle_osd_map epochs [391,392], i have 391, src has [1,392]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 175677440 unmapped: 48365568 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:45.618255+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 175677440 unmapped: 48365568 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:46.618439+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2758324 data_alloc: 218103808 data_used: 1474560
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 175677440 unmapped: 48365568 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:47.618542+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f8c75000/0x0/0x4ffc00000, data 0x88e942/0xaa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f8c75000/0x0/0x4ffc00000, data 0x88e942/0xaa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 175677440 unmapped: 48365568 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:48.618691+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 175677440 unmapped: 48365568 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:49.618831+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f8c75000/0x0/0x4ffc00000, data 0x88e942/0xaa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 175677440 unmapped: 48365568 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:50.618995+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 175677440 unmapped: 48365568 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:51.619218+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2758324 data_alloc: 218103808 data_used: 1474560
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 175677440 unmapped: 48365568 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:52.619424+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 175677440 unmapped: 48365568 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f8c75000/0x0/0x4ffc00000, data 0x88e942/0xaa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:53.619582+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 175677440 unmapped: 48365568 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:54.619749+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 175677440 unmapped: 48365568 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:55.619906+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f8c75000/0x0/0x4ffc00000, data 0x88e942/0xaa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 175677440 unmapped: 48365568 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:56.620261+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2758324 data_alloc: 218103808 data_used: 1474560
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 175677440 unmapped: 48365568 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.330597878s of 13.356624603s, submitted: 16
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 392 ms_handle_reset con 0x56493b4ae800 session 0x56493d8654a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:57.620441+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 175677440 unmapped: 48365568 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:58.620602+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 392 handle_osd_map epochs [393,393], i have 392, src has [1,393]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 393 ms_handle_reset con 0x56493b4afc00 session 0x56493d86de00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 175702016 unmapped: 48340992 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:59.620796+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 175702016 unmapped: 48340992 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 393 heartbeat osd_stat(store_statfs(0x4f8c70000/0x0/0x4ffc00000, data 0x890531/0xaad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f3e2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:00.620969+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 393 ms_handle_reset con 0x56493f3e2c00 session 0x56493b9332c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493fca4c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 393 ms_handle_reset con 0x56493fca4c00 session 0x56493b932780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 393 ms_handle_reset con 0x56493b4ae800 session 0x56493c2f9680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 393 ms_handle_reset con 0x56493b4afc00 session 0x56493e2ac5a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 393 ms_handle_reset con 0x56493b9adc00 session 0x56493d8caf00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176783360 unmapped: 47259648 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:01.621211+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f3e2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 393 ms_handle_reset con 0x56493f3e2c00 session 0x56493ca9da40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493be26400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2779681 data_alloc: 218103808 data_used: 1482752
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 393 ms_handle_reset con 0x56493be26400 session 0x56493e72fc20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 393 heartbeat osd_stat(store_statfs(0x4f8c6a000/0x0/0x4ffc00000, data 0x8906e9/0xab4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176807936 unmapped: 47235072 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:02.621376+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 393 ms_handle_reset con 0x56493b4ae800 session 0x56493cbd0780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 393 ms_handle_reset con 0x56493b4afc00 session 0x56493d8683c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176807936 unmapped: 47235072 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:03.621552+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176807936 unmapped: 47235072 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 393 ms_handle_reset con 0x56493b9adc00 session 0x56493cba65a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f3e2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:04.621740+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 393 ms_handle_reset con 0x56493f3e2c00 session 0x56493be1b0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176824320 unmapped: 47218688 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:05.622005+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df01000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 393 ms_handle_reset con 0x56493df01000 session 0x56493eb5a000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 393 ms_handle_reset con 0x56493b4ae800 session 0x56493da14d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 393 heartbeat osd_stat(store_statfs(0x4f8c6e000/0x0/0x4ffc00000, data 0x890605/0xab0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176824320 unmapped: 47218688 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:06.622230+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2770769 data_alloc: 218103808 data_used: 1482752
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 393 ms_handle_reset con 0x56493b4afc00 session 0x56493d8650e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 393 ms_handle_reset con 0x56493b9adc00 session 0x56493b94b0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176840704 unmapped: 47202304 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:07.622450+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f3e2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.646856308s of 10.913833618s, submitted: 79
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 393 ms_handle_reset con 0x56493f3e2c00 session 0x56493da292c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43bc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176840704 unmapped: 47202304 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:08.622668+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 393 handle_osd_map epochs [393,394], i have 393, src has [1,394]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 394 handle_osd_map epochs [394,394], i have 394, src has [1,394]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 394 ms_handle_reset con 0x56493f43bc00 session 0x56493da28000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176848896 unmapped: 47194112 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:09.622833+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43bc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 394 ms_handle_reset con 0x56493f43bc00 session 0x56493e72f4a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 394 ms_handle_reset con 0x56493b4ae800 session 0x56493be1a000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 394 heartbeat osd_stat(store_statfs(0x4f8c6d000/0x0/0x4ffc00000, data 0x892102/0xab0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176865280 unmapped: 47177728 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:10.623020+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176865280 unmapped: 47177728 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:11.623233+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2772222 data_alloc: 218103808 data_used: 1490944
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176865280 unmapped: 47177728 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:12.623382+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 394 heartbeat osd_stat(store_statfs(0x4f8c6f000/0x0/0x4ffc00000, data 0x892090/0xaae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176865280 unmapped: 47177728 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:13.623594+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176865280 unmapped: 47177728 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:14.623770+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 394 heartbeat osd_stat(store_statfs(0x4f8c6f000/0x0/0x4ffc00000, data 0x892090/0xaae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 394 handle_osd_map epochs [395,395], i have 394, src has [1,395]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 395 ms_handle_reset con 0x56493b4afc00 session 0x56493e267e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176865280 unmapped: 47177728 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:15.623903+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 395 heartbeat osd_stat(store_statfs(0x4f8c6c000/0x0/0x4ffc00000, data 0x893af3/0xab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176865280 unmapped: 47177728 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:16.624083+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 395 ms_handle_reset con 0x56493b9adc00 session 0x56493d864d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f3e2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 395 ms_handle_reset con 0x56493f3e2c00 session 0x56493ce5a5a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2780632 data_alloc: 218103808 data_used: 1490944
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f3e2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 395 ms_handle_reset con 0x56493f3e2c00 session 0x56493d8cb0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 395 ms_handle_reset con 0x56493b4ae800 session 0x56493e378000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176873472 unmapped: 47169536 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 395 ms_handle_reset con 0x56493b4afc00 session 0x56493b314960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:17.624263+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 395 ms_handle_reset con 0x56493b9adc00 session 0x56493b9692c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43bc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 395 ms_handle_reset con 0x56493f43bc00 session 0x56493e267e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176898048 unmapped: 47144960 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:18.624454+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43bc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 395 ms_handle_reset con 0x56493f43bc00 session 0x56493cbd0780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.801596642s of 11.086617470s, submitted: 92
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176898048 unmapped: 47144960 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 395 ms_handle_reset con 0x56493b4ae800 session 0x56493e72fc20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:19.624664+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176898048 unmapped: 47144960 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:20.624802+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 395 ms_handle_reset con 0x56493b4afc00 session 0x56493d8caf00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 395 heartbeat osd_stat(store_statfs(0x4f8c6a000/0x0/0x4ffc00000, data 0x893b74/0xab4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 395 ms_handle_reset con 0x56493b9adc00 session 0x56493b932780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 395 heartbeat osd_stat(store_statfs(0x4f8c6a000/0x0/0x4ffc00000, data 0x893b74/0xab4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176898048 unmapped: 47144960 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:21.625015+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f3e2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2781194 data_alloc: 218103808 data_used: 1495040
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 395 ms_handle_reset con 0x56493f3e2c00 session 0x56493d8654a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f3e2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 395 ms_handle_reset con 0x56493f3e2c00 session 0x56493b9332c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176898048 unmapped: 47144960 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:22.625144+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 395 ms_handle_reset con 0x56493b4ae800 session 0x56493be1b0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176898048 unmapped: 47144960 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 395 heartbeat osd_stat(store_statfs(0x4f8c6b000/0x0/0x4ffc00000, data 0x893b64/0xab3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 395 ms_handle_reset con 0x56493b4afc00 session 0x56493da292c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:23.625364+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176898048 unmapped: 47144960 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:24.625502+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 395 ms_handle_reset con 0x56493b9adc00 session 0x56493b969c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43bc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 395 ms_handle_reset con 0x56493f43bc00 session 0x56493d866b40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176898048 unmapped: 47144960 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:25.625621+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176898048 unmapped: 47144960 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:26.625766+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2778898 data_alloc: 218103808 data_used: 1490944
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176898048 unmapped: 47144960 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:27.625919+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 395 heartbeat osd_stat(store_statfs(0x4f8c6d000/0x0/0x4ffc00000, data 0x893af3/0xab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176898048 unmapped: 47144960 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:28.626069+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 395 heartbeat osd_stat(store_statfs(0x4f8c6d000/0x0/0x4ffc00000, data 0x893af3/0xab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176898048 unmapped: 47144960 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:29.626259+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 395 heartbeat osd_stat(store_statfs(0x4f8c6d000/0x0/0x4ffc00000, data 0x893af3/0xab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43bc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.937926292s of 11.027770996s, submitted: 30
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 395 ms_handle_reset con 0x56493f43bc00 session 0x56493d869a40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176898048 unmapped: 47144960 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:30.626431+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 395 heartbeat osd_stat(store_statfs(0x4f8c6d000/0x0/0x4ffc00000, data 0x893af3/0xab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176898048 unmapped: 47144960 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:31.626618+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2778898 data_alloc: 218103808 data_used: 1490944
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 395 handle_osd_map epochs [395,396], i have 395, src has [1,396]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 396 ms_handle_reset con 0x56493b4ae800 session 0x56493caf8d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176906240 unmapped: 47136768 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:32.626771+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 396 ms_handle_reset con 0x56493b9adc00 session 0x56493ce5ab40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 396 ms_handle_reset con 0x56493b4afc00 session 0x56493e72f860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176922624 unmapped: 47120384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:33.626931+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176922624 unmapped: 47120384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:34.627099+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f3e2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 396 handle_osd_map epochs [396,397], i have 396, src has [1,397]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 397 handle_osd_map epochs [397,397], i have 397, src has [1,397]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 397 ms_handle_reset con 0x56493f3e2c00 session 0x56493da15e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176922624 unmapped: 47120384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:35.627280+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 397 ms_handle_reset con 0x56493b4afc00 session 0x56493d86d4a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 397 ms_handle_reset con 0x56493b4ae800 session 0x56493e0f83c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176971776 unmapped: 47071232 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:36.627460+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 397 heartbeat osd_stat(store_statfs(0x4f8c65000/0x0/0x4ffc00000, data 0x89725f/0xab9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2790611 data_alloc: 218103808 data_used: 1511424
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176971776 unmapped: 47071232 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:37.627615+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 397 handle_osd_map epochs [398,398], i have 397, src has [1,398]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 398 ms_handle_reset con 0x56493b9adc00 session 0x56493b968000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:38.627785+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 174825472 unmapped: 49217536 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43bc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 398 ms_handle_reset con 0x56493f43bc00 session 0x56493b12c960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43c400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 398 ms_handle_reset con 0x56493f43c400 session 0x56493da2cb40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:39.627957+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 174825472 unmapped: 49217536 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43c400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 398 ms_handle_reset con 0x56493f43c400 session 0x56493c5aa780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 398 ms_handle_reset con 0x56493b4ae800 session 0x56493d8681e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:40.628189+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 174850048 unmapped: 49192960 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 398 heartbeat osd_stat(store_statfs(0x4f8c61000/0x0/0x4ffc00000, data 0x898dec/0xabd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 398 ms_handle_reset con 0x56493b4afc00 session 0x56493c5321e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:41.628373+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 174776320 unmapped: 49266688 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.002533913s of 11.127006531s, submitted: 33
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 398 ms_handle_reset con 0x56493b9adc00 session 0x56493b92d4a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2796505 data_alloc: 218103808 data_used: 1519616
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:42.628557+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 174784512 unmapped: 49258496 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 398 heartbeat osd_stat(store_statfs(0x4f8c61000/0x0/0x4ffc00000, data 0x898dec/0xabd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43bc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 398 ms_handle_reset con 0x56493f43bc00 session 0x56493e378d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43bc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 398 ms_handle_reset con 0x56493b4ae800 session 0x56493da29680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:43.628735+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 174792704 unmapped: 49250304 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 399 ms_handle_reset con 0x56493b4afc00 session 0x56493eccd0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 399 ms_handle_reset con 0x56493f43bc00 session 0x56493c5ab680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:44.628954+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 174792704 unmapped: 49250304 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 399 ms_handle_reset con 0x56493b9adc00 session 0x56493e0f8780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43c400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56494212bc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 399 ms_handle_reset con 0x56494212bc00 session 0x56493da29e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:45.629202+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 174800896 unmapped: 49242112 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 400 ms_handle_reset con 0x56493b4ae800 session 0x56493b3141e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 400 ms_handle_reset con 0x56493f43c400 session 0x56493da14d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:46.629385+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 174809088 unmapped: 49233920 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2802491 data_alloc: 218103808 data_used: 1536000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:47.629527+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 174809088 unmapped: 49233920 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 400 ms_handle_reset con 0x56493b4afc00 session 0x56493b315680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 400 ms_handle_reset con 0x56493b9adc00 session 0x56493d864d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 400 heartbeat osd_stat(store_statfs(0x4f8c5b000/0x0/0x4ffc00000, data 0x89c52c/0xac2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:48.629717+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 174809088 unmapped: 49233920 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43bc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 400 ms_handle_reset con 0x56493f43bc00 session 0x56493de4f860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:49.629920+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 174809088 unmapped: 49233920 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:50.630088+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 174809088 unmapped: 49233920 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 400 handle_osd_map epochs [400,401], i have 400, src has [1,401]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 401 handle_osd_map epochs [401,401], i have 401, src has [1,401]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 401 ms_handle_reset con 0x56493b4ae800 session 0x56493b92d860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:51.630243+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 174817280 unmapped: 49225728 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.031558990s of 10.416639328s, submitted: 78
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 401 ms_handle_reset con 0x56493b9adc00 session 0x56493cba70e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 401 ms_handle_reset con 0x56493b4afc00 session 0x56493e0f9e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2810190 data_alloc: 218103808 data_used: 1548288
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:52.630451+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 174841856 unmapped: 49201152 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43c400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:53.630622+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 174841856 unmapped: 49201152 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 401 handle_osd_map epochs [401,402], i have 401, src has [1,402]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 402 ms_handle_reset con 0x56493f43c400 session 0x56493b935860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f8c57000/0x0/0x4ffc00000, data 0x89e127/0xac6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:54.630857+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 174841856 unmapped: 49201152 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:55.631035+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 174841856 unmapped: 49201152 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56494212bc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df02000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 403 ms_handle_reset con 0x56493df02000 session 0x56493eb5be00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 403 ms_handle_reset con 0x56494212bc00 session 0x56493c2f85a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:56.631246+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 174841856 unmapped: 49201152 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 403 ms_handle_reset con 0x56493b4ae800 session 0x56493c533c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 403 ms_handle_reset con 0x56493b4afc00 session 0x56493b92d4a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2819076 data_alloc: 218103808 data_used: 1552384
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:57.631455+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 174850048 unmapped: 49192960 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 403 ms_handle_reset con 0x56493b9adc00 session 0x56493b968000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43c400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 403 ms_handle_reset con 0x56493f43c400 session 0x56493e72f860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:58.631640+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 174850048 unmapped: 49192960 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 403 heartbeat osd_stat(store_statfs(0x4f8c50000/0x0/0x4ffc00000, data 0x8a1795/0xace000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43c400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 403 ms_handle_reset con 0x56493f43c400 session 0x56493d869a40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:59.631773+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 174850048 unmapped: 49192960 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 403 ms_handle_reset con 0x56493b4afc00 session 0x56493b933e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 403 handle_osd_map epochs [404,404], i have 403, src has [1,404]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 404 ms_handle_reset con 0x56493b9adc00 session 0x56493b935a40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 404 ms_handle_reset con 0x56493b4ae800 session 0x56493be1b0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:00.631938+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 174882816 unmapped: 49160192 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56494212bc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 404 ms_handle_reset con 0x56494212bc00 session 0x56493b94ba40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56494212bc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 404 ms_handle_reset con 0x56493b4ae800 session 0x56493eccc000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:01.632123+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 174882816 unmapped: 49160192 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2820351 data_alloc: 218103808 data_used: 1556480
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 404 ms_handle_reset con 0x56493b4afc00 session 0x56493e2ad860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.109282494s of 10.305081367s, submitted: 79
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 405 ms_handle_reset con 0x56494212bc00 session 0x56493cbd03c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:02.632304+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 175931392 unmapped: 48111616 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 405 ms_handle_reset con 0x56493b9adc00 session 0x56493e2ac3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43c400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 405 ms_handle_reset con 0x56493f43c400 session 0x56493b92c960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:03.632466+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 175939584 unmapped: 48103424 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 405 heartbeat osd_stat(store_statfs(0x4f8c4e000/0x0/0x4ffc00000, data 0x8a4e53/0xad0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:04.632677+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 175939584 unmapped: 48103424 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 405 ms_handle_reset con 0x56493b4ae800 session 0x56493eb5b2c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 405 handle_osd_map epochs [405,406], i have 405, src has [1,406]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:05.632855+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 175939584 unmapped: 48103424 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 407 ms_handle_reset con 0x56493b4afc00 session 0x56493b92c000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:06.633047+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 175947776 unmapped: 48095232 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2830690 data_alloc: 218103808 data_used: 1572864
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56494212bc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:07.633202+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 407 ms_handle_reset con 0x56494212bc00 session 0x56493da15c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 407 ms_handle_reset con 0x56493b9adc00 session 0x56493b92c5a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 175955968 unmapped: 48087040 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:08.633383+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 407 ms_handle_reset con 0x56493edf4800 session 0x56493be1a960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 175964160 unmapped: 48078848 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 407 ms_handle_reset con 0x56493edf4800 session 0x56493b92c000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:09.633556+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 175964160 unmapped: 48078848 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 407 heartbeat osd_stat(store_statfs(0x4f8c48000/0x0/0x4ffc00000, data 0x8a8433/0xad6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 407 ms_handle_reset con 0x56493b4ae800 session 0x56493e2ad860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 407 ms_handle_reset con 0x56493b9adc00 session 0x56493bf7c960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56494212bc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 408 ms_handle_reset con 0x56494212bc00 session 0x56493b94bc20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:10.633682+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 408 ms_handle_reset con 0x56493b4afc00 session 0x56493be1b0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176021504 unmapped: 48021504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 408 ms_handle_reset con 0x56493b4afc00 session 0x56493b315680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 408 heartbeat osd_stat(store_statfs(0x4f8c44000/0x0/0x4ffc00000, data 0x8aa004/0xad9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:11.633847+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176021504 unmapped: 48021504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2836694 data_alloc: 218103808 data_used: 1581056
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:12.634035+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176021504 unmapped: 48021504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.174202919s of 10.475322723s, submitted: 117
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 409 ms_handle_reset con 0x56493b4ae800 session 0x56493da29e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:13.634338+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176037888 unmapped: 48005120 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 409 ms_handle_reset con 0x56493b9adc00 session 0x56493da292c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 409 ms_handle_reset con 0x56493edf4800 session 0x56493ca9d0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:14.634538+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176037888 unmapped: 48005120 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 409 heartbeat osd_stat(store_statfs(0x4f8c3f000/0x0/0x4ffc00000, data 0x8abc0f/0xade000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 409 handle_osd_map epochs [409,410], i have 409, src has [1,410]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56494212bc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 410 ms_handle_reset con 0x56494212bc00 session 0x56493cba7c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56494212bc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 410 ms_handle_reset con 0x56494212bc00 session 0x56493d86c000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:15.634723+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176046080 unmapped: 47996928 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 410 ms_handle_reset con 0x56493b4ae800 session 0x56493e72f680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:16.634915+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 410 ms_handle_reset con 0x56493b4afc00 session 0x56493e24d0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176046080 unmapped: 47996928 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2845726 data_alloc: 218103808 data_used: 1593344
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:17.635091+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176046080 unmapped: 47996928 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:18.635274+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176046080 unmapped: 47996928 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:19.635471+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176046080 unmapped: 47996928 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 410 heartbeat osd_stat(store_statfs(0x4f8c3d000/0x0/0x4ffc00000, data 0x8ad662/0xae0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:20.635705+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176046080 unmapped: 47996928 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:21.635914+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176046080 unmapped: 47996928 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2845886 data_alloc: 218103808 data_used: 1597440
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:22.636094+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176046080 unmapped: 47996928 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:23.636280+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176046080 unmapped: 47996928 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:24.636483+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.700803757s of 11.855709076s, submitted: 56
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 410 ms_handle_reset con 0x56493b9adc00 session 0x56493e20a1e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176054272 unmapped: 47988736 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:25.636694+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176054272 unmapped: 47988736 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 410 heartbeat osd_stat(store_statfs(0x4f8c3d000/0x0/0x4ffc00000, data 0x8ad672/0xae1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:26.636895+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf5800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 410 ms_handle_reset con 0x56493edf5800 session 0x56493d8ca960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176054272 unmapped: 47988736 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 411 ms_handle_reset con 0x56493b4ae800 session 0x56493b9683c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2852845 data_alloc: 218103808 data_used: 1613824
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:27.637059+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176054272 unmapped: 47988736 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 412 ms_handle_reset con 0x56493b4afc00 session 0x56493da28f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 412 ms_handle_reset con 0x56493edf4800 session 0x56493eb5a960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:28.637257+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176062464 unmapped: 47980544 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:29.637415+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176062464 unmapped: 47980544 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 413 ms_handle_reset con 0x56493b9adc00 session 0x56493da2e1e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:30.637582+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176062464 unmapped: 47980544 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf5800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:31.637765+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176062464 unmapped: 47980544 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 413 heartbeat osd_stat(store_statfs(0x4f8c33000/0x0/0x4ffc00000, data 0x8b293d/0xaea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 414 ms_handle_reset con 0x56493edf5800 session 0x56493e0f8d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf5800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2860615 data_alloc: 218103808 data_used: 1605632
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 414 ms_handle_reset con 0x56493edf5800 session 0x56493e2d5c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:32.637948+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 47972352 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:33.638079+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 47972352 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:34.638339+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 47972352 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:35.638541+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 47972352 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:36.638783+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 47972352 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.358996391s of 12.554878235s, submitted: 49
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2862917 data_alloc: 218103808 data_used: 1605632
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:37.638971+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 47972352 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 415 heartbeat osd_stat(store_statfs(0x4f8c2e000/0x0/0x4ffc00000, data 0x8b5fb5/0xaef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:38.639101+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 47972352 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:39.639221+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 47972352 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:40.639374+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 47972352 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 415 heartbeat osd_stat(store_statfs(0x4f8c2e000/0x0/0x4ffc00000, data 0x8b5fb5/0xaef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:41.639542+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 47972352 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2862917 data_alloc: 218103808 data_used: 1605632
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:42.639729+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 47972352 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:43.639912+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 47972352 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:44.640101+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 47972352 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 415 heartbeat osd_stat(store_statfs(0x4f8c2e000/0x0/0x4ffc00000, data 0x8b5fb5/0xaef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:45.640257+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 47972352 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:46.640469+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 47972352 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2862917 data_alloc: 218103808 data_used: 1605632
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:47.640642+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 47972352 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 415 heartbeat osd_stat(store_statfs(0x4f8c2e000/0x0/0x4ffc00000, data 0x8b5fb5/0xaef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:48.640856+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 47972352 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:49.641069+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 47972352 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:50.641252+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 47972352 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.213273048s of 14.224831581s, submitted: 23
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 415 ms_handle_reset con 0x56493b4ae800 session 0x56493ca9da40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:51.641415+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 47972352 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2863865 data_alloc: 218103808 data_used: 1605632
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:52.641587+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 47972352 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 415 heartbeat osd_stat(store_statfs(0x4f8c2e000/0x0/0x4ffc00000, data 0x8b5fc5/0xaf0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 416 ms_handle_reset con 0x56493b4afc00 session 0x56493c5323c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:53.641750+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 416 heartbeat osd_stat(store_statfs(0x4f8c2e000/0x0/0x4ffc00000, data 0x8b5fc5/0xaf0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 47972352 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 416 ms_handle_reset con 0x56493b9adc00 session 0x56493d861680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:54.641970+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 47972352 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 417 ms_handle_reset con 0x56493edf4800 session 0x56493c7ab0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:55.642134+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 417 ms_handle_reset con 0x56493edf4800 session 0x56493e378000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 417 ms_handle_reset con 0x56493b4ae800 session 0x56493b92c960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 47972352 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:56.642446+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 47972352 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 417 ms_handle_reset con 0x56493b4afc00 session 0x56493eccd860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2871019 data_alloc: 218103808 data_used: 1613824
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:57.642708+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 417 heartbeat osd_stat(store_statfs(0x4f8c28000/0x0/0x4ffc00000, data 0x8b9765/0xaf6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 47972352 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 418 ms_handle_reset con 0x56493b9adc00 session 0x56493da15e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:58.642885+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 47972352 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf5800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 418 ms_handle_reset con 0x56493edf5800 session 0x56493ecccb40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf5800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:59.643033+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 47972352 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 419 ms_handle_reset con 0x56493edf5800 session 0x56493d8cb0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:00.643234+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 419 ms_handle_reset con 0x56493b4ae800 session 0x56493b9692c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176078848 unmapped: 47964160 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 419 ms_handle_reset con 0x56493b4afc00 session 0x56493e2d43c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:01.643402+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176087040 unmapped: 47955968 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.602078438s of 10.759571075s, submitted: 52
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 419 ms_handle_reset con 0x56493b9adc00 session 0x56493d86cd20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2880756 data_alloc: 218103808 data_used: 1626112
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 419 heartbeat osd_stat(store_statfs(0x4f8812000/0x0/0x4ffc00000, data 0x8bce7d/0xafb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:02.643538+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176095232 unmapped: 47947776 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 419 heartbeat osd_stat(store_statfs(0x4f8811000/0x0/0x4ffc00000, data 0x8bce8d/0xafc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf4800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:03.643746+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176095232 unmapped: 47947776 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 420 ms_handle_reset con 0x56493edf4800 session 0x56493c533680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:04.643925+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 420 ms_handle_reset con 0x56493b4ae800 session 0x56493d8683c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176111616 unmapped: 47931392 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 421 handle_osd_map epochs [421,421], i have 421, src has [1,421]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 421 ms_handle_reset con 0x56493b4afc00 session 0x56493b92c960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:05.644123+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 421 heartbeat osd_stat(store_statfs(0x4f8809000/0x0/0x4ffc00000, data 0x8c065f/0xb02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176111616 unmapped: 47931392 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 421 ms_handle_reset con 0x56493b9adc00 session 0x56493e2d5c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf5800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 421 ms_handle_reset con 0x56493edf5800 session 0x56493eb5a960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:06.644404+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176128000 unmapped: 47915008 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56494212bc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 421 ms_handle_reset con 0x56494212bc00 session 0x56493b315680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2886667 data_alloc: 218103808 data_used: 1638400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:07.644607+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176128000 unmapped: 47915008 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:08.644868+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 421 heartbeat osd_stat(store_statfs(0x4f880c000/0x0/0x4ffc00000, data 0x8c065f/0xb02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176128000 unmapped: 47915008 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 422 ms_handle_reset con 0x56493b4ae800 session 0x56493bf7c960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:09.645095+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 422 ms_handle_reset con 0x56493b4afc00 session 0x56493e2ad860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176128000 unmapped: 47915008 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 422 ms_handle_reset con 0x56493b9adc00 session 0x56493d8cb0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:10.645285+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176128000 unmapped: 47915008 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 422 handle_osd_map epochs [423,424], i have 422, src has [1,424]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf5800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 424 ms_handle_reset con 0x56493edf5800 session 0x56493ddf7680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf5c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 424 ms_handle_reset con 0x56493edf5c00 session 0x56493eccd2c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:11.645586+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 424 heartbeat osd_stat(store_statfs(0x4f8808000/0x0/0x4ffc00000, data 0x8c21f8/0xb05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176218112 unmapped: 47824896 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2897253 data_alloc: 218103808 data_used: 1650688
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf5c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.153623581s of 10.399855614s, submitted: 93
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:12.645737+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 424 ms_handle_reset con 0x56493edf5c00 session 0x56493d4b9680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176218112 unmapped: 47824896 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 424 heartbeat osd_stat(store_statfs(0x4f8803000/0x0/0x4ffc00000, data 0x8c58aa/0xb0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:13.645940+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176218112 unmapped: 47824896 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 424 handle_osd_map epochs [425,425], i have 425, src has [1,425]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 425 ms_handle_reset con 0x56493b4ae800 session 0x56493d86d2c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:14.646120+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176218112 unmapped: 47824896 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 425 heartbeat osd_stat(store_statfs(0x4f87ff000/0x0/0x4ffc00000, data 0x8c7443/0xb0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 426 ms_handle_reset con 0x56493b4afc00 session 0x56493d4b9860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:15.646309+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176250880 unmapped: 47792128 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 427 ms_handle_reset con 0x56493b9adc00 session 0x56493d8caf00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf5800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 427 ms_handle_reset con 0x56493edf5800 session 0x56493e267680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf5800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 427 ms_handle_reset con 0x56493edf5800 session 0x56493eccd0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:16.646569+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 427 heartbeat osd_stat(store_statfs(0x4f87f9000/0x0/0x4ffc00000, data 0x8caa15/0xb13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176291840 unmapped: 47751168 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2908261 data_alloc: 218103808 data_used: 1671168
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:17.646756+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 427 ms_handle_reset con 0x56493b4ae800 session 0x56493caf85a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176291840 unmapped: 47751168 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:18.646975+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176291840 unmapped: 47751168 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 428 ms_handle_reset con 0x56493b4afc00 session 0x56493d8cb4a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:19.647188+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176291840 unmapped: 47751168 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 428 ms_handle_reset con 0x56493b9adc00 session 0x56493d4b90e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf5c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:20.647546+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176291840 unmapped: 47751168 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 428 ms_handle_reset con 0x56493edf5c00 session 0x56493bf6a1e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf5c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 429 ms_handle_reset con 0x56493edf5c00 session 0x56493ce5a3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:21.648081+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 429 ms_handle_reset con 0x56493b4ae800 session 0x56493ce5ba40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176324608 unmapped: 47718400 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2913537 data_alloc: 218103808 data_used: 1671168
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:22.649303+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 429 heartbeat osd_stat(store_statfs(0x4f87f4000/0x0/0x4ffc00000, data 0x8ce18f/0xb19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.049833298s of 10.341119766s, submitted: 94
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 429 ms_handle_reset con 0x56493b4afc00 session 0x56493b315860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176332800 unmapped: 47710208 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:23.649635+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176332800 unmapped: 47710208 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:24.650916+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176332800 unmapped: 47710208 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 430 ms_handle_reset con 0x56493b9adc00 session 0x56493da2a960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf5800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 430 ms_handle_reset con 0x56493edf5800 session 0x56493d86da40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf5800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:25.651353+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176349184 unmapped: 47693824 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 431 ms_handle_reset con 0x56493edf5800 session 0x56493bcb9c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:26.652286+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 431 ms_handle_reset con 0x56493b4ae800 session 0x56493b932f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 431 heartbeat osd_stat(store_statfs(0x4f87ed000/0x0/0x4ffc00000, data 0x8d1935/0xb1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 431 ms_handle_reset con 0x56493b4afc00 session 0x56493ca9da40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176365568 unmapped: 47677440 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 431 heartbeat osd_stat(store_statfs(0x4f87ed000/0x0/0x4ffc00000, data 0x8d1935/0xb1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921542 data_alloc: 218103808 data_used: 1683456
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:27.652714+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176373760 unmapped: 47669248 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 431 ms_handle_reset con 0x56493b9adc00 session 0x56493da2cb40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:28.653229+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176373760 unmapped: 47669248 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf5c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:29.653506+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176373760 unmapped: 47669248 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 432 ms_handle_reset con 0x56493edf5c00 session 0x56493c2f8960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 432 ms_handle_reset con 0x56493b4ae800 session 0x56493bcb9c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:30.653728+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176414720 unmapped: 47628288 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 432 handle_osd_map epochs [433,433], i have 432, src has [1,433]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 433 ms_handle_reset con 0x56493b4afc00 session 0x56493ce5ba40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:31.654031+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 433 ms_handle_reset con 0x56493b9adc00 session 0x56493eccd0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf5800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 433 ms_handle_reset con 0x56493edf5800 session 0x56493d4b9860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176463872 unmapped: 47579136 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 433 heartbeat osd_stat(store_statfs(0x4f87e5000/0x0/0x4ffc00000, data 0x8d5111/0xb26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2930678 data_alloc: 218103808 data_used: 1691648
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:32.654371+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176463872 unmapped: 47579136 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43ec00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.974575996s of 10.337619781s, submitted: 152
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 433 ms_handle_reset con 0x56493f43ec00 session 0x56493d86cd20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:33.654761+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 433 heartbeat osd_stat(store_statfs(0x4f87e5000/0x0/0x4ffc00000, data 0x8d50af/0xb25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176463872 unmapped: 47579136 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43ec00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:34.654976+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 433 handle_osd_map epochs [433,434], i have 433, src has [1,434]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 434 ms_handle_reset con 0x56493f43ec00 session 0x56493b9692c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176488448 unmapped: 47554560 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:35.655179+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 434 ms_handle_reset con 0x56493b4ae800 session 0x56493ce5a960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4afc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176488448 unmapped: 47554560 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 434 handle_osd_map epochs [434,435], i have 434, src has [1,435]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 435 ms_handle_reset con 0x56493b4afc00 session 0x56493e266000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:36.655417+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 435 heartbeat osd_stat(store_statfs(0x4f87e2000/0x0/0x4ffc00000, data 0x8d8855/0xb2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176513024 unmapped: 47529984 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b9adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 435 ms_handle_reset con 0x56493b9adc00 session 0x56493da2d2c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edf5800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 435 ms_handle_reset con 0x56493edf5800 session 0x56493c532d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2934900 data_alloc: 218103808 data_used: 1691648
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:37.655581+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176521216 unmapped: 47521792 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 435 heartbeat osd_stat(store_statfs(0x4f87e2000/0x0/0x4ffc00000, data 0x8d8855/0xb2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:38.655767+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176521216 unmapped: 47521792 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:39.656027+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 435 heartbeat osd_stat(store_statfs(0x4f87e2000/0x0/0x4ffc00000, data 0x8d8855/0xb2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176521216 unmapped: 47521792 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:40.656273+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176521216 unmapped: 47521792 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:41.656678+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176521216 unmapped: 47521792 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 435 heartbeat osd_stat(store_statfs(0x4f87e2000/0x0/0x4ffc00000, data 0x8d8855/0xb2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:42.656909+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2935060 data_alloc: 218103808 data_used: 1695744
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176521216 unmapped: 47521792 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:43.657103+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176521216 unmapped: 47521792 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 435 heartbeat osd_stat(store_statfs(0x4f87e2000/0x0/0x4ffc00000, data 0x8d8855/0xb2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:44.657378+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 435 handle_osd_map epochs [435,436], i have 435, src has [1,436]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.714191437s of 11.783653259s, submitted: 27
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176521216 unmapped: 47521792 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:45.657582+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f87df000/0x0/0x4ffc00000, data 0x8da2b8/0xb2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176529408 unmapped: 47513600 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:46.657771+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176529408 unmapped: 47513600 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:47.657922+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2938034 data_alloc: 218103808 data_used: 1695744
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176529408 unmapped: 47513600 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:48.658188+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176529408 unmapped: 47513600 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:49.659547+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176529408 unmapped: 47513600 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:50.659724+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176529408 unmapped: 47513600 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f87df000/0x0/0x4ffc00000, data 0x8da2b8/0xb2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:51.659908+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176529408 unmapped: 47513600 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets getting new tickets!
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:52.660247+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _finish_auth 0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:52.661300+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2938034 data_alloc: 218103808 data_used: 1695744
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176545792 unmapped: 47497216 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:53.660411+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176545792 unmapped: 47497216 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:54.660524+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176545792 unmapped: 47497216 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:55.660706+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176545792 unmapped: 47497216 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:56.660922+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 436 ms_handle_reset con 0x56493c6efc00 session 0x56493df441e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4ae800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176545792 unmapped: 47497216 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f87df000/0x0/0x4ffc00000, data 0x8da2b8/0xb2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: mgrc ms_handle_reset ms_handle_reset con 0x56493f2b7400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3360631616
Oct 11 04:30:06 compute-0 ceph-osd[88594]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3360631616,v1:192.168.122.100:6801/3360631616]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: get_auth_request con 0x56493edf5800 auth_method 0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: mgrc handle_mgr_configure stats_period=5
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:57.661127+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2938034 data_alloc: 218103808 data_used: 1695744
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 436 ms_handle_reset con 0x56493c9e5400 session 0x56493e72e3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493c6efc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 436 ms_handle_reset con 0x56493c6ef000 session 0x56493be1b680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43ec00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 47439872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:58.661396+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f87df000/0x0/0x4ffc00000, data 0x8da2b8/0xb2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 47439872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:59.661516+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f87df000/0x0/0x4ffc00000, data 0x8da2b8/0xb2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 47439872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:00.661652+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 47439872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:01.661838+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 47439872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:02.662008+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2938034 data_alloc: 218103808 data_used: 1695744
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f87df000/0x0/0x4ffc00000, data 0x8da2b8/0xb2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 47439872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:03.662394+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 47439872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:04.662548+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 47439872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:05.662707+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 47439872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:06.662880+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f87df000/0x0/0x4ffc00000, data 0x8da2b8/0xb2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 47439872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:07.663043+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2938034 data_alloc: 218103808 data_used: 1695744
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 47439872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:08.663206+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 47439872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:09.663358+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 47439872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:10.663534+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 47439872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:11.663728+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 47439872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f87df000/0x0/0x4ffc00000, data 0x8da2b8/0xb2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:12.663904+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2938034 data_alloc: 218103808 data_used: 1695744
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 47439872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:13.664089+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 47439872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:14.664264+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 47439872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:15.664450+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 47439872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:16.664663+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f87df000/0x0/0x4ffc00000, data 0x8da2b8/0xb2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 47439872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:17.664842+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2938034 data_alloc: 218103808 data_used: 1695744
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 47439872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f87df000/0x0/0x4ffc00000, data 0x8da2b8/0xb2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:18.665078+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 47439872 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:19.665263+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.669986725s of 34.680049896s, submitted: 13
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 436 ms_handle_reset con 0x56493eee2c00 session 0x56493d8672c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 177127424 unmapped: 46915584 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:20.665404+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 177127424 unmapped: 46915584 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:21.665569+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f870b000/0x0/0x4ffc00000, data 0x9af2b8/0xc03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 177127424 unmapped: 46915584 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:22.665803+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2946550 data_alloc: 218103808 data_used: 1695744
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 177127424 unmapped: 46915584 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:23.665944+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 177127424 unmapped: 46915584 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:24.666082+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f7acc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 436 ms_handle_reset con 0x56493f7acc00 session 0x56493eb5ab40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 177135616 unmapped: 46907392 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f870b000/0x0/0x4ffc00000, data 0x9af2b8/0xc03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df9fc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:25.666239+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176431104 unmapped: 47611904 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b962800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:26.666452+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176439296 unmapped: 47603712 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:27.666611+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f870a000/0x0/0x4ffc00000, data 0x9af2db/0xc04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956088 data_alloc: 218103808 data_used: 2482176
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176439296 unmapped: 47603712 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:28.666777+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176439296 unmapped: 47603712 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f870a000/0x0/0x4ffc00000, data 0x9af2db/0xc04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:29.666924+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176439296 unmapped: 47603712 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:30.667098+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f870a000/0x0/0x4ffc00000, data 0x9af2db/0xc04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176439296 unmapped: 47603712 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:31.667271+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176439296 unmapped: 47603712 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:32.667453+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956088 data_alloc: 218103808 data_used: 2482176
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176447488 unmapped: 47595520 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:33.667613+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176447488 unmapped: 47595520 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f870a000/0x0/0x4ffc00000, data 0x9af2db/0xc04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:34.667753+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 176447488 unmapped: 47595520 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:35.667903+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.206941605s of 16.254709244s, submitted: 15
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 179707904 unmapped: 44335104 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:36.668103+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178069504 unmapped: 45973504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:37.668205+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978212 data_alloc: 218103808 data_used: 3510272
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178069504 unmapped: 45973504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:38.668394+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f84d0000/0x0/0x4ffc00000, data 0xbe92db/0xe3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178069504 unmapped: 45973504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:39.668688+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178069504 unmapped: 45973504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:40.668866+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178069504 unmapped: 45973504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:41.669043+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f84d0000/0x0/0x4ffc00000, data 0xbe92db/0xe3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178069504 unmapped: 45973504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:42.669201+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978212 data_alloc: 218103808 data_used: 3510272
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178069504 unmapped: 45973504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:43.669392+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178069504 unmapped: 45973504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:44.669622+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178069504 unmapped: 45973504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:45.669776+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178069504 unmapped: 45973504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:46.669923+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.933771133s of 11.048447609s, submitted: 33
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 436 ms_handle_reset con 0x56493ec09800 session 0x56493da28f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178069504 unmapped: 45973504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f84d0000/0x0/0x4ffc00000, data 0xbe92db/0xe3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:47.670114+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978212 data_alloc: 218103808 data_used: 3510272
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178069504 unmapped: 45973504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:48.670313+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178069504 unmapped: 45973504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:49.670569+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178069504 unmapped: 45973504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43f000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:50.670726+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f84d0000/0x0/0x4ffc00000, data 0xbe92db/0xe3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178069504 unmapped: 45973504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:51.670910+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178069504 unmapped: 45973504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:52.671261+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2987686 data_alloc: 218103808 data_used: 3518464
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f7adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178069504 unmapped: 45973504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:53.672320+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 437 ms_handle_reset con 0x56493f7adc00 session 0x56493bf6ba40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 437 heartbeat osd_stat(store_statfs(0x4f8446000/0x0/0x4ffc00000, data 0xc722db/0xec7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178077696 unmapped: 45965312 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:54.672462+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178077696 unmapped: 45965312 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:55.672632+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 437 heartbeat osd_stat(store_statfs(0x4f843e000/0x0/0x4ffc00000, data 0xcfbe58/0xecf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178077696 unmapped: 45965312 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:56.672851+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ef04400
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 437 handle_osd_map epochs [438,438], i have 437, src has [1,438]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.996654510s of 10.071245193s, submitted: 21
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 438 ms_handle_reset con 0x56493ef04400 session 0x56493b12c780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178077696 unmapped: 45965312 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:57.673007+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3003976 data_alloc: 218103808 data_used: 3534848
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178143232 unmapped: 45899776 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:58.673189+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 438 handle_osd_map epochs [439,439], i have 438, src has [1,439]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 439 ms_handle_reset con 0x56493ec09800 session 0x56493be1a1e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178151424 unmapped: 45891584 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:59.673384+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 439 ms_handle_reset con 0x56493f43f000 session 0x56493c5323c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178151424 unmapped: 45891584 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:00.673571+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 439 heartbeat osd_stat(store_statfs(0x4f8418000/0x0/0x4ffc00000, data 0xda35b4/0xef4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178167808 unmapped: 45875200 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:01.673710+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f7acc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 439 ms_handle_reset con 0x56493f7acc00 session 0x56493da28780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 439 ms_handle_reset con 0x56493eee2c00 session 0x56493ca9cb40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178167808 unmapped: 45875200 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:02.673858+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3020748 data_alloc: 218103808 data_used: 3555328
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178167808 unmapped: 45875200 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:03.674040+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 439 heartbeat osd_stat(store_statfs(0x4f8418000/0x0/0x4ffc00000, data 0xda35b4/0xef4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f7adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178167808 unmapped: 45875200 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:04.674204+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178167808 unmapped: 45875200 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:05.674349+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178692096 unmapped: 45350912 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:06.674573+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178692096 unmapped: 45350912 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:07.674792+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 439 ms_handle_reset con 0x56493f7adc00 session 0x56493caf8f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3023408 data_alloc: 218103808 data_used: 4091904
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.893890381s of 10.982268333s, submitted: 22
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178692096 unmapped: 45350912 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:08.674983+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f7adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 439 ms_handle_reset con 0x56493ec09800 session 0x56493eccdc20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 439 ms_handle_reset con 0x56493f7adc00 session 0x56493e20a3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178692096 unmapped: 45350912 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:09.675279+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 439 heartbeat osd_stat(store_statfs(0x4f840e000/0x0/0x4ffc00000, data 0xdaf5b4/0xf00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178692096 unmapped: 45350912 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:10.675581+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178692096 unmapped: 45350912 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:11.675746+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178692096 unmapped: 45350912 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:12.675934+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3024080 data_alloc: 218103808 data_used: 4100096
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 439 heartbeat osd_stat(store_statfs(0x4f840e000/0x0/0x4ffc00000, data 0xdaf5b4/0xf00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178692096 unmapped: 45350912 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:13.676129+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180813824 unmapped: 43229184 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:14.676354+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 439 ms_handle_reset con 0x56493eee2c00 session 0x56493e20b680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180813824 unmapped: 43229184 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:15.676547+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43f000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180813824 unmapped: 43229184 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f7acc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:16.676726+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 439 ms_handle_reset con 0x56493f7acc00 session 0x56493de023c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 439 ms_handle_reset con 0x56493f43f000 session 0x56493da2f680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180248576 unmapped: 43794432 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:17.676863+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3034596 data_alloc: 218103808 data_used: 4116480
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 439 heartbeat osd_stat(store_statfs(0x4f8379000/0x0/0x4ffc00000, data 0xe43616/0xf95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180248576 unmapped: 43794432 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:18.677050+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43f000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180248576 unmapped: 43794432 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:19.677262+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 439 heartbeat osd_stat(store_statfs(0x4f8379000/0x0/0x4ffc00000, data 0xe43616/0xf95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180248576 unmapped: 43794432 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:20.677458+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.930819511s of 13.021476746s, submitted: 21
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180772864 unmapped: 43270144 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:21.677650+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 439 ms_handle_reset con 0x56493f43f000 session 0x56493b935c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180207616 unmapped: 43835392 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:22.677926+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3043271 data_alloc: 218103808 data_used: 4243456
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180207616 unmapped: 43835392 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:23.678090+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 439 ms_handle_reset con 0x56493ec09800 session 0x56493da2cf00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 439 ms_handle_reset con 0x56493eee2c00 session 0x56493da2de00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 43827200 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:24.678337+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 439 heartbeat osd_stat(store_statfs(0x4f82ef000/0x0/0x4ffc00000, data 0xe435b4/0xf94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f7acc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 439 ms_handle_reset con 0x56493f7acc00 session 0x56493c5aba40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f7adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 439 ms_handle_reset con 0x56493f7adc00 session 0x56493de023c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 43827200 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:25.678493+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 43827200 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:26.678702+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f7adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 439 ms_handle_reset con 0x56493f7adc00 session 0x56493eccdc20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 439 ms_handle_reset con 0x56493ec09800 session 0x56493ca9cb40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 43827200 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:27.678869+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 439 ms_handle_reset con 0x56493eee2c00 session 0x56493be1a1e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43f000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3026279 data_alloc: 218103808 data_used: 4235264
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 439 handle_osd_map epochs [439,440], i have 439, src has [1,440]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 440 ms_handle_reset con 0x56493f43f000 session 0x56493bf6ba40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180224000 unmapped: 43819008 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:28.679110+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f7acc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 440 ms_handle_reset con 0x56493f7acc00 session 0x56493da28f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f7acc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 440 ms_handle_reset con 0x56493f7acc00 session 0x56493d8672c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180224000 unmapped: 43819008 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:29.679281+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 440 handle_osd_map epochs [441,441], i have 440, src has [1,441]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 441 ms_handle_reset con 0x56493ec09800 session 0x56493e72e1e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180232192 unmapped: 43810816 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:30.679472+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 441 heartbeat osd_stat(store_statfs(0x4f8417000/0x0/0x4ffc00000, data 0xd1ecf4/0xef5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 441 ms_handle_reset con 0x56493eee2c00 session 0x56493e2ac000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180232192 unmapped: 43810816 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:31.679669+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.633743286s of 10.822577477s, submitted: 54
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:32.679839+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180240384 unmapped: 43802624 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43f000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 442 ms_handle_reset con 0x56493f43f000 session 0x56493df441e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f7adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 442 ms_handle_reset con 0x56493f7adc00 session 0x56493b94be00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3006955 data_alloc: 218103808 data_used: 3547136
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:33.680018+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180240384 unmapped: 43802624 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 442 ms_handle_reset con 0x56493b962800 session 0x56493b9325a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 442 ms_handle_reset con 0x56493df9fc00 session 0x56493b968f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:34.680161+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178954240 unmapped: 45088768 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 442 ms_handle_reset con 0x56493ec09800 session 0x56493de021e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:35.680304+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:36.680523+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 442 heartbeat osd_stat(store_statfs(0x4f87cd000/0x0/0x4ffc00000, data 0x8e48a2/0xb40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 442 handle_osd_map epochs [443,443], i have 442, src has [1,443]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 442 handle_osd_map epochs [443,443], i have 443, src has [1,443]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:37.680712+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2975221 data_alloc: 218103808 data_used: 1736704
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:38.680865+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:39.681207+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:40.681490+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:41.681780+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f87ca000/0x0/0x4ffc00000, data 0x8e6305/0xb43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:42.681993+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2975221 data_alloc: 218103808 data_used: 1736704
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f87ca000/0x0/0x4ffc00000, data 0x8e6305/0xb43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:43.682240+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:44.682416+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:45.682593+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:46.682834+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f87ca000/0x0/0x4ffc00000, data 0x8e6305/0xb43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:47.683087+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2975221 data_alloc: 218103808 data_used: 1736704
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:48.683350+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:49.683635+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:50.683849+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f87ca000/0x0/0x4ffc00000, data 0x8e6305/0xb43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:51.684039+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:52.684270+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2975221 data_alloc: 218103808 data_used: 1736704
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:53.684421+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:54.684612+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f87ca000/0x0/0x4ffc00000, data 0x8e6305/0xb43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:55.684736+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:56.684950+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:57.685223+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f87ca000/0x0/0x4ffc00000, data 0x8e6305/0xb43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2975221 data_alloc: 218103808 data_used: 1736704
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:58.685379+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:59.685595+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:00.685771+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:01.685974+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f87ca000/0x0/0x4ffc00000, data 0x8e6305/0xb43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:02.686138+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2975221 data_alloc: 218103808 data_used: 1736704
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:03.686323+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:04.686506+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f87ca000/0x0/0x4ffc00000, data 0x8e6305/0xb43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f87ca000/0x0/0x4ffc00000, data 0x8e6305/0xb43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:05.686743+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:06.686951+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f87ca000/0x0/0x4ffc00000, data 0x8e6305/0xb43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:07.687099+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.979934692s of 36.152660370s, submitted: 72
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 443 ms_handle_reset con 0x56493eee2c00 session 0x56493bf6a5a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2975815 data_alloc: 218103808 data_used: 1736704
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:08.687308+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 45072384 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f43f000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:09.687471+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178978816 unmapped: 45064192 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 443 handle_osd_map epochs [443,444], i have 443, src has [1,444]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 444 ms_handle_reset con 0x56493f43f000 session 0x56493e2665a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:10.687635+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178987008 unmapped: 45056000 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b962800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df9fc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 444 ms_handle_reset con 0x56493df9fc00 session 0x56493c7aa000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 444 ms_handle_reset con 0x56493b962800 session 0x56493caf92c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:11.687824+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178995200 unmapped: 45047808 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 444 ms_handle_reset con 0x56493ec09800 session 0x56493be1b680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 444 ms_handle_reset con 0x56493eee2c00 session 0x56493d8ca960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:12.687943+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 178995200 unmapped: 45047808 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f87c5000/0x0/0x4ffc00000, data 0x8e7ef4/0xb48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985130 data_alloc: 218103808 data_used: 1744896
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f87c5000/0x0/0x4ffc00000, data 0x8e7ef4/0xb48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493f7acc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 444 ms_handle_reset con 0x56493f7acc00 session 0x56493be1a3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b962800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df9fc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 444 ms_handle_reset con 0x56493df9fc00 session 0x56493e20ba40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:13.688104+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180068352 unmapped: 43974656 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 444 handle_osd_map epochs [444,445], i have 444, src has [1,445]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 445 handle_osd_map epochs [445,445], i have 445, src has [1,445]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 445 ms_handle_reset con 0x56493ec09800 session 0x56493d866b40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 445 ms_handle_reset con 0x56493b962800 session 0x56493d8cba40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:14.688330+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180092928 unmapped: 43950080 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:15.688502+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 445 ms_handle_reset con 0x56493eee2c00 session 0x56493da29a40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493e3ce000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180109312 unmapped: 43933696 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 445 ms_handle_reset con 0x56493e3ce000 session 0x56493da2e3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:16.688672+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:17.688837+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984597 data_alloc: 218103808 data_used: 1753088
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 445 heartbeat osd_stat(store_statfs(0x4f87c5000/0x0/0x4ffc00000, data 0x8e9a53/0xb49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:18.689050+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:19.689230+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 445 heartbeat osd_stat(store_statfs(0x4f87c5000/0x0/0x4ffc00000, data 0x8e9a53/0xb49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:20.689418+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 445 heartbeat osd_stat(store_statfs(0x4f87c5000/0x0/0x4ffc00000, data 0x8e9a53/0xb49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:21.689631+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:22.689868+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984597 data_alloc: 218103808 data_used: 1753088
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:23.690005+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:24.690216+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:25.690410+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 445 handle_osd_map epochs [446,446], i have 445, src has [1,446]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.273887634s of 17.637474060s, submitted: 95
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:26.690617+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f87c1000/0x0/0x4ffc00000, data 0x8eb4b6/0xb4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:27.690738+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988771 data_alloc: 218103808 data_used: 1761280
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:28.690936+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:29.691132+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:30.691416+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:31.691635+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:32.691819+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f87c1000/0x0/0x4ffc00000, data 0x8eb4b6/0xb4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988771 data_alloc: 218103808 data_used: 1761280
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:33.692007+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:34.692227+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:35.692423+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f87c1000/0x0/0x4ffc00000, data 0x8eb4b6/0xb4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:36.692607+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:37.692737+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988771 data_alloc: 218103808 data_used: 1761280
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:38.692893+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f87c1000/0x0/0x4ffc00000, data 0x8eb4b6/0xb4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f87c1000/0x0/0x4ffc00000, data 0x8eb4b6/0xb4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:39.693067+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:40.693234+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:41.693404+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:42.693563+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988771 data_alloc: 218103808 data_used: 1761280
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f87c1000/0x0/0x4ffc00000, data 0x8eb4b6/0xb4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:43.693779+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:44.693936+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f87c1000/0x0/0x4ffc00000, data 0x8eb4b6/0xb4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:45.694090+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:46.694338+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f87c1000/0x0/0x4ffc00000, data 0x8eb4b6/0xb4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f87c1000/0x0/0x4ffc00000, data 0x8eb4b6/0xb4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:47.694693+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988771 data_alloc: 218103808 data_used: 1761280
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:48.694799+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:49.695023+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 43925504 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b962800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.831819534s of 24.844062805s, submitted: 36
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:50.695237+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180125696 unmapped: 43917312 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:51.695434+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180125696 unmapped: 43917312 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 446 ms_handle_reset con 0x56493b962800 session 0x56493d8cb0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:52.695575+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f8781000/0x0/0x4ffc00000, data 0x92b519/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180125696 unmapped: 43917312 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993206 data_alloc: 218103808 data_used: 1761280
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:53.695755+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180125696 unmapped: 43917312 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df9fc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f8781000/0x0/0x4ffc00000, data 0x92b519/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:54.695903+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180125696 unmapped: 43917312 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f8781000/0x0/0x4ffc00000, data 0x92b519/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 446 handle_osd_map epochs [447,447], i have 446, src has [1,447]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 ms_handle_reset con 0x56493df9fc00 session 0x56493b92cf00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:55.696085+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180142080 unmapped: 43900928 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:56.696310+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 ms_handle_reset con 0x56493ec09800 session 0x56493c5ab0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180142080 unmapped: 43900928 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:57.696466+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180142080 unmapped: 43900928 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999005 data_alloc: 218103808 data_used: 1773568
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:58.696652+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180142080 unmapped: 43900928 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:59.696816+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f877d000/0x0/0x4ffc00000, data 0x92d096/0xb90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180142080 unmapped: 43900928 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:00.696997+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180142080 unmapped: 43900928 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:01.697237+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180142080 unmapped: 43900928 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:02.697403+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180142080 unmapped: 43900928 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999005 data_alloc: 218103808 data_used: 1773568
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:03.697580+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180142080 unmapped: 43900928 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:04.697768+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180142080 unmapped: 43900928 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f877d000/0x0/0x4ffc00000, data 0x92d096/0xb90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:05.697998+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 180142080 unmapped: 43900928 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.503505707s of 15.575025558s, submitted: 20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 ms_handle_reset con 0x56493eee2c00 session 0x56493b9334a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493efc5c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 ms_handle_reset con 0x56493efc5c00 session 0x56493be1a3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b962800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 ms_handle_reset con 0x56493b962800 session 0x56493be1b680
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df9fc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 ms_handle_reset con 0x56493df9fc00 session 0x56493c7aa000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 ms_handle_reset con 0x56493ec09800 session 0x56493b9325a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:06.698210+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 179707904 unmapped: 44335104 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:07.698337+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 179707904 unmapped: 44335104 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3094990 data_alloc: 218103808 data_used: 1773568
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:08.698489+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 179707904 unmapped: 44335104 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 ms_handle_reset con 0x56493eee2c00 session 0x56493df441e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:09.698664+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f7afb000/0x0/0x4ffc00000, data 0x15b0096/0x1813000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee3800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 ms_handle_reset con 0x56493eee3800 session 0x56493e72e1e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 179707904 unmapped: 44335104 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:10.698813+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b962800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 ms_handle_reset con 0x56493b962800 session 0x56493d8672c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df9fc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 ms_handle_reset con 0x56493df9fc00 session 0x56493da28f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 179716096 unmapped: 44326912 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f7afb000/0x0/0x4ffc00000, data 0x15b0096/0x1813000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493eee2c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:11.698959+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 179716096 unmapped: 44326912 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:12.699104+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 179724288 unmapped: 44318720 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3164181 data_alloc: 234881024 data_used: 11046912
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:13.699197+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 184057856 unmapped: 39985152 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:14.699392+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 184057856 unmapped: 39985152 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:15.699530+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 184057856 unmapped: 39985152 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f7af9000/0x0/0x4ffc00000, data 0x15b00c9/0x1815000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:16.699685+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 184057856 unmapped: 39985152 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:17.699811+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f7af9000/0x0/0x4ffc00000, data 0x15b00c9/0x1815000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 184057856 unmapped: 39985152 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3183061 data_alloc: 234881024 data_used: 13680640
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:18.699942+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 184057856 unmapped: 39985152 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:19.700052+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 184057856 unmapped: 39985152 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:20.700183+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f7af9000/0x0/0x4ffc00000, data 0x15b00c9/0x1815000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 184057856 unmapped: 39985152 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:21.700296+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f7af9000/0x0/0x4ffc00000, data 0x15b00c9/0x1815000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 184057856 unmapped: 39985152 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:22.700441+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.428741455s of 16.582397461s, submitted: 30
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 189833216 unmapped: 34209792 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3282117 data_alloc: 234881024 data_used: 14319616
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:23.700581+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 190119936 unmapped: 33923072 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:24.700749+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 190185472 unmapped: 33857536 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:25.700951+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 190185472 unmapped: 33857536 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:26.701237+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 190185472 unmapped: 33857536 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:27.701428+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f5db5000/0x0/0x4ffc00000, data 0x21530c9/0x23b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 190267392 unmapped: 33775616 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3300417 data_alloc: 234881024 data_used: 14614528
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:28.701562+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 190267392 unmapped: 33775616 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:29.701689+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 190267392 unmapped: 33775616 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f5db5000/0x0/0x4ffc00000, data 0x21530c9/0x23b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:30.701810+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 190267392 unmapped: 33775616 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:31.701951+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f5db5000/0x0/0x4ffc00000, data 0x21530c9/0x23b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 190267392 unmapped: 33775616 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:32.702190+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56494212a000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.761034966s of 10.026660919s, submitted: 93
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 190267392 unmapped: 33775616 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3300869 data_alloc: 234881024 data_used: 14622720
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:33.702396+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 190267392 unmapped: 33775616 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f5db5000/0x0/0x4ffc00000, data 0x21530c9/0x23b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:34.702546+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 190529536 unmapped: 33513472 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:35.702708+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f5db5000/0x0/0x4ffc00000, data 0x21530c9/0x23b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 190529536 unmapped: 33513472 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 ms_handle_reset con 0x56494212a000 session 0x56493eccd0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:36.702919+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 190529536 unmapped: 33513472 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:37.703085+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 ms_handle_reset con 0x56493ec09800 session 0x56493ca9cb40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 ms_handle_reset con 0x56493eee2c00 session 0x56493d864d20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b962800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 190545920 unmapped: 33497088 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 ms_handle_reset con 0x56493b962800 session 0x56493d866000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014484 data_alloc: 218103808 data_used: 2031616
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:38.704064+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183451648 unmapped: 40591360 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:39.704586+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183451648 unmapped: 40591360 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:40.704968+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183451648 unmapped: 40591360 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f75dd000/0x0/0x4ffc00000, data 0x92d096/0xb90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df9fc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:41.705239+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 ms_handle_reset con 0x56493df9fc00 session 0x56493caf83c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 ms_handle_reset con 0x56493ec09800 session 0x56493d868000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183484416 unmapped: 40558592 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:42.705607+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56494212a000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 ms_handle_reset con 0x56494212a000 session 0x56493e2ac960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493bde5800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.140339851s of 10.400791168s, submitted: 68
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183484416 unmapped: 40558592 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 ms_handle_reset con 0x56493bde5800 session 0x56493bf7d4a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013876 data_alloc: 218103808 data_used: 2035712
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:43.705925+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183484416 unmapped: 40558592 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:44.706247+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183484416 unmapped: 40558592 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:45.706549+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183484416 unmapped: 40558592 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:46.706797+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f75de000/0x0/0x4ffc00000, data 0x92d096/0xb90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183484416 unmapped: 40558592 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:47.706937+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183484416 unmapped: 40558592 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013876 data_alloc: 218103808 data_used: 2035712
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:48.707621+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183484416 unmapped: 40558592 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:49.708271+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183484416 unmapped: 40558592 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f75de000/0x0/0x4ffc00000, data 0x92d096/0xb90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:50.708766+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183484416 unmapped: 40558592 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:51.709145+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b962800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 ms_handle_reset con 0x56493b962800 session 0x56493da2e1e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df9fc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 ms_handle_reset con 0x56493df9fc00 session 0x56493b315c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183492608 unmapped: 40550400 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:52.709496+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f75de000/0x0/0x4ffc00000, data 0x92d096/0xb90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f75de000/0x0/0x4ffc00000, data 0x92d096/0xb90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183492608 unmapped: 40550400 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013876 data_alloc: 218103808 data_used: 2035712
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:53.709765+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183492608 unmapped: 40550400 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:54.710069+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183492608 unmapped: 40550400 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:55.710345+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183492608 unmapped: 40550400 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:56.710532+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183492608 unmapped: 40550400 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f75de000/0x0/0x4ffc00000, data 0x92d096/0xb90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:57.710778+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183492608 unmapped: 40550400 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013876 data_alloc: 218103808 data_used: 2035712
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:58.710983+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.600405693s of 15.612550735s, submitted: 3
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183517184 unmapped: 40525824 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:59.711235+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 ms_handle_reset con 0x56493ec09800 session 0x56493c5aba40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183517184 unmapped: 40525824 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:00.711534+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183517184 unmapped: 40525824 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:01.711712+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183517184 unmapped: 40525824 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f759d000/0x0/0x4ffc00000, data 0x96d0f9/0xbd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:02.711875+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183517184 unmapped: 40525824 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019191 data_alloc: 218103808 data_used: 2035712
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:03.712049+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183517184 unmapped: 40525824 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:04.712240+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183517184 unmapped: 40525824 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:05.712378+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183517184 unmapped: 40525824 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:06.712691+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183517184 unmapped: 40525824 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:07.712933+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183517184 unmapped: 40525824 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f759d000/0x0/0x4ffc00000, data 0x96d0f9/0xbd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019191 data_alloc: 218103808 data_used: 2035712
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:08.713230+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183517184 unmapped: 40525824 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:09.713483+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183517184 unmapped: 40525824 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:10.716844+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183517184 unmapped: 40525824 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56494212a000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 ms_handle_reset con 0x56494212a000 session 0x56493ddf6f00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493edd6000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.785470009s of 12.810916901s, submitted: 6
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:11.717203+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 ms_handle_reset con 0x56493edd6000 session 0x56493b3154a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b962800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 187719680 unmapped: 36323328 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:12.717991+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 ms_handle_reset con 0x56493b962800 session 0x56493d861a40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df9fc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 ms_handle_reset con 0x56493df9fc00 session 0x56493da2a780
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 40517632 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327729 data_alloc: 218103808 data_used: 2035712
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:13.718260+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 40517632 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f499d000/0x0/0x4ffc00000, data 0x356d096/0x37d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:14.718582+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 40517632 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:15.719092+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 40517632 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:16.719689+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 40517632 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:17.720273+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f499d000/0x0/0x4ffc00000, data 0x356d096/0x37d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 40517632 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327729 data_alloc: 218103808 data_used: 2035712
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:18.720518+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 40517632 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:19.720670+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 40517632 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:20.720822+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 40517632 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:21.721008+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 40517632 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f499d000/0x0/0x4ffc00000, data 0x356d096/0x37d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:22.721194+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f499d000/0x0/0x4ffc00000, data 0x356d096/0x37d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 40517632 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:23.721349+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327729 data_alloc: 218103808 data_used: 2035712
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f499d000/0x0/0x4ffc00000, data 0x356d096/0x37d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 40517632 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:24.721617+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f499d000/0x0/0x4ffc00000, data 0x356d096/0x37d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 40517632 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:25.721775+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 40517632 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:26.722017+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 40517632 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f499d000/0x0/0x4ffc00000, data 0x356d096/0x37d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:27.722199+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 40517632 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:28.722359+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3329169 data_alloc: 218103808 data_used: 2367488
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f499d000/0x0/0x4ffc00000, data 0x356d096/0x37d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183533568 unmapped: 40509440 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:29.722499+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183533568 unmapped: 40509440 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:30.722870+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183533568 unmapped: 40509440 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f499d000/0x0/0x4ffc00000, data 0x356d096/0x37d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:31.723029+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183533568 unmapped: 40509440 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:32.723204+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183533568 unmapped: 40509440 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:33.723445+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345009 data_alloc: 218103808 data_used: 4603904
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183533568 unmapped: 40509440 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:34.723617+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183533568 unmapped: 40509440 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:35.723821+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183533568 unmapped: 40509440 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:36.724091+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f499d000/0x0/0x4ffc00000, data 0x356d096/0x37d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 183533568 unmapped: 40509440 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:37.724288+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.162944794s of 26.817161560s, submitted: 25
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 205897728 unmapped: 18145280 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f499d000/0x0/0x4ffc00000, data 0x356d096/0x37d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:38.724425+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3364353 data_alloc: 218103808 data_used: 5246976
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 200286208 unmapped: 23756800 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:39.724618+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 202293248 unmapped: 21749760 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:40.724795+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 202293248 unmapped: 21749760 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:41.725062+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 202293248 unmapped: 21749760 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:42.725205+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f2be7000/0x0/0x4ffc00000, data 0x3ec4096/0x4127000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 202293248 unmapped: 21749760 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:43.725343+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450969 data_alloc: 218103808 data_used: 6664192
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 202293248 unmapped: 21749760 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:44.725525+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 202293248 unmapped: 21749760 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f2be7000/0x0/0x4ffc00000, data 0x3ec4096/0x4127000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:45.725783+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 202293248 unmapped: 21749760 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:46.726059+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 ms_handle_reset con 0x56493ec09800 session 0x56493de025a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 198557696 unmapped: 25485312 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:47.726267+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56494212a000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 ms_handle_reset con 0x56494212a000 session 0x56493eb5b860
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197042176 unmapped: 27000832 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:48.726537+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433065 data_alloc: 218103808 data_used: 6664192
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197042176 unmapped: 27000832 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:49.726816+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f2ea7000/0x0/0x4ffc00000, data 0x3ec4096/0x4127000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197042176 unmapped: 27000832 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:50.728030+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197042176 unmapped: 27000832 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:51.728216+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 28K writes, 110K keys, 28K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 28K writes, 10K syncs, 2.69 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3948 writes, 11K keys, 3948 commit groups, 1.0 writes per commit group, ingest: 15.22 MB, 0.03 MB/s
                                           Interval WAL: 3948 writes, 1718 syncs, 2.30 writes per sync, written: 0.01 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197042176 unmapped: 27000832 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:52.728374+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197042176 unmapped: 27000832 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:53.728563+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433065 data_alloc: 218103808 data_used: 6664192
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197042176 unmapped: 27000832 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:54.728746+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _renew_subs
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 447 handle_osd_map epochs [448,448], i have 447, src has [1,448]
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.412555695s of 16.832530975s, submitted: 123
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 ms_handle_reset con 0x56493b4adc00 session 0x56493eb5a000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea7000/0x0/0x4ffc00000, data 0x3ec4096/0x4127000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197042176 unmapped: 27000832 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:55.728897+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 ms_handle_reset con 0x56493b4adc00 session 0x56493b92c3c0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197042176 unmapped: 27000832 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:56.729069+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197042176 unmapped: 27000832 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:57.729207+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea1000/0x0/0x4ffc00000, data 0x3ec5cd7/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197042176 unmapped: 27000832 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:58.729331+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3440852 data_alloc: 218103808 data_used: 6672384
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197042176 unmapped: 27000832 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:59.729484+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197042176 unmapped: 27000832 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:00.729629+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197042176 unmapped: 27000832 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:01.729804+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197042176 unmapped: 27000832 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:02.729967+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea1000/0x0/0x4ffc00000, data 0x3ec5cd7/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197050368 unmapped: 26992640 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:03.730143+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3440852 data_alloc: 218103808 data_used: 6672384
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197050368 unmapped: 26992640 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:04.730440+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b962800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 ms_handle_reset con 0x56493b962800 session 0x56493caf9e00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197050368 unmapped: 26992640 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:05.730605+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df9fc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 ms_handle_reset con 0x56493df9fc00 session 0x56493d8605a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197050368 unmapped: 26992640 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:06.730742+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea1000/0x0/0x4ffc00000, data 0x3ec5cd7/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197050368 unmapped: 26992640 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:07.730889+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 ms_handle_reset con 0x56493ec09800 session 0x56493e20af00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56494212a000
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.584046364s of 12.604511261s, submitted: 7
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 ms_handle_reset con 0x56494212a000 session 0x56493d8cb0e0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197074944 unmapped: 26968064 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:08.731118+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b962800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3443447 data_alloc: 218103808 data_used: 6676480
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea0000/0x0/0x4ffc00000, data 0x3ec5d0a/0x412e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:09.731314+0000)
Oct 11 04:30:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197074944 unmapped: 26968064 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:10.731541+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197074944 unmapped: 26968064 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea0000/0x0/0x4ffc00000, data 0x3ec5d0a/0x412e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:11.731722+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197074944 unmapped: 26968064 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:12.731877+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197091328 unmapped: 26951680 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2379946019' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea0000/0x0/0x4ffc00000, data 0x3ec5d0a/0x412e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:13.732029+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197091328 unmapped: 26951680 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444567 data_alloc: 218103808 data_used: 6782976
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:14.732220+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197091328 unmapped: 26951680 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:15.732375+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197091328 unmapped: 26951680 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:16.732582+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197091328 unmapped: 26951680 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:17.732757+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197091328 unmapped: 26951680 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea0000/0x0/0x4ffc00000, data 0x3ec5d0a/0x412e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:18.732912+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197091328 unmapped: 26951680 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444567 data_alloc: 218103808 data_used: 6782976
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:19.733051+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197091328 unmapped: 26951680 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea0000/0x0/0x4ffc00000, data 0x3ec5d0a/0x412e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:20.733256+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197091328 unmapped: 26951680 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:21.733455+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 197091328 unmapped: 26951680 heap: 224043008 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.096676826s of 14.112345695s, submitted: 5
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:22.733618+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 200097792 unmapped: 28147712 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:23.733803+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 200097792 unmapped: 28147712 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3540319 data_alloc: 234881024 data_used: 13516800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:24.733995+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f26a0000/0x0/0x4ffc00000, data 0x46c5d0a/0x492e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 200097792 unmapped: 28147712 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:25.734189+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 200097792 unmapped: 28147712 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:26.734371+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 200097792 unmapped: 28147712 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f26a0000/0x0/0x4ffc00000, data 0x46c5d0a/0x492e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:27.734598+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 200097792 unmapped: 28147712 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:28.734815+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 200097792 unmapped: 28147712 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3540351 data_alloc: 234881024 data_used: 13516800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:29.734959+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 200097792 unmapped: 28147712 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f26a0000/0x0/0x4ffc00000, data 0x46c5d0a/0x492e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:30.735093+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 200097792 unmapped: 28147712 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:31.735215+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 200097792 unmapped: 28147712 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:32.735387+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 200097792 unmapped: 28147712 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:33.735561+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 200097792 unmapped: 28147712 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3540351 data_alloc: 234881024 data_used: 13516800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f26a0000/0x0/0x4ffc00000, data 0x46c5d0a/0x492e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:34.735766+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 200097792 unmapped: 28147712 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:35.735947+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 200097792 unmapped: 28147712 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:36.736212+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 200097792 unmapped: 28147712 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.202443123s of 15.263472557s, submitted: 9
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:37.736410+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201080832 unmapped: 27164672 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:38.736562+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201080832 unmapped: 27164672 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543759 data_alloc: 234881024 data_used: 14565376
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:39.736746+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201080832 unmapped: 27164672 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f26a0000/0x0/0x4ffc00000, data 0x46c5d0a/0x492e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:40.736940+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201080832 unmapped: 27164672 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f26a0000/0x0/0x4ffc00000, data 0x46c5d0a/0x492e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:41.737210+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201080832 unmapped: 27164672 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:42.737408+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201080832 unmapped: 27164672 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:43.737616+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201080832 unmapped: 27164672 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543759 data_alloc: 234881024 data_used: 14565376
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f26a0000/0x0/0x4ffc00000, data 0x46c5d0a/0x492e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:44.737780+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201080832 unmapped: 27164672 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:45.737984+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201080832 unmapped: 27164672 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:46.738230+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201080832 unmapped: 27164672 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 ms_handle_reset con 0x56493b4adc00 session 0x56493de03c20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.446428299s of 10.463625908s, submitted: 11
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:47.738386+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 ms_handle_reset con 0x56493b962800 session 0x56493e2ada40
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201203712 unmapped: 27041792 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493df9fc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 ms_handle_reset con 0x56493df9fc00 session 0x56493da2ef00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:48.738535+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201211904 unmapped: 27033600 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3547018 data_alloc: 234881024 data_used: 15380480
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f26a1000/0x0/0x4ffc00000, data 0x46c5cd7/0x492c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:49.738669+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201211904 unmapped: 27033600 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:50.738816+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201211904 unmapped: 27033600 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:51.738976+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201211904 unmapped: 27033600 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:52.739128+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201211904 unmapped: 27033600 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493ec09800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 ms_handle_reset con 0x56493ec09800 session 0x56493ca9c5a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493efc4c00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:53.739278+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 ms_handle_reset con 0x56493efc4c00 session 0x56493ddf65a0
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201277440 unmapped: 26968064 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3478146 data_alloc: 234881024 data_used: 14008320
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:54.739410+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 26943488 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea3000/0x0/0x4ffc00000, data 0x3ec5c75/0x412b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b4adc00
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 ms_handle_reset con 0x56493b4adc00 session 0x56493da2cd20
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: handle_auth_request added challenge on 0x56493b962800
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 ms_handle_reset con 0x56493b962800 session 0x56493ce5a960
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:55.739545+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 26943488 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:56.739770+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 26943488 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:57.739914+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 26943488 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:58.740029+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 26943488 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479990 data_alloc: 234881024 data_used: 14008320
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:59.740240+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 26943488 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:00.740388+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 26943488 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:01.740530+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 26943488 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:02.740667+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 26943488 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:03.740786+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 26943488 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479990 data_alloc: 234881024 data_used: 14008320
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:04.741037+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 26943488 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:05.741230+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 26943488 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:06.741385+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 26943488 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:07.741538+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 26943488 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:08.741713+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 26943488 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479990 data_alloc: 234881024 data_used: 14008320
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:09.741867+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 26943488 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:10.742055+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 26943488 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:11.742237+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 26943488 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:12.742435+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 26943488 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:13.742612+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 26943488 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479990 data_alloc: 234881024 data_used: 14008320
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:14.742793+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 26943488 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:15.742984+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 26943488 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:16.743222+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 26943488 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:17.743347+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 26943488 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:18.743550+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479990 data_alloc: 234881024 data_used: 14008320
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:19.743739+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:20.743922+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:21.744124+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:22.744262+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:23.744493+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479990 data_alloc: 234881024 data_used: 14008320
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:24.744758+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:25.744947+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:26.745286+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:27.745538+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:28.745754+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479990 data_alloc: 234881024 data_used: 14008320
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:29.745993+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:30.746252+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:31.746408+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:32.746605+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:33.746806+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479990 data_alloc: 234881024 data_used: 14008320
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:34.746991+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:35.747190+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:36.747400+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:37.747589+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:38.747768+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479990 data_alloc: 234881024 data_used: 14008320
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:39.747894+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:40.748064+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:41.748264+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:42.748488+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:43.748655+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479990 data_alloc: 234881024 data_used: 14008320
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:44.748875+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:45.749113+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:46.749433+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:47.749650+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:48.749828+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479990 data_alloc: 234881024 data_used: 14008320
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:49.750026+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:50.750258+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:51.750444+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:52.750888+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:53.751336+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479990 data_alloc: 234881024 data_used: 14008320
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:54.751677+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:55.751923+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:56.752218+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:57.752487+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:58.752698+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479990 data_alloc: 234881024 data_used: 14008320
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:59.752909+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:00.753063+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:01.753246+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:02.753412+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:03.753623+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479990 data_alloc: 234881024 data_used: 14008320
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:04.753832+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:05.754016+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:06.754276+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:07.754489+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 26935296 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:08.754661+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201318400 unmapped: 26927104 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479990 data_alloc: 234881024 data_used: 14008320
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:09.754929+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201318400 unmapped: 26927104 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:10.755126+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201318400 unmapped: 26927104 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:11.755329+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201318400 unmapped: 26927104 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:12.755516+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201318400 unmapped: 26927104 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:13.755758+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201318400 unmapped: 26927104 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479990 data_alloc: 234881024 data_used: 14008320
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:14.755986+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201318400 unmapped: 26927104 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:15.756167+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201318400 unmapped: 26927104 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:16.756348+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201318400 unmapped: 26927104 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:17.756545+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201318400 unmapped: 26927104 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:18.756679+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201318400 unmapped: 26927104 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479990 data_alloc: 234881024 data_used: 14008320
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:19.756881+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201318400 unmapped: 26927104 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:20.757087+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201318400 unmapped: 26927104 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:21.757313+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201318400 unmapped: 26927104 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:22.757448+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201318400 unmapped: 26927104 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:23.757701+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201326592 unmapped: 26918912 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479990 data_alloc: 234881024 data_used: 14008320
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:24.757866+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201326592 unmapped: 26918912 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:25.758019+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201326592 unmapped: 26918912 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:26.758215+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201326592 unmapped: 26918912 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:27.758359+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201326592 unmapped: 26918912 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:28.758480+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201326592 unmapped: 26918912 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479990 data_alloc: 234881024 data_used: 14008320
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:29.758605+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201326592 unmapped: 26918912 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:30.758739+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201326592 unmapped: 26918912 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:31.758887+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201326592 unmapped: 26918912 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:32.759022+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201465856 unmapped: 26779648 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: do_command 'config diff' '{prefix=config diff}'
Oct 11 04:30:06 compute-0 ceph-osd[88594]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:33.759140+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: do_command 'config show' '{prefix=config show}'
Oct 11 04:30:06 compute-0 ceph-osd[88594]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 11 04:30:06 compute-0 ceph-osd[88594]: do_command 'counter dump' '{prefix=counter dump}'
Oct 11 04:30:06 compute-0 ceph-osd[88594]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 11 04:30:06 compute-0 ceph-osd[88594]: do_command 'counter schema' '{prefix=counter schema}'
Oct 11 04:30:06 compute-0 ceph-osd[88594]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201105408 unmapped: 27140096 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:06 compute-0 ceph-osd[88594]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:06 compute-0 ceph-osd[88594]: bluestore.MempoolThread(0x56493a431b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479990 data_alloc: 234881024 data_used: 14008320
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:34.759324+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f2ea2000/0x0/0x4ffc00000, data 0x3ec5c85/0x412c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201113600 unmapped: 27131904 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: tick
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_tickets
Oct 11 04:30:06 compute-0 ceph-osd[88594]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:35.759452+0000)
Oct 11 04:30:06 compute-0 ceph-osd[88594]: prioritycache tune_memory target: 4294967296 mapped: 201129984 unmapped: 27115520 heap: 228245504 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:06 compute-0 ceph-osd[88594]: do_command 'log dump' '{prefix=log dump}'
Oct 11 04:30:06 compute-0 rsyslogd[1005]: imjournal from <np0005480847:ceph-osd>: begin to drop messages due to rate-limiting
Oct 11 04:30:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct 11 04:30:06 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/732060826' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 11 04:30:06 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:30:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Oct 11 04:30:07 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1804927931' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 11 04:30:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Oct 11 04:30:07 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3312233262' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 11 04:30:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 11 04:30:07 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/256042211' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 11 04:30:07 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2379946019' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 11 04:30:07 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/732060826' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 11 04:30:07 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1804927931' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 11 04:30:07 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3312233262' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 11 04:30:07 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/256042211' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 11 04:30:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Oct 11 04:30:07 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/913841114' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 11 04:30:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Oct 11 04:30:07 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3121151021' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 11 04:30:08 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19331 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:30:08 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:30:08 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19333 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:08 compute-0 nova_compute[259850]: 2025-10-11 04:30:08.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:30:08 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19335 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:30:08 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/913841114' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 11 04:30:08 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3121151021' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 11 04:30:08 compute-0 ceph-mon[74273]: from='client.19331 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:30:08 compute-0 ceph-mon[74273]: pgmap v2052: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:30:08 compute-0 ceph-mon[74273]: from='client.19333 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:08 compute-0 ceph-mon[74273]: from='client.19335 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:30:08 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19337 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:08 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19339 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:30:09 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19343 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:30:09 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19347 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:30:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Oct 11 04:30:09 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/376196559' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 11 04:30:09 compute-0 ceph-mon[74273]: from='client.19337 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:09 compute-0 ceph-mon[74273]: from='client.19339 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:30:09 compute-0 ceph-mon[74273]: from='client.19343 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:30:09 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/376196559' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 11 04:30:09 compute-0 nova_compute[259850]: 2025-10-11 04:30:09.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:30:09 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19349 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:30:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Oct 11 04:30:10 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1996400946' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 11 04:30:10 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:30:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:30:10 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19353 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:30:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 11 04:30:10 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3773867513' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 11 04:30:10 compute-0 ceph-mon[74273]: from='client.19347 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:30:10 compute-0 ceph-mon[74273]: from='client.19349 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:30:10 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1996400946' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 11 04:30:10 compute-0 ceph-mon[74273]: pgmap v2053: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:30:10 compute-0 ceph-mon[74273]: from='client.19353 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:30:10 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3773867513' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 11 04:30:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Oct 11 04:30:10 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/186247759' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 11 04:30:10 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 11 04:30:10 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 183 ms_handle_reset con 0x5651f658f800 session 0x5651f43183c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 183 handle_osd_map epochs [183,184], i have 183, src has [1,184]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 ms_handle_reset con 0x5651f5003c00 session 0x5651f5f44960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 ms_handle_reset con 0x5651f4db7400 session 0x5651f4cd7a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7cd0c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 51953664 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7e51000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 ms_handle_reset con 0x5651f7e51000 session 0x5651f5f443c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 ms_handle_reset con 0x5651f7cd0c00 session 0x5651f5f14b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:45.329355+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 51945472 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7e51800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 ms_handle_reset con 0x5651f7e51800 session 0x5651f5f15680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7e51800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db7400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 ms_handle_reset con 0x5651f4db7400 session 0x5651f5bbef00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:46.329505+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 heartbeat osd_stat(store_statfs(0x4f8724000/0x0/0x4ffc00000, data 0x2a2901d/0x2b39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,1])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 ms_handle_reset con 0x5651f7e51800 session 0x5651f3f0e5a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 51945472 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:47.329751+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 51945472 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 ms_handle_reset con 0x5651f5003c00 session 0x5651f43092c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7cd0c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 ms_handle_reset con 0x5651f7cd0c00 session 0x5651f5ed8f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:48.329983+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1682937 data_alloc: 218103808 data_used: 7241728
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 52248576 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:49.330234+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7e51000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 ms_handle_reset con 0x5651f7e51000 session 0x5651f584af00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 52248576 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db7400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 ms_handle_reset con 0x5651f4db7400 session 0x5651f43dcb40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:50.330397+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7cd0c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 52240384 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:51.330653+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7e51800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 heartbeat osd_stat(store_statfs(0x4f8725000/0x0/0x4ffc00000, data 0x2a2901d/0x2b39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 ms_handle_reset con 0x5651f7e51800 session 0x5651f5054780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 52207616 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:52.330763+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 ms_handle_reset con 0x5651f5002400 session 0x5651f43d50e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 110346240 unmapped: 49201152 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:53.330874+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1739056 data_alloc: 234881024 data_used: 14389248
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806ec00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 ms_handle_reset con 0x5651f806ec00 session 0x5651f4cd6d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 110346240 unmapped: 49201152 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806f000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 ms_handle_reset con 0x5651f806f000 session 0x5651f44ccb40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:54.330981+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 110346240 unmapped: 49201152 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:55.331212+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 110346240 unmapped: 49201152 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db7400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.429535866s of 11.847714424s, submitted: 121
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 ms_handle_reset con 0x5651f4db7400 session 0x5651f3f10f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:56.331363+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 110362624 unmapped: 49184768 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 heartbeat osd_stat(store_statfs(0x4f8725000/0x0/0x4ffc00000, data 0x2a2901d/0x2b39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:57.331546+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 110362624 unmapped: 49184768 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:58.331682+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1737287 data_alloc: 234881024 data_used: 14389248
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 110362624 unmapped: 49184768 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:05:59.331818+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 110362624 unmapped: 49184768 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:00.331948+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 110362624 unmapped: 49184768 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 heartbeat osd_stat(store_statfs(0x4f8725000/0x0/0x4ffc00000, data 0x2a2901d/0x2b39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:01.332112+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 110592000 unmapped: 48955392 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:02.332281+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 47742976 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:03.332463+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1781635 data_alloc: 234881024 data_used: 14512128
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 45826048 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:04.332646+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 45826048 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 heartbeat osd_stat(store_statfs(0x4f8274000/0x0/0x4ffc00000, data 0x2ecb01d/0x2fdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:05.341222+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 45826048 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 heartbeat osd_stat(store_statfs(0x4f8274000/0x0/0x4ffc00000, data 0x2ecb01d/0x2fdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:06.341428+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 heartbeat osd_stat(store_statfs(0x4f8274000/0x0/0x4ffc00000, data 0x2ecb01d/0x2fdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 45826048 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:07.341633+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 45826048 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:08.341777+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 heartbeat osd_stat(store_statfs(0x4f8274000/0x0/0x4ffc00000, data 0x2ecb01d/0x2fdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1781635 data_alloc: 234881024 data_used: 14512128
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 45826048 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 heartbeat osd_stat(store_statfs(0x4f8274000/0x0/0x4ffc00000, data 0x2ecb01d/0x2fdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:09.341944+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 45826048 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:10.342094+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 45826048 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:11.342357+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.415374756s of 15.594614029s, submitted: 68
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 ms_handle_reset con 0x5651f5002400 session 0x5651f37d7e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 46268416 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7e51800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:12.342520+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806ec00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 ms_handle_reset con 0x5651f806ec00 session 0x5651f584b680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 46252032 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806f400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:13.342810+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 ms_handle_reset con 0x5651f806f400 session 0x5651f5c1f680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1784797 data_alloc: 234881024 data_used: 14520320
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 heartbeat osd_stat(store_statfs(0x4f827f000/0x0/0x4ffc00000, data 0x2ecb101/0x2fdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806f800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 46252032 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 heartbeat osd_stat(store_statfs(0x4f827f000/0x0/0x4ffc00000, data 0x2ecb101/0x2fdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:14.342983+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 184 handle_osd_map epochs [185,185], i have 184, src has [1,185]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 185 ms_handle_reset con 0x5651f806f800 session 0x5651f5c1ef00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 46243840 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806f800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 185 ms_handle_reset con 0x5651f806f800 session 0x5651f5f470e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:15.343208+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 113311744 unmapped: 46235648 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:16.343379+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db7400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806ec00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 185 ms_handle_reset con 0x5651f806ec00 session 0x5651f3f0e000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806f400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 185 ms_handle_reset con 0x5651f5002400 session 0x5651f5eca780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 185 ms_handle_reset con 0x5651f806f400 session 0x5651f5bbeb40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 185 handle_osd_map epochs [186,186], i have 185, src has [1,186]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 186 ms_handle_reset con 0x5651f4db7400 session 0x5651f5f47860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 46202880 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 186 heartbeat osd_stat(store_statfs(0x4f827a000/0x0/0x4ffc00000, data 0x2eccce0/0x2fe3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:17.343535+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db7400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 186 handle_osd_map epochs [186,187], i have 186, src has [1,187]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 187 ms_handle_reset con 0x5651f5002400 session 0x5651f5ef9680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806ec00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 187 ms_handle_reset con 0x5651f4db7400 session 0x5651f5ef9a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 46645248 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 187 ms_handle_reset con 0x5651f806ec00 session 0x5651f37d6960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:18.343693+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1795445 data_alloc: 234881024 data_used: 14540800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806f400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 187 ms_handle_reset con 0x5651f806f400 session 0x5651f5120d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806f800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 187 ms_handle_reset con 0x5651f806f800 session 0x5651f5e943c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db7400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 187 ms_handle_reset con 0x5651f5002400 session 0x5651f44cc5a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 187 ms_handle_reset con 0x5651f4db7400 session 0x5651f5eb3860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 46612480 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806ec00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806f400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 187 ms_handle_reset con 0x5651f806f400 session 0x5651f50143c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806fc00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806e000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 187 ms_handle_reset con 0x5651f806fc00 session 0x5651f5015c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:19.343841+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6284800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 187 ms_handle_reset con 0x5651f6284800 session 0x5651f5120960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db7400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 187 ms_handle_reset con 0x5651f4db7400 session 0x5651f5120000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 187 ms_handle_reset con 0x5651f806e000 session 0x5651f5f44000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 187 heartbeat osd_stat(store_statfs(0x4f8276000/0x0/0x4ffc00000, data 0x2ed03ba/0x2fe7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 187 ms_handle_reset con 0x5651f5002400 session 0x5651f7badc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806f400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 187 ms_handle_reset con 0x5651f806f400 session 0x5651f3f10780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 187 handle_osd_map epochs [188,188], i have 187, src has [1,188]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 188 ms_handle_reset con 0x5651f806ec00 session 0x5651f5e78000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 46645248 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806ec00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 188 ms_handle_reset con 0x5651f806ec00 session 0x5651f7da70e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:20.344066+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db7400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 188 handle_osd_map epochs [189,189], i have 188, src has [1,189]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 189 ms_handle_reset con 0x5651f4db7400 session 0x5651f5f46d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 112967680 unmapped: 46579712 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:21.344308+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 189 ms_handle_reset con 0x5651f5002400 session 0x5651f5ed9c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806e000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.488204956s of 10.014543533s, submitted: 160
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 189 handle_osd_map epochs [190,190], i have 189, src has [1,190]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 190 ms_handle_reset con 0x5651f806e000 session 0x5651f5054d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 112975872 unmapped: 46571520 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:22.344716+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 112975872 unmapped: 46571520 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:23.344901+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1827967 data_alloc: 234881024 data_used: 14544896
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 190 ms_handle_reset con 0x5651f7e51800 session 0x5651f5e78f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db7400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 190 ms_handle_reset con 0x5651f4db7400 session 0x5651f4319c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 46202880 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 190 ms_handle_reset con 0x5651f5003c00 session 0x5651f5bd6780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 190 ms_handle_reset con 0x5651f7cd0c00 session 0x5651f4dd81e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:24.345225+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806e000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806ec00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 190 handle_osd_map epochs [191,191], i have 190, src has [1,191]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 191 ms_handle_reset con 0x5651f5002400 session 0x5651f50b9a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 109240320 unmapped: 50307072 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 191 heartbeat osd_stat(store_statfs(0x4f7ff1000/0x0/0x4ffc00000, data 0x3153689/0x326d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:25.345381+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 109223936 unmapped: 50323456 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:26.345566+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806f400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 191 ms_handle_reset con 0x5651f806f400 session 0x5651f3f10d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806f400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 191 ms_handle_reset con 0x5651f806f400 session 0x5651f7aa30e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 49299456 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:27.346034+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 191 heartbeat osd_stat(store_statfs(0x4f8c73000/0x0/0x4ffc00000, data 0x24cf08a/0x25e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db7400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 191 ms_handle_reset con 0x5651f4db7400 session 0x5651f4308b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 49299456 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:28.346674+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1710539 data_alloc: 234881024 data_used: 9527296
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 49299456 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 191 ms_handle_reset con 0x5651f5002400 session 0x5651f7da7680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:29.346789+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 191 ms_handle_reset con 0x5651f5003c00 session 0x5651f7da6d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 109379584 unmapped: 50167808 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7cd0c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 191 ms_handle_reset con 0x5651f7cd0c00 session 0x5651f4cade00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db7400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:30.346922+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 191 ms_handle_reset con 0x5651f4db7400 session 0x5651f4cad680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 191 ms_handle_reset con 0x5651f5002400 session 0x5651f584bc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806f400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 191 ms_handle_reset con 0x5651f806f400 session 0x5651f5e7a780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 191 handle_osd_map epochs [192,192], i have 191, src has [1,192]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 192 ms_handle_reset con 0x5651f5003c00 session 0x5651f5e78780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 110018560 unmapped: 49528832 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:31.347285+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806fc00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 110018560 unmapped: 49528832 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.876727104s of 10.337316513s, submitted: 159
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 192 ms_handle_reset con 0x5651f806fc00 session 0x5651f5f2a1e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806fc00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f8ab6000/0x0/0x4ffc00000, data 0x268ac07/0x27a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:32.347468+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db7400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 192 ms_handle_reset con 0x5651f4db7400 session 0x5651f4cade00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 192 ms_handle_reset con 0x5651f5002400 session 0x5651f7da70e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 192 handle_osd_map epochs [193,193], i have 192, src has [1,193]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 193 ms_handle_reset con 0x5651f806fc00 session 0x5651f50e9e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 110649344 unmapped: 48898048 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:33.347605+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 193 heartbeat osd_stat(store_statfs(0x4f8ab9000/0x0/0x4ffc00000, data 0x268aba5/0x27a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1678271 data_alloc: 234881024 data_used: 9543680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 193 handle_osd_map epochs [194,194], i have 193, src has [1,194]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 110731264 unmapped: 48816128 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 194 ms_handle_reset con 0x5651f5003c00 session 0x5651f3f10d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:34.347852+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 194 heartbeat osd_stat(store_statfs(0x4f91c5000/0x0/0x4ffc00000, data 0x1f7d766/0x2098000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 110731264 unmapped: 48816128 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:35.348315+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 43827200 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:36.348482+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 43483136 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:37.348686+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 43212800 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:38.348876+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1775908 data_alloc: 234881024 data_used: 9981952
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 43212800 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:39.349103+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 194 heartbeat osd_stat(store_statfs(0x4f86c3000/0x0/0x4ffc00000, data 0x2a7f2ff/0x2b9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 43212800 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:40.349360+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 43212800 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:41.349550+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 194 heartbeat osd_stat(store_statfs(0x4f86c3000/0x0/0x4ffc00000, data 0x2a7f2ff/0x2b9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806f400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 194 ms_handle_reset con 0x5651f806f400 session 0x5651f5ed9c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db7400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 194 ms_handle_reset con 0x5651f4db7400 session 0x5651f5f46d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 194 ms_handle_reset con 0x5651f5002400 session 0x5651f7badc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 194 ms_handle_reset con 0x5651f5003c00 session 0x5651f5f44000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806fc00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 194 ms_handle_reset con 0x5651f806fc00 session 0x5651f5120960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 114466816 unmapped: 45080576 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 194 ms_handle_reset con 0x5651f6c83000 session 0x5651f3f11680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 194 ms_handle_reset con 0x5651f6c83000 session 0x5651f37d7e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db7400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 194 ms_handle_reset con 0x5651f4db7400 session 0x5651f79f7860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 194 ms_handle_reset con 0x5651f5002400 session 0x5651f79f65a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.690556526s of 10.190482140s, submitted: 141
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 194 ms_handle_reset con 0x5651f5003c00 session 0x5651f5121680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:42.349688+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 114466816 unmapped: 45080576 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:43.349836+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806fc00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1875102 data_alloc: 234881024 data_used: 9981952
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 194 handle_osd_map epochs [194,195], i have 194, src has [1,195]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 195 ms_handle_reset con 0x5651f806fc00 session 0x5651f5121860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 114638848 unmapped: 44908544 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:44.349971+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db7400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 195 handle_osd_map epochs [196,196], i have 195, src has [1,196]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 45670400 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:45.350094+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 196 handle_osd_map epochs [197,197], i have 196, src has [1,197]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 197 ms_handle_reset con 0x5651f4db7400 session 0x5651f5f15a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 113885184 unmapped: 45662208 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f7a1b000/0x0/0x4ffc00000, data 0x372290b/0x3842000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 197 ms_handle_reset con 0x5651f5002400 session 0x5651f5c58d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:46.350252+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 11K writes, 46K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 3626 syncs, 3.23 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5949 writes, 22K keys, 5949 commit groups, 1.0 writes per commit group, ingest: 11.69 MB, 0.02 MB/s
                                           Interval WAL: 5949 writes, 2659 syncs, 2.24 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 113926144 unmapped: 45621248 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 197 ms_handle_reset con 0x5651f6c82c00 session 0x5651f5e79a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f7a17000/0x0/0x4ffc00000, data 0x3724488/0x3845000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:47.350387+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 197 ms_handle_reset con 0x5651f6c82800 session 0x5651f5f44d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 197 handle_osd_map epochs [198,198], i have 197, src has [1,198]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7d92000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 198 ms_handle_reset con 0x5651f6c83c00 session 0x5651f50e9860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 39165952 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db7400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 198 ms_handle_reset con 0x5651f4db7400 session 0x5651f3f0e780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:48.350507+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1973114 data_alloc: 234881024 data_used: 21782528
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 198 handle_osd_map epochs [199,199], i have 198, src has [1,199]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 199 ms_handle_reset con 0x5651f5002400 session 0x5651f5ecb2c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 36061184 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 199 ms_handle_reset con 0x5651f6c82800 session 0x5651f79fc1e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:49.350621+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 199 ms_handle_reset con 0x5651f7d92000 session 0x5651f7da6000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 199 ms_handle_reset con 0x5651f6c82c00 session 0x5651f5f2bc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 123527168 unmapped: 36020224 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:50.350742+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 199 ms_handle_reset con 0x5651f806e000 session 0x5651f7bac3c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 199 ms_handle_reset con 0x5651f806ec00 session 0x5651f7bad4a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 199 heartbeat osd_stat(store_statfs(0x4f7a0e000/0x0/0x4ffc00000, data 0x372cc6a/0x3850000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db7400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 121962496 unmapped: 37584896 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 199 ms_handle_reset con 0x5651f5002400 session 0x5651f43d2780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:51.350873+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 199 ms_handle_reset con 0x5651f4db7400 session 0x5651f79fc3c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 121962496 unmapped: 37584896 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:52.351122+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 199 heartbeat osd_stat(store_statfs(0x4f85f6000/0x0/0x4ffc00000, data 0x2987c5a/0x2aaa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 121962496 unmapped: 37584896 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:53.351399+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 199 handle_osd_map epochs [200,200], i have 199, src has [1,200]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.127789497s of 11.479012489s, submitted: 124
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1851781 data_alloc: 234881024 data_used: 18845696
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 121962496 unmapped: 37584896 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:54.351624+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 200 handle_osd_map epochs [200,201], i have 200, src has [1,201]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 201 heartbeat osd_stat(store_statfs(0x4f87b0000/0x0/0x4ffc00000, data 0x29897f3/0x2aad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 121962496 unmapped: 37584896 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 201 ms_handle_reset con 0x5651f6c82800 session 0x5651f51203c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:55.351844+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db7400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 201 ms_handle_reset con 0x5651f4db7400 session 0x5651f5f450e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 201 heartbeat osd_stat(store_statfs(0x4f87ad000/0x0/0x4ffc00000, data 0x298b1b4/0x2aaf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806e000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 201 ms_handle_reset con 0x5651f806e000 session 0x5651f5f44f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 37543936 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:56.352058+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806ec00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 201 handle_osd_map epochs [202,202], i have 201, src has [1,202]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 202 ms_handle_reset con 0x5651f806ec00 session 0x5651f7bad680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 202 ms_handle_reset con 0x5651f5002400 session 0x5651f7da6780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 37453824 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:57.352238+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 202 ms_handle_reset con 0x5651f4022000 session 0x5651f5e94f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db7400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 124870656 unmapped: 34676736 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:58.352518+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 202 ms_handle_reset con 0x5651f5002400 session 0x5651f5f47a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918239 data_alloc: 234881024 data_used: 19881984
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 125861888 unmapped: 33685504 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:06:59.352633+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126377984 unmapped: 33169408 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:00.352752+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 202 heartbeat osd_stat(store_statfs(0x4f7c1b000/0x0/0x4ffc00000, data 0x30eed85/0x3214000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126410752 unmapped: 33136640 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:01.352965+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126410752 unmapped: 33136640 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 202 heartbeat osd_stat(store_statfs(0x4f7c1b000/0x0/0x4ffc00000, data 0x30eed85/0x3214000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:02.353297+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 202 heartbeat osd_stat(store_statfs(0x4f7c1b000/0x0/0x4ffc00000, data 0x30eed85/0x3214000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126410752 unmapped: 33136640 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 202 heartbeat osd_stat(store_statfs(0x4f7c1b000/0x0/0x4ffc00000, data 0x30eed85/0x3214000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:03.353507+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1936783 data_alloc: 234881024 data_used: 19968000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126443520 unmapped: 33103872 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:04.353704+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.532958984s of 10.966660500s, submitted: 206
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 202 handle_osd_map epochs [203,203], i have 202, src has [1,203]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126623744 unmapped: 32923648 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:05.353896+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806e000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 203 ms_handle_reset con 0x5651f806e000 session 0x5651f5ecbc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126623744 unmapped: 32923648 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:06.354287+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126623744 unmapped: 32923648 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:07.354495+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126623744 unmapped: 32923648 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806ec00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:08.354636+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 203 ms_handle_reset con 0x5651f806ec00 session 0x5651f5120960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 203 heartbeat osd_stat(store_statfs(0x4f7c14000/0x0/0x4ffc00000, data 0x311284a/0x323a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1934486 data_alloc: 234881024 data_used: 19984384
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126631936 unmapped: 32915456 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:09.354872+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7d92000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 203 handle_osd_map epochs [204,204], i have 203, src has [1,204]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 204 ms_handle_reset con 0x5651f6c83c00 session 0x5651f3f0e780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 204 ms_handle_reset con 0x5651f7d92000 session 0x5651f5ed92c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 32890880 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:10.355055+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 204 ms_handle_reset con 0x5651f5002400 session 0x5651f50e8b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126672896 unmapped: 32874496 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:11.355309+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7d92000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 204 ms_handle_reset con 0x5651f7d92000 session 0x5651f5f44000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806e000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806ec00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 204 ms_handle_reset con 0x5651f806e000 session 0x5651f5bd65a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 204 handle_osd_map epochs [204,205], i have 204, src has [1,205]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 204 handle_osd_map epochs [205,205], i have 205, src has [1,205]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 205 ms_handle_reset con 0x5651f6c83c00 session 0x5651f50e85a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 205 heartbeat osd_stat(store_statfs(0x4f7c09000/0x0/0x4ffc00000, data 0x3119449/0x3245000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 32841728 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:12.355500+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 205 handle_osd_map epochs [205,206], i have 205, src has [1,206]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 206 ms_handle_reset con 0x5651f806ec00 session 0x5651f4cacd20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 206 ms_handle_reset con 0x5651f5002400 session 0x5651f79fd860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 32833536 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:13.355681+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 206 handle_osd_map epochs [206,207], i have 206, src has [1,207]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 207 handle_osd_map epochs [207,207], i have 207, src has [1,207]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1958047 data_alloc: 234881024 data_used: 20008960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 207 ms_handle_reset con 0x5651f6c83c00 session 0x5651f5f46000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126722048 unmapped: 32825344 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:14.355870+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7d92000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 207 heartbeat osd_stat(store_statfs(0x4f7bfe000/0x0/0x4ffc00000, data 0x311e704/0x324d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806e000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 207 ms_handle_reset con 0x5651f806e000 session 0x5651f44cd680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7d93800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 207 handle_osd_map epochs [207,208], i have 207, src has [1,208]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.714196205s of 10.154953003s, submitted: 114
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 208 handle_osd_map epochs [208,208], i have 208, src has [1,208]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 208 ms_handle_reset con 0x5651f7d93800 session 0x5651f4cad2c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 208 ms_handle_reset con 0x5651f7d92000 session 0x5651f5f465a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 32743424 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:15.356075+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 208 handle_osd_map epochs [209,209], i have 208, src has [1,209]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 209 ms_handle_reset con 0x5651f5002400 session 0x5651f5ecb2c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 209 ms_handle_reset con 0x5651f6c83c00 session 0x5651f5c1f0e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7d92000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 127860736 unmapped: 31686656 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:16.356246+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 209 handle_osd_map epochs [210,210], i have 209, src has [1,210]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 210 ms_handle_reset con 0x5651f7d92000 session 0x5651f79fdc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 127877120 unmapped: 31670272 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:17.356430+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 127901696 unmapped: 31645696 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:18.356613+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 210 ms_handle_reset con 0x5651f6c82c00 session 0x5651f7badc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 210 ms_handle_reset con 0x5651f5003c00 session 0x5651f7da63c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 210 ms_handle_reset con 0x5651f6c83000 session 0x5651f4cd6000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1963924 data_alloc: 234881024 data_used: 20021248
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 210 ms_handle_reset con 0x5651f5002400 session 0x5651f4319c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 118931456 unmapped: 40615936 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:19.356771+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 210 ms_handle_reset con 0x5651f5003c00 session 0x5651f5055680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 210 heartbeat osd_stat(store_statfs(0x4f884e000/0x0/0x4ffc00000, data 0x1d1c9fd/0x1e4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 210 ms_handle_reset con 0x5651f6c82c00 session 0x5651f5ed8d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 210 handle_osd_map epochs [211,211], i have 210, src has [1,211]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 118669312 unmapped: 40878080 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:20.356952+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 211 ms_handle_reset con 0x5651f6c83c00 session 0x5651f4cad0e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 211 ms_handle_reset con 0x5651f6c83c00 session 0x5651f4cd6d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 40828928 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:21.357195+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 211 handle_osd_map epochs [212,212], i have 211, src has [1,212]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 212 ms_handle_reset con 0x5651f5002400 session 0x5651f50143c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 212 heartbeat osd_stat(store_statfs(0x4f8ffe000/0x0/0x4ffc00000, data 0x1d1e5ee/0x1e50000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 212 ms_handle_reset con 0x5651f5003c00 session 0x5651f79fcd20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 40787968 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:22.357348+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 40787968 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 212 handle_osd_map epochs [213,213], i have 212, src has [1,213]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:23.357516+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 213 ms_handle_reset con 0x5651f6c82c00 session 0x5651f5e78000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 213 ms_handle_reset con 0x5651f6c83000 session 0x5651f51aaf00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1726270 data_alloc: 218103808 data_used: 7131136
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 118816768 unmapped: 40730624 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:24.357873+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 213 ms_handle_reset con 0x5651f6c83000 session 0x5651f4dd8b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 213 handle_osd_map epochs [214,214], i have 213, src has [1,214]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.593287468s of 10.037167549s, submitted: 125
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 214 ms_handle_reset con 0x5651f5002400 session 0x5651f3f0e5a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 118816768 unmapped: 40730624 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:25.358031+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 214 handle_osd_map epochs [215,215], i have 214, src has [1,215]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 215 ms_handle_reset con 0x5651f5003c00 session 0x5651f3f0e000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 118816768 unmapped: 40730624 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:26.358222+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 118816768 unmapped: 40730624 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 215 heartbeat osd_stat(store_statfs(0x4fa00f000/0x0/0x4ffc00000, data 0x1d25468/0x1e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:27.358378+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 215 ms_handle_reset con 0x5651f6c83c00 session 0x5651f5bbed20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 215 ms_handle_reset con 0x5651f6c82c00 session 0x5651f50145a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 40828928 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:28.358579+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1737187 data_alloc: 218103808 data_used: 7131136
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 215 ms_handle_reset con 0x5651f6c82c00 session 0x5651f5ecb0e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 40828928 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:29.358726+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 215 ms_handle_reset con 0x5651f5002400 session 0x5651f5eca780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 215 ms_handle_reset con 0x5651f5003c00 session 0x5651f43d4f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 40828928 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:30.358875+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 215 handle_osd_map epochs [216,216], i have 215, src has [1,216]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 216 ms_handle_reset con 0x5651f6c83000 session 0x5651f5eb3860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 118734848 unmapped: 40812544 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:31.359692+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 118734848 unmapped: 40812544 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:32.359882+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 216 heartbeat osd_stat(store_statfs(0x4fa00e000/0x0/0x4ffc00000, data 0x1d27049/0x1e5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7d92000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 216 ms_handle_reset con 0x5651f7d92000 session 0x5651f5f45e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 216 ms_handle_reset con 0x5651f6c83c00 session 0x5651f44ccb40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 216 ms_handle_reset con 0x5651f5002400 session 0x5651f44cc5a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 216 ms_handle_reset con 0x5651f6c82c00 session 0x5651f5e7a3c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 216 ms_handle_reset con 0x5651f6c83000 session 0x5651f3f5d0e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 216 ms_handle_reset con 0x5651f5003c00 session 0x5651f5e943c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 118841344 unmapped: 40706048 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:33.360050+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 216 ms_handle_reset con 0x5651f5002400 session 0x5651f5bbf2c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 216 ms_handle_reset con 0x5651f6c82c00 session 0x5651f79fd680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 216 ms_handle_reset con 0x5651f6c83000 session 0x5651f3f5cf00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 216 ms_handle_reset con 0x5651f6c83c00 session 0x5651f44cdc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7d93800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 216 ms_handle_reset con 0x5651f7d93800 session 0x5651f5c59a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 216 heartbeat osd_stat(store_statfs(0x4fa009000/0x0/0x4ffc00000, data 0x1d2d010/0x1e65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1782858 data_alloc: 218103808 data_used: 7139328
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7d93800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 216 ms_handle_reset con 0x5651f7d93800 session 0x5651f7bca3c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:34.360195+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 40583168 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:35.360304+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 40583168 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 216 handle_osd_map epochs [217,217], i have 216, src has [1,217]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.185873985s of 10.573740959s, submitted: 82
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 217 handle_osd_map epochs [218,218], i have 217, src has [1,218]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 218 ms_handle_reset con 0x5651f5002400 session 0x5651f79fcf00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:36.360466+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 118972416 unmapped: 40574976 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 218 ms_handle_reset con 0x5651f6c82c00 session 0x5651f79fc1e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:37.360605+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 118972416 unmapped: 40574976 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 218 ms_handle_reset con 0x5651f6c83000 session 0x5651f51205a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 218 heartbeat osd_stat(store_statfs(0x4f9af3000/0x0/0x4ffc00000, data 0x223d6ab/0x2379000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:38.360764+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 118972416 unmapped: 40574976 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 218 ms_handle_reset con 0x5651f6c83c00 session 0x5651f4309860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 218 ms_handle_reset con 0x5651f6c83c00 session 0x5651f3f0ef00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 218 ms_handle_reset con 0x5651f5002400 session 0x5651f7bcad20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1790292 data_alloc: 218103808 data_used: 7151616
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 218 ms_handle_reset con 0x5651f6c82c00 session 0x5651f7bac780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 218 ms_handle_reset con 0x5651f6c83000 session 0x5651f7bacb40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:39.360908+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 40525824 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7d93800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806e000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 218 ms_handle_reset con 0x5651f6c5d400 session 0x5651f5055c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:40.361024+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 119054336 unmapped: 40493056 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 218 handle_osd_map epochs [218,219], i have 218, src has [1,219]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 219 ms_handle_reset con 0x5651f5002400 session 0x5651f43dde00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:41.361200+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 40419328 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5c400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 219 ms_handle_reset con 0x5651f6c5c400 session 0x5651f5bbfa40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:42.361325+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 39862272 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5dc00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 219 ms_handle_reset con 0x5651f6c5dc00 session 0x5651f3f5de00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5d000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 219 ms_handle_reset con 0x5651f6c5d000 session 0x5651f5015c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 219 heartbeat osd_stat(store_statfs(0x4f9af3000/0x0/0x4ffc00000, data 0x223f21a/0x237b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:43.361462+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 39854080 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1835015 data_alloc: 234881024 data_used: 12472320
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:44.361579+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 39854080 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 219 ms_handle_reset con 0x5651f4cef800 session 0x5651f4cd7e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5cc00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 219 handle_osd_map epochs [219,220], i have 219, src has [1,220]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 220 heartbeat osd_stat(store_statfs(0x4f9af4000/0x0/0x4ffc00000, data 0x223f20a/0x237a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:45.361715+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 39837696 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:46.361885+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 39837696 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:47.362050+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 39837696 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:48.362198+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 39837696 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1839189 data_alloc: 234881024 data_used: 12480512
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:49.362351+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 39837696 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5dc00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 220 ms_handle_reset con 0x5651f6c5dc00 session 0x5651f5bbf4a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5d000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 220 ms_handle_reset con 0x5651f6c5d000 session 0x5651f43dcb40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:50.362536+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 121241600 unmapped: 38305792 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 220 heartbeat osd_stat(store_statfs(0x4f9af0000/0x0/0x4ffc00000, data 0x2240c6d/0x237d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.045801163s of 15.334266663s, submitted: 113
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:51.362699+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5c400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 125575168 unmapped: 33972224 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 220 ms_handle_reset con 0x5651f6c5c400 session 0x5651f5f14960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:52.362864+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 33734656 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 220 heartbeat osd_stat(store_statfs(0x4f8ed4000/0x0/0x4ffc00000, data 0x2e57c6d/0x2f94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:53.363074+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126083072 unmapped: 33464320 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 220 heartbeat osd_stat(store_statfs(0x4f8ebd000/0x0/0x4ffc00000, data 0x2e66c6d/0x2fa3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1944389 data_alloc: 234881024 data_used: 14098432
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:54.363284+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126099456 unmapped: 33447936 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:55.363451+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126099456 unmapped: 33447936 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4cef800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 220 ms_handle_reset con 0x5651f4cef800 session 0x5651f5f45a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:56.363633+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126099456 unmapped: 33447936 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 220 ms_handle_reset con 0x5651f5002400 session 0x5651f5e94f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4cef800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 220 ms_handle_reset con 0x5651f4cef800 session 0x5651f79f7a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 220 heartbeat osd_stat(store_statfs(0x4f8eca000/0x0/0x4ffc00000, data 0x2e66c96/0x2fa4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:57.363951+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126296064 unmapped: 33251328 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:58.364267+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126296064 unmapped: 33251328 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5c400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 220 ms_handle_reset con 0x5651f6c5c400 session 0x5651f79f74a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5d000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 220 handle_osd_map epochs [220,221], i have 220, src has [1,221]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2030754 data_alloc: 234881024 data_used: 14118912
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5dc00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:07:59.365235+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 221 ms_handle_reset con 0x5651f6c82c00 session 0x5651f4cac780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126533632 unmapped: 33013760 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 221 handle_osd_map epochs [221,222], i have 221, src has [1,222]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 222 ms_handle_reset con 0x5651f6c5dc00 session 0x5651f43d50e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 222 ms_handle_reset con 0x5651f6c5d000 session 0x5651f79f6780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:00.365374+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126550016 unmapped: 32997376 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:01.365537+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126550016 unmapped: 32997376 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4cef800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.978602409s of 10.932939529s, submitted: 294
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 222 ms_handle_reset con 0x5651f4cef800 session 0x5651f5f46b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:02.365670+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126558208 unmapped: 32989184 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 222 heartbeat osd_stat(store_statfs(0x4f7b31000/0x0/0x4ffc00000, data 0x41f849d/0x433b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:03.365808+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126566400 unmapped: 32980992 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5c400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 222 ms_handle_reset con 0x5651f6c5c400 session 0x5651f5f47a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5dc00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 222 ms_handle_reset con 0x5651f6c83000 session 0x5651f5054000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 222 ms_handle_reset con 0x5651f6c82c00 session 0x5651f5e943c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 222 ms_handle_reset con 0x5651f6c83c00 session 0x5651f5e94f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2175182 data_alloc: 234881024 data_used: 14118912
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 222 handle_osd_map epochs [223,223], i have 222, src has [1,223]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 223 ms_handle_reset con 0x5651f6c83800 session 0x5651f5ed8780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:04.366098+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4cef800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 223 ms_handle_reset con 0x5651f4cef800 session 0x5651f50e8d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5c400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 223 ms_handle_reset con 0x5651f6c5c400 session 0x5651f5054d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126214144 unmapped: 33333248 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 223 ms_handle_reset con 0x5651f6c5dc00 session 0x5651f5f46000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 223 handle_osd_map epochs [223,224], i have 223, src has [1,224]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 224 ms_handle_reset con 0x5651f6c82c00 session 0x5651f50e92c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:05.366305+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4cef800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 224 ms_handle_reset con 0x5651f4cef800 session 0x5651f79f61e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 224 ms_handle_reset con 0x5651f6c83c00 session 0x5651f3f5de00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126222336 unmapped: 33325056 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 224 heartbeat osd_stat(store_statfs(0x4f72d4000/0x0/0x4ffc00000, data 0x4a510e8/0x4b98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5c400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:06.366475+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126222336 unmapped: 33325056 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 224 handle_osd_map epochs [225,225], i have 224, src has [1,225]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:07.366609+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126246912 unmapped: 33300480 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 225 ms_handle_reset con 0x5651f6c5c400 session 0x5651f4dd94a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5dc00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 225 ms_handle_reset con 0x5651f6c5dc00 session 0x5651f37d7680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 225 heartbeat osd_stat(store_statfs(0x4f72d2000/0x0/0x4ffc00000, data 0x4a52cb9/0x4b9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:08.366799+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126255104 unmapped: 33292288 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2187465 data_alloc: 234881024 data_used: 14131200
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:09.366986+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126255104 unmapped: 33292288 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 225 handle_osd_map epochs [225,226], i have 225, src has [1,226]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:10.367202+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 127303680 unmapped: 32243712 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 226 ms_handle_reset con 0x5651f7d93800 session 0x5651f5f2b4a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 226 ms_handle_reset con 0x5651f806e000 session 0x5651f5f47680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7d93800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:11.367400+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 226 ms_handle_reset con 0x5651f7d93800 session 0x5651f3f5cf00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 121700352 unmapped: 37847040 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 226 heartbeat osd_stat(store_statfs(0x4f840a000/0x0/0x4ffc00000, data 0x391b1bb/0x3a62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:12.367600+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 121700352 unmapped: 37847040 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 226 heartbeat osd_stat(store_statfs(0x4f840a000/0x0/0x4ffc00000, data 0x391b1bb/0x3a62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:13.367835+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 121700352 unmapped: 37847040 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2014257 data_alloc: 218103808 data_used: 7979008
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:14.368001+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 121700352 unmapped: 37847040 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:15.368220+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 121700352 unmapped: 37847040 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4cef800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.744064331s of 14.101218224s, submitted: 109
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:16.368400+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 121700352 unmapped: 37847040 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 226 handle_osd_map epochs [226,227], i have 226, src has [1,227]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 227 ms_handle_reset con 0x5651f4cef800 session 0x5651f7da7e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5c400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 227 ms_handle_reset con 0x5651f6c5c400 session 0x5651f5eb23c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:17.368563+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 121700352 unmapped: 37847040 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:18.368747+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 121700352 unmapped: 37847040 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5dc00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 227 ms_handle_reset con 0x5651f6c5dc00 session 0x5651f5f2b680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 227 heartbeat osd_stat(store_statfs(0x4f8409000/0x0/0x4ffc00000, data 0x391cd2a/0x3a64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4cef800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2021925 data_alloc: 218103808 data_used: 7979008
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:19.368950+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 121913344 unmapped: 37634048 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5c400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7d93800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:20.369117+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 121913344 unmapped: 37634048 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 227 ms_handle_reset con 0x5651f4cef800 session 0x5651f79fc960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806e000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 227 ms_handle_reset con 0x5651f806e000 session 0x5651f3f0f0e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 227 ms_handle_reset con 0x5651f6c83c00 session 0x5651f43dd4a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:21.369295+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 36110336 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 227 ms_handle_reset con 0x5651f6c82c00 session 0x5651f5f2ab40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 227 ms_handle_reset con 0x5651f6c83800 session 0x5651f5ed9c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:22.369405+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 34455552 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 227 ms_handle_reset con 0x5651f6c83800 session 0x5651f44210e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:23.369558+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 34455552 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2153201 data_alloc: 234881024 data_used: 16113664
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:24.369729+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 227 heartbeat osd_stat(store_statfs(0x4f7bb8000/0x0/0x4ffc00000, data 0x416cd9c/0x42b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 125124608 unmapped: 34422784 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4cef800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 227 handle_osd_map epochs [228,228], i have 227, src has [1,228]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 228 ms_handle_reset con 0x5651f4cef800 session 0x5651f4309e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 228 ms_handle_reset con 0x5651f6c82c00 session 0x5651f44cc000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:25.369888+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 129835008 unmapped: 29712384 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 228 ms_handle_reset con 0x5651f6c83c00 session 0x5651f44ccb40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806e000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 228 ms_handle_reset con 0x5651f806e000 session 0x5651f5c1e960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:26.370033+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126697472 unmapped: 32849920 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:27.370311+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806e000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 228 ms_handle_reset con 0x5651f806e000 session 0x5651f50154a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4cef800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.121845245s of 11.547819138s, submitted: 130
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 32833536 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 228 handle_osd_map epochs [229,229], i have 228, src has [1,229]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 229 ms_handle_reset con 0x5651f6c82c00 session 0x5651f5bbfc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:28.374237+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 229 ms_handle_reset con 0x5651f6c83800 session 0x5651f5ef9a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 127705088 unmapped: 31842304 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 229 handle_osd_map epochs [230,230], i have 229, src has [1,230]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 230 ms_handle_reset con 0x5651f6c83c00 session 0x5651f79f7e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 230 ms_handle_reset con 0x5651f4cef800 session 0x5651f5bbf860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2327337 data_alloc: 234881024 data_used: 16121856
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:29.374357+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 127721472 unmapped: 31825920 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 230 heartbeat osd_stat(store_statfs(0x4f67e6000/0x0/0x4ffc00000, data 0x5536fbd/0x5686000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:30.374521+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 230 ms_handle_reset con 0x5651f6c83800 session 0x5651f7da61e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 136830976 unmapped: 22716416 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:31.374694+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 139173888 unmapped: 20373504 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806e000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 230 ms_handle_reset con 0x5651f6c83c00 session 0x5651f4318780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:32.374798+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f74b0c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6524c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 144031744 unmapped: 15515648 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 230 ms_handle_reset con 0x5651f74b0c00 session 0x5651f5ef8000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 230 heartbeat osd_stat(store_statfs(0x4f5f97000/0x0/0x4ffc00000, data 0x61dbfbd/0x5ed7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 230 ms_handle_reset con 0x5651f6524c00 session 0x5651f4319a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 230 ms_handle_reset con 0x5651f806e000 session 0x5651f5e945a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:33.374938+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 18997248 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 230 handle_osd_map epochs [231,231], i have 230, src has [1,231]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2574613 data_alloc: 234881024 data_used: 25808896
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:34.375269+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 231 heartbeat osd_stat(store_statfs(0x4f5597000/0x0/0x4ffc00000, data 0x6bdbfbd/0x68d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 140607488 unmapped: 18939904 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6524c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 231 handle_osd_map epochs [232,232], i have 231, src has [1,232]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 232 ms_handle_reset con 0x5651f6524c00 session 0x5651f44cc1e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:35.375413+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 139280384 unmapped: 20267008 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:36.375551+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 139280384 unmapped: 20267008 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 232 handle_osd_map epochs [233,233], i have 232, src has [1,233]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:37.375690+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 233 ms_handle_reset con 0x5651f6c83800 session 0x5651f5f2a3c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.376515388s of 10.001482964s, submitted: 146
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 139280384 unmapped: 20267008 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 233 handle_osd_map epochs [233,234], i have 233, src has [1,234]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 234 ms_handle_reset con 0x5651f6c83c00 session 0x5651f44cda40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f74b0c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 234 ms_handle_reset con 0x5651f6c5c400 session 0x5651f4cd6000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 234 ms_handle_reset con 0x5651f7d93800 session 0x5651f3f5c000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:38.376097+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 234 ms_handle_reset con 0x5651f74b0c00 session 0x5651f5f472c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6524c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 139296768 unmapped: 20250624 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 234 ms_handle_reset con 0x5651f6524c00 session 0x5651f79f63c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2358641 data_alloc: 234881024 data_used: 25702400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5c400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 234 ms_handle_reset con 0x5651f6c5c400 session 0x5651f5c59a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:39.376334+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 147546112 unmapped: 12001280 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 234 heartbeat osd_stat(store_statfs(0x4f6ddd000/0x0/0x4ffc00000, data 0x5370e83/0x5070000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:40.376556+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 15630336 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 234 ms_handle_reset con 0x5651f6c83c00 session 0x5651f5e78000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:41.376824+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806e000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdf400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 144596992 unmapped: 14950400 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bc7000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 234 handle_osd_map epochs [235,235], i have 234, src has [1,235]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 235 ms_handle_reset con 0x5651f806e000 session 0x5651f5e790e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 235 ms_handle_reset con 0x5651f6bc7000 session 0x5651f584bc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6524c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5c400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f74b0c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 235 ms_handle_reset con 0x5651f6524c00 session 0x5651f5e79860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:42.376975+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 235 ms_handle_reset con 0x5651f6bdf400 session 0x5651f3f0e960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 141090816 unmapped: 18456576 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 235 ms_handle_reset con 0x5651f6c83c00 session 0x5651f79fcb40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 235 ms_handle_reset con 0x5651f74b0c00 session 0x5651f5121e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 235 handle_osd_map epochs [236,236], i have 235, src has [1,236]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 236 ms_handle_reset con 0x5651f6c5c400 session 0x5651f3f5c780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 236 ms_handle_reset con 0x5651f6c83800 session 0x5651f79f6f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:43.377189+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 138141696 unmapped: 21405696 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 236 heartbeat osd_stat(store_statfs(0x4f7778000/0x0/0x4ffc00000, data 0x2ff176b/0x3145000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2103835 data_alloc: 234881024 data_used: 17108992
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:44.377389+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6524c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 236 ms_handle_reset con 0x5651f6524c00 session 0x5651f5f46000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 138141696 unmapped: 21405696 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:45.377527+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 138141696 unmapped: 21405696 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 236 handle_osd_map epochs [237,237], i have 236, src has [1,237]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bc7000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 236 handle_osd_map epochs [237,237], i have 237, src has [1,237]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdf400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 237 ms_handle_reset con 0x5651f6c83c00 session 0x5651f79f6780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 237 ms_handle_reset con 0x5651f6bdf400 session 0x5651f5f143c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:46.377700+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6524c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdf400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 237 ms_handle_reset con 0x5651f6524c00 session 0x5651f5bd63c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 138158080 unmapped: 21389312 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 237 ms_handle_reset con 0x5651f6bdf400 session 0x5651f79f7e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5c400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 237 handle_osd_map epochs [238,238], i have 237, src has [1,238]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 238 ms_handle_reset con 0x5651f6c5c400 session 0x5651f4319e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 238 heartbeat osd_stat(store_statfs(0x4f7774000/0x0/0x4ffc00000, data 0x2ff3272/0x3149000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:47.377886+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 238 ms_handle_reset con 0x5651f6bc7000 session 0x5651f5f47680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 138436608 unmapped: 21110784 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 238 heartbeat osd_stat(store_statfs(0x4f774e000/0x0/0x4ffc00000, data 0x3016e0b/0x316e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 238 heartbeat osd_stat(store_statfs(0x4f774e000/0x0/0x4ffc00000, data 0x3016e0b/0x316e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:48.378038+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 138436608 unmapped: 21110784 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 238 heartbeat osd_stat(store_statfs(0x4f774e000/0x0/0x4ffc00000, data 0x3016e0b/0x316e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 238 handle_osd_map epochs [238,239], i have 238, src has [1,239]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.781530380s of 11.704752922s, submitted: 288
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 239 ms_handle_reset con 0x5651f6c83800 session 0x5651f79f7680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6524c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2115889 data_alloc: 234881024 data_used: 17129472
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 239 heartbeat osd_stat(store_statfs(0x4f774e000/0x0/0x4ffc00000, data 0x3016e0b/0x316e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:49.378205+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 239 ms_handle_reset con 0x5651f6524c00 session 0x5651f5eb21e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 138436608 unmapped: 21110784 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:50.378331+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bc7000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 138436608 unmapped: 21110784 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 239 heartbeat osd_stat(store_statfs(0x4f774d000/0x0/0x4ffc00000, data 0x30189dc/0x3170000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 239 handle_osd_map epochs [240,240], i have 239, src has [1,240]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:51.378545+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 240 ms_handle_reset con 0x5651f6bc7000 session 0x5651f4318780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 138436608 unmapped: 21110784 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdf400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5c400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 240 handle_osd_map epochs [240,241], i have 240, src has [1,241]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 241 ms_handle_reset con 0x5651f6c5c400 session 0x5651f5e781e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:52.378707+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 241 ms_handle_reset con 0x5651f6c83c00 session 0x5651f5e7b4a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 138076160 unmapped: 21471232 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:53.378881+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 138076160 unmapped: 21471232 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6845800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 241 ms_handle_reset con 0x5651f6845800 session 0x5651f5c1f860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2120699 data_alloc: 234881024 data_used: 17129472
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:54.379078+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 138076160 unmapped: 21471232 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 241 handle_osd_map epochs [242,242], i have 241, src has [1,242]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:55.379235+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 242 heartbeat osd_stat(store_statfs(0x4f7741000/0x0/0x4ffc00000, data 0x3022bfd/0x317c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 138076160 unmapped: 21471232 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 242 ms_handle_reset con 0x5651f6bdf400 session 0x5651f3f0e960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 242 heartbeat osd_stat(store_statfs(0x4f7741000/0x0/0x4ffc00000, data 0x3022bfd/0x317c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:56.379403+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 138076160 unmapped: 21471232 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 242 ms_handle_reset con 0x5651f6c82c00 session 0x5651f50e9c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 242 ms_handle_reset con 0x5651f6c83000 session 0x5651f584b680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:57.379566+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6524c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 131506176 unmapped: 28041216 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 242 ms_handle_reset con 0x5651f6524c00 session 0x5651f5f45e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:58.379819+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 131506176 unmapped: 28041216 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:08:59.380518+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1906347 data_alloc: 218103808 data_used: 7524352
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 131506176 unmapped: 28041216 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 242 heartbeat osd_stat(store_statfs(0x4f8a11000/0x0/0x4ffc00000, data 0x1d53b9b/0x1eac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:00.380938+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 131506176 unmapped: 28041216 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:01.381756+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 131506176 unmapped: 28041216 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.755709648s of 13.188314438s, submitted: 178
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 242 ms_handle_reset con 0x5651f6bcb400 session 0x5651f5c58d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:02.382002+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 242 ms_handle_reset con 0x5651f6bcb400 session 0x5651f5eca000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6524c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 242 ms_handle_reset con 0x5651f6524c00 session 0x5651f5f15a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6284800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 242 ms_handle_reset con 0x5651f6284800 session 0x5651f5f154a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6285000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 242 ms_handle_reset con 0x5651f6285000 session 0x5651f584af00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bda800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 242 ms_handle_reset con 0x5651f6bda800 session 0x5651f584bc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f658f800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 28770304 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 242 ms_handle_reset con 0x5651f658f800 session 0x5651f584b4a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bda800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 242 ms_handle_reset con 0x5651f6bda800 session 0x5651f7da70e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6284800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 242 ms_handle_reset con 0x5651f6284800 session 0x5651f5ef85a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6285000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 242 ms_handle_reset con 0x5651f6285000 session 0x5651f4319860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6524c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 242 ms_handle_reset con 0x5651f6524c00 session 0x5651f79fcb40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6524c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 242 ms_handle_reset con 0x5651f6524c00 session 0x5651f79fd680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:03.383289+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 132743168 unmapped: 26804224 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:04.383670+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1994696 data_alloc: 218103808 data_used: 7520256
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6284800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6285000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 132743168 unmapped: 26804224 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 242 ms_handle_reset con 0x5651f6285000 session 0x5651f5e7b680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:05.383812+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 132743168 unmapped: 26804224 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 242 handle_osd_map epochs [243,243], i have 242, src has [1,243]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f658f800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bda800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 243 ms_handle_reset con 0x5651f658f800 session 0x5651f3f5c780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 243 heartbeat osd_stat(store_statfs(0x4f7f98000/0x0/0x4ffc00000, data 0x27ccbab/0x2926000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:06.384460+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 132636672 unmapped: 26910720 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 243 handle_osd_map epochs [243,244], i have 243, src has [1,244]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 244 handle_osd_map epochs [244,244], i have 244, src has [1,244]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 244 ms_handle_reset con 0x5651f6bda800 session 0x5651f51205a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 244 ms_handle_reset con 0x5651f6284800 session 0x5651f5eca960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:07.384618+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 132636672 unmapped: 26910720 heap: 159547392 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6284800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6285000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6524c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 244 ms_handle_reset con 0x5651f6524c00 session 0x5651f5bbe5a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 244 ms_handle_reset con 0x5651f6285000 session 0x5651f5c58d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f658f800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:08.384769+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 132653056 unmapped: 56311808 heap: 188964864 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bda800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 244 ms_handle_reset con 0x5651f6bda800 session 0x5651f5054d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 244 heartbeat osd_stat(store_statfs(0x4f5f90000/0x0/0x4ffc00000, data 0x47d0307/0x492d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:09.386088+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2543629 data_alloc: 218103808 data_used: 7532544
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 133144576 unmapped: 55820288 heap: 188964864 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be5400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:10.386212+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 141713408 unmapped: 47251456 heap: 188964864 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:11.386358+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 137691136 unmapped: 51273728 heap: 188964864 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:12.386541+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.546722889s of 10.087677956s, submitted: 149
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 48259072 heap: 188964864 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 244 ms_handle_reset con 0x5651f658f800 session 0x5651f5f45e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 244 ms_handle_reset con 0x5651f6284800 session 0x5651f5ecb2c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:13.386741+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 136626176 unmapped: 52338688 heap: 188964864 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:14.386999+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6285000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 244 heartbeat osd_stat(store_statfs(0x4e1b66000/0x0/0x4ffc00000, data 0x18bfa32a/0x18d58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [1])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4584740 data_alloc: 234881024 data_used: 14319616
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 136634368 unmapped: 52330496 heap: 188964864 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 244 handle_osd_map epochs [244,245], i have 244, src has [1,245]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 245 ms_handle_reset con 0x5651f6285000 session 0x5651f50e9c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:15.387237+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 136691712 unmapped: 52273152 heap: 188964864 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6524c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 245 ms_handle_reset con 0x5651f6524c00 session 0x5651f5eb21e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:16.387401+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 136691712 unmapped: 52273152 heap: 188964864 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:17.387552+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 245 heartbeat osd_stat(store_statfs(0x4e1b61000/0x0/0x4ffc00000, data 0x18bfbf5d/0x18d5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 136691712 unmapped: 52273152 heap: 188964864 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f658f800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 245 ms_handle_reset con 0x5651f658f800 session 0x5651f5bd7860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:18.387854+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bda800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 245 ms_handle_reset con 0x5651f6bda800 session 0x5651f5bbeb40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be4000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 52256768 heap: 188964864 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 245 handle_osd_map epochs [246,246], i have 245, src has [1,246]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be4400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be4800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 246 ms_handle_reset con 0x5651f6be4800 session 0x5651f5121e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:19.388081+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4598573 data_alloc: 234881024 data_used: 14344192
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 136757248 unmapped: 52207616 heap: 188964864 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 246 handle_osd_map epochs [247,247], i have 246, src has [1,247]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 247 ms_handle_reset con 0x5651f6be4400 session 0x5651f3f105a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6285000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6524c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 247 ms_handle_reset con 0x5651f6285000 session 0x5651f5ed8f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 247 ms_handle_reset con 0x5651f6be4000 session 0x5651f5bbf860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 247 ms_handle_reset con 0x5651f6524c00 session 0x5651f51203c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f658f800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:20.388432+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 136765440 unmapped: 52199424 heap: 188964864 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 247 handle_osd_map epochs [248,248], i have 247, src has [1,248]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bda800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 248 ms_handle_reset con 0x5651f6bda800 session 0x5651f5ed8780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6285000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:21.388623+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6524c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 248 ms_handle_reset con 0x5651f6524c00 session 0x5651f4cd7e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be4000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 142393344 unmapped: 46571520 heap: 188964864 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 248 handle_osd_map epochs [249,249], i have 248, src has [1,249]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 249 ms_handle_reset con 0x5651f6285000 session 0x5651f43194a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be4400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be5800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 249 ms_handle_reset con 0x5651f658f800 session 0x5651f4cacf00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 249 ms_handle_reset con 0x5651f6be5800 session 0x5651f79fcb40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:22.388770+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.392299652s of 10.221873283s, submitted: 169
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 141426688 unmapped: 47538176 heap: 188964864 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 249 handle_osd_map epochs [249,250], i have 249, src has [1,250]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 250 ms_handle_reset con 0x5651f6be4400 session 0x5651f3f0e5a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6285000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 250 ms_handle_reset con 0x5651f6285000 session 0x5651f5eca000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 250 ms_handle_reset con 0x5651f6be4000 session 0x5651f43da3c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 250 heartbeat osd_stat(store_statfs(0x4e0d92000/0x0/0x4ffc00000, data 0x199be660/0x19b2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 250 ms_handle_reset con 0x5651f4c3d400 session 0x5651f5f14b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:23.388964+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 141426688 unmapped: 47538176 heap: 188964864 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6524c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 250 ms_handle_reset con 0x5651f6524c00 session 0x5651f5ed92c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 250 heartbeat osd_stat(store_statfs(0x4e0d55000/0x0/0x4ffc00000, data 0x199f966e/0x19b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:24.389146+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4738581 data_alloc: 234881024 data_used: 15925248
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 141443072 unmapped: 47521792 heap: 188964864 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f658f800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 250 handle_osd_map epochs [251,251], i have 250, src has [1,251]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be5800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 251 ms_handle_reset con 0x5651f6be5800 session 0x5651f584b4a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 251 ms_handle_reset con 0x5651f658f800 session 0x5651f5bd7e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6285000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:25.389432+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 146382848 unmapped: 55189504 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6524c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be4000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 251 ms_handle_reset con 0x5651f6be4000 session 0x5651f44cc1e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:26.389544+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 152477696 unmapped: 49094656 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 251 handle_osd_map epochs [251,252], i have 251, src has [1,252]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 251 handle_osd_map epochs [252,252], i have 252, src has [1,252]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3c000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 252 ms_handle_reset con 0x5651f4c3c000 session 0x5651f43d50e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:27.389664+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 145022976 unmapped: 56549376 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3c800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:28.389854+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 253 ms_handle_reset con 0x5651f4c3c800 session 0x5651f4cd74a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 253 ms_handle_reset con 0x5651f6524c00 session 0x5651f7badc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 155181056 unmapped: 46391296 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3c000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:29.390015+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6870768 data_alloc: 234881024 data_used: 15941632
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 157089792 unmapped: 44482560 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 253 ms_handle_reset con 0x5651f4c3d400 session 0x5651f5bd74a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3c800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 253 ms_handle_reset con 0x5651f4c3c800 session 0x5651f5055e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 253 handle_osd_map epochs [254,254], i have 253, src has [1,254]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 254 ms_handle_reset con 0x5651f6285000 session 0x5651f5e781e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 254 heartbeat osd_stat(store_statfs(0x4cdd2e000/0x0/0x4ffc00000, data 0x2ca1ed11/0x2cb90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:30.390174+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 254 ms_handle_reset con 0x5651f4c3c000 session 0x5651f5054960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f658f800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 254 ms_handle_reset con 0x5651f658f800 session 0x5651f5015680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 148742144 unmapped: 52830208 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3c000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:31.390376+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 255 ms_handle_reset con 0x5651f4c3c000 session 0x5651f7da7a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3c800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 255 ms_handle_reset con 0x5651f4c3c800 session 0x5651f7da61e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 148750336 unmapped: 52822016 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:32.390519+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 148758528 unmapped: 52813824 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:33.390652+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.845270634s of 10.774974823s, submitted: 459
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 256 ms_handle_reset con 0x5651f4c3d400 session 0x5651f7da6b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6285000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 256 ms_handle_reset con 0x5651f6285000 session 0x5651f7bac960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 256 heartbeat osd_stat(store_statfs(0x4cb128000/0x0/0x4ffc00000, data 0x2f6235fd/0x2f794000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 148815872 unmapped: 52756480 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:34.390814+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be4000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 257 ms_handle_reset con 0x5651f6be4000 session 0x5651f5f44960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7196856 data_alloc: 234881024 data_used: 15958016
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3c000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 257 handle_osd_map epochs [258,258], i have 257, src has [1,258]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 258 ms_handle_reset con 0x5651f4c3c000 session 0x5651f5ecba40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 148889600 unmapped: 52682752 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:35.391037+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 149012480 unmapped: 52559872 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:36.391221+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3c800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 258 ms_handle_reset con 0x5651f4c3c800 session 0x5651f5ed94a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 258 ms_handle_reset con 0x5651f4c3d400 session 0x5651f5121680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 149471232 unmapped: 52101120 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6285000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 258 ms_handle_reset con 0x5651f6285000 session 0x5651f5e94f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:37.391391+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 258 handle_osd_map epochs [258,259], i have 258, src has [1,259]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f44400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f44800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 259 ms_handle_reset con 0x5651f3f44800 session 0x5651f3f0e5a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 259 heartbeat osd_stat(store_statfs(0x4cb116000/0x0/0x4ffc00000, data 0x2f632931/0x2f7a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 52019200 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3c000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 259 ms_handle_reset con 0x5651f4c3c000 session 0x5651f5c58d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 259 heartbeat osd_stat(store_statfs(0x4cb116000/0x0/0x4ffc00000, data 0x2f632931/0x2f7a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:38.391627+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 260 ms_handle_reset con 0x5651f3f44400 session 0x5651f5f472c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 51986432 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:39.391850+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7201671 data_alloc: 234881024 data_used: 16494592
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 51986432 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:40.392074+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3c800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 260 ms_handle_reset con 0x5651f4c3c800 session 0x5651f7bacd20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 260 ms_handle_reset con 0x5651f4c3d400 session 0x5651f43db0e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6285000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 148283392 unmapped: 53288960 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 260 ms_handle_reset con 0x5651f6285000 session 0x5651f5f46b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f44400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 260 ms_handle_reset con 0x5651f3f44400 session 0x5651f5015c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:41.392269+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3c000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 260 handle_osd_map epochs [261,261], i have 260, src has [1,261]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3c800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 261 ms_handle_reset con 0x5651f4c3c800 session 0x5651f5bbe5a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 148316160 unmapped: 53256192 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:42.392431+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 261 heartbeat osd_stat(store_statfs(0x4cac56000/0x0/0x4ffc00000, data 0x2faee0d5/0x2fc67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 261 handle_osd_map epochs [261,262], i have 261, src has [1,262]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 262 ms_handle_reset con 0x5651f4c3d400 session 0x5651f43d4960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 262 ms_handle_reset con 0x5651f4c3c000 session 0x5651f5014b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 148324352 unmapped: 53248000 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f45800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:43.392583+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f652c400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.771476746s of 10.076219559s, submitted: 99
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 262 ms_handle_reset con 0x5651f652c400 session 0x5651f3f0f860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 148348928 unmapped: 53223424 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f44400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 262 ms_handle_reset con 0x5651f3f44400 session 0x5651f726fa40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:44.392752+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3c000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7257601 data_alloc: 234881024 data_used: 16506880
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3c800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdf400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 263 ms_handle_reset con 0x5651f4db6400 session 0x5651f7bcb0e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 264 ms_handle_reset con 0x5651f4c3c800 session 0x5651f3f5d4a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 264 ms_handle_reset con 0x5651f6bdf400 session 0x5651f5c1fc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 264 ms_handle_reset con 0x5651f4c3c000 session 0x5651f5e7a780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 150487040 unmapped: 51085312 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:45.393019+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3c000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 264 handle_osd_map epochs [264,265], i have 264, src has [1,265]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 264 handle_osd_map epochs [265,265], i have 265, src has [1,265]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 265 heartbeat osd_stat(store_statfs(0x4c9401000/0x0/0x4ffc00000, data 0x3019c910/0x3031b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 265 ms_handle_reset con 0x5651f4c3d400 session 0x5651f5eb23c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 150519808 unmapped: 51052544 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 265 ms_handle_reset con 0x5651f3f45800 session 0x5651f79fc960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:46.393214+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 266 ms_handle_reset con 0x5651f4c3c000 session 0x5651f4cd7e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 150544384 unmapped: 51027968 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:47.393383+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f44400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 150544384 unmapped: 51027968 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3c800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:48.393575+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 266 ms_handle_reset con 0x5651f4c3c800 session 0x5651f3f5c1e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 267 ms_handle_reset con 0x5651f4db6400 session 0x5651f5f2ba40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 150560768 unmapped: 51011584 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:49.393758+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7323628 data_alloc: 234881024 data_used: 16515072
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f45800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 267 handle_osd_map epochs [267,268], i have 267, src has [1,268]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 267 handle_osd_map epochs [268,268], i have 268, src has [1,268]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 150593536 unmapped: 50978816 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3c000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 268 ms_handle_reset con 0x5651f4c3c000 session 0x5651f5f46f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 268 ms_handle_reset con 0x5651f3f45800 session 0x5651f5f141e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:50.393940+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 268 handle_osd_map epochs [268,269], i have 268, src has [1,269]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3c800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 269 ms_handle_reset con 0x5651f4c3d400 session 0x5651f5eca780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 150634496 unmapped: 50937856 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 269 ms_handle_reset con 0x5651f4db6400 session 0x5651f50143c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:51.394249+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 269 heartbeat osd_stat(store_statfs(0x4c93ec000/0x0/0x4ffc00000, data 0x301aa54a/0x30330000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 150650880 unmapped: 50921472 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 270 ms_handle_reset con 0x5651f4c3c800 session 0x5651f5f44d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f45800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 270 ms_handle_reset con 0x5651f3f45800 session 0x5651f5f2b860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:52.394391+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 150650880 unmapped: 50921472 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:53.394507+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3c000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.907543182s of 10.522663116s, submitted: 140
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 150675456 unmapped: 50896896 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 271 ms_handle_reset con 0x5651f4db6400 session 0x5651f5120960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:54.394640+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdf400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 271 ms_handle_reset con 0x5651f6bdf400 session 0x5651f5054b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 271 ms_handle_reset con 0x5651f3c31400 session 0x5651f4dd8000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806e400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 271 ms_handle_reset con 0x5651f806e400 session 0x5651f5e7ad20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 271 ms_handle_reset con 0x5651f3c31400 session 0x5651f44cc1e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f45800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7344144 data_alloc: 234881024 data_used: 16543744
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 271 ms_handle_reset con 0x5651f3f45800 session 0x5651f5ed9c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdf400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 272 ms_handle_reset con 0x5651f4db6400 session 0x5651f7bac780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 272 ms_handle_reset con 0x5651f6bdf400 session 0x5651f4cd74a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806e800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 272 ms_handle_reset con 0x5651f806e800 session 0x5651f43d50e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 272 ms_handle_reset con 0x5651f3c31400 session 0x5651f5121c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f45800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 272 ms_handle_reset con 0x5651f4c3c000 session 0x5651f5e79c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 272 ms_handle_reset con 0x5651f3f45800 session 0x5651f5f465a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 272 ms_handle_reset con 0x5651f4db6400 session 0x5651f79f63c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 151175168 unmapped: 50397184 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:55.394862+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 272 handle_osd_map epochs [272,273], i have 272, src has [1,273]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdf400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 273 ms_handle_reset con 0x5651f6bdf400 session 0x5651f5bbf4a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdf400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 151199744 unmapped: 50372608 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:56.394990+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 273 heartbeat osd_stat(store_statfs(0x4c8d78000/0x0/0x4ffc00000, data 0x3081aad2/0x309a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 273 ms_handle_reset con 0x5651f3c31400 session 0x5651f584b680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 273 ms_handle_reset con 0x5651f6bdf400 session 0x5651f3f105a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 273 ms_handle_reset con 0x5651f4c3d400 session 0x5651f4dd90e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f45800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 274 ms_handle_reset con 0x5651f3f45800 session 0x5651f79fc780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3c000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 274 ms_handle_reset con 0x5651f4c3c000 session 0x5651f4318f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 151207936 unmapped: 50364416 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3c000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 274 ms_handle_reset con 0x5651f4c3c000 session 0x5651f3f112c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:57.395110+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 274 ms_handle_reset con 0x5651f3c31400 session 0x5651f5c585a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f45800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdf400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 151232512 unmapped: 50339840 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 274 heartbeat osd_stat(store_statfs(0x4c8d72000/0x0/0x4ffc00000, data 0x3081de87/0x309aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:58.395208+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806e000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 275 ms_handle_reset con 0x5651f3f45800 session 0x5651f43d4d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806ec00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806fc00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 275 ms_handle_reset con 0x5651f806fc00 session 0x5651f5eb3860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 275 ms_handle_reset con 0x5651f806ec00 session 0x5651f7bad4a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 152297472 unmapped: 49274880 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:09:59.395339+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7458615 data_alloc: 234881024 data_used: 22974464
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 275 handle_osd_map epochs [275,276], i have 275, src has [1,276]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 276 ms_handle_reset con 0x5651f806e000 session 0x5651f5ed92c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 156762112 unmapped: 44810240 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:00.395508+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806ec00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 276 handle_osd_map epochs [276,277], i have 276, src has [1,277]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 277 ms_handle_reset con 0x5651f806ec00 session 0x5651f5c585a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f45800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 277 ms_handle_reset con 0x5651f3f45800 session 0x5651f3f112c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 156803072 unmapped: 44769280 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:01.395710+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 277 ms_handle_reset con 0x5651f3c31400 session 0x5651f5c1fe00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 156835840 unmapped: 44736512 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3c000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:02.395849+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 278 handle_osd_map epochs [278,279], i have 278, src has [1,279]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 156884992 unmapped: 44687360 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 279 ms_handle_reset con 0x5651f4c3c000 session 0x5651f584b680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 279 ms_handle_reset con 0x5651f3c31400 session 0x5651f5e79c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:03.396046+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 279 heartbeat osd_stat(store_statfs(0x4c8959000/0x0/0x4ffc00000, data 0x30824daf/0x309b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 156884992 unmapped: 44687360 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:04.396209+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 279 ms_handle_reset con 0x5651f3f44400 session 0x5651f50e90e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7470799 data_alloc: 234881024 data_used: 23207936
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 279 handle_osd_map epochs [279,280], i have 279, src has [1,280]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.955760002s of 10.812633514s, submitted: 242
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 156917760 unmapped: 44654592 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:05.397187+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 280 ms_handle_reset con 0x5651f6bcb400 session 0x5651f5c1e780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 280 ms_handle_reset con 0x5651f6be5400 session 0x5651f43183c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f45800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 150183936 unmapped: 51388416 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:06.397336+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 280 ms_handle_reset con 0x5651f3f45800 session 0x5651f43d50e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 280 heartbeat osd_stat(store_statfs(0x4c8954000/0x0/0x4ffc00000, data 0x30828004/0x309b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 150208512 unmapped: 51363840 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f45800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:07.397516+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 280 ms_handle_reset con 0x5651f3f45800 session 0x5651f79f6780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 150208512 unmapped: 51363840 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:08.397671+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 150208512 unmapped: 51363840 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:09.397814+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7207501 data_alloc: 234881024 data_used: 11898880
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f44400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 281 ms_handle_reset con 0x5651f3f44400 session 0x5651f5e941e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 150478848 unmapped: 51093504 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be5400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:10.397968+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 281 ms_handle_reset con 0x5651f6be5400 session 0x5651f43190e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 281 ms_handle_reset con 0x5651f6bcb400 session 0x5651f79f7a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806e000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 281 ms_handle_reset con 0x5651f806e000 session 0x5651f5bd65a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f44400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 281 ms_handle_reset con 0x5651f3f44400 session 0x5651f4473e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f45800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 282 ms_handle_reset con 0x5651f3f45800 session 0x5651f4472960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 150601728 unmapped: 50970624 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:11.398249+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 282 handle_osd_map epochs [282,283], i have 282, src has [1,283]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 150659072 unmapped: 50913280 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 283 heartbeat osd_stat(store_statfs(0x4ca073000/0x0/0x4ffc00000, data 0x2f59a767/0x2f291000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:12.398433+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 283 ms_handle_reset con 0x5651f3c31400 session 0x5651f4309680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be5400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 150683648 unmapped: 50888704 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 284 ms_handle_reset con 0x5651f6bcb400 session 0x5651f5e7bc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 284 ms_handle_reset con 0x5651f6be5400 session 0x5651f3f10d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:13.398539+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 150683648 unmapped: 50888704 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:14.398713+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7276145 data_alloc: 234881024 data_used: 11960320
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 150683648 unmapped: 50888704 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:15.398854+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ca079000/0x0/0x4ffc00000, data 0x2f59e0f7/0x2f295000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 284 handle_osd_map epochs [285,285], i have 285, src has [1,285]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.269953728s of 11.024129868s, submitted: 252
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 150683648 unmapped: 50888704 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:16.399002+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 150683648 unmapped: 50888704 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:17.399142+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 150683648 unmapped: 50888704 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:18.399336+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 150683648 unmapped: 50888704 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:19.399491+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 285 ms_handle_reset con 0x5651f4c3d400 session 0x5651f5ecb0e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 285 ms_handle_reset con 0x5651f6bdf400 session 0x5651f4dd8d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 285 ms_handle_reset con 0x5651f4db6400 session 0x5651f7da7680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7279175 data_alloc: 234881024 data_used: 11968512
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be5400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 285 heartbeat osd_stat(store_statfs(0x4ca076000/0x0/0x4ffc00000, data 0x2f59fbea/0x2f298000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [2])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 143949824 unmapped: 57622528 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 285 ms_handle_reset con 0x5651f6be5400 session 0x5651f3f101e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:20.399606+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 285 ms_handle_reset con 0x5651f3c31400 session 0x5651f7badc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 143204352 unmapped: 58368000 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:21.399793+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 143204352 unmapped: 58368000 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:22.399946+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 143204352 unmapped: 58368000 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:23.400272+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 285 ms_handle_reset con 0x5651f4c3d400 session 0x5651f4318b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 285 heartbeat osd_stat(store_statfs(0x4ca6e3000/0x0/0x4ffc00000, data 0x2ef34bca/0x2ec2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdf400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 144269312 unmapped: 57303040 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 286 ms_handle_reset con 0x5651f6bdf400 session 0x5651f5f452c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:24.400457+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be5400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7188883 data_alloc: 218103808 data_used: 5332992
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 286 handle_osd_map epochs [286,287], i have 286, src has [1,287]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 287 ms_handle_reset con 0x5651f4db6400 session 0x5651f4cacd20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f44400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 287 ms_handle_reset con 0x5651f3f44400 session 0x5651f4318960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f45800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 287 ms_handle_reset con 0x5651f3f45800 session 0x5651f43194a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 287 ms_handle_reset con 0x5651f3c31400 session 0x5651f5ed8f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 144285696 unmapped: 57286656 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:25.400626+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.551834106s of 10.020395279s, submitted: 114
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f44400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 146268160 unmapped: 55304192 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 288 ms_handle_reset con 0x5651f3f44400 session 0x5651f72334a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:26.400814+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806f800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 288 ms_handle_reset con 0x5651f806f800 session 0x5651f4473e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4cef800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 288 ms_handle_reset con 0x5651f4cef800 session 0x5651f5bd65a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 288 ms_handle_reset con 0x5651f5002400 session 0x5651f43190e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f44400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 288 handle_osd_map epochs [288,289], i have 288, src has [1,289]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 149028864 unmapped: 52543488 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 289 ms_handle_reset con 0x5651f4c3d400 session 0x5651f7da74a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 289 ms_handle_reset con 0x5651f3f44400 session 0x5651f43d50e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:27.400954+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4cef800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 289 ms_handle_reset con 0x5651f4cef800 session 0x5651f5121860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 289 ms_handle_reset con 0x5651f6be5400 session 0x5651f5054780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806f800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 289 ms_handle_reset con 0x5651f3c31400 session 0x5651f5e941e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 289 ms_handle_reset con 0x5651f806f800 session 0x5651f5f15860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 149102592 unmapped: 52469760 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:28.401137+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 290 ms_handle_reset con 0x5651f3c31400 session 0x5651f43183c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f44400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 290 ms_handle_reset con 0x5651f3f44400 session 0x5651f3f110e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4cef800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 290 ms_handle_reset con 0x5651f4cef800 session 0x5651f4472f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 291 ms_handle_reset con 0x5651f4c3d400 session 0x5651f5f2ad20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 291 ms_handle_reset con 0x5651f3c31400 session 0x5651f44721e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 291 ms_handle_reset con 0x5651f4c3d400 session 0x5651f7bacb40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 148176896 unmapped: 53395456 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 291 heartbeat osd_stat(store_statfs(0x4c9cf8000/0x0/0x4ffc00000, data 0x2fbefe82/0x2f616000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f44400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:29.401384+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 291 ms_handle_reset con 0x5651f3f44400 session 0x5651f7bada40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7326650 data_alloc: 218103808 data_used: 7831552
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 148217856 unmapped: 53354496 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:30.401688+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 291 heartbeat osd_stat(store_statfs(0x4c9cf5000/0x0/0x4ffc00000, data 0x2fbf1464/0x2f618000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4cef800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 291 heartbeat osd_stat(store_statfs(0x4c9cf5000/0x0/0x4ffc00000, data 0x2fbf1402/0x2f617000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 291 ms_handle_reset con 0x5651f4cef800 session 0x5651f3f11a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 148242432 unmapped: 53329920 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:31.401989+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806f800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 291 ms_handle_reset con 0x5651f806f800 session 0x5651f5121e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806f800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 291 heartbeat osd_stat(store_statfs(0x4c9cf6000/0x0/0x4ffc00000, data 0x2fbf13a0/0x2f616000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [0,0,1,1,1])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 148758528 unmapped: 52813824 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:32.402266+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f44400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 291 ms_handle_reset con 0x5651f806f800 session 0x5651f5f2b680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 291 ms_handle_reset con 0x5651f4c3d400 session 0x5651f43d3c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 291 ms_handle_reset con 0x5651f3c31400 session 0x5651f72325a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 149856256 unmapped: 51716096 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:33.402540+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4cef800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be5400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 292 heartbeat osd_stat(store_statfs(0x4c9af9000/0x0/0x4ffc00000, data 0x2fbe4f1f/0x2f814000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 292 ms_handle_reset con 0x5651f3f44400 session 0x5651f79f7680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 292 ms_handle_reset con 0x5651f4cef800 session 0x5651f5bbfa40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 292 ms_handle_reset con 0x5651f6be5400 session 0x5651f37d7e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 292 ms_handle_reset con 0x5651f3c31400 session 0x5651f5bbef00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 149913600 unmapped: 51658752 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:34.402844+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7212879 data_alloc: 218103808 data_used: 7847936
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 292 heartbeat osd_stat(store_statfs(0x4cab78000/0x0/0x4ffc00000, data 0x2ea93eed/0x2e795000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f44400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 293 ms_handle_reset con 0x5651f3f44400 session 0x5651f5f47860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806f800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 293 ms_handle_reset con 0x5651f806f800 session 0x5651f43d4f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 149053440 unmapped: 52518912 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 293 ms_handle_reset con 0x5651f4c3d400 session 0x5651f5f474a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 293 ms_handle_reset con 0x5651f5003c00 session 0x5651f44cc780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:35.402980+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 293 ms_handle_reset con 0x5651f4c3d400 session 0x5651f7da7e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 149069824 unmapped: 52502528 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:36.403118+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.304666519s of 10.519985199s, submitted: 359
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 293 handle_osd_map epochs [293,294], i have 293, src has [1,294]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 294 handle_osd_map epochs [294,294], i have 294, src has [1,294]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f44400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 294 ms_handle_reset con 0x5651f3c31400 session 0x5651f79fc000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be5400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 294 ms_handle_reset con 0x5651f3f44400 session 0x5651f5bd7e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 294 ms_handle_reset con 0x5651f6be5400 session 0x5651f3f112c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 150151168 unmapped: 51421184 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:37.403279+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be5400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 294 ms_handle_reset con 0x5651f3c31400 session 0x5651f43dcb40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 294 ms_handle_reset con 0x5651f6be5400 session 0x5651f7da7c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 150151168 unmapped: 51421184 heap: 201572352 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:38.403444+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 294 heartbeat osd_stat(store_statfs(0x4cad71000/0x0/0x4ffc00000, data 0x2e89b455/0x2e59d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 294 handle_osd_map epochs [295,295], i have 295, src has [1,295]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f44400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 192356352 unmapped: 17612800 heap: 209969152 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:39.403737+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7691628 data_alloc: 218103808 data_used: 7872512
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:40.404035+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 154935296 unmapped: 84434944 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 295 ms_handle_reset con 0x5651f5003c00 session 0x5651f79fc960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:41.404207+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 159752192 unmapped: 79618048 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:42.404359+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806f800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 153468928 unmapped: 85901312 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 296 ms_handle_reset con 0x5651f806f800 session 0x5651f4cad680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7cd0c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 296 ms_handle_reset con 0x5651f7cd0c00 session 0x5651f5bd6b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 296 ms_handle_reset con 0x5651f3c31400 session 0x5651f3f5c1e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 296 heartbeat osd_stat(store_statfs(0x4bed66000/0x0/0x4ffc00000, data 0x3a89eceb/0x3a5a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [0,0,0,0,1,2])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 296 ms_handle_reset con 0x5651f5003c00 session 0x5651f5f141e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be5400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:43.404507+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 154238976 unmapped: 85131264 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 296 heartbeat osd_stat(store_statfs(0x4bed66000/0x0/0x4ffc00000, data 0x3a89eceb/0x3a5a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [1,0,1,0,0,0,0,2])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 296 ms_handle_reset con 0x5651f6be5400 session 0x5651f5f441e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:44.404665+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 158973952 unmapped: 80396288 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7cd0c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 9461010 data_alloc: 218103808 data_used: 7888896
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 297 ms_handle_reset con 0x5651f7cd0c00 session 0x5651f5bbf2c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806f800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:45.404800+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 164798464 unmapped: 74571776 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 297 ms_handle_reset con 0x5651f806f800 session 0x5651f44cda40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 297 ms_handle_reset con 0x5651f4c3d400 session 0x5651f5f14960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 297 ms_handle_reset con 0x5651f3c31400 session 0x5651f3f0e000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 297 ms_handle_reset con 0x5651f3f44400 session 0x5651f43d2f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 297 ms_handle_reset con 0x5651f5003c00 session 0x5651f5e7b680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be5400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7cd0c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:46.404904+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 156770304 unmapped: 82599936 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806f800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.114216805s of 10.067455292s, submitted: 466
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 297 ms_handle_reset con 0x5651f6be5400 session 0x5651f5e79c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 297 ms_handle_reset con 0x5651f806f800 session 0x5651f5ecb0e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 297 ms_handle_reset con 0x5651f7cd0c00 session 0x5651f79fd0e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be5400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 297 ms_handle_reset con 0x5651f6be5400 session 0x5651f5e792c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 297 heartbeat osd_stat(store_statfs(0x4b7961000/0x0/0x4ffc00000, data 0x3f0a0cf1/0x3edac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:47.405092+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 157237248 unmapped: 82132992 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f44400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 297 ms_handle_reset con 0x5651f3c31400 session 0x5651f3f5d2c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 297 ms_handle_reset con 0x5651f3f44400 session 0x5651f4dd8960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be5400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 297 ms_handle_reset con 0x5651f6be5400 session 0x5651f3f5c000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7cd0c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 297 ms_handle_reset con 0x5651f3c31400 session 0x5651f4309a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 298 ms_handle_reset con 0x5651f7cd0c00 session 0x5651f5c1e780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:48.405244+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 157237248 unmapped: 82132992 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f806f800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 298 ms_handle_reset con 0x5651f4c3d400 session 0x5651f7233e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 298 ms_handle_reset con 0x5651f806f800 session 0x5651f4cd7e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 298 ms_handle_reset con 0x5651f5003c00 session 0x5651f43085a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 298 heartbeat osd_stat(store_statfs(0x4cad62000/0x0/0x4ffc00000, data 0x2e8a22a3/0x2e5aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:49.405377+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 157286400 unmapped: 82083840 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7322928 data_alloc: 218103808 data_used: 7901184
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 298 handle_osd_map epochs [298,299], i have 298, src has [1,299]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 298 handle_osd_map epochs [299,299], i have 299, src has [1,299]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 299 ms_handle_reset con 0x5651f3c31400 session 0x5651f5f2ba40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:50.405556+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 157310976 unmapped: 82059264 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be5400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 299 ms_handle_reset con 0x5651f6be5400 session 0x5651f3f5d860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7cd0c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 299 ms_handle_reset con 0x5651f7cd0c00 session 0x5651f7bad2c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7cd1000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 299 ms_handle_reset con 0x5651f7cd1000 session 0x5651f3f0e960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 299 handle_osd_map epochs [299,300], i have 299, src has [1,300]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 300 ms_handle_reset con 0x5651f3c31400 session 0x5651f5c1f680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:51.405728+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 157351936 unmapped: 82018304 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 300 ms_handle_reset con 0x5651f4c3d400 session 0x5651f43d23c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:52.405907+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be5400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 301 ms_handle_reset con 0x5651f6be5400 session 0x5651f4dd83c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7cd0c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 157417472 unmapped: 81952768 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 301 ms_handle_reset con 0x5651f5003c00 session 0x5651f5ed85a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 301 ms_handle_reset con 0x5651f7cd0c00 session 0x5651f50545a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 301 handle_osd_map epochs [301,302], i have 301, src has [1,302]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:53.406059+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 302 heartbeat osd_stat(store_statfs(0x4caf9e000/0x0/0x4ffc00000, data 0x2ddba56a/0x2df5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 157433856 unmapped: 81936384 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 303 ms_handle_reset con 0x5651f3c31400 session 0x5651f5054b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:54.406242+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 157458432 unmapped: 81911808 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 303 ms_handle_reset con 0x5651f4c3d400 session 0x5651f7da6d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4900519 data_alloc: 218103808 data_used: 7847936
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 304 ms_handle_reset con 0x5651f5003c00 session 0x5651f79fc5a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:55.406396+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 154443776 unmapped: 84926464 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be5400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 304 ms_handle_reset con 0x5651f6be5400 session 0x5651f5f44b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 304 ms_handle_reset con 0x5651f4025800 session 0x5651f7233e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:56.406536+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.665046692s of 10.006456375s, submitted: 463
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 154443776 unmapped: 84926464 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 305 ms_handle_reset con 0x5651f3c31400 session 0x5651f50e92c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:57.406695+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 152551424 unmapped: 86818816 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 305 ms_handle_reset con 0x5651f4025800 session 0x5651f5ecb2c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 305 heartbeat osd_stat(store_statfs(0x4f6f96000/0x0/0x4ffc00000, data 0x1dc13a8/0x1f68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:58.406872+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 152518656 unmapped: 86851584 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 305 heartbeat osd_stat(store_statfs(0x4f6f96000/0x0/0x4ffc00000, data 0x1dc13a8/0x1f68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 305 heartbeat osd_stat(store_statfs(0x4f6f96000/0x0/0x4ffc00000, data 0x1dc13a8/0x1f68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:10:59.407043+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 152518656 unmapped: 86851584 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2507541 data_alloc: 218103808 data_used: 7864320
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 305 heartbeat osd_stat(store_statfs(0x4f6f96000/0x0/0x4ffc00000, data 0x1dc13a8/0x1f68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 305 handle_osd_map epochs [306,306], i have 306, src has [1,306]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:00.407211+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 152518656 unmapped: 86851584 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:01.407409+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 152518656 unmapped: 86851584 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:02.407556+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 152518656 unmapped: 86851584 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:03.407755+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 152526848 unmapped: 86843392 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 307 ms_handle_reset con 0x5651f4c3d400 session 0x5651f5e7a3c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:04.407942+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 152526848 unmapped: 86843392 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 307 handle_osd_map epochs [307,308], i have 307, src has [1,308]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2518335 data_alloc: 218103808 data_used: 7872512
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 308 heartbeat osd_stat(store_statfs(0x4f6f8e000/0x0/0x4ffc00000, data 0x1dc4a08/0x1f6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:05.408141+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 152535040 unmapped: 86835200 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 309 ms_handle_reset con 0x5651f5003c00 session 0x5651f7da7680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:06.408408+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 152551424 unmapped: 86818816 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be5400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 309 ms_handle_reset con 0x5651f6be5400 session 0x5651f5bbe1e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:07.408562+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 152551424 unmapped: 86818816 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 309 heartbeat osd_stat(store_statfs(0x4f6f89000/0x0/0x4ffc00000, data 0x1dc8172/0x1f74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 309 ms_handle_reset con 0x5651f3c31400 session 0x5651f37d7680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:08.408686+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.575140953s of 11.936071396s, submitted: 136
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 310 ms_handle_reset con 0x5651f4025800 session 0x5651f5e7a780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 152576000 unmapped: 86794240 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 310 ms_handle_reset con 0x5651f4c3d400 session 0x5651f5c1fc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:09.408940+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 152584192 unmapped: 86786048 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2522171 data_alloc: 218103808 data_used: 7876608
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:10.409205+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 152584192 unmapped: 86786048 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 310 ms_handle_reset con 0x5651f5003c00 session 0x5651f3f0f860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdb800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 310 ms_handle_reset con 0x5651f6bdb800 session 0x5651f5014b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:11.409431+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 152584192 unmapped: 86786048 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 310 heartbeat osd_stat(store_statfs(0x4f6f88000/0x0/0x4ffc00000, data 0x1dc9d33/0x1f76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:12.409598+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdb800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 152584192 unmapped: 86786048 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 310 handle_osd_map epochs [310,311], i have 310, src has [1,311]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 311 ms_handle_reset con 0x5651f6bdb800 session 0x5651f5015c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:13.409787+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 152592384 unmapped: 86777856 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 311 ms_handle_reset con 0x5651f3c31400 session 0x5651f43db0e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 311 handle_osd_map epochs [311,312], i have 311, src has [1,312]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 312 ms_handle_reset con 0x5651f4025800 session 0x5651f5f472c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:14.409985+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 152600576 unmapped: 86769664 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 312 handle_osd_map epochs [312,313], i have 312, src has [1,313]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2535725 data_alloc: 218103808 data_used: 7892992
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:15.410240+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 313 heartbeat osd_stat(store_statfs(0x4f6f7f000/0x0/0x4ffc00000, data 0x1dcd4f8/0x1f7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 152657920 unmapped: 86712320 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 313 ms_handle_reset con 0x5651f4c3d400 session 0x5651f5c58d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:16.410434+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdec00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 152674304 unmapped: 86695936 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 313 ms_handle_reset con 0x5651f6bdec00 session 0x5651f3f5d2c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 313 ms_handle_reset con 0x5651f5003c00 session 0x5651f3f0e5a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 313 ms_handle_reset con 0x5651f3c31400 session 0x5651f5e94f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 314 ms_handle_reset con 0x5651f4025800 session 0x5651f5ed94a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:17.410644+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 152682496 unmapped: 86687744 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 314 ms_handle_reset con 0x5651f4c3d400 session 0x5651f5ecba40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdb800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 314 ms_handle_reset con 0x5651f6bdb800 session 0x5651f5f44960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdb800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 314 ms_handle_reset con 0x5651f6bdb800 session 0x5651f7bac960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 314 ms_handle_reset con 0x5651f3c31400 session 0x5651f5f2a3c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 314 ms_handle_reset con 0x5651f4025800 session 0x5651f4309a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 314 ms_handle_reset con 0x5651f4c3d400 session 0x5651f5ecb0e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdec00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 314 ms_handle_reset con 0x5651f6bcb400 session 0x5651f5c1ef00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:18.410829+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 314 ms_handle_reset con 0x5651f3c31400 session 0x5651f51205a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.678371429s of 10.062820435s, submitted: 100
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 314 ms_handle_reset con 0x5651f4025800 session 0x5651f79fcb40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 153567232 unmapped: 85803008 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 314 ms_handle_reset con 0x5651f4c3d400 session 0x5651f72330e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 314 handle_osd_map epochs [314,315], i have 314, src has [1,315]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 315 ms_handle_reset con 0x5651f5003c00 session 0x5651f3f0f0e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 315 ms_handle_reset con 0x5651f6bdec00 session 0x5651f3f5c1e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:19.411000+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 85835776 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2619746 data_alloc: 218103808 data_used: 7921664
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:20.411204+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 85835776 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 315 heartbeat osd_stat(store_statfs(0x4f66d4000/0x0/0x4ffc00000, data 0x26727b1/0x2829000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 315 ms_handle_reset con 0x5651f3c31400 session 0x5651f50e85a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 315 ms_handle_reset con 0x5651f4025800 session 0x5651f7da7c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 315 ms_handle_reset con 0x5651f4c3d400 session 0x5651f5bbef00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:21.411486+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 153542656 unmapped: 85827584 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 315 ms_handle_reset con 0x5651f5003c00 session 0x5651f5eb3680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 315 ms_handle_reset con 0x5651f6bcb400 session 0x5651f79fc1e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 315 ms_handle_reset con 0x5651f6bcb400 session 0x5651f3f0ef00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:22.411636+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 315 ms_handle_reset con 0x5651f3c31400 session 0x5651f5f44d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 153542656 unmapped: 85827584 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 315 handle_osd_map epochs [315,316], i have 315, src has [1,316]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 316 ms_handle_reset con 0x5651f4025800 session 0x5651f7232000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 316 ms_handle_reset con 0x5651f4c3d400 session 0x5651f43dd0e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:23.411771+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 153550848 unmapped: 85819392 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 316 ms_handle_reset con 0x5651f5003c00 session 0x5651f5f44f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 316 ms_handle_reset con 0x5651f5003c00 session 0x5651f3f0eb40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 316 heartbeat osd_stat(store_statfs(0x4f66d0000/0x0/0x4ffc00000, data 0x2674389/0x282d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3c31400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:24.411880+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 153575424 unmapped: 85794816 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2625538 data_alloc: 218103808 data_used: 7929856
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:25.412075+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 153575424 unmapped: 85794816 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 317 ms_handle_reset con 0x5651f4c3d400 session 0x5651f7badc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdb800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 317 ms_handle_reset con 0x5651f6bdb800 session 0x5651f5f452c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 317 ms_handle_reset con 0x5651f6bcb400 session 0x5651f4dd94a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 317 ms_handle_reset con 0x5651f6be6400 session 0x5651f50e8000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 317 ms_handle_reset con 0x5651f4c3d400 session 0x5651f79f7e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:26.412291+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 157786112 unmapped: 81584128 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:27.412416+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 157786112 unmapped: 81584128 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 317 handle_osd_map epochs [317,318], i have 317, src has [1,318]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 318 ms_handle_reset con 0x5651f5003c00 session 0x5651f5ef9a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:28.412561+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 157786112 unmapped: 81584128 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdb800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.251548767s of 10.482653618s, submitted: 88
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 318 heartbeat osd_stat(store_statfs(0x4f66cb000/0x0/0x4ffc00000, data 0x2675e43/0x2832000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 318 ms_handle_reset con 0x5651f6be6400 session 0x5651f44ccd20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 318 ms_handle_reset con 0x5651f6bdb800 session 0x5651f5f47a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 318 handle_osd_map epochs [319,319], i have 318, src has [1,319]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd7c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 319 ms_handle_reset con 0x5651f6bd7c00 session 0x5651f44734a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:29.412715+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 80519168 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2706212 data_alloc: 234881024 data_used: 16691200
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 319 ms_handle_reset con 0x5651f4c3d400 session 0x5651f7233860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 319 ms_handle_reset con 0x5651f5003c00 session 0x5651f4cad2c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd7c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 319 handle_osd_map epochs [320,320], i have 319, src has [1,320]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 320 ms_handle_reset con 0x5651f6bd7c00 session 0x5651f5c1f0e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 320 ms_handle_reset con 0x5651f6bcb400 session 0x5651f5eca000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:30.412836+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 80510976 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdb800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:31.412992+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 320 ms_handle_reset con 0x5651f6bdb800 session 0x5651f43094a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 158900224 unmapped: 80470016 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 320 ms_handle_reset con 0x5651f6bcb400 session 0x5651f4308d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 320 heartbeat osd_stat(store_statfs(0x4f66be000/0x0/0x4ffc00000, data 0x267b5f0/0x283d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 320 handle_osd_map epochs [321,321], i have 320, src has [1,321]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 321 ms_handle_reset con 0x5651f4c3d400 session 0x5651f5120000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd7c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdb800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 321 ms_handle_reset con 0x5651f6bd7c00 session 0x5651f50e8000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:32.413181+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 158965760 unmapped: 80404480 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 321 ms_handle_reset con 0x5651f6be6400 session 0x5651f79fcb40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 321 ms_handle_reset con 0x5651f4024c00 session 0x5651f5ecb0e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 321 handle_osd_map epochs [322,322], i have 321, src has [1,322]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 322 ms_handle_reset con 0x5651f6bdb800 session 0x5651f5120780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 322 ms_handle_reset con 0x5651f5003c00 session 0x5651f37d7680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:33.413342+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 158982144 unmapped: 80388096 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 322 handle_osd_map epochs [322,323], i have 322, src has [1,323]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 323 ms_handle_reset con 0x5651f4024c00 session 0x5651f5120960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:34.413479+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 158982144 unmapped: 80388096 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 323 ms_handle_reset con 0x5651f6bcb400 session 0x5651f5f2b4a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 323 handle_osd_map epochs [323,324], i have 323, src has [1,324]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2752206 data_alloc: 234881024 data_used: 16703488
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 324 ms_handle_reset con 0x5651f4c3d400 session 0x5651f7232f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:35.413625+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 168787968 unmapped: 70582272 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 324 ms_handle_reset con 0x5651f4024c00 session 0x5651f5ed9c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 324 ms_handle_reset con 0x5651f5003c00 session 0x5651f5f45c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:36.413798+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 324 heartbeat osd_stat(store_statfs(0x4f5716000/0x0/0x4ffc00000, data 0x36224f8/0x37e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 165855232 unmapped: 73515008 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 324 ms_handle_reset con 0x5651f6bcb400 session 0x5651f5f154a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdb800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:37.413972+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd7c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 325 ms_handle_reset con 0x5651f6bd7c00 session 0x5651f7bacd20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 325 heartbeat osd_stat(store_statfs(0x4f5672000/0x0/0x4ffc00000, data 0x36c7508/0x388c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 165896192 unmapped: 73474048 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 325 ms_handle_reset con 0x5651f6be6400 session 0x5651f79f6000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:38.414172+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 325 ms_handle_reset con 0x5651f6bdb800 session 0x5651f7da61e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 165888000 unmapped: 73482240 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 325 handle_osd_map epochs [326,326], i have 325, src has [1,326]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.470733643s of 10.258559227s, submitted: 316
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:39.414339+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 165888000 unmapped: 73482240 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865290 data_alloc: 234881024 data_used: 17874944
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 326 handle_osd_map epochs [326,327], i have 326, src has [1,327]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 327 ms_handle_reset con 0x5651f5003c00 session 0x5651f5c58d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:40.414515+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 327 ms_handle_reset con 0x5651f6bcb400 session 0x5651f726fa40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 327 ms_handle_reset con 0x5651f4024c00 session 0x5651f7da7a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 165904384 unmapped: 73465856 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:41.414835+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd7c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 327 ms_handle_reset con 0x5651f6bd7c00 session 0x5651f43d2f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 165904384 unmapped: 73465856 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 327 heartbeat osd_stat(store_statfs(0x4f5669000/0x0/0x4ffc00000, data 0x36cca05/0x3894000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:42.415066+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 327 ms_handle_reset con 0x5651f4024c00 session 0x5651f5055680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 166051840 unmapped: 73318400 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:43.415256+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 165953536 unmapped: 73416704 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 327 handle_osd_map epochs [328,328], i have 327, src has [1,328]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:44.415377+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd7c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 328 ms_handle_reset con 0x5651f6bcb400 session 0x5651f4309e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdb800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 328 ms_handle_reset con 0x5651f6bdb800 session 0x5651f50e8d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 165953536 unmapped: 73416704 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 328 handle_osd_map epochs [328,329], i have 328, src has [1,329]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be6800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 329 ms_handle_reset con 0x5651f6be6800 session 0x5651f4318960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 329 ms_handle_reset con 0x5651f6be6400 session 0x5651f4cacd20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 329 ms_handle_reset con 0x5651f5003c00 session 0x5651f5bd63c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2883460 data_alloc: 234881024 data_used: 17891328
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:45.415499+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 165666816 unmapped: 73703424 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 329 handle_osd_map epochs [329,330], i have 329, src has [1,330]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 330 ms_handle_reset con 0x5651f4db6c00 session 0x5651f5f452c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 330 ms_handle_reset con 0x5651f6bd7c00 session 0x5651f50e9860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:46.415655+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 330 heartbeat osd_stat(store_statfs(0x4f5638000/0x0/0x4ffc00000, data 0x36f6470/0x38c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 330 ms_handle_reset con 0x5651f6be6400 session 0x5651f5e790e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 330 ms_handle_reset con 0x5651f4024c00 session 0x5651f50e92c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 165666816 unmapped: 73703424 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:47.415813+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 165724160 unmapped: 73646080 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 330 ms_handle_reset con 0x5651f5003c00 session 0x5651f5f47860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:48.415980+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 330 handle_osd_map epochs [331,331], i have 330, src has [1,331]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 165756928 unmapped: 73613312 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 331 ms_handle_reset con 0x5651f4024c00 session 0x5651f4cac3c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd7c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 331 ms_handle_reset con 0x5651f6bcb400 session 0x5651f5c1f680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 331 ms_handle_reset con 0x5651f6bd7c00 session 0x5651f7da7e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:49.416119+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 331 handle_osd_map epochs [331,332], i have 331, src has [1,332]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.008262634s of 10.492926598s, submitted: 164
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 165806080 unmapped: 73564160 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 332 ms_handle_reset con 0x5651f6be6400 session 0x5651f4472b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 332 ms_handle_reset con 0x5651f4db6c00 session 0x5651f7233e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2896381 data_alloc: 234881024 data_used: 17920000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:50.418273+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 165806080 unmapped: 73564160 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 332 ms_handle_reset con 0x5651f4024c00 session 0x5651f5e7a780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 332 ms_handle_reset con 0x5651f6bcb400 session 0x5651f44721e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:51.418508+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 165822464 unmapped: 73547776 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 332 handle_osd_map epochs [333,333], i have 332, src has [1,333]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 333 ms_handle_reset con 0x5651f5003c00 session 0x5651f7da65a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 333 heartbeat osd_stat(store_statfs(0x4f5611000/0x0/0x4ffc00000, data 0x371699b/0x38ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:52.418644+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd7c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdb800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 333 ms_handle_reset con 0x5651f6bdb800 session 0x5651f44723c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 165838848 unmapped: 73531392 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 333 heartbeat osd_stat(store_statfs(0x4f5611000/0x0/0x4ffc00000, data 0x371699b/0x38ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 333 ms_handle_reset con 0x5651f4024c00 session 0x5651f4472960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 333 heartbeat osd_stat(store_statfs(0x4f5611000/0x0/0x4ffc00000, data 0x371699b/0x38ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 333 ms_handle_reset con 0x5651f6bcb400 session 0x5651f7232d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:53.418790+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 165847040 unmapped: 73523200 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 333 handle_osd_map epochs [334,334], i have 333, src has [1,334]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 334 ms_handle_reset con 0x5651f5003c00 session 0x5651f7bac780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be6800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcac00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be1c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 334 ms_handle_reset con 0x5651f6be6800 session 0x5651f5ef9680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4027000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 334 ms_handle_reset con 0x5651f6be1c00 session 0x5651f4308f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 334 ms_handle_reset con 0x5651f6bcac00 session 0x5651f5e7a3c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 334 ms_handle_reset con 0x5651f4027000 session 0x5651f7232780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be1c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 334 ms_handle_reset con 0x5651f4024c00 session 0x5651f79f63c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:54.418935+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 188858368 unmapped: 50511872 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 334 handle_osd_map epochs [335,335], i have 334, src has [1,335]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 335 ms_handle_reset con 0x5651f5003c00 session 0x5651f3f5cf00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 335 ms_handle_reset con 0x5651f6be1c00 session 0x5651f50e9860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4027000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 335 ms_handle_reset con 0x5651f4027000 session 0x5651f5ef90e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 335 handle_osd_map epochs [336,336], i have 335, src has [1,336]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 336 ms_handle_reset con 0x5651f4db6c00 session 0x5651f3f0fe00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 336 ms_handle_reset con 0x5651f6bd7c00 session 0x5651f3f0f0e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 336 ms_handle_reset con 0x5651f4024c00 session 0x5651f5bd63c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3136229 data_alloc: 251658240 data_used: 34377728
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:55.419086+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 188973056 unmapped: 50397184 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcac00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 336 ms_handle_reset con 0x5651f6bcac00 session 0x5651f5e94d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 336 ms_handle_reset con 0x5651f4024c00 session 0x5651f5f44960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 336 handle_osd_map epochs [336,337], i have 336, src has [1,337]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 337 ms_handle_reset con 0x5651f5003c00 session 0x5651f79f61e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4027000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 337 ms_handle_reset con 0x5651f4027000 session 0x5651f5f47680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd7c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be1c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 337 ms_handle_reset con 0x5651f4db6c00 session 0x5651f72321e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:56.419244+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 189177856 unmapped: 50192384 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 337 handle_osd_map epochs [338,338], i have 337, src has [1,338]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 338 ms_handle_reset con 0x5651f6bcb400 session 0x5651f5c1e960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 338 ms_handle_reset con 0x5651f6be1c00 session 0x5651f726fa40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 338 ms_handle_reset con 0x5651f6bd7c00 session 0x5651f43d23c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 338 ms_handle_reset con 0x5651f6bcb400 session 0x5651f5c1f0e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 338 ms_handle_reset con 0x5651f4024c00 session 0x5651f7232960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:57.419381+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4027000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 338 ms_handle_reset con 0x5651f4027000 session 0x5651f44ccb40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 189267968 unmapped: 50102272 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:58.419577+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 338 heartbeat osd_stat(store_statfs(0x4f441e000/0x0/0x4ffc00000, data 0x48fe986/0x4ada000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 338 ms_handle_reset con 0x5651f6bcb400 session 0x5651f3f5c000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 189612032 unmapped: 49758208 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd7c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be1c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 338 ms_handle_reset con 0x5651f6be1c00 session 0x5651f79f6000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 338 ms_handle_reset con 0x5651f4db6c00 session 0x5651f5bd63c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 338 handle_osd_map epochs [339,339], i have 338, src has [1,339]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 339 ms_handle_reset con 0x5651f6bd7c00 session 0x5651f3f11a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 339 ms_handle_reset con 0x5651f5003c00 session 0x5651f4473680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 339 ms_handle_reset con 0x5651f4024c00 session 0x5651f4cd6000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 339 heartbeat osd_stat(store_statfs(0x4f4420000/0x0/0x4ffc00000, data 0x4900557/0x4add000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:11:59.419727+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 182632448 unmapped: 56737792 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3125340 data_alloc: 251658240 data_used: 34377728
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.542853355s of 10.585609436s, submitted: 275
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:00.419879+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 182632448 unmapped: 56737792 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 339 ms_handle_reset con 0x5651f4db6c00 session 0x5651f4472000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 339 handle_osd_map epochs [340,340], i have 339, src has [1,340]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 340 ms_handle_reset con 0x5651f6bcb400 session 0x5651f79f6f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd7c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 340 ms_handle_reset con 0x5651f6bd7c00 session 0x5651f44ccd20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 340 ms_handle_reset con 0x5651f5003c00 session 0x5651f3f0f0e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 340 ms_handle_reset con 0x5651f4024c00 session 0x5651f4cd6d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 340 ms_handle_reset con 0x5651f6bcb400 session 0x5651f7bacd20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 340 ms_handle_reset con 0x5651f4db6c00 session 0x5651f4cacf00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd7c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 340 ms_handle_reset con 0x5651f6bd7c00 session 0x5651f5c59a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be1c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be6800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:01.420040+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 340 ms_handle_reset con 0x5651f6be6800 session 0x5651f5f14d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 340 ms_handle_reset con 0x5651f6be1c00 session 0x5651f5f44f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 340 ms_handle_reset con 0x5651f4024c00 session 0x5651f5f44000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 340 ms_handle_reset con 0x5651f4db6c00 session 0x5651f50545a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 340 ms_handle_reset con 0x5651f6bcb400 session 0x5651f43d34a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd7c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 183484416 unmapped: 55885824 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 340 handle_osd_map epochs [341,341], i have 340, src has [1,341]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 341 ms_handle_reset con 0x5651f5003c00 session 0x5651f44cdc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 341 ms_handle_reset con 0x5651f6bd7c00 session 0x5651f50e9c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:02.420211+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 341 heartbeat osd_stat(store_statfs(0x4f4417000/0x0/0x4ffc00000, data 0x4903e4d/0x4ae5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 183484416 unmapped: 55885824 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 341 handle_osd_map epochs [342,342], i have 341, src has [1,342]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 342 ms_handle_reset con 0x5651f5003c00 session 0x5651f5055e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 342 ms_handle_reset con 0x5651f4024c00 session 0x5651f43da1e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:03.420344+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 183484416 unmapped: 55885824 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 342 handle_osd_map epochs [343,343], i have 342, src has [1,343]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 343 ms_handle_reset con 0x5651f4db6c00 session 0x5651f5f2b860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:04.420481+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 183517184 unmapped: 55853056 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 343 handle_osd_map epochs [343,344], i have 343, src has [1,344]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3145237 data_alloc: 251658240 data_used: 34381824
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 344 ms_handle_reset con 0x5651f6bcb400 session 0x5651f4cd7e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 344 ms_handle_reset con 0x5651f6bcb400 session 0x5651f584af00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:05.420657+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 344 ms_handle_reset con 0x5651f4024c00 session 0x5651f5c1fe00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 55844864 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 344 ms_handle_reset con 0x5651f4db6c00 session 0x5651f43da5a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:06.420789+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd7c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 183533568 unmapped: 55836672 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 344 handle_osd_map epochs [345,345], i have 344, src has [1,345]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:07.420921+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be1c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 345 ms_handle_reset con 0x5651f6be1c00 session 0x5651f5ed90e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 183877632 unmapped: 55492608 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 345 handle_osd_map epochs [346,346], i have 345, src has [1,346]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4dbb000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 346 ms_handle_reset con 0x5651f6bd7c00 session 0x5651f43da3c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd7c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 346 ms_handle_reset con 0x5651f4db6c00 session 0x5651f5bbe5a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 346 ms_handle_reset con 0x5651f6bd7c00 session 0x5651f5eb2f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:08.421032+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 346 heartbeat osd_stat(store_statfs(0x4f3ffb000/0x0/0x4ffc00000, data 0x490c154/0x4af1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 192815104 unmapped: 46555136 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 346 handle_osd_map epochs [346,347], i have 346, src has [1,347]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 347 ms_handle_reset con 0x5651f4024c00 session 0x5651f5bbfc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 347 ms_handle_reset con 0x5651f6bcb400 session 0x5651f43da5a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 347 ms_handle_reset con 0x5651f4dbb000 session 0x5651f43d4960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:09.421251+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 193847296 unmapped: 45522944 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3237750 data_alloc: 268435456 data_used: 45625344
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:10.421378+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.625890732s of 10.342947960s, submitted: 251
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 193871872 unmapped: 45498368 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 347 ms_handle_reset con 0x5651f4024c00 session 0x5651f584af00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 347 handle_osd_map epochs [348,348], i have 347, src has [1,348]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 348 ms_handle_reset con 0x5651f4db6c00 session 0x5651f4cd7e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:11.421565+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 348 heartbeat osd_stat(store_statfs(0x4f3ff6000/0x0/0x4ffc00000, data 0x490e21d/0x4af4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 193880064 unmapped: 45490176 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 348 ms_handle_reset con 0x5651f6bcb400 session 0x5651f5055e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:12.421732+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 193888256 unmapped: 45481984 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 348 heartbeat osd_stat(store_statfs(0x4f3ff5000/0x0/0x4ffc00000, data 0x490feac/0x4af8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:13.421925+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd7c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 193888256 unmapped: 45481984 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be1c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f42000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 348 ms_handle_reset con 0x5651f3f42000 session 0x5651f5015680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 348 ms_handle_reset con 0x5651f6be1c00 session 0x5651f43d4f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f42000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 348 ms_handle_reset con 0x5651f3f42000 session 0x5651f43d50e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:14.422101+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 348 ms_handle_reset con 0x5651f4db6c00 session 0x5651f4318b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 193953792 unmapped: 45416448 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 348 handle_osd_map epochs [348,349], i have 348, src has [1,349]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 349 ms_handle_reset con 0x5651f4024c00 session 0x5651f7da6b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 349 ms_handle_reset con 0x5651f6bcb400 session 0x5651f44723c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4dbb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 349 ms_handle_reset con 0x5651f4dbb400 session 0x5651f43d54a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f42000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 349 ms_handle_reset con 0x5651f6bd7c00 session 0x5651f44cdc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3245989 data_alloc: 268435456 data_used: 45641728
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:15.422247+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 193945600 unmapped: 45424640 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 349 handle_osd_map epochs [350,350], i have 349, src has [1,350]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 350 ms_handle_reset con 0x5651f4024c00 session 0x5651f7da7860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 350 ms_handle_reset con 0x5651f3f42000 session 0x5651f5c1f2c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 350 ms_handle_reset con 0x5651f6bcb400 session 0x5651f7da63c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 350 ms_handle_reset con 0x5651f4db6c00 session 0x5651f5c59a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:16.422404+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 194019328 unmapped: 45350912 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 350 handle_osd_map epochs [351,351], i have 350, src has [1,351]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f42000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:17.422523+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 351 ms_handle_reset con 0x5651f4db6c00 session 0x5651f4cacf00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 351 ms_handle_reset con 0x5651f4024c00 session 0x5651f584a5a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195133440 unmapped: 44236800 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 351 handle_osd_map epochs [352,352], i have 351, src has [1,352]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:18.422629+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 352 ms_handle_reset con 0x5651f3f42000 session 0x5651f5eb3680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd7c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 352 ms_handle_reset con 0x5651f6bd7c00 session 0x5651f5ed9c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 352 heartbeat osd_stat(store_statfs(0x4f502b000/0x0/0x4ffc00000, data 0x49153c7/0x4b00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 198205440 unmapped: 41164800 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 352 handle_osd_map epochs [353,353], i have 352, src has [1,353]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 353 ms_handle_reset con 0x5651f6bcb400 session 0x5651f7da74a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:19.422740+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 198582272 unmapped: 40787968 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3298631 data_alloc: 268435456 data_used: 49319936
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 353 handle_osd_map epochs [354,354], i have 353, src has [1,354]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:20.422870+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f42000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 354 ms_handle_reset con 0x5651f6bcb400 session 0x5651f50545a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 198639616 unmapped: 40730624 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 354 handle_osd_map epochs [354,355], i have 354, src has [1,355]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.788639069s of 10.566868782s, submitted: 257
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 355 handle_osd_map epochs [355,355], i have 355, src has [1,355]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:21.423216+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 355 ms_handle_reset con 0x5651f3f42000 session 0x5651f4319a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 198688768 unmapped: 40681472 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 355 handle_osd_map epochs [356,356], i have 355, src has [1,356]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:22.423352+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 198787072 unmapped: 40583168 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 356 heartbeat osd_stat(store_statfs(0x4f4d9a000/0x0/0x4ffc00000, data 0x4ba4a72/0x4d93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 356 ms_handle_reset con 0x5651f4024c00 session 0x5651f44ccd20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 356 handle_osd_map epochs [357,357], i have 356, src has [1,357]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:23.423553+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 357 ms_handle_reset con 0x5651f4db6c00 session 0x5651f3f11680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 198787072 unmapped: 40583168 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 357 heartbeat osd_stat(store_statfs(0x4f4d29000/0x0/0x4ffc00000, data 0x4c1468f/0x4e03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:24.423671+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199376896 unmapped: 39993344 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 357 handle_osd_map epochs [357,358], i have 357, src has [1,358]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3319257 data_alloc: 268435456 data_used: 49713152
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:25.423829+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199000064 unmapped: 40370176 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:26.424013+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199000064 unmapped: 40370176 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:27.424185+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199008256 unmapped: 40361984 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 358 heartbeat osd_stat(store_statfs(0x4f4d27000/0x0/0x4ffc00000, data 0x4c161a6/0x4e06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:28.424347+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199024640 unmapped: 40345600 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd7c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 358 ms_handle_reset con 0x5651f6bd7c00 session 0x5651f5f472c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:29.424493+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199065600 unmapped: 40304640 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3322207 data_alloc: 268435456 data_used: 49713152
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:30.424656+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199065600 unmapped: 40304640 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd7c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 358 ms_handle_reset con 0x5651f6bd7c00 session 0x5651f5e954a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:31.424836+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f42000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.923994064s of 10.228473663s, submitted: 109
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 358 ms_handle_reset con 0x5651f4024c00 session 0x5651f5f443c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199073792 unmapped: 40296448 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 358 handle_osd_map epochs [359,359], i have 358, src has [1,359]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 359 ms_handle_reset con 0x5651f4db6c00 session 0x5651f5055860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:32.424994+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199098368 unmapped: 40271872 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 359 handle_osd_map epochs [360,360], i have 359, src has [1,360]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 360 ms_handle_reset con 0x5651f6bcb400 session 0x5651f5e954a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd6c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 360 ms_handle_reset con 0x5651f6bd6c00 session 0x5651f5ecb2c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 360 ms_handle_reset con 0x5651f3f42000 session 0x5651f5c59680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:33.425228+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 360 ms_handle_reset con 0x5651f4024c00 session 0x5651f5ef85a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199098368 unmapped: 40271872 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 360 handle_osd_map epochs [361,361], i have 360, src has [1,361]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 361 heartbeat osd_stat(store_statfs(0x4f4d1b000/0x0/0x4ffc00000, data 0x4c1bb70/0x4e12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:34.425400+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 361 heartbeat osd_stat(store_statfs(0x4f4d1b000/0x0/0x4ffc00000, data 0x4c1bb70/0x4e12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199172096 unmapped: 40198144 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 361 handle_osd_map epochs [361,362], i have 361, src has [1,362]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 362 ms_handle_reset con 0x5651f4db6c00 session 0x5651f5e7a3c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 362 ms_handle_reset con 0x5651f6bcb400 session 0x5651f5e79c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd7c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 362 ms_handle_reset con 0x5651f6bd7c00 session 0x5651f43d3e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3338843 data_alloc: 268435456 data_used: 49725440
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:35.425533+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd7c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 362 heartbeat osd_stat(store_statfs(0x4f4d17000/0x0/0x4ffc00000, data 0x4c1e733/0x4e15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199409664 unmapped: 39960576 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 362 handle_osd_map epochs [363,363], i have 362, src has [1,363]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 363 ms_handle_reset con 0x5651f6bd7c00 session 0x5651f50e8000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f42000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 363 ms_handle_reset con 0x5651f3f42000 session 0x5651f5f2a1e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 363 ms_handle_reset con 0x5651f4024c00 session 0x5651f79fda40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:36.425750+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199524352 unmapped: 39845888 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:37.425902+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199524352 unmapped: 39845888 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 363 handle_osd_map epochs [364,364], i have 363, src has [1,364]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 364 ms_handle_reset con 0x5651f4db6c00 session 0x5651f72332c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:38.426073+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199548928 unmapped: 39821312 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bcb400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 364 ms_handle_reset con 0x5651f6bcb400 session 0x5651f5ed8d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f42000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 364 heartbeat osd_stat(store_statfs(0x4f4d12000/0x0/0x4ffc00000, data 0x4c21a34/0x4e1a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:39.426231+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199606272 unmapped: 39763968 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 364 handle_osd_map epochs [365,365], i have 364, src has [1,365]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 365 ms_handle_reset con 0x5651f3f42000 session 0x5651f4dd81e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3348691 data_alloc: 268435456 data_used: 49733632
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:40.426385+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 365 ms_handle_reset con 0x5651f4024c00 session 0x5651f79f7a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 365 heartbeat osd_stat(store_statfs(0x4f4d10000/0x0/0x4ffc00000, data 0x4c23631/0x4e1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 365 ms_handle_reset con 0x5651f4db6c00 session 0x5651f4319e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199680000 unmapped: 39690240 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:41.426568+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199680000 unmapped: 39690240 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd7c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.957052231s of 10.564007759s, submitted: 234
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 365 ms_handle_reset con 0x5651f6bd7c00 session 0x5651f43d3c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:42.426752+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 365 heartbeat osd_stat(store_statfs(0x4f4d10000/0x0/0x4ffc00000, data 0x4c235df/0x4e1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199688192 unmapped: 39682048 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd6800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 365 heartbeat osd_stat(store_statfs(0x4f4d10000/0x0/0x4ffc00000, data 0x4c235df/0x4e1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:43.426880+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f657dc00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 365 ms_handle_reset con 0x5651f657dc00 session 0x5651f5f2bc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199720960 unmapped: 39649280 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f42000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 365 handle_osd_map epochs [366,366], i have 365, src has [1,366]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 366 ms_handle_reset con 0x5651f6bd6800 session 0x5651f4319680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 366 ms_handle_reset con 0x5651f4024c00 session 0x5651f584ab40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:44.427034+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f657dc00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 366 ms_handle_reset con 0x5651f657dc00 session 0x5651f7da74a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd7c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6284400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199770112 unmapped: 39600128 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 366 handle_osd_map epochs [367,367], i have 366, src has [1,367]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 367 ms_handle_reset con 0x5651f6bd7c00 session 0x5651f5ed9c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 367 ms_handle_reset con 0x5651f4db6c00 session 0x5651f50541e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 367 ms_handle_reset con 0x5651f3f42000 session 0x5651f4318780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3359884 data_alloc: 268435456 data_used: 49745920
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:45.427223+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199811072 unmapped: 39559168 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 367 ms_handle_reset con 0x5651f4024c00 session 0x5651f5ed85a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f657dc00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 367 ms_handle_reset con 0x5651f657dc00 session 0x5651f79f7e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:46.427398+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd6800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 367 ms_handle_reset con 0x5651f6bd6800 session 0x5651f5bbf2c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 200146944 unmapped: 39223296 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:47.427590+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd7c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 367 ms_handle_reset con 0x5651f6bd7c00 session 0x5651f5f47c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 367 ms_handle_reset con 0x5651f6284400 session 0x5651f5055c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 200212480 unmapped: 39157760 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:48.427744+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 367 heartbeat osd_stat(store_statfs(0x4f4d07000/0x0/0x4ffc00000, data 0x4c280cf/0x4e27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f42000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 367 ms_handle_reset con 0x5651f4024c00 session 0x5651f5bbe780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f657dc00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 200228864 unmapped: 39141376 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 367 handle_osd_map epochs [368,368], i have 367, src has [1,368]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 368 ms_handle_reset con 0x5651f657dc00 session 0x5651f7bad680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd6800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd7c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 368 ms_handle_reset con 0x5651f6bd6800 session 0x5651f5e945a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:49.427835+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 368 ms_handle_reset con 0x5651f5003c00 session 0x5651f5c1e780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204914688 unmapped: 34455552 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 368 handle_osd_map epochs [369,369], i have 368, src has [1,369]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 369 ms_handle_reset con 0x5651f6bd7c00 session 0x5651f7da7860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 369 ms_handle_reset con 0x5651f5003c00 session 0x5651f4318b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6284400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 369 ms_handle_reset con 0x5651f3f42000 session 0x5651f51201e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3353333 data_alloc: 268435456 data_used: 50663424
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 369 ms_handle_reset con 0x5651f4024c00 session 0x5651f5ef9a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:50.427986+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205029376 unmapped: 34340864 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 369 handle_osd_map epochs [370,370], i have 369, src has [1,370]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 370 ms_handle_reset con 0x5651f6284400 session 0x5651f43d4f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:51.428188+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f42000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 370 ms_handle_reset con 0x5651f3f42000 session 0x5651f5015680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205193216 unmapped: 34177024 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 370 handle_osd_map epochs [370,371], i have 370, src has [1,371]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.668113708s of 10.153851509s, submitted: 153
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 371 ms_handle_reset con 0x5651f4024c00 session 0x5651f5eb21e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:52.428309+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 371 ms_handle_reset con 0x5651f5003c00 session 0x5651f43da5a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 203087872 unmapped: 36282368 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6284400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 371 ms_handle_reset con 0x5651f6284400 session 0x5651f726fc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd7c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 371 ms_handle_reset con 0x5651f6bd7c00 session 0x5651f50b92c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f42000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 371 heartbeat osd_stat(store_statfs(0x4f4ff2000/0x0/0x4ffc00000, data 0x4937f0f/0x4b3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 371 ms_handle_reset con 0x5651f4024c00 session 0x5651f5c1fc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 371 ms_handle_reset con 0x5651f3f42000 session 0x5651f50b9860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:53.428482+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 203120640 unmapped: 36249600 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6284400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:54.428607+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 371 handle_osd_map epochs [372,372], i have 371, src has [1,372]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 203137024 unmapped: 36233216 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 372 ms_handle_reset con 0x5651f5003c00 session 0x5651f5ecba40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3356075 data_alloc: 268435456 data_used: 50651136
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:55.428737+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f657dc00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 372 ms_handle_reset con 0x5651f6284400 session 0x5651f4472000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 372 heartbeat osd_stat(store_statfs(0x4f4ff1000/0x0/0x4ffc00000, data 0x4939c28/0x4b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd6800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 372 ms_handle_reset con 0x5651f6bd6800 session 0x5651f44cc5a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f42000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 372 ms_handle_reset con 0x5651f3f42000 session 0x5651f4dd83c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 194199552 unmapped: 45170688 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5c000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 372 ms_handle_reset con 0x5651f6c5c000 session 0x5651f4318780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 372 heartbeat osd_stat(store_statfs(0x4f4ff1000/0x0/0x4ffc00000, data 0x4939c28/0x4b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 372 handle_osd_map epochs [373,373], i have 372, src has [1,373]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 373 ms_handle_reset con 0x5651f4024c00 session 0x5651f4cad680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be5c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 373 ms_handle_reset con 0x5651f6be5c00 session 0x5651f5c585a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f44000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:56.428847+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 194224128 unmapped: 45146112 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 373 ms_handle_reset con 0x5651f6c83c00 session 0x5651f5ed9c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f42000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 373 heartbeat osd_stat(store_statfs(0x4f61c9000/0x0/0x4ffc00000, data 0x375f897/0x3964000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 373 handle_osd_map epochs [373,374], i have 373, src has [1,374]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 374 ms_handle_reset con 0x5651f3f44000 session 0x5651f584a3c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 374 ms_handle_reset con 0x5651f3f42000 session 0x5651f4319680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 374 ms_handle_reset con 0x5651f4024c00 session 0x5651f7bad4a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:57.428987+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 374 ms_handle_reset con 0x5651f657dc00 session 0x5651f5f15680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 194281472 unmapped: 45088768 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 374 ms_handle_reset con 0x5651f3c31400 session 0x5651f5f2a3c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 374 ms_handle_reset con 0x5651f4025800 session 0x5651f5bd7e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f42000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:58.429120+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f44000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 374 ms_handle_reset con 0x5651f3f44000 session 0x5651f5e79c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 174211072 unmapped: 65159168 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 374 handle_osd_map epochs [374,375], i have 374, src has [1,375]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 374 handle_osd_map epochs [375,375], i have 375, src has [1,375]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 375 ms_handle_reset con 0x5651f3f42000 session 0x5651f5bd74a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:12:59.429279+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 375 ms_handle_reset con 0x5651f4024c00 session 0x5651f584a000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172711936 unmapped: 66658304 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 375 heartbeat osd_stat(store_statfs(0x4f777c000/0x0/0x4ffc00000, data 0x1e3c4db/0x2042000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f657dc00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:00.429429+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2823032 data_alloc: 218103808 data_used: 8077312
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172711936 unmapped: 66658304 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 375 handle_osd_map epochs [376,376], i have 375, src has [1,376]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:01.429574+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 376 ms_handle_reset con 0x5651f657dc00 session 0x5651f5c1f860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f42000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 376 ms_handle_reset con 0x5651f3f42000 session 0x5651f5120d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f44000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172711936 unmapped: 66658304 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 376 ms_handle_reset con 0x5651f3f44000 session 0x5651f50145a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:02.429729+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.626250267s of 10.721616745s, submitted: 331
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172744704 unmapped: 66625536 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 376 handle_osd_map epochs [376,377], i have 376, src has [1,377]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 376 handle_osd_map epochs [377,377], i have 377, src has [1,377]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:03.429920+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 377 ms_handle_reset con 0x5651f4024c00 session 0x5651f5e790e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172793856 unmapped: 66576384 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 377 heartbeat osd_stat(store_statfs(0x4f76dc000/0x0/0x4ffc00000, data 0x1e3db2c/0x2041000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 377 handle_osd_map epochs [378,378], i have 377, src has [1,378]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:04.430073+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 378 ms_handle_reset con 0x5651f4025800 session 0x5651f4472000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172793856 unmapped: 66576384 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:05.430231+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2828208 data_alloc: 218103808 data_used: 8077312
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be5c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 378 handle_osd_map epochs [379,379], i have 378, src has [1,379]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172793856 unmapped: 66576384 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 379 ms_handle_reset con 0x5651f6be5c00 session 0x5651f43da5a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f42000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 379 handle_osd_map epochs [380,380], i have 379, src has [1,380]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:06.430404+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 380 ms_handle_reset con 0x5651f3f42000 session 0x5651f5015680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172802048 unmapped: 66568192 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:07.430539+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172802048 unmapped: 66568192 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 380 heartbeat osd_stat(store_statfs(0x4f76d1000/0x0/0x4ffc00000, data 0x1e44986/0x204a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:08.430717+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172802048 unmapped: 66568192 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:09.430928+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172802048 unmapped: 66568192 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:10.431071+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2834946 data_alloc: 218103808 data_used: 8081408
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 380 handle_osd_map epochs [381,381], i have 380, src has [1,381]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172802048 unmapped: 66568192 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:11.431298+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172802048 unmapped: 66568192 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:12.431434+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 heartbeat osd_stat(store_statfs(0x4f76d0000/0x0/0x4ffc00000, data 0x1e46421/0x204d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172810240 unmapped: 66560000 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:13.431797+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172810240 unmapped: 66560000 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:14.431990+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172810240 unmapped: 66560000 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:15.432200+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2836576 data_alloc: 218103808 data_used: 8081408
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172810240 unmapped: 66560000 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:16.432349+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172810240 unmapped: 66560000 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:17.432557+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172810240 unmapped: 66560000 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 heartbeat osd_stat(store_statfs(0x4f76d0000/0x0/0x4ffc00000, data 0x1e46421/0x204d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:18.432652+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172810240 unmapped: 66560000 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:19.432750+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172810240 unmapped: 66560000 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:20.432953+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2836576 data_alloc: 218103808 data_used: 8081408
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172810240 unmapped: 66560000 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 heartbeat osd_stat(store_statfs(0x4f76d0000/0x0/0x4ffc00000, data 0x1e46421/0x204d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f44000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.089651108s of 18.392568588s, submitted: 145
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 ms_handle_reset con 0x5651f3f44000 session 0x5651f51201e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:21.433190+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 heartbeat osd_stat(store_statfs(0x4f76cf000/0x0/0x4ffc00000, data 0x1e46483/0x204e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172818432 unmapped: 66551808 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:22.433351+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 ms_handle_reset con 0x5651f4024c00 session 0x5651f4318b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172843008 unmapped: 66527232 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:23.433539+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172843008 unmapped: 66527232 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:24.433765+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 ms_handle_reset con 0x5651f4025800 session 0x5651f3f11a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172867584 unmapped: 66502656 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:25.433951+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2838627 data_alloc: 218103808 data_used: 8081408
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172867584 unmapped: 66502656 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5c000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 ms_handle_reset con 0x5651f6c5c000 session 0x5651f5054960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:26.434104+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 heartbeat osd_stat(store_statfs(0x4f76d1000/0x0/0x4ffc00000, data 0x1e46421/0x204d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172867584 unmapped: 66502656 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:27.434246+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172867584 unmapped: 66502656 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:28.434434+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172867584 unmapped: 66502656 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:29.434622+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172867584 unmapped: 66502656 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:30.434797+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 heartbeat osd_stat(store_statfs(0x4f76d1000/0x0/0x4ffc00000, data 0x1e46421/0x204d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2837754 data_alloc: 218103808 data_used: 8081408
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172867584 unmapped: 66502656 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:31.434965+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172867584 unmapped: 66502656 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:32.435124+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172867584 unmapped: 66502656 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:33.435285+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172867584 unmapped: 66502656 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:34.435454+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172867584 unmapped: 66502656 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 heartbeat osd_stat(store_statfs(0x4f76d1000/0x0/0x4ffc00000, data 0x1e46421/0x204d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:35.435669+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2837754 data_alloc: 218103808 data_used: 8081408
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172867584 unmapped: 66502656 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:36.436045+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 heartbeat osd_stat(store_statfs(0x4f76d1000/0x0/0x4ffc00000, data 0x1e46421/0x204d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172867584 unmapped: 66502656 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:37.436237+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 172867584 unmapped: 66502656 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:38.436408+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 heartbeat osd_stat(store_statfs(0x4f76d1000/0x0/0x4ffc00000, data 0x1e46421/0x204d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f42000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.561161041s of 17.648281097s, submitted: 26
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173203456 unmapped: 66166784 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:39.436573+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173203456 unmapped: 66166784 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:40.436749+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 ms_handle_reset con 0x5651f3f42000 session 0x5651f5054d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2846410 data_alloc: 218103808 data_used: 8085504
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173203456 unmapped: 66166784 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:41.436932+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f44000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 ms_handle_reset con 0x5651f3f44000 session 0x5651f5c59680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173228032 unmapped: 66142208 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:42.437104+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173228032 unmapped: 66142208 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x1e864a4/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:43.437218+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 ms_handle_reset con 0x5651f4024c00 session 0x5651f5121860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 ms_handle_reset con 0x5651f4025800 session 0x5651f5120960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173228032 unmapped: 66142208 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:44.437426+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 ms_handle_reset con 0x5651f3f43000 session 0x5651f5e941e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173244416 unmapped: 66125824 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:45.437616+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2848192 data_alloc: 218103808 data_used: 8085504
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173244416 unmapped: 66125824 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:46.437763+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 ms_handle_reset con 0x5651f3f43000 session 0x5651f4319e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173252608 unmapped: 66117632 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x1e864f6/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:47.437938+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f42000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 ms_handle_reset con 0x5651f3f42000 session 0x5651f5e78960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173252608 unmapped: 66117632 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:48.438223+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f44000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 ms_handle_reset con 0x5651f3f44000 session 0x5651f50141e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.105926514s of 10.231462479s, submitted: 32
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173252608 unmapped: 66117632 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 ms_handle_reset con 0x5651f4024c00 session 0x5651f5014f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 ms_handle_reset con 0x5651f4025800 session 0x5651f5bd6b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:49.438420+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 ms_handle_reset con 0x5651f4025800 session 0x5651f50e8d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173268992 unmapped: 66101248 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 heartbeat osd_stat(store_statfs(0x4f768f000/0x0/0x4ffc00000, data 0x1e86494/0x208f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:50.438611+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2848224 data_alloc: 218103808 data_used: 8085504
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f42000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 ms_handle_reset con 0x5651f3f42000 session 0x5651f50e9680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 ms_handle_reset con 0x5651f3f43000 session 0x5651f50e92c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173604864 unmapped: 65765376 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f44000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:51.438804+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4027800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 ms_handle_reset con 0x5651f4027800 session 0x5651f7da63c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173613056 unmapped: 65757184 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:52.439052+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173613056 unmapped: 65757184 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 heartbeat osd_stat(store_statfs(0x4f766a000/0x0/0x4ffc00000, data 0x1eaa4a4/0x20b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:53.439294+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173613056 unmapped: 65757184 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:54.439441+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173613056 unmapped: 65757184 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:55.439589+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2854230 data_alloc: 218103808 data_used: 8290304
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173613056 unmapped: 65757184 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:56.439733+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173613056 unmapped: 65757184 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 heartbeat osd_stat(store_statfs(0x4f766a000/0x0/0x4ffc00000, data 0x1eaa4a4/0x20b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:57.440047+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 ms_handle_reset con 0x5651f3f44000 session 0x5651f5f44960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 ms_handle_reset con 0x5651f4024c00 session 0x5651f4319a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f44000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173613056 unmapped: 65757184 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 ms_handle_reset con 0x5651f3f44000 session 0x5651f5f44b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:58.440330+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 heartbeat osd_stat(store_statfs(0x4f768f000/0x0/0x4ffc00000, data 0x1e86494/0x208f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173613056 unmapped: 65757184 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:13:59.440565+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173613056 unmapped: 65757184 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:00.440782+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2850104 data_alloc: 218103808 data_used: 8286208
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f42000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.846592903s of 11.944470406s, submitted: 34
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 ms_handle_reset con 0x5651f3f42000 session 0x5651f5f44f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173654016 unmapped: 65716224 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 ms_handle_reset con 0x5651f3f43000 session 0x5651f43090e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:01.441074+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173662208 unmapped: 65708032 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:02.441273+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 heartbeat osd_stat(store_statfs(0x4f76d0000/0x0/0x4ffc00000, data 0x1e46421/0x204d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173662208 unmapped: 65708032 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:03.441469+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173662208 unmapped: 65708032 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:04.441713+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173662208 unmapped: 65708032 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 ms_handle_reset con 0x5651f4025800 session 0x5651f7da7860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:05.441926+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 ms_handle_reset con 0x5651f4025800 session 0x5651f4dd92c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2841690 data_alloc: 218103808 data_used: 8081408
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 174120960 unmapped: 65249280 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:06.442106+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 ms_handle_reset con 0x5651f3f43000 session 0x5651f5e94f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 175169536 unmapped: 64200704 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:07.442290+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 heartbeat osd_stat(store_statfs(0x4f76d0000/0x0/0x4ffc00000, data 0x1e46421/0x204d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 175169536 unmapped: 64200704 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:08.442472+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 175169536 unmapped: 64200704 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:09.442678+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f44000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 ms_handle_reset con 0x5651f3f44000 session 0x5651f5bd74a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 ms_handle_reset con 0x5651f4024c00 session 0x5651f79f7e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4027800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 heartbeat osd_stat(store_statfs(0x4f76d0000/0x0/0x4ffc00000, data 0x1e46421/0x204d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 ms_handle_reset con 0x5651f4027800 session 0x5651f726e5a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 175390720 unmapped: 63979520 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:10.442898+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2908043 data_alloc: 218103808 data_used: 8081408
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 175390720 unmapped: 63979520 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:11.443123+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 175390720 unmapped: 63979520 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:12.443285+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 381 handle_osd_map epochs [382,382], i have 381, src has [1,382]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.412140846s of 11.653039932s, submitted: 64
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 175390720 unmapped: 63979520 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:13.443479+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 382 heartbeat osd_stat(store_statfs(0x4f6ea7000/0x0/0x4ffc00000, data 0x266df9e/0x2876000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 175390720 unmapped: 63979520 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:14.443693+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 175390720 unmapped: 63979520 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:15.444243+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2912217 data_alloc: 218103808 data_used: 8089600
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 175390720 unmapped: 63979520 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:16.444550+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4027800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 382 ms_handle_reset con 0x5651f4027800 session 0x5651f3f5d0e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 175407104 unmapped: 63963136 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:17.444700+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 382 heartbeat osd_stat(store_statfs(0x4f6ea7000/0x0/0x4ffc00000, data 0x266e000/0x2877000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 175407104 unmapped: 63963136 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:18.444916+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 175407104 unmapped: 63963136 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:19.445146+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 175407104 unmapped: 63963136 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:20.445388+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2913889 data_alloc: 218103808 data_used: 8089600
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 175407104 unmapped: 63963136 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:21.445722+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 382 heartbeat osd_stat(store_statfs(0x4f6ea7000/0x0/0x4ffc00000, data 0x266e000/0x2877000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 382 ms_handle_reset con 0x5651f3f43000 session 0x5651f43dbe00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 175407104 unmapped: 63963136 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:22.445977+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f44000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 382 ms_handle_reset con 0x5651f3f44000 session 0x5651f4cac000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 382 ms_handle_reset con 0x5651f4024c00 session 0x5651f4309680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.502806664s of 10.528635025s, submitted: 7
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 175423488 unmapped: 63946752 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 382 ms_handle_reset con 0x5651f4025800 session 0x5651f43d21e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:23.446257+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 382 heartbeat osd_stat(store_statfs(0x4f6ea6000/0x0/0x4ffc00000, data 0x266e010/0x2878000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 175423488 unmapped: 63946752 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:24.446427+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 175292416 unmapped: 64077824 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:25.446554+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2963200 data_alloc: 234881024 data_used: 14487552
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 177332224 unmapped: 62038016 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:26.446731+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 382 ms_handle_reset con 0x5651f4025800 session 0x5651f4318960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 382 ms_handle_reset con 0x5651f3f43000 session 0x5651f3f5c000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 177332224 unmapped: 62038016 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:27.446909+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f44000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 382 ms_handle_reset con 0x5651f4024c00 session 0x5651f43090e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4027800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 382 ms_handle_reset con 0x5651f3f44000 session 0x5651f5055860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 382 ms_handle_reset con 0x5651f4027800 session 0x5651f5f46b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 382 heartbeat osd_stat(store_statfs(0x4f6ea6000/0x0/0x4ffc00000, data 0x266e010/0x2878000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 177315840 unmapped: 62054400 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:28.447102+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 382 ms_handle_reset con 0x5651f3f43000 session 0x5651f43dd4a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 382 ms_handle_reset con 0x5651f4024c00 session 0x5651f7bac780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 177266688 unmapped: 62103552 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:29.447293+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 177266688 unmapped: 62103552 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:30.447437+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2971217 data_alloc: 234881024 data_used: 15970304
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 382 ms_handle_reset con 0x5651f4025800 session 0x5651f3f105a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 177266688 unmapped: 62103552 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:31.447620+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 382 handle_osd_map epochs [382,383], i have 382, src has [1,383]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 177274880 unmapped: 62095360 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:32.447736+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 383 ms_handle_reset con 0x5651f5003800 session 0x5651f7da6000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 174080000 unmapped: 65290240 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:33.447929+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 383 heartbeat osd_stat(store_statfs(0x4f76ca000/0x0/0x4ffc00000, data 0x1e49b6f/0x2053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4020c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.782951355s of 11.013412476s, submitted: 78
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 174342144 unmapped: 65028096 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:34.448109+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 174342144 unmapped: 65028096 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:35.448335+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 383 handle_osd_map epochs [384,384], i have 383, src has [1,384]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 heartbeat osd_stat(store_statfs(0x4f768a000/0x0/0x4ffc00000, data 0x1e89b6f/0x2093000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2870307 data_alloc: 218103808 data_used: 8560640
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f652d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 ms_handle_reset con 0x5651f652d400 session 0x5651f5bbe000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 ms_handle_reset con 0x5651f4020c00 session 0x5651f5121860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173244416 unmapped: 66125824 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 ms_handle_reset con 0x5651f3f43000 session 0x5651f5f2a3c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:36.448496+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173244416 unmapped: 66125824 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:37.448684+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 ms_handle_reset con 0x5651f4024c00 session 0x5651f5f450e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173244416 unmapped: 66125824 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:38.449067+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 heartbeat osd_stat(store_statfs(0x4f7686000/0x0/0x4ffc00000, data 0x1e8b644/0x2098000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173244416 unmapped: 66125824 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:39.449269+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 ms_handle_reset con 0x5651f4025800 session 0x5651f5121680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 173244416 unmapped: 66125824 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5003800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:40.449429+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 ms_handle_reset con 0x5651f5003800 session 0x5651f4309a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2870610 data_alloc: 218103808 data_used: 8036352
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 ms_handle_reset con 0x5651f3f43000 session 0x5651f726fc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4020c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 ms_handle_reset con 0x5651f4020c00 session 0x5651f43dba40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 174055424 unmapped: 65314816 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:41.449637+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 174055424 unmapped: 65314816 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:42.449864+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 174055424 unmapped: 65314816 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:43.450085+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 heartbeat osd_stat(store_statfs(0x4f7483000/0x0/0x4ffc00000, data 0x208e644/0x229b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 174055424 unmapped: 65314816 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:44.450286+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 174055424 unmapped: 65314816 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:45.450445+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2898292 data_alloc: 218103808 data_used: 8044544
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 174055424 unmapped: 65314816 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:46.450595+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 174055424 unmapped: 65314816 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:47.450792+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.582318306s of 13.777740479s, submitted: 82
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 ms_handle_reset con 0x5651f4024c00 session 0x5651f7bac3c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 heartbeat osd_stat(store_statfs(0x4f7483000/0x0/0x4ffc00000, data 0x208e644/0x229b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 ms_handle_reset con 0x5651f4025800 session 0x5651f43dc5a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 174080000 unmapped: 65290240 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:48.450948+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 ms_handle_reset con 0x5651f6c83400 session 0x5651f4dd90e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 ms_handle_reset con 0x5651f6c83400 session 0x5651f72334a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 heartbeat osd_stat(store_statfs(0x4f74c3000/0x0/0x4ffc00000, data 0x204e644/0x225b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 174104576 unmapped: 65265664 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:49.451101+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 174104576 unmapped: 65265664 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:50.451337+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3217183 data_alloc: 218103808 data_used: 8044544
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 174112768 unmapped: 65257472 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:51.451510+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 174112768 unmapped: 65257472 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:52.451655+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:53.451886+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 174112768 unmapped: 65257472 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:54.452014+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 174112768 unmapped: 65257472 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 heartbeat osd_stat(store_statfs(0x4f46ca000/0x0/0x4ffc00000, data 0x4e47644/0x5054000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 ms_handle_reset con 0x5651f3f43000 session 0x5651f7badc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:55.452363+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 174465024 unmapped: 64905216 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3220537 data_alloc: 218103808 data_used: 8044544
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4020c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:56.452563+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 174465024 unmapped: 64905216 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:57.452700+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 174202880 unmapped: 65167360 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 ms_handle_reset con 0x5651f4025800 session 0x5651f4472d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 heartbeat osd_stat(store_statfs(0x4f46a0000/0x0/0x4ffc00000, data 0x4e71644/0x507e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:58.452831+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 174202880 unmapped: 65167360 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 ms_handle_reset con 0x5651f4c3d000 session 0x5651f5eca960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 heartbeat osd_stat(store_statfs(0x4f46a0000/0x0/0x4ffc00000, data 0x4e71644/0x507e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:14:59.452966+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 174202880 unmapped: 65167360 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 heartbeat osd_stat(store_statfs(0x4f46a0000/0x0/0x4ffc00000, data 0x4e71644/0x507e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7cd1c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 ms_handle_reset con 0x5651f7cd1c00 session 0x5651f5c58d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.361016273s of 11.810948372s, submitted: 46
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 ms_handle_reset con 0x5651f3f43000 session 0x5651f44cd0e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:00.453088+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 174202880 unmapped: 65167360 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 heartbeat osd_stat(store_statfs(0x4f46a0000/0x0/0x4ffc00000, data 0x4e71644/0x507e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3226109 data_alloc: 218103808 data_used: 8732672
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:01.453363+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 174202880 unmapped: 65167360 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:02.453546+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 174202880 unmapped: 65167360 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:03.453670+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 174202880 unmapped: 65167360 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:04.453741+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 174956544 unmapped: 64413696 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:05.453857+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 175128576 unmapped: 64241664 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3261149 data_alloc: 234881024 data_used: 13668352
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 heartbeat osd_stat(store_statfs(0x4f46a0000/0x0/0x4ffc00000, data 0x4e71644/0x507e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:06.454265+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 heartbeat osd_stat(store_statfs(0x4f46a0000/0x0/0x4ffc00000, data 0x4e71644/0x507e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 175128576 unmapped: 64241664 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:07.454477+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 179716096 unmapped: 59654144 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:08.454604+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 180985856 unmapped: 58384384 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 heartbeat osd_stat(store_statfs(0x4f3f5a000/0x0/0x4ffc00000, data 0x55b4644/0x57c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:09.454747+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 180985856 unmapped: 58384384 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:10.454899+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 180985856 unmapped: 58384384 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3342461 data_alloc: 234881024 data_used: 13893632
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 heartbeat osd_stat(store_statfs(0x4f3f43000/0x0/0x4ffc00000, data 0x55c5644/0x57d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:11.455096+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 180985856 unmapped: 58384384 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:12.455205+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 180985856 unmapped: 58384384 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 heartbeat osd_stat(store_statfs(0x4f3f43000/0x0/0x4ffc00000, data 0x55c5644/0x57d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:13.455317+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 180985856 unmapped: 58384384 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.353536606s of 13.689222336s, submitted: 109
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:14.455986+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195584000 unmapped: 43786240 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 heartbeat osd_stat(store_statfs(0x4f2b37000/0x0/0x4ffc00000, data 0x55ea644/0x57f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:15.456105+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 197935104 unmapped: 41435136 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3463065 data_alloc: 234881024 data_used: 16424960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:16.456223+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 192225280 unmapped: 47144960 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:17.456389+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 192225280 unmapped: 47144960 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 384 handle_osd_map epochs [384,385], i have 384, src has [1,385]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 ms_handle_reset con 0x5651f4c3d000 session 0x5651f5e7a3c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 ms_handle_reset con 0x5651f6c83400 session 0x5651f72330e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:18.456618+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 192258048 unmapped: 47112192 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 heartbeat osd_stat(store_statfs(0x4f1b5d000/0x0/0x4ffc00000, data 0x6810393/0x6a20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:19.456903+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 192258048 unmapped: 47112192 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 heartbeat osd_stat(store_statfs(0x4f1b5d000/0x0/0x4ffc00000, data 0x6810393/0x6a20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:20.457104+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 192266240 unmapped: 47104000 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 heartbeat osd_stat(store_statfs(0x4f1b59000/0x0/0x4ffc00000, data 0x69ca393/0x6a25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3516302 data_alloc: 234881024 data_used: 16445440
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:21.457356+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 192266240 unmapped: 47104000 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:22.457539+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 192266240 unmapped: 47104000 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 heartbeat osd_stat(store_statfs(0x4f1b59000/0x0/0x4ffc00000, data 0x69ca393/0x6a25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:23.457763+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 192266240 unmapped: 47104000 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:24.458098+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 192266240 unmapped: 47104000 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be3000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bda000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.375948906s of 11.834363937s, submitted: 176
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:25.458282+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 ms_handle_reset con 0x5651f6bda000 session 0x5651f43da1e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 ms_handle_reset con 0x5651f6be3000 session 0x5651f4318f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 192266240 unmapped: 47104000 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3519688 data_alloc: 234881024 data_used: 16453632
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:26.458447+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 192266240 unmapped: 47104000 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 heartbeat osd_stat(store_statfs(0x4f1b57000/0x0/0x4ffc00000, data 0x69ca405/0x6a27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:27.458618+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 192266240 unmapped: 47104000 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:28.458912+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 ms_handle_reset con 0x5651f4025800 session 0x5651f79fcf00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 192266240 unmapped: 47104000 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 heartbeat osd_stat(store_statfs(0x4f1b54000/0x0/0x4ffc00000, data 0x69cd405/0x6a2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 ms_handle_reset con 0x5651f3f43000 session 0x5651f7bacb40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:29.459105+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 192266240 unmapped: 47104000 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 ms_handle_reset con 0x5651f4c3d000 session 0x5651f5ef9a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:30.459261+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 192266240 unmapped: 47104000 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bda000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 ms_handle_reset con 0x5651f6bda000 session 0x5651f5120d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3521172 data_alloc: 234881024 data_used: 16490496
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 ms_handle_reset con 0x5651f6c83400 session 0x5651f43dd0e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 ms_handle_reset con 0x5651f3f43000 session 0x5651f4cacf00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:31.459432+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 192274432 unmapped: 47095808 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:32.459664+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 192274432 unmapped: 47095808 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 heartbeat osd_stat(store_statfs(0x4f1b53000/0x0/0x4ffc00000, data 0x69cd428/0x6a2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [1])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:33.459797+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 192299008 unmapped: 47071232 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:34.460079+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 192299008 unmapped: 47071232 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:35.460333+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 192299008 unmapped: 47071232 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3524377 data_alloc: 234881024 data_used: 16658432
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:36.460528+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 heartbeat osd_stat(store_statfs(0x4f1b53000/0x0/0x4ffc00000, data 0x69cd428/0x6a2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 192299008 unmapped: 47071232 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:37.460652+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 192299008 unmapped: 47071232 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:38.460800+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 192299008 unmapped: 47071232 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.731799126s of 13.777522087s, submitted: 14
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:39.460932+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 194420736 unmapped: 44949504 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:40.461073+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 194420736 unmapped: 44949504 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3547473 data_alloc: 234881024 data_used: 16658432
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:41.461279+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 194420736 unmapped: 44949504 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 heartbeat osd_stat(store_statfs(0x4f09ae000/0x0/0x4ffc00000, data 0x6bfd428/0x6a30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:42.461431+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 194420736 unmapped: 44949504 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:43.461572+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 194871296 unmapped: 44498944 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:44.461794+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 194789376 unmapped: 44580864 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:45.462022+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 194789376 unmapped: 44580864 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 ms_handle_reset con 0x5651f4c3d000 session 0x5651f43da780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3562920 data_alloc: 234881024 data_used: 19763200
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:46.462145+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 heartbeat osd_stat(store_statfs(0x4f09ae000/0x0/0x4ffc00000, data 0x6bfd428/0x6a30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 194789376 unmapped: 44580864 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bda000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 ms_handle_reset con 0x5651f6bda000 session 0x5651f5f441e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:47.462398+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd9c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 ms_handle_reset con 0x5651f6bd9c00 session 0x5651f44721e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f42c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 194789376 unmapped: 44580864 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 ms_handle_reset con 0x5651f3f42c00 session 0x5651f5c1f2c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f42c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:48.462545+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 194945024 unmapped: 44425216 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 heartbeat osd_stat(store_statfs(0x4f098a000/0x0/0x4ffc00000, data 0x6c21428/0x6a54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:49.462746+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 44032000 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.045148849s of 11.134822845s, submitted: 23
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:50.462932+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 44032000 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 heartbeat osd_stat(store_statfs(0x4f098a000/0x0/0x4ffc00000, data 0x6c21428/0x6a54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3568717 data_alloc: 234881024 data_used: 20164608
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:51.463218+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 44032000 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:52.463360+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195354624 unmapped: 44015616 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:53.463514+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195354624 unmapped: 44015616 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:54.463668+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195403776 unmapped: 43966464 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:55.463873+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195403776 unmapped: 43966464 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 heartbeat osd_stat(store_statfs(0x4f0970000/0x0/0x4ffc00000, data 0x6c68428/0x6a6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3588937 data_alloc: 234881024 data_used: 20287488
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:56.464052+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195403776 unmapped: 43966464 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:57.464197+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195403776 unmapped: 43966464 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:58.464307+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195403776 unmapped: 43966464 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:15:59.464461+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195403776 unmapped: 43966464 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:00.464594+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 heartbeat osd_stat(store_statfs(0x4f0970000/0x0/0x4ffc00000, data 0x6c68428/0x6a6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195403776 unmapped: 43966464 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3589577 data_alloc: 234881024 data_used: 20303872
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:01.464766+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.375447273s of 11.394979477s, submitted: 16
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195477504 unmapped: 43892736 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 heartbeat osd_stat(store_statfs(0x4f0970000/0x0/0x4ffc00000, data 0x6c68428/0x6a6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:02.465196+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195665920 unmapped: 43704320 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:03.465341+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195780608 unmapped: 43589632 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:04.465638+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195780608 unmapped: 43589632 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:05.465800+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195780608 unmapped: 43589632 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607669 data_alloc: 234881024 data_used: 21860352
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:06.465963+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195780608 unmapped: 43589632 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:07.466857+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195837952 unmapped: 43532288 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 heartbeat osd_stat(store_statfs(0x4f0970000/0x0/0x4ffc00000, data 0x6c68428/0x6a6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 ms_handle_reset con 0x5651f4025800 session 0x5651f43d50e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:08.468275+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195846144 unmapped: 43524096 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 ms_handle_reset con 0x5651f4c3d000 session 0x5651f50145a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:09.468477+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 heartbeat osd_stat(store_statfs(0x4f0970000/0x0/0x4ffc00000, data 0x6c68428/0x6a6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195846144 unmapped: 43524096 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:10.468658+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195846144 unmapped: 43524096 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3606549 data_alloc: 234881024 data_used: 21856256
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:11.468896+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd9c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 ms_handle_reset con 0x5651f6bd9c00 session 0x5651f43dba40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bda000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195846144 unmapped: 43524096 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.110226631s of 10.199625015s, submitted: 28
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 ms_handle_reset con 0x5651f6bda000 session 0x5651f79f74a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 heartbeat osd_stat(store_statfs(0x4f0972000/0x0/0x4ffc00000, data 0x6c683f5/0x6a6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:12.469067+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195846144 unmapped: 43524096 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bc7000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:13.469252+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 ms_handle_reset con 0x5651f6bc7000 session 0x5651f72321e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195846144 unmapped: 43524096 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 ms_handle_reset con 0x5651f4025800 session 0x5651f7233a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:14.469682+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 386 ms_handle_reset con 0x5651f4020c00 session 0x5651f5e79a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 386 ms_handle_reset con 0x5651f4024c00 session 0x5651f7233e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195846144 unmapped: 43524096 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 386 ms_handle_reset con 0x5651f4c3d000 session 0x5651f7da6780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:15.469924+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195854336 unmapped: 43515904 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578089 data_alloc: 234881024 data_used: 21712896
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:16.470211+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd9c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 386 ms_handle_reset con 0x5651f6bd9c00 session 0x5651f5c58d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4020c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 386 heartbeat osd_stat(store_statfs(0x4f09b9000/0x0/0x4ffc00000, data 0x6814ef2/0x6a24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195862528 unmapped: 43507712 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 386 ms_handle_reset con 0x5651f4020c00 session 0x5651f50e81e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:17.470567+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195862528 unmapped: 43507712 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:18.470741+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195870720 unmapped: 43499520 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:19.470988+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 386 handle_osd_map epochs [387,387], i have 386, src has [1,387]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195870720 unmapped: 43499520 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 387 handle_osd_map epochs [388,388], i have 387, src has [1,388]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 388 ms_handle_reset con 0x5651f4024c00 session 0x5651f7bcb0e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:20.471259+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195878912 unmapped: 43491328 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3449239 data_alloc: 234881024 data_used: 17260544
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:21.471451+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 388 heartbeat osd_stat(store_statfs(0x4f133a000/0x0/0x4ffc00000, data 0x5e934de/0x60a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195878912 unmapped: 43491328 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:22.471645+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195878912 unmapped: 43491328 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:23.471830+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195878912 unmapped: 43491328 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:24.472100+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195878912 unmapped: 43491328 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 388 heartbeat osd_stat(store_statfs(0x4f133a000/0x0/0x4ffc00000, data 0x5e934de/0x60a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 388 handle_osd_map epochs [389,389], i have 388, src has [1,389]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.844413757s of 13.107558250s, submitted: 101
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:25.472324+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195878912 unmapped: 43491328 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452213 data_alloc: 234881024 data_used: 17260544
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:26.472552+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195878912 unmapped: 43491328 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f4025800 session 0x5651f79fc3c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:27.472843+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f4c3d000 session 0x5651f50543c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195715072 unmapped: 43655168 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f3f42c00 session 0x5651f5ef9860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f3f43000 session 0x5651f5f44b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4020c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f4020c00 session 0x5651f5f47e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:28.473101+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195723264 unmapped: 43646976 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 heartbeat osd_stat(store_statfs(0x4f135c000/0x0/0x4ffc00000, data 0x5e70f41/0x6082000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f4024c00 session 0x5651f4318960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:29.473310+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195731456 unmapped: 43638784 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:30.473475+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195731456 unmapped: 43638784 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3453577 data_alloc: 234881024 data_used: 18333696
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:31.473655+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f4025800 session 0x5651f4319e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4c3d000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f4c3d000 session 0x5651f50541e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 195731456 unmapped: 43638784 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f3f43000 session 0x5651f5055c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4020c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f4020c00 session 0x5651f7233a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:32.473806+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f4024c00 session 0x5651f43d50e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f4025800 session 0x5651f5f441e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 188506112 unmapped: 50864128 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:33.473960+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 188506112 unmapped: 50864128 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 heartbeat osd_stat(store_statfs(0x4f2b52000/0x0/0x4ffc00000, data 0x2929fb3/0x2b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:34.474133+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 188506112 unmapped: 50864128 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:35.474320+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 188506112 unmapped: 50864128 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3034831 data_alloc: 218103808 data_used: 8609792
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:36.474455+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 188506112 unmapped: 50864128 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:37.474629+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 heartbeat osd_stat(store_statfs(0x4f2b52000/0x0/0x4ffc00000, data 0x2929fb3/0x2b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 188506112 unmapped: 50864128 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:38.474760+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 188506112 unmapped: 50864128 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:39.474936+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 188506112 unmapped: 50864128 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:40.475084+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bda000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f6bda000 session 0x5651f726e5a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f3f43000 session 0x5651f44cc5a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 188448768 unmapped: 50921472 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3031791 data_alloc: 218103808 data_used: 8085504
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:41.475282+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 188448768 unmapped: 50921472 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4020c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.666786194s of 17.062187195s, submitted: 135
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f4020c00 session 0x5651f79fc960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:42.475412+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 heartbeat osd_stat(store_statfs(0x4f2b52000/0x0/0x4ffc00000, data 0x2929fb3/0x2b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 187203584 unmapped: 52166656 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:43.475564+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 187211776 unmapped: 52158464 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:44.475694+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 189833216 unmapped: 49537024 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:45.475820+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 189833216 unmapped: 49537024 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3107417 data_alloc: 234881024 data_used: 18026496
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:46.476070+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4b52800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 26K writes, 101K keys, 26K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s
                                           Cumulative WAL: 26K writes, 9708 syncs, 2.69 writes per sync, written: 0.07 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 14K writes, 54K keys, 14K commit groups, 1.0 writes per commit group, ingest: 39.04 MB, 0.07 MB/s
                                           Interval WAL: 14K writes, 6082 syncs, 2.37 writes per sync, written: 0.04 GB, 0.07 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 189833216 unmapped: 49537024 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:47.476262+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 heartbeat osd_stat(store_statfs(0x4f483c000/0x0/0x4ffc00000, data 0x298dfd6/0x2ba2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 189833216 unmapped: 49537024 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f4b52800 session 0x5651f5f45c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:48.476437+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 189833216 unmapped: 49537024 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:49.476612+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 189833216 unmapped: 49537024 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 heartbeat osd_stat(store_statfs(0x4f483c000/0x0/0x4ffc00000, data 0x298dfd6/0x2ba2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:50.476771+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 189833216 unmapped: 49537024 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:51.477044+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3111015 data_alloc: 234881024 data_used: 18034688
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 189833216 unmapped: 49537024 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:52.477266+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 189833216 unmapped: 49537024 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.127324104s of 11.185605049s, submitted: 14
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:53.477408+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 197836800 unmapped: 41533440 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:54.477543+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199180288 unmapped: 40189952 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:55.477692+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199180288 unmapped: 40189952 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 heartbeat osd_stat(store_statfs(0x4f3976000/0x0/0x4ffc00000, data 0x384dfd6/0x3a62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:56.477922+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3249767 data_alloc: 234881024 data_used: 20340736
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199180288 unmapped: 40189952 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:57.478220+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199180288 unmapped: 40189952 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:58.478389+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199180288 unmapped: 40189952 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:16:59.478535+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7d92c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f7d92c00 session 0x5651f4dd81e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199180288 unmapped: 40189952 heap: 239370240 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f657fc00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:00.478648+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f657fc00 session 0x5651f5f2a3c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 48570368 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:01.478848+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3524253 data_alloc: 234881024 data_used: 20344832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 heartbeat osd_stat(store_statfs(0x4f119d000/0x0/0x4ffc00000, data 0x602cfd6/0x6241000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199229440 unmapped: 48537600 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:02.479075+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199229440 unmapped: 48537600 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:03.479302+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.021449089s of 10.641031265s, submitted: 182
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199335936 unmapped: 48431104 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f4024c00 session 0x5651f4cade00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f4025800 session 0x5651f5055e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:04.479769+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f3f43000 session 0x5651f3f105a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 heartbeat osd_stat(store_statfs(0x4f1187000/0x0/0x4ffc00000, data 0x6041fd6/0x6256000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199286784 unmapped: 48480256 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:05.480226+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199286784 unmapped: 48480256 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:06.480468+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3518033 data_alloc: 234881024 data_used: 20250624
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199286784 unmapped: 48480256 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:07.480611+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199286784 unmapped: 48480256 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:08.480744+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4020c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f4020c00 session 0x5651f72332c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 heartbeat osd_stat(store_statfs(0x4f11ac000/0x0/0x4ffc00000, data 0x601dfb3/0x6231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199286784 unmapped: 48480256 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:09.480902+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199286784 unmapped: 48480256 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4b52800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f4b52800 session 0x5651f7233e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:10.481228+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4b52800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f4b52800 session 0x5651f5bd74a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f3f43000 session 0x5651f5bd7680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199286784 unmapped: 48480256 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:11.481427+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3518033 data_alloc: 234881024 data_used: 20250624
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199286784 unmapped: 48480256 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:12.481575+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199286784 unmapped: 48480256 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:13.481795+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 heartbeat osd_stat(store_statfs(0x4f11ac000/0x0/0x4ffc00000, data 0x601dfb3/0x6231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199286784 unmapped: 48480256 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 heartbeat osd_stat(store_statfs(0x4f11ac000/0x0/0x4ffc00000, data 0x601dfb3/0x6231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4020c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f4020c00 session 0x5651f5054d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:14.481960+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f4024c00 session 0x5651f50143c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 199286784 unmapped: 48480256 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:15.482097+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7d92c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f7d92c00 session 0x5651f5055860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.561026573s of 11.702827454s, submitted: 53
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f3f43000 session 0x5651f5e78780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 210165760 unmapped: 37601280 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4020c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:16.482233+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 heartbeat osd_stat(store_statfs(0x4f1187000/0x0/0x4ffc00000, data 0x6041fe5/0x6257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3614488 data_alloc: 251658240 data_used: 32944128
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 37568512 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:17.482410+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 37568512 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:18.482468+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 210616320 unmapped: 37150720 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 heartbeat osd_stat(store_statfs(0x4f1187000/0x0/0x4ffc00000, data 0x6041fe5/0x6257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:19.482794+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 210542592 unmapped: 37224448 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:20.482969+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 210542592 unmapped: 37224448 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:21.483128+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3616728 data_alloc: 251658240 data_used: 33161216
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 210542592 unmapped: 37224448 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:22.483202+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 heartbeat osd_stat(store_statfs(0x4f1187000/0x0/0x4ffc00000, data 0x6041fe5/0x6257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 210575360 unmapped: 37191680 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:23.483318+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 210575360 unmapped: 37191680 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:24.483489+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 210575360 unmapped: 37191680 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:25.483612+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.844915390s of 10.000625610s, submitted: 61
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 224493568 unmapped: 23273472 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 heartbeat osd_stat(store_statfs(0x4f0e07000/0x0/0x4ffc00000, data 0x6041fe5/0x6257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:26.483729+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3620240 data_alloc: 251658240 data_used: 33054720
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 217088000 unmapped: 30679040 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:27.483931+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 217104384 unmapped: 30662656 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:28.484176+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 217694208 unmapped: 30072832 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:29.484344+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 217825280 unmapped: 29941760 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:30.484473+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 217825280 unmapped: 29941760 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:31.484667+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3765460 data_alloc: 251658240 data_used: 34418688
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 heartbeat osd_stat(store_statfs(0x4efcdc000/0x0/0x4ffc00000, data 0x70dafe5/0x72f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 217825280 unmapped: 29941760 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:32.484795+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 217825280 unmapped: 29941760 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:33.485173+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 217825280 unmapped: 29941760 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:34.485494+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 217825280 unmapped: 29941760 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:35.485640+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 heartbeat osd_stat(store_statfs(0x4efcdc000/0x0/0x4ffc00000, data 0x70dafe5/0x72f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 217825280 unmapped: 29941760 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 heartbeat osd_stat(store_statfs(0x4efcdc000/0x0/0x4ffc00000, data 0x70dafe5/0x72f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:36.486000+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3765460 data_alloc: 251658240 data_used: 34418688
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 heartbeat osd_stat(store_statfs(0x4efcdc000/0x0/0x4ffc00000, data 0x70dafe5/0x72f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 217825280 unmapped: 29941760 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:37.486406+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 217825280 unmapped: 29941760 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:38.486773+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 217825280 unmapped: 29941760 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.418999672s of 13.648541451s, submitted: 78
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:39.486917+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f4025800 session 0x5651f4cad4a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4b52800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 ms_handle_reset con 0x5651f4b52800 session 0x5651f43da780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 217849856 unmapped: 29917184 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:40.487123+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5d000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 389 handle_osd_map epochs [390,390], i have 389, src has [1,390]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 ms_handle_reset con 0x5651f6c5d000 session 0x5651f4309a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 217849856 unmapped: 29917184 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:41.487483+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3768226 data_alloc: 251658240 data_used: 35381248
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f657e400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bc7c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 ms_handle_reset con 0x5651f6bc7c00 session 0x5651f4cad2c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 217849856 unmapped: 29917184 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 ms_handle_reset con 0x5651f657e400 session 0x5651f5054780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:42.487630+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 heartbeat osd_stat(store_statfs(0x4efcda000/0x0/0x4ffc00000, data 0x70dcb62/0x72f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 217849856 unmapped: 29917184 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:43.487901+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 217849856 unmapped: 29917184 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:44.488189+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 217849856 unmapped: 29917184 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:45.488461+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 217849856 unmapped: 29917184 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:46.488607+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3770266 data_alloc: 251658240 data_used: 35381248
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 heartbeat osd_stat(store_statfs(0x4efcda000/0x0/0x4ffc00000, data 0x70dcbc4/0x72f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 217849856 unmapped: 29917184 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:47.488856+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 217849856 unmapped: 29917184 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:48.489325+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 217989120 unmapped: 29777920 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:49.489521+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 217989120 unmapped: 29777920 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 heartbeat osd_stat(store_statfs(0x4efcbf000/0x0/0x4ffc00000, data 0x75fcbc4/0x730f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:50.489879+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 217989120 unmapped: 29777920 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:51.490207+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3822912 data_alloc: 251658240 data_used: 35381248
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 217989120 unmapped: 29777920 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:52.490332+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.336811066s of 13.449083328s, submitted: 38
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 heartbeat osd_stat(store_statfs(0x4efcbf000/0x0/0x4ffc00000, data 0x75fcbc4/0x730f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [0,0,1])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 218038272 unmapped: 29728768 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 ms_handle_reset con 0x5651f3f43000 session 0x5651f5e7b680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:53.493251+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 heartbeat osd_stat(store_statfs(0x4efcbe000/0x0/0x4ffc00000, data 0x75fcbe7/0x7310000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 29687808 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:54.493403+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 29687808 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4b52800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 ms_handle_reset con 0x5651f4b52800 session 0x5651f4472b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:55.493606+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 218185728 unmapped: 29581312 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:56.493788+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5d000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 ms_handle_reset con 0x5651f6c5d000 session 0x5651f4cd74a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3823161 data_alloc: 251658240 data_used: 35426304
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 heartbeat osd_stat(store_statfs(0x4efcbe000/0x0/0x4ffc00000, data 0x75fcbe7/0x7310000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdb800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 ms_handle_reset con 0x5651f6bdb800 session 0x5651f4cd7a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 218185728 unmapped: 29581312 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdb800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:57.493910+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 ms_handle_reset con 0x5651f6bdb800 session 0x5651f4cd65a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 heartbeat osd_stat(store_statfs(0x4efcbd000/0x0/0x4ffc00000, data 0x75fcbf7/0x7311000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 218185728 unmapped: 29581312 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:58.494044+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 218185728 unmapped: 29581312 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:17:59.494196+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 heartbeat osd_stat(store_statfs(0x4efcbd000/0x0/0x4ffc00000, data 0x75fcbf7/0x7311000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 218185728 unmapped: 29581312 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:00.494422+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 218185728 unmapped: 29581312 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:01.494645+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4b52800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3825927 data_alloc: 251658240 data_used: 35500032
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 218226688 unmapped: 29540352 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:02.494782+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 heartbeat osd_stat(store_statfs(0x4efcbd000/0x0/0x4ffc00000, data 0x75fcbf7/0x7311000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 218259456 unmapped: 29507584 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:03.495000+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 218259456 unmapped: 29507584 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:04.495126+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 218259456 unmapped: 29507584 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:05.495218+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.415917397s of 12.859089851s, submitted: 114
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220258304 unmapped: 27508736 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:06.495340+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3843395 data_alloc: 251658240 data_used: 38215680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220258304 unmapped: 27508736 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:07.495544+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 heartbeat osd_stat(store_statfs(0x4efcb3000/0x0/0x4ffc00000, data 0x7606bf7/0x731b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220258304 unmapped: 27508736 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:08.495669+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220258304 unmapped: 27508736 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:09.495791+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220258304 unmapped: 27508736 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:10.495919+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 heartbeat osd_stat(store_statfs(0x4efcb3000/0x0/0x4ffc00000, data 0x7606bf7/0x731b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 221863936 unmapped: 25903104 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:11.496048+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 heartbeat osd_stat(store_statfs(0x4efbe1000/0x0/0x4ffc00000, data 0x76d8bf7/0x73ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3868151 data_alloc: 251658240 data_used: 39751680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222765056 unmapped: 25001984 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:12.496207+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222765056 unmapped: 25001984 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:13.496378+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222765056 unmapped: 25001984 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:14.496527+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222765056 unmapped: 25001984 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:15.496668+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222765056 unmapped: 25001984 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:16.496791+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3871031 data_alloc: 251658240 data_used: 40013824
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 heartbeat osd_stat(store_statfs(0x4efbe1000/0x0/0x4ffc00000, data 0x76d8bf7/0x73ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.428494453s of 11.479592323s, submitted: 16
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222765056 unmapped: 25001984 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:17.496953+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222765056 unmapped: 25001984 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:18.497124+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 heartbeat osd_stat(store_statfs(0x4efbdf000/0x0/0x4ffc00000, data 0x76dabf7/0x73ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222765056 unmapped: 25001984 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:19.497361+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222765056 unmapped: 25001984 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:20.497488+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222765056 unmapped: 25001984 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:21.497627+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3871251 data_alloc: 251658240 data_used: 40013824
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222765056 unmapped: 25001984 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:22.497808+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 heartbeat osd_stat(store_statfs(0x4efbdf000/0x0/0x4ffc00000, data 0x76dabf7/0x73ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222765056 unmapped: 25001984 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:23.497931+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222765056 unmapped: 25001984 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:24.498111+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222765056 unmapped: 25001984 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:25.498269+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222019584 unmapped: 25747456 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:26.498415+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3867427 data_alloc: 251658240 data_used: 39997440
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222068736 unmapped: 25698304 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:27.498623+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 heartbeat osd_stat(store_statfs(0x4efbdf000/0x0/0x4ffc00000, data 0x76dabf7/0x73ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222068736 unmapped: 25698304 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:28.498937+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222068736 unmapped: 25698304 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:29.499116+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 heartbeat osd_stat(store_statfs(0x4efbdf000/0x0/0x4ffc00000, data 0x76dabf7/0x73ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 25583616 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:30.499291+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 25583616 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:31.499517+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3867907 data_alloc: 251658240 data_used: 40103936
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 25583616 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:32.499670+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.712791443s of 15.740924835s, submitted: 8
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 ms_handle_reset con 0x5651f4025800 session 0x5651f5f2bc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f657e400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 ms_handle_reset con 0x5651f657e400 session 0x5651f5e7a3c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:33.499838+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 25583616 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 heartbeat osd_stat(store_statfs(0x4efbdf000/0x0/0x4ffc00000, data 0x76dabd4/0x73ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:34.500002+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 25583616 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:35.500202+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 25583616 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5d000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 ms_handle_reset con 0x5651f6c5d000 session 0x5651f5f2ba40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:36.500342+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f657b800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222199808 unmapped: 25567232 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 ms_handle_reset con 0x5651f657b800 session 0x5651f4cacf00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 ms_handle_reset con 0x5651f3f43000 session 0x5651f4cd7a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 ms_handle_reset con 0x5651f4b52800 session 0x5651f5ef9860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3856745 data_alloc: 251658240 data_used: 40013824
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 ms_handle_reset con 0x5651f4025800 session 0x5651f7232b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 heartbeat osd_stat(store_statfs(0x4efcbf000/0x0/0x4ffc00000, data 0x75fcb62/0x730e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:37.500488+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222216192 unmapped: 25550848 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f657e400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 ms_handle_reset con 0x5651f657e400 session 0x5651f72332c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdb800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 390 handle_osd_map epochs [391,391], i have 390, src has [1,391]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 391 ms_handle_reset con 0x5651f6bdb800 session 0x5651f79f70e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:38.500629+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222289920 unmapped: 25477120 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:39.500780+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 391 ms_handle_reset con 0x5651f4020c00 session 0x5651f7232780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222298112 unmapped: 25468928 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 391 ms_handle_reset con 0x5651f4024c00 session 0x5651f7da7c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 391 ms_handle_reset con 0x5651f3f43000 session 0x5651f5eca960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:40.500950+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222314496 unmapped: 25452544 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:41.501240+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 222314496 unmapped: 25452544 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3826737 data_alloc: 251658240 data_used: 39903232
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 391 ms_handle_reset con 0x5651f4025800 session 0x5651f5f2b860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:42.501395+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 214589440 unmapped: 33177600 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 391 heartbeat osd_stat(store_statfs(0x4f3592000/0x0/0x4ffc00000, data 0x3826701/0x3a3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:43.501544+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 214589440 unmapped: 33177600 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4b52800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.740956306s of 11.209186554s, submitted: 115
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 391 ms_handle_reset con 0x5651f4b52800 session 0x5651f7da6960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 391 ms_handle_reset con 0x5651f3f43000 session 0x5651f5e95680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:44.501701+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205029376 unmapped: 42737664 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:45.501862+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205029376 unmapped: 42737664 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 391 heartbeat osd_stat(store_statfs(0x4f4b62000/0x0/0x4ffc00000, data 0x1e5768f/0x206b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 391 handle_osd_map epochs [392,392], i have 391, src has [1,392]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:46.502020+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204251136 unmapped: 43515904 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 392 heartbeat osd_stat(store_statfs(0x4f4f5f000/0x0/0x4ffc00000, data 0x1e590f2/0x206e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2995039 data_alloc: 218103808 data_used: 8110080
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:47.502222+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204251136 unmapped: 43515904 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:48.502340+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204251136 unmapped: 43515904 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:49.502509+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204251136 unmapped: 43515904 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:50.502670+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204251136 unmapped: 43515904 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:51.502845+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204251136 unmapped: 43515904 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 392 heartbeat osd_stat(store_statfs(0x4f4f5f000/0x0/0x4ffc00000, data 0x1e590f2/0x206e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2995039 data_alloc: 218103808 data_used: 8110080
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:52.502989+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204251136 unmapped: 43515904 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:53.503186+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204251136 unmapped: 43515904 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:54.503344+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204251136 unmapped: 43515904 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:55.503466+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204251136 unmapped: 43515904 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:56.503609+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204251136 unmapped: 43515904 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2995039 data_alloc: 218103808 data_used: 8110080
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:57.503763+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204251136 unmapped: 43515904 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 392 heartbeat osd_stat(store_statfs(0x4f4f5f000/0x0/0x4ffc00000, data 0x1e590f2/0x206e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 392 heartbeat osd_stat(store_statfs(0x4f4f5f000/0x0/0x4ffc00000, data 0x1e590f2/0x206e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:58.503953+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204251136 unmapped: 43515904 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:18:59.504116+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204251136 unmapped: 43515904 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 392 handle_osd_map epochs [393,393], i have 392, src has [1,393]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.354799271s of 16.496461868s, submitted: 60
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:00.504264+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204251136 unmapped: 43515904 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4020c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 393 ms_handle_reset con 0x5651f4020c00 session 0x5651f5ed8780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 393 ms_handle_reset con 0x5651f4024c00 session 0x5651f584b4a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 393 ms_handle_reset con 0x5651f4025800 session 0x5651f726fa40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f657e400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 393 ms_handle_reset con 0x5651f657e400 session 0x5651f7232960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:01.504431+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204259328 unmapped: 43507712 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f657e400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 393 ms_handle_reset con 0x5651f657e400 session 0x5651f50552c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3002681 data_alloc: 218103808 data_used: 8110080
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 393 ms_handle_reset con 0x5651f3f43000 session 0x5651f50e8d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:02.504562+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204267520 unmapped: 43499520 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4020c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 393 ms_handle_reset con 0x5651f4020c00 session 0x5651f4cad680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:03.504728+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204267520 unmapped: 43499520 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 393 heartbeat osd_stat(store_statfs(0x4f4f5d000/0x0/0x4ffc00000, data 0x1e5ac6f/0x2071000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:04.504874+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4024c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204267520 unmapped: 43499520 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 393 ms_handle_reset con 0x5651f4024c00 session 0x5651f50143c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:05.505019+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204267520 unmapped: 43499520 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4025800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 393 ms_handle_reset con 0x5651f4025800 session 0x5651f7bad860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 393 ms_handle_reset con 0x5651f3f43000 session 0x5651f79f6960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:06.505210+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204259328 unmapped: 43507712 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000417 data_alloc: 218103808 data_used: 8110080
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 393 heartbeat osd_stat(store_statfs(0x4f4f5d000/0x0/0x4ffc00000, data 0x1e5ac6f/0x2071000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f738bc00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 393 ms_handle_reset con 0x5651f738bc00 session 0x5651f5f474a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7cd1800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 393 ms_handle_reset con 0x5651f7cd1800 session 0x5651f5f47680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:07.505380+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204283904 unmapped: 43483136 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bc9000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 393 ms_handle_reset con 0x5651f6bc9000 session 0x5651f5f472c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:08.505588+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204283904 unmapped: 43483136 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:09.505722+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204283904 unmapped: 43483136 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be7000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 393 handle_osd_map epochs [394,394], i have 393, src has [1,394]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 394 ms_handle_reset con 0x5651f6be7000 session 0x5651f7bac780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:10.505869+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205332480 unmapped: 42434560 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 394 heartbeat osd_stat(store_statfs(0x4f4f59000/0x0/0x4ffc00000, data 0x1e5c840/0x2074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:11.506138+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205332480 unmapped: 42434560 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3004022 data_alloc: 218103808 data_used: 8118272
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:12.506348+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205332480 unmapped: 42434560 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:13.506525+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205332480 unmapped: 42434560 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:14.506719+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205332480 unmapped: 42434560 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 394 heartbeat osd_stat(store_statfs(0x4f4f59000/0x0/0x4ffc00000, data 0x1e5c840/0x2074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 394 handle_osd_map epochs [395,395], i have 394, src has [1,395]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.495304108s of 14.819120407s, submitted: 95
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 395 ms_handle_reset con 0x5651f3f43000 session 0x5651f7da6d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:15.506960+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205340672 unmapped: 42426368 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:16.507120+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205340672 unmapped: 42426368 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bc9000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 395 ms_handle_reset con 0x5651f6bc9000 session 0x5651f5e941e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3011770 data_alloc: 218103808 data_used: 8118272
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f738bc00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 395 ms_handle_reset con 0x5651f738bc00 session 0x5651f5eb3860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7cd1800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 395 ms_handle_reset con 0x5651f7cd1800 session 0x5651f50e9e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:17.507281+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 395 ms_handle_reset con 0x5651f6c5d400 session 0x5651f3f11680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 395 heartbeat osd_stat(store_statfs(0x4f4f53000/0x0/0x4ffc00000, data 0x1e5e2d3/0x207a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205373440 unmapped: 42393600 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 395 ms_handle_reset con 0x5651f6c5d400 session 0x5651f51643c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 395 ms_handle_reset con 0x5651f3f43000 session 0x5651f5e94f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:18.507427+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205381632 unmapped: 42385408 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bc9000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 395 ms_handle_reset con 0x5651f6bc9000 session 0x5651f5121e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f738bc00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:19.507574+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 395 ms_handle_reset con 0x5651f738bc00 session 0x5651f4308b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205406208 unmapped: 42360832 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:20.507718+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205406208 unmapped: 42360832 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7cd1800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 395 ms_handle_reset con 0x5651f7cd1800 session 0x5651f44730e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7cd1800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 395 ms_handle_reset con 0x5651f7cd1800 session 0x5651f50141e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:21.507969+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205406208 unmapped: 42360832 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 395 heartbeat osd_stat(store_statfs(0x4f4f56000/0x0/0x4ffc00000, data 0x1e5e2b3/0x2078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013058 data_alloc: 218103808 data_used: 8130560
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 395 ms_handle_reset con 0x5651f3f43000 session 0x5651f7bac780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:22.508230+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205406208 unmapped: 42360832 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 395 heartbeat osd_stat(store_statfs(0x4f4f56000/0x0/0x4ffc00000, data 0x1e5e2b3/0x2078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bc9000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 395 ms_handle_reset con 0x5651f6bc9000 session 0x5651f5f474a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:23.508373+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 395 ms_handle_reset con 0x5651f6c5d400 session 0x5651f50143c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205447168 unmapped: 42319872 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:24.508518+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205447168 unmapped: 42319872 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f738bc00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 395 ms_handle_reset con 0x5651f738bc00 session 0x5651f7232780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 395 ms_handle_reset con 0x5651f3f43000 session 0x5651f7232b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:25.508674+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205496320 unmapped: 42270720 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:26.508893+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205496320 unmapped: 42270720 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3011201 data_alloc: 218103808 data_used: 8130560
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:27.509077+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 395 heartbeat osd_stat(store_statfs(0x4f4f57000/0x0/0x4ffc00000, data 0x1e5e2a3/0x2077000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205496320 unmapped: 42270720 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:28.509274+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205504512 unmapped: 42262528 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:29.509413+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205504512 unmapped: 42262528 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bc9000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:30.509536+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.980410576s of 15.208749771s, submitted: 85
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 395 ms_handle_reset con 0x5651f6bc9000 session 0x5651f5bd65a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205512704 unmapped: 42254336 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:31.509697+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 395 heartbeat osd_stat(store_statfs(0x4f4f56000/0x0/0x4ffc00000, data 0x1e5e2b3/0x2078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c5d400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205512704 unmapped: 42254336 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014749 data_alloc: 218103808 data_used: 8130560
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 396 ms_handle_reset con 0x5651f6c5d400 session 0x5651f5ef9680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:32.509831+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205520896 unmapped: 42246144 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7cd1800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f658f800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:33.509992+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 396 ms_handle_reset con 0x5651f658f800 session 0x5651f3f10f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 396 ms_handle_reset con 0x5651f7cd1800 session 0x5651f7233a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205520896 unmapped: 42246144 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:34.510126+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7cd1800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205520896 unmapped: 42246144 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 396 handle_osd_map epochs [397,397], i have 396, src has [1,397]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 397 ms_handle_reset con 0x5651f7cd1800 session 0x5651f5e94b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:35.510306+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205529088 unmapped: 42237952 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4029800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 397 ms_handle_reset con 0x5651f4029800 session 0x5651f5bd7680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 397 ms_handle_reset con 0x5651f3f43000 session 0x5651f5054780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:36.510464+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205275136 unmapped: 42491904 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 397 heartbeat osd_stat(store_statfs(0x4f4f4b000/0x0/0x4ffc00000, data 0x1e61ae3/0x2082000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3029041 data_alloc: 218103808 data_used: 8146944
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:37.510638+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 397 handle_osd_map epochs [398,398], i have 397, src has [1,398]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205283328 unmapped: 42483712 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 398 ms_handle_reset con 0x5651f6c82c00 session 0x5651f4dd92c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:38.510821+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 398 heartbeat osd_stat(store_statfs(0x4f4f47000/0x0/0x4ffc00000, data 0x1e636c2/0x2086000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be3800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205299712 unmapped: 42467328 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 398 ms_handle_reset con 0x5651f6be3800 session 0x5651f5ef9860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 398 ms_handle_reset con 0x5651f3f43000 session 0x5651f5ecba40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:39.511031+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205316096 unmapped: 42450944 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4029800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 398 ms_handle_reset con 0x5651f4029800 session 0x5651f44cc000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 398 ms_handle_reset con 0x5651f6c82c00 session 0x5651f5f14960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:40.511229+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205316096 unmapped: 42450944 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7cd1800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:41.511450+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 398 ms_handle_reset con 0x5651f7cd1800 session 0x5651f7bacd20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.974204063s of 11.113242149s, submitted: 35
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 398 ms_handle_reset con 0x5651f6c82800 session 0x5651f5e7b4a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205316096 unmapped: 42450944 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3033172 data_alloc: 218103808 data_used: 8175616
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:42.511631+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205316096 unmapped: 42450944 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 398 ms_handle_reset con 0x5651f6c82800 session 0x5651f4cad2c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4029800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 398 ms_handle_reset con 0x5651f4029800 session 0x5651f4472b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:43.511819+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205316096 unmapped: 42450944 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 398 heartbeat osd_stat(store_statfs(0x4f4f48000/0x0/0x4ffc00000, data 0x1e636c2/0x2086000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 398 handle_osd_map epochs [398,399], i have 398, src has [1,399]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 399 handle_osd_map epochs [399,399], i have 399, src has [1,399]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 399 ms_handle_reset con 0x5651f6c82c00 session 0x5651f5e95a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 399 ms_handle_reset con 0x5651f3f43000 session 0x5651f5f452c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:44.512004+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205316096 unmapped: 42450944 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7cd1800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 399 ms_handle_reset con 0x5651f7cd1800 session 0x5651f5c585a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7cd1800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 399 ms_handle_reset con 0x5651f3f43000 session 0x5651f5015a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4029800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:45.512216+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205316096 unmapped: 42450944 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 399 handle_osd_map epochs [399,400], i have 399, src has [1,400]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 400 ms_handle_reset con 0x5651f4029800 session 0x5651f5f44b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 400 ms_handle_reset con 0x5651f7cd1800 session 0x5651f584b4a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:46.512391+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205111296 unmapped: 42655744 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3036542 data_alloc: 218103808 data_used: 8175616
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:47.512561+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 400 ms_handle_reset con 0x5651f6c82800 session 0x5651f43dbc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205111296 unmapped: 42655744 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 400 ms_handle_reset con 0x5651f6c82c00 session 0x5651f43db680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:48.512762+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205127680 unmapped: 42639360 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 400 ms_handle_reset con 0x5651f6c82c00 session 0x5651f3f0f860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:49.512926+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205127680 unmapped: 42639360 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 400 heartbeat osd_stat(store_statfs(0x4f4f47000/0x0/0x4ffc00000, data 0x1e66d1e/0x2087000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:50.513110+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205127680 unmapped: 42639360 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:51.513353+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205160448 unmapped: 42606592 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.047138214s of 10.433714867s, submitted: 99
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 401 ms_handle_reset con 0x5651f3f43000 session 0x5651f50e8d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3041447 data_alloc: 218103808 data_used: 8196096
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:52.513537+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205193216 unmapped: 42573824 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4029800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:53.513714+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205193216 unmapped: 42573824 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 401 ms_handle_reset con 0x5651f4029800 session 0x5651f4dd8000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:54.513880+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205201408 unmapped: 42565632 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:55.514070+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 402 heartbeat osd_stat(store_statfs(0x4f4f3e000/0x0/0x4ffc00000, data 0x1e6a45f/0x208e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204980224 unmapped: 42786816 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7cd1800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 403 ms_handle_reset con 0x5651f7cd1800 session 0x5651f43d4d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 403 ms_handle_reset con 0x5651f6c82800 session 0x5651f50e9860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:56.514258+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204980224 unmapped: 42786816 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 403 ms_handle_reset con 0x5651f6c82800 session 0x5651f5e7b680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3049525 data_alloc: 218103808 data_used: 8200192
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:57.514410+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204980224 unmapped: 42786816 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 403 ms_handle_reset con 0x5651f3f43000 session 0x5651f4309a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4029800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 403 ms_handle_reset con 0x5651f4029800 session 0x5651f4cd65a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:58.514585+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204980224 unmapped: 42786816 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:19:59.514795+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7cd1800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 403 ms_handle_reset con 0x5651f7cd1800 session 0x5651f5ecb0e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdb000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204980224 unmapped: 42786816 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 403 handle_osd_map epochs [404,404], i have 403, src has [1,404]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 404 ms_handle_reset con 0x5651f6bdb000 session 0x5651f5054d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 404 heartbeat osd_stat(store_statfs(0x4f4f39000/0x0/0x4ffc00000, data 0x1e6da93/0x2094000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 404 ms_handle_reset con 0x5651f6c82c00 session 0x5651f5f45c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:00.514953+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204980224 unmapped: 42786816 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 404 ms_handle_reset con 0x5651f3f43000 session 0x5651f5eca960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4029800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 404 ms_handle_reset con 0x5651f6c82800 session 0x5651f5bd74a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7cd1800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:01.515130+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204988416 unmapped: 42778624 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.095185280s of 10.191224098s, submitted: 41
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3055719 data_alloc: 218103808 data_used: 8200192
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 405 ms_handle_reset con 0x5651f7cd1800 session 0x5651f4dd81e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:02.515321+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 405 ms_handle_reset con 0x5651f4029800 session 0x5651f50552c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204988416 unmapped: 42778624 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4029800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 405 ms_handle_reset con 0x5651f4029800 session 0x5651f43094a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 405 heartbeat osd_stat(store_statfs(0x4f4f36000/0x0/0x4ffc00000, data 0x1e6f655/0x2096000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:03.515459+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 405 ms_handle_reset con 0x5651f3f43000 session 0x5651f5bbf860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205004800 unmapped: 42762240 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:04.515608+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205004800 unmapped: 42762240 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:05.515794+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205004800 unmapped: 42762240 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 405 heartbeat osd_stat(store_statfs(0x4f4f38000/0x0/0x4ffc00000, data 0x1e6f655/0x2096000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 405 handle_osd_map epochs [406,406], i have 406, src has [1,406]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 406 handle_osd_map epochs [406,407], i have 406, src has [1,407]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:06.515966+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205012992 unmapped: 42754048 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3060292 data_alloc: 218103808 data_used: 8204288
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:07.516121+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 407 ms_handle_reset con 0x5651f6c82800 session 0x5651f7da61e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205012992 unmapped: 42754048 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:08.516312+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 407 ms_handle_reset con 0x5651f6c82c00 session 0x5651f43092c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205012992 unmapped: 42754048 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 407 heartbeat osd_stat(store_statfs(0x4f4f2f000/0x0/0x4ffc00000, data 0x1e72c45/0x209d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:09.516471+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205012992 unmapped: 42754048 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7cd1800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bc9400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 407 ms_handle_reset con 0x5651f6bc9400 session 0x5651f3f5d0e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 407 handle_osd_map epochs [407,408], i have 407, src has [1,408]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 408 ms_handle_reset con 0x5651f3f43000 session 0x5651f7da6b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:10.516634+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 408 ms_handle_reset con 0x5651f7cd1800 session 0x5651f5f2bc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205037568 unmapped: 42729472 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:11.516814+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205037568 unmapped: 42729472 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064246 data_alloc: 218103808 data_used: 8204288
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:12.516986+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 408 heartbeat osd_stat(store_statfs(0x4f4f2d000/0x0/0x4ffc00000, data 0x1e74816/0x20a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205037568 unmapped: 42729472 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 408 heartbeat osd_stat(store_statfs(0x4f4f2d000/0x0/0x4ffc00000, data 0x1e74816/0x20a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 408 handle_osd_map epochs [408,409], i have 408, src has [1,409]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.850748062s of 11.105458260s, submitted: 113
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4029800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:13.517207+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 409 ms_handle_reset con 0x5651f4029800 session 0x5651f50e8000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 409 heartbeat osd_stat(store_statfs(0x4f4f2d000/0x0/0x4ffc00000, data 0x1e74816/0x20a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205045760 unmapped: 42721280 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 409 ms_handle_reset con 0x5651f6c82800 session 0x5651f4cd6d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 409 heartbeat osd_stat(store_statfs(0x4f4f2d000/0x0/0x4ffc00000, data 0x1e74816/0x20a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:14.517371+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205045760 unmapped: 42721280 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 410 ms_handle_reset con 0x5651f6c82c00 session 0x5651f5015c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 410 ms_handle_reset con 0x5651f3f43000 session 0x5651f584ab40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:15.517587+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205053952 unmapped: 42713088 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:16.517771+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4029800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 410 ms_handle_reset con 0x5651f4029800 session 0x5651f4472f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 410 ms_handle_reset con 0x5651f6c82800 session 0x5651f5c594a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205053952 unmapped: 42713088 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3072051 data_alloc: 218103808 data_used: 8204288
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:17.517900+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205053952 unmapped: 42713088 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:18.518065+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 410 heartbeat osd_stat(store_statfs(0x4f4f27000/0x0/0x4ffc00000, data 0x1e77e12/0x20a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205053952 unmapped: 42713088 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:19.518250+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205078528 unmapped: 42688512 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:20.518454+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205078528 unmapped: 42688512 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:21.518652+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205078528 unmapped: 42688512 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3072211 data_alloc: 218103808 data_used: 8208384
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:22.518814+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205078528 unmapped: 42688512 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:23.518980+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205078528 unmapped: 42688512 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:24.519219+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7cd1800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.257808685s of 11.343448639s, submitted: 35
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 410 ms_handle_reset con 0x5651f7cd1800 session 0x5651f7da7c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 410 heartbeat osd_stat(store_statfs(0x4f4f27000/0x0/0x4ffc00000, data 0x1e77e12/0x20a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205078528 unmapped: 42688512 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:25.519380+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205078528 unmapped: 42688512 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:26.519525+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 410 ms_handle_reset con 0x5651f6c83400 session 0x5651f5f154a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205078528 unmapped: 42688512 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4029800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 411 ms_handle_reset con 0x5651f3f43000 session 0x5651f5ed8f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3083158 data_alloc: 218103808 data_used: 8220672
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:27.519676+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205086720 unmapped: 42680320 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 412 ms_handle_reset con 0x5651f4029800 session 0x5651f3f0f0e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 412 ms_handle_reset con 0x5651f6c82800 session 0x5651f7da61e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 412 ms_handle_reset con 0x5651f5002000 session 0x5651f5f465a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:28.519860+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 412 heartbeat osd_stat(store_statfs(0x4f4f1d000/0x0/0x4ffc00000, data 0x1e7bac4/0x20af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205103104 unmapped: 42663936 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7cd1800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:29.520029+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205111296 unmapped: 42655744 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 412 handle_osd_map epochs [412,413], i have 412, src has [1,413]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 413 ms_handle_reset con 0x5651f7cd1800 session 0x5651f50145a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 413 ms_handle_reset con 0x5651f3f43000 session 0x5651f5ef9680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:30.520191+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205119488 unmapped: 42647552 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:31.520409+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4029800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205119488 unmapped: 42647552 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 413 handle_osd_map epochs [413,414], i have 413, src has [1,414]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 414 handle_osd_map epochs [414,414], i have 414, src has [1,414]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 414 ms_handle_reset con 0x5651f4029800 session 0x5651f5bd65a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 414 heartbeat osd_stat(store_statfs(0x4f4f19000/0x0/0x4ffc00000, data 0x1e7ed2c/0x20b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3091814 data_alloc: 218103808 data_used: 8220672
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 414 ms_handle_reset con 0x5651f5002000 session 0x5651f4308b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:32.520576+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 414 heartbeat osd_stat(store_statfs(0x4f4f19000/0x0/0x4ffc00000, data 0x1e7ed2c/0x20b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205144064 unmapped: 42622976 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 414 heartbeat osd_stat(store_statfs(0x4f4f1a000/0x0/0x4ffc00000, data 0x1e7ecca/0x20b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:33.520727+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205144064 unmapped: 42622976 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 414 heartbeat osd_stat(store_statfs(0x4f4f1a000/0x0/0x4ffc00000, data 0x1e7ecca/0x20b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:34.520890+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205144064 unmapped: 42622976 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 414 heartbeat osd_stat(store_statfs(0x4f4f1a000/0x0/0x4ffc00000, data 0x1e7ecca/0x20b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 414 handle_osd_map epochs [414,415], i have 414, src has [1,415]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.935872078s of 10.364065170s, submitted: 108
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:35.521027+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205168640 unmapped: 42598400 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:36.521227+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205168640 unmapped: 42598400 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 415 heartbeat osd_stat(store_statfs(0x4f4f18000/0x0/0x4ffc00000, data 0x1e80765/0x20b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3092520 data_alloc: 218103808 data_used: 8216576
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:37.521382+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205168640 unmapped: 42598400 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 415 heartbeat osd_stat(store_statfs(0x4f4f18000/0x0/0x4ffc00000, data 0x1e80765/0x20b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:38.521525+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205168640 unmapped: 42598400 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:39.521672+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205168640 unmapped: 42598400 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:40.521823+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205168640 unmapped: 42598400 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:41.522019+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 415 heartbeat osd_stat(store_statfs(0x4f4f18000/0x0/0x4ffc00000, data 0x1e80765/0x20b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205168640 unmapped: 42598400 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3092520 data_alloc: 218103808 data_used: 8216576
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:42.522142+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205168640 unmapped: 42598400 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:43.522370+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205168640 unmapped: 42598400 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:44.522559+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205168640 unmapped: 42598400 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:45.522742+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205168640 unmapped: 42598400 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:46.522925+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 415 heartbeat osd_stat(store_statfs(0x4f4f18000/0x0/0x4ffc00000, data 0x1e80765/0x20b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205168640 unmapped: 42598400 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3092520 data_alloc: 218103808 data_used: 8216576
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:47.523114+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205168640 unmapped: 42598400 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:48.523509+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205168640 unmapped: 42598400 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:49.523689+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205168640 unmapped: 42598400 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:50.523851+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205168640 unmapped: 42598400 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.402404785s of 16.415962219s, submitted: 27
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 415 ms_handle_reset con 0x5651f6c82800 session 0x5651f726fa40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:51.524026+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 415 heartbeat osd_stat(store_statfs(0x4f4f18000/0x0/0x4ffc00000, data 0x1e80765/0x20b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205176832 unmapped: 42590208 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095056 data_alloc: 218103808 data_used: 8220672
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:52.524215+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205176832 unmapped: 42590208 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7e50800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 415 handle_osd_map epochs [415,416], i have 415, src has [1,416]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 415 handle_osd_map epochs [416,416], i have 416, src has [1,416]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 416 ms_handle_reset con 0x5651f7e50800 session 0x5651f5f47680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:53.524366+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 416 heartbeat osd_stat(store_statfs(0x4f4f14000/0x0/0x4ffc00000, data 0x1e82344/0x20b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205185024 unmapped: 42582016 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7e50800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 416 ms_handle_reset con 0x5651f7e50800 session 0x5651f50b92c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:54.524511+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205209600 unmapped: 42557440 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 417 ms_handle_reset con 0x5651f3f43000 session 0x5651f5ed8d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:55.524656+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4029800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 417 ms_handle_reset con 0x5651f4029800 session 0x5651f4472000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 417 heartbeat osd_stat(store_statfs(0x4f4f11000/0x0/0x4ffc00000, data 0x1e83f15/0x20bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 417 ms_handle_reset con 0x5651f5002000 session 0x5651f5bbe1e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205234176 unmapped: 42532864 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:56.524797+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 417 ms_handle_reset con 0x5651f6c82800 session 0x5651f4318b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205234176 unmapped: 42532864 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3104384 data_alloc: 218103808 data_used: 8237056
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:57.524964+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205234176 unmapped: 42532864 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 418 ms_handle_reset con 0x5651f6c82800 session 0x5651f50e81e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:58.525232+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 418 heartbeat osd_stat(store_statfs(0x4f4f0e000/0x0/0x4ffc00000, data 0x1e85a5c/0x20bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205242368 unmapped: 42524672 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 418 ms_handle_reset con 0x5651f3f43000 session 0x5651f37d7680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4029800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:20:59.525366+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205250560 unmapped: 42516480 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 419 ms_handle_reset con 0x5651f4029800 session 0x5651f5ef85a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:00.525518+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 419 ms_handle_reset con 0x5651f5002000 session 0x5651f5f2ba40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7e50800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205258752 unmapped: 42508288 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 419 heartbeat osd_stat(store_statfs(0x4f4f0e000/0x0/0x4ffc00000, data 0x1e85a5c/0x20bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 419 ms_handle_reset con 0x5651f7e50800 session 0x5651f51643c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:01.525701+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205266944 unmapped: 42500096 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.367282867s of 10.760478973s, submitted: 121
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 419 ms_handle_reset con 0x5651f3f43000 session 0x5651f50141e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3110969 data_alloc: 218103808 data_used: 8245248
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:02.525869+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205266944 unmapped: 42500096 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4029800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:03.526010+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205266944 unmapped: 42500096 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 420 ms_handle_reset con 0x5651f4029800 session 0x5651f5f474a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:04.526256+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 420 ms_handle_reset con 0x5651f5002000 session 0x5651f4472960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205283328 unmapped: 42483712 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 421 ms_handle_reset con 0x5651f6c82800 session 0x5651f4dd94a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:05.526420+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdcc00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205283328 unmapped: 42483712 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 421 ms_handle_reset con 0x5651f6bdcc00 session 0x5651f5054000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:06.526607+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 421 heartbeat osd_stat(store_statfs(0x4f4f07000/0x0/0x4ffc00000, data 0x1e8adff/0x20c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205291520 unmapped: 42475520 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 421 ms_handle_reset con 0x5651f3f43000 session 0x5651f4cd7e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3118580 data_alloc: 218103808 data_used: 8249344
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:07.526756+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205291520 unmapped: 42475520 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:08.526967+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4029800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 422 ms_handle_reset con 0x5651f4029800 session 0x5651f43db0e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 206340096 unmapped: 41426944 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:09.527243+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 422 ms_handle_reset con 0x5651f5002000 session 0x5651f584a000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 423 ms_handle_reset con 0x5651f6c82800 session 0x5651f5e94f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 206340096 unmapped: 41426944 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:10.527441+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 424 heartbeat osd_stat(store_statfs(0x4f4aeb000/0x0/0x4ffc00000, data 0x1e9005a/0x20d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205561856 unmapped: 42205184 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f8796400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 424 ms_handle_reset con 0x5651f8796400 session 0x5651f5120960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 424 ms_handle_reset con 0x5651f3f43000 session 0x5651f5ecbc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:11.527644+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205586432 unmapped: 42180608 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 424 heartbeat osd_stat(store_statfs(0x4f4aeb000/0x0/0x4ffc00000, data 0x1e9005a/0x20d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:12.527821+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3128572 data_alloc: 218103808 data_used: 8273920
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4029800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.013599396s of 10.391996384s, submitted: 125
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 424 ms_handle_reset con 0x5651f4029800 session 0x5651f3f105a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205586432 unmapped: 42180608 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:13.528043+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205586432 unmapped: 42180608 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 425 ms_handle_reset con 0x5651f5002000 session 0x5651f5f2ad20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:14.528219+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205586432 unmapped: 42180608 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 426 ms_handle_reset con 0x5651f6c82800 session 0x5651f584a000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f738ac00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:15.528383+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205586432 unmapped: 42180608 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 426 ms_handle_reset con 0x5651f738ac00 session 0x5651f4cd7e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f738ac00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 427 ms_handle_reset con 0x5651f738ac00 session 0x5651f5054000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:16.528534+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 427 ms_handle_reset con 0x5651f3f43000 session 0x5651f4472960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 427 heartbeat osd_stat(store_statfs(0x4f4ae6000/0x0/0x4ffc00000, data 0x1e93604/0x20d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205594624 unmapped: 42172416 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:17.528701+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3139512 data_alloc: 218103808 data_used: 8278016
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4029800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 427 ms_handle_reset con 0x5651f4029800 session 0x5651f51643c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205602816 unmapped: 42164224 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:18.528929+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205537280 unmapped: 42229760 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 428 handle_osd_map epochs [428,428], i have 428, src has [1,428]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 428 ms_handle_reset con 0x5651f5002000 session 0x5651f5ef85a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:19.529199+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205545472 unmapped: 42221568 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c82800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 428 ms_handle_reset con 0x5651f6c82800 session 0x5651f726fa40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 428 heartbeat osd_stat(store_statfs(0x4f4ade000/0x0/0x4ffc00000, data 0x1e96e32/0x20df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [1])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:20.529373+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205553664 unmapped: 42213376 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 429 ms_handle_reset con 0x5651f3f43000 session 0x5651f50145a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 429 heartbeat osd_stat(store_statfs(0x4f4adc000/0x0/0x4ffc00000, data 0x1e989b1/0x20e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:21.529616+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4029800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 429 ms_handle_reset con 0x5651f4029800 session 0x5651f5c594a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 429 ms_handle_reset con 0x5651f5002000 session 0x5651f5bd65a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205586432 unmapped: 42180608 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:22.530217+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 429 heartbeat osd_stat(store_statfs(0x4f4ade000/0x0/0x4ffc00000, data 0x1e9893f/0x20df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3147854 data_alloc: 218103808 data_used: 8286208
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f738ac00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.996101379s of 10.340676308s, submitted: 109
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 429 ms_handle_reset con 0x5651f738ac00 session 0x5651f5f2a1e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 429 heartbeat osd_stat(store_statfs(0x4f4ade000/0x0/0x4ffc00000, data 0x1e9893f/0x20df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205586432 unmapped: 42180608 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:23.530519+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205586432 unmapped: 42180608 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7cd0000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:24.531764+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 430 ms_handle_reset con 0x5651f7cd0000 session 0x5651f584bc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205570048 unmapped: 42196992 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:25.531930+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 430 ms_handle_reset con 0x5651f3f43000 session 0x5651f7da7860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4029800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 430 heartbeat osd_stat(store_statfs(0x4f4ad8000/0x0/0x4ffc00000, data 0x1e9a5e8/0x20e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 430 handle_osd_map epochs [430,431], i have 430, src has [1,431]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205578240 unmapped: 42188800 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 431 ms_handle_reset con 0x5651f4029800 session 0x5651f5f47860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:26.532519+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 431 ms_handle_reset con 0x5651f5002000 session 0x5651f5c1f680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f738ac00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 431 ms_handle_reset con 0x5651f738ac00 session 0x5651f7da72c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205594624 unmapped: 42172416 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:27.532871+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3157369 data_alloc: 218103808 data_used: 8286208
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205594624 unmapped: 42172416 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:28.533112+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205594624 unmapped: 42172416 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:29.533268+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205594624 unmapped: 42172416 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:30.533646+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be1800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 432 ms_handle_reset con 0x5651f6be1800 session 0x5651f44cc1e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205594624 unmapped: 42172416 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:31.534005+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 432 heartbeat osd_stat(store_statfs(0x4f4ad5000/0x0/0x4ffc00000, data 0x1e9dc7e/0x20e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 432 handle_osd_map epochs [433,433], i have 432, src has [1,433]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 432 handle_osd_map epochs [433,433], i have 433, src has [1,433]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 433 ms_handle_reset con 0x5651f3f43000 session 0x5651f50b9860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205602816 unmapped: 42164224 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:32.534348+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3162263 data_alloc: 218103808 data_used: 8286208
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205602816 unmapped: 42164224 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4029800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 433 ms_handle_reset con 0x5651f4029800 session 0x5651f5f46000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:33.534564+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205602816 unmapped: 42164224 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:34.534822+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 433 handle_osd_map epochs [434,434], i have 433, src has [1,434]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.946639061s of 12.141755104s, submitted: 99
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 434 ms_handle_reset con 0x5651f5002000 session 0x5651f44721e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205602816 unmapped: 42164224 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 434 heartbeat osd_stat(store_statfs(0x4f4ace000/0x0/0x4ffc00000, data 0x1ea1496/0x20ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:35.535334+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f738ac00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 434 ms_handle_reset con 0x5651f738ac00 session 0x5651f5121680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f68d8c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205602816 unmapped: 42164224 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 434 ms_handle_reset con 0x5651f68d8c00 session 0x5651f5f44f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:36.535772+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205602816 unmapped: 42164224 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:37.535973+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 435 ms_handle_reset con 0x5651f3f43000 session 0x5651f5f46b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 435 heartbeat osd_stat(store_statfs(0x4f4acc000/0x0/0x4ffc00000, data 0x1ea3005/0x20f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3167539 data_alloc: 218103808 data_used: 8286208
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205602816 unmapped: 42164224 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:38.536253+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205602816 unmapped: 42164224 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:39.536654+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205602816 unmapped: 42164224 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:40.537094+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 435 heartbeat osd_stat(store_statfs(0x4f4acc000/0x0/0x4ffc00000, data 0x1ea3005/0x20f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205602816 unmapped: 42164224 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:41.537518+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 435 heartbeat osd_stat(store_statfs(0x4f4acc000/0x0/0x4ffc00000, data 0x1ea3005/0x20f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205602816 unmapped: 42164224 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:42.537767+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3167699 data_alloc: 218103808 data_used: 8290304
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205602816 unmapped: 42164224 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:43.537921+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 435 heartbeat osd_stat(store_statfs(0x4f4acc000/0x0/0x4ffc00000, data 0x1ea3005/0x20f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205602816 unmapped: 42164224 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:44.538102+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205602816 unmapped: 42164224 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:45.538231+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 435 handle_osd_map epochs [436,436], i have 435, src has [1,436]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.510136604s of 10.599349976s, submitted: 33
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205602816 unmapped: 42164224 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:46.538392+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205602816 unmapped: 42164224 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 heartbeat osd_stat(store_statfs(0x4f4ac9000/0x0/0x4ffc00000, data 0x1ea4a68/0x20f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets getting new tickets!
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:47.538662+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _finish_auth 0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:47.539718+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3170673 data_alloc: 218103808 data_used: 8290304
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205619200 unmapped: 42147840 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:48.538810+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205619200 unmapped: 42147840 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:49.538972+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205619200 unmapped: 42147840 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:50.539234+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205619200 unmapped: 42147840 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:51.539413+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205619200 unmapped: 42147840 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:52.539567+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: mgrc ms_handle_reset ms_handle_reset con 0x5651f4cee800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3360631616
Oct 11 04:30:11 compute-0 ceph-osd[87591]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3360631616,v1:192.168.122.100:6801/3360631616]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: get_auth_request con 0x5651f68d8c00 auth_method 0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: mgrc handle_mgr_configure stats_period=5
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3170673 data_alloc: 218103808 data_used: 8290304
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 heartbeat osd_stat(store_statfs(0x4f4ac9000/0x0/0x4ffc00000, data 0x1ea4a68/0x20f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 heartbeat osd_stat(store_statfs(0x4f4ac9000/0x0/0x4ffc00000, data 0x1ea4a68/0x20f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205037568 unmapped: 42729472 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:53.539736+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205037568 unmapped: 42729472 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:54.539891+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205037568 unmapped: 42729472 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:55.540069+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205037568 unmapped: 42729472 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:56.540287+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205037568 unmapped: 42729472 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:57.540479+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3170673 data_alloc: 218103808 data_used: 8290304
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 ms_handle_reset con 0x5651f4db7400 session 0x5651f43d32c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4029800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205037568 unmapped: 42729472 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:58.540629+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 heartbeat osd_stat(store_statfs(0x4f4ac9000/0x0/0x4ffc00000, data 0x1ea4a68/0x20f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205037568 unmapped: 42729472 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:21:59.540787+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205037568 unmapped: 42729472 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:00.540953+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205037568 unmapped: 42729472 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 heartbeat osd_stat(store_statfs(0x4f4ac9000/0x0/0x4ffc00000, data 0x1ea4a68/0x20f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:01.541271+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205037568 unmapped: 42729472 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:02.541469+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3170673 data_alloc: 218103808 data_used: 8290304
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205037568 unmapped: 42729472 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:03.541626+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 heartbeat osd_stat(store_statfs(0x4f4ac9000/0x0/0x4ffc00000, data 0x1ea4a68/0x20f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205021184 unmapped: 42745856 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:04.541758+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205021184 unmapped: 42745856 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:05.541940+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205021184 unmapped: 42745856 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:06.542139+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205021184 unmapped: 42745856 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:07.542405+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3170673 data_alloc: 218103808 data_used: 8290304
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 heartbeat osd_stat(store_statfs(0x4f4ac9000/0x0/0x4ffc00000, data 0x1ea4a68/0x20f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205021184 unmapped: 42745856 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:08.542553+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205021184 unmapped: 42745856 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:09.542715+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 heartbeat osd_stat(store_statfs(0x4f4ac9000/0x0/0x4ffc00000, data 0x1ea4a68/0x20f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205021184 unmapped: 42745856 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:10.542874+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205021184 unmapped: 42745856 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:11.543094+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 heartbeat osd_stat(store_statfs(0x4f4ac9000/0x0/0x4ffc00000, data 0x1ea4a68/0x20f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205021184 unmapped: 42745856 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:12.543263+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3170673 data_alloc: 218103808 data_used: 8290304
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205021184 unmapped: 42745856 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:13.543372+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205021184 unmapped: 42745856 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:14.543535+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 heartbeat osd_stat(store_statfs(0x4f4ac9000/0x0/0x4ffc00000, data 0x1ea4a68/0x20f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205021184 unmapped: 42745856 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:15.543714+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205021184 unmapped: 42745856 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:16.543872+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205021184 unmapped: 42745856 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:17.544024+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3170673 data_alloc: 218103808 data_used: 8290304
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205021184 unmapped: 42745856 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:18.544219+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 ms_handle_reset con 0x5651f5002000 session 0x5651f5eb25a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f738ac00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 ms_handle_reset con 0x5651f738ac00 session 0x5651f3f5c000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205021184 unmapped: 42745856 heap: 247767040 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f68d9c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 ms_handle_reset con 0x5651f68d9c00 session 0x5651f5121e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 heartbeat osd_stat(store_statfs(0x4f4ac9000/0x0/0x4ffc00000, data 0x1ea4a68/0x20f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:19.544372+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bc9c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 ms_handle_reset con 0x5651f6bc9c00 session 0x5651f50e92c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bc9c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 33.950496674s of 33.963787079s, submitted: 13
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 ms_handle_reset con 0x5651f6bc9c00 session 0x5651f7232b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 ms_handle_reset con 0x5651f3f43000 session 0x5651f5e7a3c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 heartbeat osd_stat(store_statfs(0x4f4ac9000/0x0/0x4ffc00000, data 0x1ea4a68/0x20f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 ms_handle_reset con 0x5651f5002000 session 0x5651f51203c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f68d9c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 ms_handle_reset con 0x5651f68d9c00 session 0x5651f7da6000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f738ac00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 ms_handle_reset con 0x5651f738ac00 session 0x5651f5eb3680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204120064 unmapped: 47849472 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:20.544542+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204120064 unmapped: 47849472 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:21.544759+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204120064 unmapped: 47849472 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:22.544926+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3305326 data_alloc: 218103808 data_used: 8290304
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204120064 unmapped: 47849472 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:23.545120+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 ms_handle_reset con 0x5651f3f43000 session 0x5651f5eb23c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 ms_handle_reset con 0x5651f5002000 session 0x5651f4cacf00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 204120064 unmapped: 47849472 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:24.545313+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f68d9c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 ms_handle_reset con 0x5651f68d9c00 session 0x5651f3f5c000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bc9c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 ms_handle_reset con 0x5651f6bc9c00 session 0x5651f43d32c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 heartbeat osd_stat(store_statfs(0x4f3aa9000/0x0/0x4ffc00000, data 0x2ec3ada/0x3115000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 203751424 unmapped: 48218112 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f74b1400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:25.545446+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5a82c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 203751424 unmapped: 48218112 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:26.545626+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205570048 unmapped: 46399488 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:27.545779+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3416328 data_alloc: 234881024 data_used: 21606400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205897728 unmapped: 46071808 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:28.545911+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205897728 unmapped: 46071808 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:29.546090+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 heartbeat osd_stat(store_statfs(0x4f3a7f000/0x0/0x4ffc00000, data 0x2eedada/0x313f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205897728 unmapped: 46071808 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:30.546281+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205897728 unmapped: 46071808 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:31.546493+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205897728 unmapped: 46071808 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:32.546650+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3416328 data_alloc: 234881024 data_used: 21606400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 heartbeat osd_stat(store_statfs(0x4f3a7f000/0x0/0x4ffc00000, data 0x2eedada/0x313f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205897728 unmapped: 46071808 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:33.546818+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 heartbeat osd_stat(store_statfs(0x4f3a7f000/0x0/0x4ffc00000, data 0x2eedada/0x313f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205897728 unmapped: 46071808 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:34.546962+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205897728 unmapped: 46071808 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:35.547125+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.172523499s of 16.321443558s, submitted: 48
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 216580096 unmapped: 35389440 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:36.547305+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215375872 unmapped: 36593664 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:37.547438+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580596 data_alloc: 234881024 data_used: 22663168
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 heartbeat osd_stat(store_statfs(0x4f28bc000/0x0/0x4ffc00000, data 0x40afada/0x4301000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215375872 unmapped: 36593664 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:38.547628+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215375872 unmapped: 36593664 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:39.547872+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215375872 unmapped: 36593664 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:40.548050+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215375872 unmapped: 36593664 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:41.548265+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215375872 unmapped: 36593664 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:42.548437+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 heartbeat osd_stat(store_statfs(0x4f2863000/0x0/0x4ffc00000, data 0x4100ada/0x4352000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580756 data_alloc: 234881024 data_used: 22667264
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215384064 unmapped: 36585472 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:43.548652+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215384064 unmapped: 36585472 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:44.548839+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 ms_handle_reset con 0x5651f6c5cc00 session 0x5651f7bac000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f3f43000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215384064 unmapped: 36585472 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:45.548994+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215384064 unmapped: 36585472 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:46.549210+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 heartbeat osd_stat(store_statfs(0x4f284a000/0x0/0x4ffc00000, data 0x4122ada/0x4374000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.659181595s of 11.082481384s, submitted: 175
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 ms_handle_reset con 0x5651f5002000 session 0x5651f4309860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215392256 unmapped: 36577280 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:47.549450+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3579864 data_alloc: 234881024 data_used: 22679552
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215392256 unmapped: 36577280 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:48.549631+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:49.549796+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215392256 unmapped: 36577280 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:50.549986+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215392256 unmapped: 36577280 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f68d9c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:51.550236+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215392256 unmapped: 36577280 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 heartbeat osd_stat(store_statfs(0x4f2843000/0x0/0x4ffc00000, data 0x4127b4c/0x437b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 heartbeat osd_stat(store_statfs(0x4f2843000/0x0/0x4ffc00000, data 0x4127b4c/0x437b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:52.550399+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215400448 unmapped: 36569088 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3585474 data_alloc: 234881024 data_used: 22712320
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bc9c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:53.550587+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215400448 unmapped: 36569088 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 436 handle_osd_map epochs [436,437], i have 436, src has [1,437]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 437 handle_osd_map epochs [437,437], i have 437, src has [1,437]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 437 ms_handle_reset con 0x5651f6bc9c00 session 0x5651f5f2b4a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:54.550757+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215416832 unmapped: 36552704 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 437 heartbeat osd_stat(store_statfs(0x4f2836000/0x0/0x4ffc00000, data 0x413172b/0x4387000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:55.550904+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215638016 unmapped: 36331520 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:56.551046+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215638016 unmapped: 36331520 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bc6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 437 handle_osd_map epochs [438,438], i have 437, src has [1,438]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.860356331s of 10.024820328s, submitted: 36
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 438 ms_handle_reset con 0x5651f6bc6400 session 0x5651f4318f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:57.551288+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215744512 unmapped: 36225024 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601436 data_alloc: 234881024 data_used: 22728704
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:58.551431+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215752704 unmapped: 36216832 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 438 heartbeat osd_stat(store_statfs(0x4f280a000/0x0/0x4ffc00000, data 0x416030a/0x43b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 438 handle_osd_map epochs [439,439], i have 438, src has [1,439]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 ms_handle_reset con 0x5651f4db6400 session 0x5651f44721e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:22:59.551542+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 heartbeat osd_stat(store_statfs(0x4f2806000/0x0/0x4ffc00000, data 0x4167e87/0x43b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215752704 unmapped: 36216832 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 heartbeat osd_stat(store_statfs(0x4f2806000/0x0/0x4ffc00000, data 0x4167e87/0x43b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 ms_handle_reset con 0x5651f68d9c00 session 0x5651f5e954a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:00.551696+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215752704 unmapped: 36216832 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:01.551924+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215752704 unmapped: 36216832 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 ms_handle_reset con 0x5651f5002000 session 0x5651f7bade00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 ms_handle_reset con 0x5651f4db6400 session 0x5651f43d5a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:02.552109+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 214736896 unmapped: 37232640 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3608438 data_alloc: 234881024 data_used: 22740992
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:03.552295+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 214745088 unmapped: 37224448 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f68d9c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:04.552446+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 214753280 unmapped: 37216256 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 heartbeat osd_stat(store_statfs(0x4f2801000/0x0/0x4ffc00000, data 0x416cee9/0x43bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:05.552695+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 214753280 unmapped: 37216256 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:06.552898+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 214753280 unmapped: 37216256 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:07.553083+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 214753280 unmapped: 37216256 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.459381104s of 10.592652321s, submitted: 36
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 ms_handle_reset con 0x5651f68d9c00 session 0x5651f43d3e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3608268 data_alloc: 234881024 data_used: 22749184
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 heartbeat osd_stat(store_statfs(0x4f27ff000/0x0/0x4ffc00000, data 0x416dee9/0x43be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:08.553222+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 214753280 unmapped: 37216256 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 heartbeat osd_stat(store_statfs(0x4f27ff000/0x0/0x4ffc00000, data 0x416dee9/0x43be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bc6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bc9c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 ms_handle_reset con 0x5651f6bc9c00 session 0x5651f43da780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 ms_handle_reset con 0x5651f6bc6400 session 0x5651f5c1fc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:09.553428+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 214753280 unmapped: 37216256 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:10.553829+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 214753280 unmapped: 37216256 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:11.554039+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 214753280 unmapped: 37216256 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:12.554208+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 214769664 unmapped: 37199872 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3613446 data_alloc: 234881024 data_used: 22753280
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:13.554383+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 214777856 unmapped: 37191680 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:14.554610+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 214777856 unmapped: 37191680 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 heartbeat osd_stat(store_statfs(0x4f27f8000/0x0/0x4ffc00000, data 0x4174ef9/0x43c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 ms_handle_reset con 0x5651f4db6400 session 0x5651f5f143c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:15.554763+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 214777856 unmapped: 37191680 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:16.554930+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 214777856 unmapped: 37191680 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f68d9c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 ms_handle_reset con 0x5651f68d9c00 session 0x5651f79f70e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 ms_handle_reset con 0x5651f5002000 session 0x5651f5f14d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:17.555088+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 heartbeat osd_stat(store_statfs(0x4f27f2000/0x0/0x4ffc00000, data 0x4179f09/0x43cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 214777856 unmapped: 37191680 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616611 data_alloc: 234881024 data_used: 22859776
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.558839798s of 10.666181564s, submitted: 28
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:18.555259+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 36118528 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bc9c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:19.555479+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 36102144 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 heartbeat osd_stat(store_statfs(0x4f27f2000/0x0/0x4ffc00000, data 0x4179f09/0x43cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:20.555653+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 36102144 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 heartbeat osd_stat(store_statfs(0x4f27f2000/0x0/0x4ffc00000, data 0x4179f09/0x43cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:21.555844+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215547904 unmapped: 36421632 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:22.556036+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 ms_handle_reset con 0x5651f6bc9c00 session 0x5651f7badc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 36413440 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3619026 data_alloc: 234881024 data_used: 22847488
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:23.556286+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 36413440 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7e51000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 ms_handle_reset con 0x5651f7e51000 session 0x5651f5120780
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 ms_handle_reset con 0x5651f4db6400 session 0x5651f50141e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:24.556433+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215687168 unmapped: 36282368 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 ms_handle_reset con 0x5651f5002000 session 0x5651f584ba40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f68d9c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:25.556622+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 ms_handle_reset con 0x5651f68d9c00 session 0x5651f7232960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 heartbeat osd_stat(store_statfs(0x4f27e2000/0x0/0x4ffc00000, data 0x417eef9/0x43d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215990272 unmapped: 35979264 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 heartbeat osd_stat(store_statfs(0x4f27ef000/0x0/0x4ffc00000, data 0x417eee9/0x43cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:26.556736+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bc9c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215990272 unmapped: 35979264 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 ms_handle_reset con 0x5651f6bc9c00 session 0x5651f79fdc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f7e51000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 ms_handle_reset con 0x5651f7e51000 session 0x5651f5f15680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:27.556870+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215834624 unmapped: 36134912 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 ms_handle_reset con 0x5651f4db6400 session 0x5651f7badc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3612733 data_alloc: 234881024 data_used: 22843392
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 439 handle_osd_map epochs [440,440], i have 439, src has [1,440]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 440 ms_handle_reset con 0x5651f5002000 session 0x5651f5c1fc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.594192505s of 10.016868591s, submitted: 127
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:28.557011+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215842816 unmapped: 36126720 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f68d9c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 440 ms_handle_reset con 0x5651f68d9c00 session 0x5651f44721e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bc9c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 440 handle_osd_map epochs [440,441], i have 440, src has [1,441]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 441 ms_handle_reset con 0x5651f6bc9c00 session 0x5651f7bac000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:29.557142+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 36102144 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 441 heartbeat osd_stat(store_statfs(0x4f27ea000/0x0/0x4ffc00000, data 0x417c5c7/0x43d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6be4800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 441 ms_handle_reset con 0x5651f6be4800 session 0x5651f5eb3680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:30.557320+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215916544 unmapped: 36052992 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 441 handle_osd_map epochs [441,442], i have 441, src has [1,442]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 441 handle_osd_map epochs [442,442], i have 442, src has [1,442]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 442 ms_handle_reset con 0x5651f4db6400 session 0x5651f7232b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:31.557547+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215965696 unmapped: 36003840 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 442 heartbeat osd_stat(store_statfs(0x4f27ea000/0x0/0x4ffc00000, data 0x4178136/0x43d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:32.557714+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215965696 unmapped: 36003840 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 442 ms_handle_reset con 0x5651f5002000 session 0x5651f7da74a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f68d9c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 442 ms_handle_reset con 0x5651f68d9c00 session 0x5651f7da7680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611958 data_alloc: 234881024 data_used: 22724608
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:33.557865+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 216031232 unmapped: 35938304 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 442 ms_handle_reset con 0x5651f74b1400 session 0x5651f43d2960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 442 ms_handle_reset con 0x5651f5a82c00 session 0x5651f43dbc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:34.559262+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 442 ms_handle_reset con 0x5651f4db6400 session 0x5651f4cad0e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:35.559486+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 442 heartbeat osd_stat(store_statfs(0x4f4ab7000/0x0/0x4ffc00000, data 0x1eaf052/0x2106000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 442 handle_osd_map epochs [443,443], i have 442, src has [1,443]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:36.559713+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:37.559878+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3227968 data_alloc: 218103808 data_used: 6426624
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:38.560037+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:39.560219+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:40.560521+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 443 heartbeat osd_stat(store_statfs(0x4f4ab3000/0x0/0x4ffc00000, data 0x1eb0ab5/0x2109000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:41.560795+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:42.560935+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3227968 data_alloc: 218103808 data_used: 6426624
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:43.561191+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:44.561382+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 443 heartbeat osd_stat(store_statfs(0x4f4ab3000/0x0/0x4ffc00000, data 0x1eb0ab5/0x2109000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:45.561595+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:46.561773+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:47.561950+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3227968 data_alloc: 218103808 data_used: 6426624
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:48.562237+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:49.562507+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 443 heartbeat osd_stat(store_statfs(0x4f4ab3000/0x0/0x4ffc00000, data 0x1eb0ab5/0x2109000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:50.562807+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:51.563058+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:52.563265+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3228128 data_alloc: 218103808 data_used: 6430720
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:53.563463+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 443 heartbeat osd_stat(store_statfs(0x4f4ab3000/0x0/0x4ffc00000, data 0x1eb0ab5/0x2109000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:54.563641+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:55.564036+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:56.564317+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 443 heartbeat osd_stat(store_statfs(0x4f4ab3000/0x0/0x4ffc00000, data 0x1eb0ab5/0x2109000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:57.564540+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3228128 data_alloc: 218103808 data_used: 6430720
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 443 heartbeat osd_stat(store_statfs(0x4f4ab3000/0x0/0x4ffc00000, data 0x1eb0ab5/0x2109000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:58.564725+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:23:59.564888+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:00.565068+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:01.565323+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 443 heartbeat osd_stat(store_statfs(0x4f4ab3000/0x0/0x4ffc00000, data 0x1eb0ab5/0x2109000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:02.565489+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3228128 data_alloc: 218103808 data_used: 6430720
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:03.565680+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 443 heartbeat osd_stat(store_statfs(0x4f4ab3000/0x0/0x4ffc00000, data 0x1eb0ab5/0x2109000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:04.565856+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:05.566030+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:06.566276+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 43884544 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:07.566434+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208093184 unmapped: 43876352 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 39.330684662s of 39.748054504s, submitted: 142
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 443 ms_handle_reset con 0x5651f5002000 session 0x5651f79fc000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3230682 data_alloc: 218103808 data_used: 6430720
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:08.566598+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208101376 unmapped: 43868160 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f68d9c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:09.566797+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208101376 unmapped: 43868160 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 443 heartbeat osd_stat(store_statfs(0x4f4ab3000/0x0/0x4ffc00000, data 0x1eb0b27/0x210b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 443 handle_osd_map epochs [444,444], i have 443, src has [1,444]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 444 ms_handle_reset con 0x5651f68d9c00 session 0x5651f72330e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:10.566986+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208109568 unmapped: 43859968 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f74b1400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 444 ms_handle_reset con 0x5651f74b1400 session 0x5651f79fd4a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:11.567228+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208109568 unmapped: 43859968 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bc9c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 444 ms_handle_reset con 0x5651f6bc9c00 session 0x5651f5e95a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 444 ms_handle_reset con 0x5651f4db6400 session 0x5651f79fc5a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:12.567367+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208125952 unmapped: 43843584 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3234839 data_alloc: 218103808 data_used: 6447104
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 444 ms_handle_reset con 0x5651f5002000 session 0x5651f5c59a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f68d9c00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:13.567516+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f74b1400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 444 ms_handle_reset con 0x5651f74b1400 session 0x5651f5c58d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6285400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208125952 unmapped: 43843584 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 444 handle_osd_map epochs [445,445], i have 444, src has [1,445]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 445 ms_handle_reset con 0x5651f6285400 session 0x5651f584b680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:14.567661+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 445 ms_handle_reset con 0x5651f68d9c00 session 0x5651f44ccb40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 445 heartbeat osd_stat(store_statfs(0x4f4ab0000/0x0/0x4ffc00000, data 0x1eb26a4/0x210e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 445 heartbeat osd_stat(store_statfs(0x4f4aae000/0x0/0x4ffc00000, data 0x1eb4203/0x210f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:15.567824+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 445 ms_handle_reset con 0x5651f4db6400 session 0x5651f4cac3c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:16.567995+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:17.568142+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3235501 data_alloc: 218103808 data_used: 6447104
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:18.568341+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 445 heartbeat osd_stat(store_statfs(0x4f4aaf000/0x0/0x4ffc00000, data 0x1eb4203/0x210f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:19.568575+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:20.568740+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:21.568950+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:22.569122+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3235821 data_alloc: 218103808 data_used: 6455296
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:23.569299+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 445 heartbeat osd_stat(store_statfs(0x4f4aaf000/0x0/0x4ffc00000, data 0x1eb4203/0x210f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:24.569476+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 445 heartbeat osd_stat(store_statfs(0x4f4aaf000/0x0/0x4ffc00000, data 0x1eb4203/0x210f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:25.569665+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 445 heartbeat osd_stat(store_statfs(0x4f4aaf000/0x0/0x4ffc00000, data 0x1eb4203/0x210f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 445 handle_osd_map epochs [446,446], i have 445, src has [1,446]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.255254745s of 17.447477341s, submitted: 45
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:26.569806+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 446 heartbeat osd_stat(store_statfs(0x4f4aab000/0x0/0x4ffc00000, data 0x1eb5c66/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:27.569952+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 446 heartbeat osd_stat(store_statfs(0x4f4aab000/0x0/0x4ffc00000, data 0x1eb5c66/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3239995 data_alloc: 218103808 data_used: 6463488
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:28.570112+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 446 heartbeat osd_stat(store_statfs(0x4f4aab000/0x0/0x4ffc00000, data 0x1eb5c66/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:29.570279+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:30.570438+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 446 heartbeat osd_stat(store_statfs(0x4f4aab000/0x0/0x4ffc00000, data 0x1eb5c66/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:31.570620+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:32.570812+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3239995 data_alloc: 218103808 data_used: 6463488
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:33.570950+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 446 heartbeat osd_stat(store_statfs(0x4f4aab000/0x0/0x4ffc00000, data 0x1eb5c66/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:34.571135+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:35.571371+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 446 heartbeat osd_stat(store_statfs(0x4f4aab000/0x0/0x4ffc00000, data 0x1eb5c66/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:36.571567+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:37.571786+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3239995 data_alloc: 218103808 data_used: 6463488
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:38.571979+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:39.572204+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:40.572364+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:41.572586+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 446 heartbeat osd_stat(store_statfs(0x4f4aab000/0x0/0x4ffc00000, data 0x1eb5c66/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:42.572771+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3239995 data_alloc: 218103808 data_used: 6463488
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:43.573024+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:44.573217+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 446 heartbeat osd_stat(store_statfs(0x4f4aab000/0x0/0x4ffc00000, data 0x1eb5c66/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:45.573396+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 446 heartbeat osd_stat(store_statfs(0x4f4aab000/0x0/0x4ffc00000, data 0x1eb5c66/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:46.573554+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:47.573744+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 446 heartbeat osd_stat(store_statfs(0x4f4aab000/0x0/0x4ffc00000, data 0x1eb5c66/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3239995 data_alloc: 218103808 data_used: 6463488
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:48.573912+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:49.574078+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 43827200 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 446 heartbeat osd_stat(store_statfs(0x4f4aab000/0x0/0x4ffc00000, data 0x1eb5c66/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:50.574280+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.015634537s of 25.030782700s, submitted: 50
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208150528 unmapped: 43819008 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:51.574491+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208150528 unmapped: 43819008 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 446 ms_handle_reset con 0x5651f5002000 session 0x5651f5f45c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:52.574676+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 446 heartbeat osd_stat(store_statfs(0x4f4aab000/0x0/0x4ffc00000, data 0x1eb5c76/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208150528 unmapped: 43819008 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3242424 data_alloc: 218103808 data_used: 6467584
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:53.574856+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208150528 unmapped: 43819008 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6285400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 446 heartbeat osd_stat(store_statfs(0x4f4aab000/0x0/0x4ffc00000, data 0x1eb5c76/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:54.575049+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 209215488 unmapped: 42754048 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 446 handle_osd_map epochs [446,447], i have 446, src has [1,447]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _renew_subs
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 handle_osd_map epochs [447,447], i have 447, src has [1,447]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f6285400 session 0x5651f5e79a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:55.575237+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208175104 unmapped: 43794432 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:56.575409+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f74b1400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f74b1400 session 0x5651f5e7ad20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208183296 unmapped: 43786240 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:57.575600+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f4aa5000/0x0/0x4ffc00000, data 0x1eb7866/0x2118000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208183296 unmapped: 43786240 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250402 data_alloc: 218103808 data_used: 6479872
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:58.575778+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208183296 unmapped: 43786240 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:24:59.575935+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208183296 unmapped: 43786240 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:00.576099+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208183296 unmapped: 43786240 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:01.576373+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208183296 unmapped: 43786240 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:02.576588+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208183296 unmapped: 43786240 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f4aa5000/0x0/0x4ffc00000, data 0x1eb7866/0x2118000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250402 data_alloc: 218103808 data_used: 6479872
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:03.576776+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208183296 unmapped: 43786240 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:04.576968+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208183296 unmapped: 43786240 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:05.577199+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bc9800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f6bc9800 session 0x5651f7da6d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f4db6400 session 0x5651f4309e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f5002000 session 0x5651f50e8b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208183296 unmapped: 43786240 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6285400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f6285400 session 0x5651f5bd65a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f74b1400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.500113487s of 15.555168152s, submitted: 16
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f74b1400 session 0x5651f4473680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4dba400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f4dba400 session 0x5651f7bac1e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4dba400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f4dba400 session 0x5651f5121c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f4db6400 session 0x5651f5121680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:06.577352+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f5002000 session 0x5651f726fa40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205627392 unmapped: 46342144 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f426c000/0x0/0x4ffc00000, data 0x26f0876/0x2952000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:07.577538+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205627392 unmapped: 46342144 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322262 data_alloc: 218103808 data_used: 6479872
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:08.577675+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205627392 unmapped: 46342144 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:09.577837+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205627392 unmapped: 46342144 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:10.578019+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6285400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f6285400 session 0x5651f7bad2c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205856768 unmapped: 46112768 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:11.578198+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f74b1400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bddc00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205856768 unmapped: 46112768 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:12.578358+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205856768 unmapped: 46112768 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f4242000/0x0/0x4ffc00000, data 0x271a876/0x297c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3363421 data_alloc: 234881024 data_used: 11669504
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:13.578507+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 44703744 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:14.578726+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 44703744 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:15.578854+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 44703744 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:16.578986+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 44703744 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:17.579123+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f4242000/0x0/0x4ffc00000, data 0x271a876/0x297c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 44703744 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385341 data_alloc: 234881024 data_used: 14807040
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:18.579210+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 44703744 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:19.579333+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 44703744 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:20.579450+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 44703744 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:21.579601+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 44703744 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f4242000/0x0/0x4ffc00000, data 0x271a876/0x297c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:22.579795+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.492874146s of 16.609563828s, submitted: 27
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 209018880 unmapped: 42950656 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3434747 data_alloc: 234881024 data_used: 15204352
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:23.579943+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 209231872 unmapped: 42737664 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:24.580132+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f3cd4000/0x0/0x4ffc00000, data 0x2c78876/0x2eda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 209821696 unmapped: 42147840 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:25.580370+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 209821696 unmapped: 42147840 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:26.580512+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f3c8d000/0x0/0x4ffc00000, data 0x2cb9876/0x2f1b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 209821696 unmapped: 42147840 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:27.580714+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 209821696 unmapped: 42147840 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447065 data_alloc: 234881024 data_used: 15052800
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:28.580883+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 209821696 unmapped: 42147840 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:29.581039+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f3c8d000/0x0/0x4ffc00000, data 0x2cb9876/0x2f1b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 42541056 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:30.581214+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f3c81000/0x0/0x4ffc00000, data 0x2cdb876/0x2f3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 42541056 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:31.581407+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f3c81000/0x0/0x4ffc00000, data 0x2cdb876/0x2f3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 42541056 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:32.581565+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 42541056 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438913 data_alloc: 234881024 data_used: 15060992
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:33.581799+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 42541056 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:34.581970+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 42541056 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.120042801s of 12.465004921s, submitted: 81
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:35.582191+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f3c81000/0x0/0x4ffc00000, data 0x2cdb876/0x2f3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 42541056 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f6bd6400 session 0x5651f3f101e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:36.582405+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 42541056 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f3c7c000/0x0/0x4ffc00000, data 0x2ce0876/0x2f42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:37.582588+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f74b1400 session 0x5651f5bd65a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f6bddc00 session 0x5651f5f14d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 42541056 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f4db6400 session 0x5651f5055e00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262037 data_alloc: 218103808 data_used: 6492160
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:38.582725+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205635584 unmapped: 46333952 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:39.582935+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205635584 unmapped: 46333952 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:40.583393+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205635584 unmapped: 46333952 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:41.584309+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4dba400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f4dba400 session 0x5651f7bac1e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f5002000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f5002000 session 0x5651f4dd8000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f4aa6000/0x0/0x4ffc00000, data 0x1eb7866/0x2118000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205651968 unmapped: 46317568 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:42.584723+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f4db6400 session 0x5651f5014d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4dba400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205651968 unmapped: 46317568 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f4dba400 session 0x5651f37d7680
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:43.585000+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262121 data_alloc: 218103808 data_used: 6483968
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205651968 unmapped: 46317568 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:44.585379+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f4aa7000/0x0/0x4ffc00000, data 0x1eb7856/0x2117000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205651968 unmapped: 46317568 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:45.585956+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205651968 unmapped: 46317568 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:46.586413+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205651968 unmapped: 46317568 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:47.586686+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f4aa7000/0x0/0x4ffc00000, data 0x1eb7856/0x2117000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205651968 unmapped: 46317568 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:48.586880+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262121 data_alloc: 218103808 data_used: 6483968
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205651968 unmapped: 46317568 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:49.587392+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f4aa7000/0x0/0x4ffc00000, data 0x1eb7856/0x2117000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205651968 unmapped: 46317568 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:50.587717+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 205651968 unmapped: 46317568 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:51.588066+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bddc00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f6bddc00 session 0x5651f5f15860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f74b1400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f74b1400 session 0x5651f50552c0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 206864384 unmapped: 45105152 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:52.588281+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 206864384 unmapped: 45105152 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:53.588623+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3261321 data_alloc: 218103808 data_used: 8450048
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 206864384 unmapped: 45105152 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f4aa7000/0x0/0x4ffc00000, data 0x1eb7856/0x2117000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:54.588790+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 206864384 unmapped: 45105152 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f4aa7000/0x0/0x4ffc00000, data 0x1eb7856/0x2117000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:55.589203+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 206864384 unmapped: 45105152 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:56.589613+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 206864384 unmapped: 45105152 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:57.589864+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f4aa7000/0x0/0x4ffc00000, data 0x1eb7856/0x2117000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 206864384 unmapped: 45105152 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:58.590060+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3261321 data_alloc: 218103808 data_used: 8450048
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4023000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.393486023s of 23.568820953s, submitted: 46
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 206864384 unmapped: 45105152 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:25:59.590331+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 206864384 unmapped: 45105152 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f4023000 session 0x5651f5e79a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:00.590586+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f4aa6000/0x0/0x4ffc00000, data 0x1eb7866/0x2118000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 206864384 unmapped: 45105152 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:01.590855+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 206864384 unmapped: 45105152 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:02.591075+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 206864384 unmapped: 45105152 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:03.591325+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3264470 data_alloc: 218103808 data_used: 8450048
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 206864384 unmapped: 45105152 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:04.591636+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 206864384 unmapped: 45105152 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:05.591763+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f4aa6000/0x0/0x4ffc00000, data 0x1eb7866/0x2118000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 206864384 unmapped: 45105152 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:06.592031+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 206864384 unmapped: 45105152 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:07.592302+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 206864384 unmapped: 45105152 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:08.592474+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3264470 data_alloc: 218103808 data_used: 8450048
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 206864384 unmapped: 45105152 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f4aa6000/0x0/0x4ffc00000, data 0x1eb7866/0x2118000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:09.592734+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 206864384 unmapped: 45105152 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:10.594981+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 206864384 unmapped: 45105152 heap: 251969536 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4023000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.710840225s of 12.731781006s, submitted: 6
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f4023000 session 0x5651f72334a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:11.596545+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f4aa6000/0x0/0x4ffc00000, data 0x1eb7866/0x2118000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f4db6400 session 0x5651f584a000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4dba400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 219463680 unmapped: 36708352 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:12.596753+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f4dba400 session 0x5651f44cc1e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bddc00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f6bddc00 session 0x5651f4472f00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 48152576 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:13.597860+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548180 data_alloc: 218103808 data_used: 8454144
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f22a5000/0x0/0x4ffc00000, data 0x46b7890/0x4919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 48152576 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:14.598789+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f22a5000/0x0/0x4ffc00000, data 0x46b78c8/0x4919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 48152576 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:15.599127+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 48152576 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:16.599534+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 48152576 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f22a5000/0x0/0x4ffc00000, data 0x46b78c8/0x4919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:17.599759+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 48152576 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:18.599990+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548180 data_alloc: 218103808 data_used: 8454144
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f22a5000/0x0/0x4ffc00000, data 0x46b78c8/0x4919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 48152576 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:19.600241+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 48152576 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:20.600383+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 48152576 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:21.600548+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f22a5000/0x0/0x4ffc00000, data 0x46b78c8/0x4919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 48152576 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:22.600707+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 48152576 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:23.600903+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548180 data_alloc: 218103808 data_used: 8454144
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 48152576 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:24.601048+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f74b1400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.487824440s of 13.334280014s, submitted: 78
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f74b1400 session 0x5651f5bbef00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f74b1400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208035840 unmapped: 48136192 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4023000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:25.601196+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f22a4000/0x0/0x4ffc00000, data 0x46b78eb/0x491a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208035840 unmapped: 48136192 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:26.601480+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208035840 unmapped: 48136192 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:27.601763+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208035840 unmapped: 48136192 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:28.601921+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3561830 data_alloc: 218103808 data_used: 9867264
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208052224 unmapped: 48119808 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:29.602101+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208052224 unmapped: 48119808 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:30.602275+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208052224 unmapped: 48119808 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:31.602482+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f22a4000/0x0/0x4ffc00000, data 0x46b78eb/0x491a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208052224 unmapped: 48119808 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:32.602691+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208052224 unmapped: 48119808 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:33.602850+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3590310 data_alloc: 234881024 data_used: 13860864
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208052224 unmapped: 48119808 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:34.602992+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208052224 unmapped: 48119808 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:35.603191+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208052224 unmapped: 48119808 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:36.603328+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f22a4000/0x0/0x4ffc00000, data 0x46b78eb/0x491a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 208052224 unmapped: 48119808 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:37.603501+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.482630730s of 13.522216797s, submitted: 12
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 215392256 unmapped: 40779776 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:38.603629+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3618474 data_alloc: 234881024 data_used: 14024704
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f22a4000/0x0/0x4ffc00000, data 0x46b78eb/0x491a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 223207424 unmapped: 32964608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:39.603808+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 218750976 unmapped: 37421056 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:40.603945+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220864512 unmapped: 35307520 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:41.604221+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220864512 unmapped: 35307520 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:42.604357+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4ef052000/0x0/0x4ffc00000, data 0x55c98eb/0x582c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xb37f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220864512 unmapped: 35307520 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:43.604553+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3715370 data_alloc: 234881024 data_used: 14110720
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220864512 unmapped: 35307520 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:44.604734+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4ef052000/0x0/0x4ffc00000, data 0x55c98eb/0x582c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xb37f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220864512 unmapped: 35307520 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:45.604941+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220864512 unmapped: 35307520 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:46.605120+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 31K writes, 117K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s
                                           Cumulative WAL: 31K writes, 11K syncs, 2.63 writes per sync, written: 0.09 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5155 writes, 16K keys, 5155 commit groups, 1.0 writes per commit group, ingest: 18.86 MB, 0.03 MB/s
                                           Interval WAL: 5155 writes, 2191 syncs, 2.35 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f74b1400 session 0x5651f4cacd20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f4023000 session 0x5651f7da6960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220749824 unmapped: 35422208 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:47.605215+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4db6400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 ms_handle_reset con 0x5651f4db6400 session 0x5651f4319c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220766208 unmapped: 35405824 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:48.605392+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3707176 data_alloc: 234881024 data_used: 14123008
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4ef052000/0x0/0x4ffc00000, data 0x55c98c8/0x582b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xb37f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220766208 unmapped: 35405824 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:49.605564+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220766208 unmapped: 35405824 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:50.605717+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220766208 unmapped: 35405824 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:51.605919+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220766208 unmapped: 35405824 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:52.606082+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220766208 unmapped: 35405824 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:53.606310+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3707176 data_alloc: 234881024 data_used: 14123008
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220766208 unmapped: 35405824 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:54.606484+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4dba400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.619311333s of 16.306537628s, submitted: 189
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 heartbeat osd_stat(store_statfs(0x4ef052000/0x0/0x4ffc00000, data 0x55c98c8/0x582b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xb37f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 447 handle_osd_map epochs [447,448], i have 447, src has [1,448]
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 ms_handle_reset con 0x5651f4dba400 session 0x5651f5f44d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220766208 unmapped: 35405824 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:55.606665+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bddc00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 ms_handle_reset con 0x5651f6bddc00 session 0x5651f4dd90e0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220774400 unmapped: 35397632 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:56.606801+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220774400 unmapped: 35397632 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:57.606927+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220774400 unmapped: 35397632 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:58.607136+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4eec40000/0x0/0x4ffc00000, data 0x55cb445/0x582e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xb78f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3709343 data_alloc: 234881024 data_used: 14127104
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220774400 unmapped: 35397632 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:26:59.607398+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220774400 unmapped: 35397632 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:00.607567+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4eec40000/0x0/0x4ffc00000, data 0x55cb445/0x582e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xb78f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220774400 unmapped: 35397632 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:01.607772+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220774400 unmapped: 35397632 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:02.607942+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:03.608127+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220774400 unmapped: 35397632 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3709343 data_alloc: 234881024 data_used: 14127104
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:04.608361+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220774400 unmapped: 35397632 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4eec40000/0x0/0x4ffc00000, data 0x55cb445/0x582e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xb78f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4023000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 ms_handle_reset con 0x5651f4023000 session 0x5651f5055860
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:05.608526+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220553216 unmapped: 35618816 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:06.608660+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220553216 unmapped: 35618816 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f74b1400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 ms_handle_reset con 0x5651f74b1400 session 0x5651f5015c20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4eec40000/0x0/0x4ffc00000, data 0x55cb445/0x582e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xb78f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:07.608989+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220553216 unmapped: 35618816 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdf000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 ms_handle_reset con 0x5651f6bdf000 session 0x5651f50e8d20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd7000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.988025665s of 13.160463333s, submitted: 20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 ms_handle_reset con 0x5651f6bd7000 session 0x5651f5f14b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:08.609223+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220880896 unmapped: 35291136 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4ceec00
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6c83000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3714020 data_alloc: 234881024 data_used: 14127104
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:09.609419+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220880896 unmapped: 35291136 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:10.609587+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220880896 unmapped: 35291136 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:11.609766+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220880896 unmapped: 35291136 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4eec1c000/0x0/0x4ffc00000, data 0x55ef445/0x5852000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xb78f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:12.609944+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220897280 unmapped: 35274752 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4eec1c000/0x0/0x4ffc00000, data 0x55ef445/0x5852000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xb78f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:13.610099+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220897280 unmapped: 35274752 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3715780 data_alloc: 234881024 data_used: 14282752
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4eec1c000/0x0/0x4ffc00000, data 0x55ef445/0x5852000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xb78f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:14.610278+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220897280 unmapped: 35274752 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:15.610434+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220897280 unmapped: 35274752 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4eec1c000/0x0/0x4ffc00000, data 0x55ef445/0x5852000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xb78f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:16.610639+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220897280 unmapped: 35274752 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:17.610780+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220897280 unmapped: 35274752 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:18.610917+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220897280 unmapped: 35274752 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3715780 data_alloc: 234881024 data_used: 14282752
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:19.611063+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220897280 unmapped: 35274752 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4eec1c000/0x0/0x4ffc00000, data 0x55ef445/0x5852000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xb78f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:20.611370+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220897280 unmapped: 35274752 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:21.611639+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 220905472 unmapped: 35266560 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.076938629s of 14.112119675s, submitted: 10
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:22.611815+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 227180544 unmapped: 28991488 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:23.611989+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 227180544 unmapped: 28991488 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4ee81c000/0x0/0x4ffc00000, data 0x59ef445/0x5c52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xb78f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777056 data_alloc: 234881024 data_used: 22417408
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:24.612197+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 227180544 unmapped: 28991488 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:25.612373+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 227180544 unmapped: 28991488 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:26.612572+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 227180544 unmapped: 28991488 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:27.612725+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 230318080 unmapped: 25853952 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:28.612904+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 230318080 unmapped: 25853952 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3788176 data_alloc: 234881024 data_used: 26611712
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4ee81c000/0x0/0x4ffc00000, data 0x59ef445/0x5c52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xb78f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:29.613039+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 230318080 unmapped: 25853952 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4ee81c000/0x0/0x4ffc00000, data 0x59ef445/0x5c52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xb78f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:30.613227+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 230318080 unmapped: 25853952 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:31.613383+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 230318080 unmapped: 25853952 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:32.613527+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 230318080 unmapped: 25853952 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.814247131s of 10.850455284s, submitted: 7
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:33.613659+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 230440960 unmapped: 25731072 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3787472 data_alloc: 234881024 data_used: 26611712
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4ee81c000/0x0/0x4ffc00000, data 0x59ef445/0x5c52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xb78f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:34.613844+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 230440960 unmapped: 25731072 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4ee81c000/0x0/0x4ffc00000, data 0x59ef445/0x5c52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xb78f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:35.613993+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 230440960 unmapped: 25731072 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4ee81c000/0x0/0x4ffc00000, data 0x59ef445/0x5c52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xb78f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:36.614256+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 230440960 unmapped: 25731072 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:37.614430+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234356736 unmapped: 21815296 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:38.614628+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234356736 unmapped: 21815296 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3798352 data_alloc: 251658240 data_used: 30695424
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4ee81c000/0x0/0x4ffc00000, data 0x59ef445/0x5c52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xb78f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:39.614842+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234356736 unmapped: 21815296 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:40.615040+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234356736 unmapped: 21815296 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:41.615328+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234356736 unmapped: 21815296 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:42.615519+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234356736 unmapped: 21815296 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4ee81c000/0x0/0x4ffc00000, data 0x59ef445/0x5c52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xb78f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:43.615720+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234356736 unmapped: 21815296 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3798352 data_alloc: 251658240 data_used: 30695424
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:44.615906+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4ee81c000/0x0/0x4ffc00000, data 0x59ef445/0x5c52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xb78f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234356736 unmapped: 21815296 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4ee81c000/0x0/0x4ffc00000, data 0x59ef445/0x5c52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xb78f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:45.616095+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234356736 unmapped: 21815296 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:46.616297+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234356736 unmapped: 21815296 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:47.616469+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.856099129s of 14.869825363s, submitted: 4
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 ms_handle_reset con 0x5651f4ceec00 session 0x5651f7bada40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 ms_handle_reset con 0x5651f6c83000 session 0x5651f4318b40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234356736 unmapped: 21815296 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4023000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 ms_handle_reset con 0x5651f4023000 session 0x5651f3f0e960
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:48.616597+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4ee840000/0x0/0x4ffc00000, data 0x59cb445/0x5c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xb78f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234356736 unmapped: 21815296 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3791376 data_alloc: 251658240 data_used: 30601216
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:49.616726+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4ee840000/0x0/0x4ffc00000, data 0x59cb445/0x5c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xb78f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234356736 unmapped: 21815296 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:50.616900+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234356736 unmapped: 21815296 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:51.617098+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234356736 unmapped: 21815296 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4ee840000/0x0/0x4ffc00000, data 0x59cb445/0x5c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xb78f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:52.617258+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234356736 unmapped: 21815296 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bd7000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 ms_handle_reset con 0x5651f6bd7000 session 0x5651f5e95a40
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f6bdf000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:53.617401+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 ms_handle_reset con 0x5651f6bdf000 session 0x5651f43dbc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234422272 unmapped: 21749760 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3763672 data_alloc: 251658240 data_used: 30597120
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:54.617555+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234446848 unmapped: 21725184 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f74b1400
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 ms_handle_reset con 0x5651f74b1400 session 0x5651f44cdc20
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: handle_auth_request added challenge on 0x5651f4023000
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 ms_handle_reset con 0x5651f4023000 session 0x5651f44734a0
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:55.617730+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:56.617931+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:57.618122+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:58.618295+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3762952 data_alloc: 251658240 data_used: 30597120
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:27:59.618481+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:00.618689+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:01.618959+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:02.619138+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:03.619311+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3762952 data_alloc: 251658240 data_used: 30597120
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:04.619434+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:05.619614+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:06.619753+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:07.619910+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:08.620039+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3762952 data_alloc: 251658240 data_used: 30597120
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:09.620218+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:10.620403+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:11.620659+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:12.620853+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:13.621082+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3762952 data_alloc: 251658240 data_used: 30597120
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:14.621257+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:15.621428+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:16.621588+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:17.621785+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:18.621983+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3762952 data_alloc: 251658240 data_used: 30597120
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:19.622248+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:20.622408+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:21.623133+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:22.623398+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:23.623625+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3762952 data_alloc: 251658240 data_used: 30597120
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:24.623843+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:25.624038+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:26.624223+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:27.624399+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:28.624607+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3762952 data_alloc: 251658240 data_used: 30597120
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:29.624757+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:30.624991+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:31.625221+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:32.625395+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:33.625637+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3762952 data_alloc: 251658240 data_used: 30597120
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:34.625822+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:35.626095+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:36.626294+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:37.626483+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:38.626649+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3762952 data_alloc: 251658240 data_used: 30597120
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:39.626819+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:40.627013+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:41.627213+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:42.627339+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:43.627501+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3762952 data_alloc: 251658240 data_used: 30597120
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:44.627702+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:45.627941+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:46.628233+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:47.628535+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:48.628785+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3762952 data_alloc: 251658240 data_used: 30597120
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:49.629038+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:50.629262+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:51.629577+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:52.629856+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:53.630179+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3762952 data_alloc: 251658240 data_used: 30597120
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:54.630511+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:55.631368+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:56.632247+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:57.632840+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:58.633187+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3762952 data_alloc: 251658240 data_used: 30597120
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:28:59.633787+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:00.634194+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:01.634387+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:02.634819+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:03.635261+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3762952 data_alloc: 251658240 data_used: 30597120
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:04.635496+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234463232 unmapped: 21708800 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:05.636003+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:06.636224+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:07.636417+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:08.636729+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3762952 data_alloc: 251658240 data_used: 30597120
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:09.636930+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:10.637231+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:11.637499+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:12.637873+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:13.638090+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3762952 data_alloc: 251658240 data_used: 30597120
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:14.638328+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:15.638476+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:16.638635+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:17.638860+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:18.639071+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3762952 data_alloc: 251658240 data_used: 30597120
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:19.639216+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:20.639420+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:21.639683+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:22.639839+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:23.639975+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3762952 data_alloc: 251658240 data_used: 30597120
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:24.640104+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:25.640222+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:26.640365+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:27.640496+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:28.640615+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:29.640727+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3762952 data_alloc: 251658240 data_used: 30597120
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:30.640848+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:31.640988+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:32.641109+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:33.641207+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:34.641348+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3762952 data_alloc: 251658240 data_used: 30597120
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:35.641462+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:36.641585+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234471424 unmapped: 21700608 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:37.641718+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234479616 unmapped: 21692416 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: do_command 'config diff' '{prefix=config diff}'
Oct 11 04:30:11 compute-0 ceph-osd[87591]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:38.641839+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: do_command 'config show' '{prefix=config show}'
Oct 11 04:30:11 compute-0 ceph-osd[87591]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 11 04:30:11 compute-0 ceph-osd[87591]: do_command 'counter dump' '{prefix=counter dump}'
Oct 11 04:30:11 compute-0 ceph-osd[87591]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 11 04:30:11 compute-0 ceph-osd[87591]: do_command 'counter schema' '{prefix=counter schema}'
Oct 11 04:30:11 compute-0 ceph-osd[87591]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234577920 unmapped: 21594112 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:39.641971+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 04:30:11 compute-0 ceph-osd[87591]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 04:30:11 compute-0 ceph-osd[87591]: bluestore.MempoolThread(0x5651f2adbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3762952 data_alloc: 251658240 data_used: 30597120
Oct 11 04:30:11 compute-0 ceph-osd[87591]: osd.0 448 heartbeat osd_stat(store_statfs(0x4efc81000/0x0/0x4ffc00000, data 0x55cb435/0x582d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa74f9c6), peers [1,2] op hist [])
Oct 11 04:30:11 compute-0 ceph-osd[87591]: prioritycache tune_memory target: 4294967296 mapped: 234684416 unmapped: 21487616 heap: 256172032 old mem: 2845415832 new mem: 2845415832
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: tick
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_tickets
Oct 11 04:30:11 compute-0 ceph-osd[87591]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T04:29:40.642140+0000)
Oct 11 04:30:11 compute-0 ceph-osd[87591]: do_command 'log dump' '{prefix=log dump}'
Oct 11 04:30:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Oct 11 04:30:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/130025415' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 11 04:30:11 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/186247759' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 11 04:30:11 compute-0 ceph-mon[74273]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 11 04:30:11 compute-0 ceph-mon[74273]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 11 04:30:11 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/130025415' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 11 04:30:11 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19365 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Oct 11 04:30:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4199231592' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 11 04:30:12 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:30:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Oct 11 04:30:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1887940955' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 11 04:30:12 compute-0 ceph-mon[74273]: from='client.19365 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4199231592' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 11 04:30:12 compute-0 ceph-mon[74273]: pgmap v2054: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:30:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Oct 11 04:30:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1920030402' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 11 04:30:13 compute-0 nova_compute[259850]: 2025-10-11 04:30:13.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:30:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Oct 11 04:30:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/323618137' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 11 04:30:13 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1887940955' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 11 04:30:13 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1920030402' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 11 04:30:13 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/323618137' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 11 04:30:13 compute-0 sudo[315412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:30:13 compute-0 sudo[315412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:13 compute-0 sudo[315412]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:13 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19375 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:13 compute-0 systemd[1]: Starting Hostname Service...
Oct 11 04:30:13 compute-0 sudo[315448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:30:13 compute-0 sudo[315448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:13 compute-0 sudo[315448]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:14 compute-0 systemd[1]: Started Hostname Service.
Oct 11 04:30:14 compute-0 sudo[315479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:30:14 compute-0 sudo[315479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:14 compute-0 sudo[315479]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:14 compute-0 sudo[315531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 04:30:14 compute-0 sudo[315531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:14 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:30:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Oct 11 04:30:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2184513820' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 11 04:30:14 compute-0 sudo[315531]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:14 compute-0 nova_compute[259850]: 2025-10-11 04:30:14.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:30:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:30:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:30:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:30:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:30:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:30:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:30:14 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev ed8b698e-171d-4982-8de0-b346c0da48d3 does not exist
Oct 11 04:30:14 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 5b4782c7-8b6b-4b33-844f-4a9ca7ed4074 does not exist
Oct 11 04:30:14 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 7a9af5e8-48fb-4dc5-9be5-d42f8da00adb does not exist
Oct 11 04:30:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:30:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:30:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:30:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:30:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:30:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:30:14 compute-0 ceph-mon[74273]: from='client.19375 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:14 compute-0 ceph-mon[74273]: pgmap v2055: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:30:14 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2184513820' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 11 04:30:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:30:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:30:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:30:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:30:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:30:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:30:14 compute-0 sudo[315658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:30:14 compute-0 sudo[315658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:14 compute-0 sudo[315658]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:14 compute-0 sudo[315683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:30:14 compute-0 sudo[315683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:14 compute-0 sudo[315683]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Oct 11 04:30:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/618986521' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 11 04:30:14 compute-0 sudo[315708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:30:14 compute-0 sudo[315708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:14 compute-0 sudo[315708]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:14 compute-0 sudo[315737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 04:30:14 compute-0 sudo[315737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:15 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19381 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:30:15 compute-0 podman[315827]: 2025-10-11 04:30:15.234075949 +0000 UTC m=+0.043515917 container create bfe0dba40e96bc9dade00ed30f87a4e41b9829da2678e87270e84bb7161a6f18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 04:30:15 compute-0 systemd[1]: Started libpod-conmon-bfe0dba40e96bc9dade00ed30f87a4e41b9829da2678e87270e84bb7161a6f18.scope.
Oct 11 04:30:15 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:30:15 compute-0 podman[315827]: 2025-10-11 04:30:15.212131641 +0000 UTC m=+0.021571629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:30:15 compute-0 podman[315827]: 2025-10-11 04:30:15.320002759 +0000 UTC m=+0.129442757 container init bfe0dba40e96bc9dade00ed30f87a4e41b9829da2678e87270e84bb7161a6f18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 04:30:15 compute-0 podman[315827]: 2025-10-11 04:30:15.331280227 +0000 UTC m=+0.140720195 container start bfe0dba40e96bc9dade00ed30f87a4e41b9829da2678e87270e84bb7161a6f18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:30:15 compute-0 podman[315827]: 2025-10-11 04:30:15.335039623 +0000 UTC m=+0.144479611 container attach bfe0dba40e96bc9dade00ed30f87a4e41b9829da2678e87270e84bb7161a6f18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 04:30:15 compute-0 cranky_dhawan[315847]: 167 167
Oct 11 04:30:15 compute-0 systemd[1]: libpod-bfe0dba40e96bc9dade00ed30f87a4e41b9829da2678e87270e84bb7161a6f18.scope: Deactivated successfully.
Oct 11 04:30:15 compute-0 podman[315827]: 2025-10-11 04:30:15.342618646 +0000 UTC m=+0.152058644 container died bfe0dba40e96bc9dade00ed30f87a4e41b9829da2678e87270e84bb7161a6f18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 04:30:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-149cab8d96656d78b3ff6539f7984782d8a6ae42e9f2ae0c6175b70b0c04bab5-merged.mount: Deactivated successfully.
Oct 11 04:30:15 compute-0 podman[315827]: 2025-10-11 04:30:15.396474833 +0000 UTC m=+0.205914801 container remove bfe0dba40e96bc9dade00ed30f87a4e41b9829da2678e87270e84bb7161a6f18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:30:15 compute-0 systemd[1]: libpod-conmon-bfe0dba40e96bc9dade00ed30f87a4e41b9829da2678e87270e84bb7161a6f18.scope: Deactivated successfully.
Oct 11 04:30:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Oct 11 04:30:15 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/682427754' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 11 04:30:15 compute-0 podman[315894]: 2025-10-11 04:30:15.59091115 +0000 UTC m=+0.064944100 container create 393df2c83f51137d1e5b37a73d570467a8eec1560ac7299be76c0a1e1ec65947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:30:15 compute-0 systemd[1]: Started libpod-conmon-393df2c83f51137d1e5b37a73d570467a8eec1560ac7299be76c0a1e1ec65947.scope.
Oct 11 04:30:15 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/269e65f64fecff60c787777dbf72932d003279f7a987d128f7585d6013080ce9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/269e65f64fecff60c787777dbf72932d003279f7a987d128f7585d6013080ce9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/269e65f64fecff60c787777dbf72932d003279f7a987d128f7585d6013080ce9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/269e65f64fecff60c787777dbf72932d003279f7a987d128f7585d6013080ce9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/269e65f64fecff60c787777dbf72932d003279f7a987d128f7585d6013080ce9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:15 compute-0 podman[315894]: 2025-10-11 04:30:15.56853004 +0000 UTC m=+0.042563050 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:30:15 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/618986521' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 11 04:30:15 compute-0 ceph-mon[74273]: from='client.19381 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:15 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/682427754' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 11 04:30:15 compute-0 podman[315894]: 2025-10-11 04:30:15.692956254 +0000 UTC m=+0.166989194 container init 393df2c83f51137d1e5b37a73d570467a8eec1560ac7299be76c0a1e1ec65947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 04:30:15 compute-0 podman[315894]: 2025-10-11 04:30:15.701410152 +0000 UTC m=+0.175443082 container start 393df2c83f51137d1e5b37a73d570467a8eec1560ac7299be76c0a1e1ec65947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:30:15 compute-0 podman[315894]: 2025-10-11 04:30:15.705904379 +0000 UTC m=+0.179937299 container attach 393df2c83f51137d1e5b37a73d570467a8eec1560ac7299be76c0a1e1ec65947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:30:15 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19385 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:16 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:30:16 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19387 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:16 compute-0 ceph-mon[74273]: from='client.19385 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:16 compute-0 ceph-mon[74273]: pgmap v2056: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:30:16 compute-0 ceph-mon[74273]: from='client.19387 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Oct 11 04:30:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3370140286' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 11 04:30:16 compute-0 eager_bose[315915]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:30:16 compute-0 eager_bose[315915]: --> relative data size: 1.0
Oct 11 04:30:16 compute-0 eager_bose[315915]: --> All data devices are unavailable
Oct 11 04:30:16 compute-0 systemd[1]: libpod-393df2c83f51137d1e5b37a73d570467a8eec1560ac7299be76c0a1e1ec65947.scope: Deactivated successfully.
Oct 11 04:30:16 compute-0 podman[316057]: 2025-10-11 04:30:16.842205096 +0000 UTC m=+0.047456227 container died 393df2c83f51137d1e5b37a73d570467a8eec1560ac7299be76c0a1e1ec65947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:30:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-269e65f64fecff60c787777dbf72932d003279f7a987d128f7585d6013080ce9-merged.mount: Deactivated successfully.
Oct 11 04:30:16 compute-0 podman[316057]: 2025-10-11 04:30:16.973239446 +0000 UTC m=+0.178490547 container remove 393df2c83f51137d1e5b37a73d570467a8eec1560ac7299be76c0a1e1ec65947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:30:16 compute-0 systemd[1]: libpod-conmon-393df2c83f51137d1e5b37a73d570467a8eec1560ac7299be76c0a1e1ec65947.scope: Deactivated successfully.
Oct 11 04:30:17 compute-0 sudo[315737]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Oct 11 04:30:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/533469772' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 11 04:30:17 compute-0 sudo[316093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:30:17 compute-0 sudo[316093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:17 compute-0 sudo[316093]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:17 compute-0 sudo[316127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:30:17 compute-0 sudo[316127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:17 compute-0 sudo[316127]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:17 compute-0 sudo[316159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:30:17 compute-0 sudo[316159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:17 compute-0 sudo[316159]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:17 compute-0 sudo[316202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- lvm list --format json
Oct 11 04:30:17 compute-0 sudo[316202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:17 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19393 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:17 compute-0 podman[316306]: 2025-10-11 04:30:17.669498368 +0000 UTC m=+0.047320164 container create 1bb4d78f3eabaed8a5e2ef99cf43fd2864f6889da4c16c1a13acb669543a0e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 04:30:17 compute-0 systemd[1]: Started libpod-conmon-1bb4d78f3eabaed8a5e2ef99cf43fd2864f6889da4c16c1a13acb669543a0e21.scope.
Oct 11 04:30:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3370140286' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 11 04:30:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/533469772' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 11 04:30:17 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:30:17 compute-0 podman[316306]: 2025-10-11 04:30:17.647107918 +0000 UTC m=+0.024929734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:30:17 compute-0 podman[316306]: 2025-10-11 04:30:17.742442183 +0000 UTC m=+0.120263989 container init 1bb4d78f3eabaed8a5e2ef99cf43fd2864f6889da4c16c1a13acb669543a0e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 04:30:17 compute-0 podman[316306]: 2025-10-11 04:30:17.750424628 +0000 UTC m=+0.128246404 container start 1bb4d78f3eabaed8a5e2ef99cf43fd2864f6889da4c16c1a13acb669543a0e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 04:30:17 compute-0 podman[316306]: 2025-10-11 04:30:17.753981918 +0000 UTC m=+0.131803724 container attach 1bb4d78f3eabaed8a5e2ef99cf43fd2864f6889da4c16c1a13acb669543a0e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wescoff, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:30:17 compute-0 sad_wescoff[316352]: 167 167
Oct 11 04:30:17 compute-0 systemd[1]: libpod-1bb4d78f3eabaed8a5e2ef99cf43fd2864f6889da4c16c1a13acb669543a0e21.scope: Deactivated successfully.
Oct 11 04:30:17 compute-0 podman[316306]: 2025-10-11 04:30:17.760360668 +0000 UTC m=+0.138182444 container died 1bb4d78f3eabaed8a5e2ef99cf43fd2864f6889da4c16c1a13acb669543a0e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 04:30:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f6c4bfd019713bb56bc60e0f1a88ba9ac4bc32b0fd10ff8ebe569009f1fb507-merged.mount: Deactivated successfully.
Oct 11 04:30:17 compute-0 podman[316306]: 2025-10-11 04:30:17.802384422 +0000 UTC m=+0.180206198 container remove 1bb4d78f3eabaed8a5e2ef99cf43fd2864f6889da4c16c1a13acb669543a0e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wescoff, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Oct 11 04:30:17 compute-0 systemd[1]: libpod-conmon-1bb4d78f3eabaed8a5e2ef99cf43fd2864f6889da4c16c1a13acb669543a0e21.scope: Deactivated successfully.
Oct 11 04:30:17 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19395 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:17 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:17 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:30:17 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:17 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:30:17 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:17 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894458247867422 of space, bias 1.0, pg target 0.8683374743602266 quantized to 32 (current 32)
Oct 11 04:30:17 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:17 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:30:17 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:17 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct 11 04:30:17 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:17 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 11 04:30:17 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:17 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:30:17 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:17 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:30:17 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:17 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:30:17 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:17 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:30:17 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:17 compute-0 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:30:17 compute-0 podman[316422]: 2025-10-11 04:30:17.981928429 +0000 UTC m=+0.052422238 container create 4d0d486d41f4fa335bfa140d549a488a9ac9b2da67f1c43155e11b71b1f81c04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mcnulty, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:30:18 compute-0 systemd[1]: Started libpod-conmon-4d0d486d41f4fa335bfa140d549a488a9ac9b2da67f1c43155e11b71b1f81c04.scope.
Oct 11 04:30:18 compute-0 podman[316422]: 2025-10-11 04:30:17.955007261 +0000 UTC m=+0.025501100 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:30:18 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff4a99c88f38ef402d7ef272cf4cdc96acf20f7c643d104ebcfe11a05ddca4d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff4a99c88f38ef402d7ef272cf4cdc96acf20f7c643d104ebcfe11a05ddca4d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff4a99c88f38ef402d7ef272cf4cdc96acf20f7c643d104ebcfe11a05ddca4d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff4a99c88f38ef402d7ef272cf4cdc96acf20f7c643d104ebcfe11a05ddca4d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:18 compute-0 podman[316422]: 2025-10-11 04:30:18.079988431 +0000 UTC m=+0.150482290 container init 4d0d486d41f4fa335bfa140d549a488a9ac9b2da67f1c43155e11b71b1f81c04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mcnulty, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:30:18 compute-0 podman[316422]: 2025-10-11 04:30:18.087800381 +0000 UTC m=+0.158294210 container start 4d0d486d41f4fa335bfa140d549a488a9ac9b2da67f1c43155e11b71b1f81c04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 04:30:18 compute-0 podman[316422]: 2025-10-11 04:30:18.09097016 +0000 UTC m=+0.161463979 container attach 4d0d486d41f4fa335bfa140d549a488a9ac9b2da67f1c43155e11b71b1f81c04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:30:18 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:30:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Oct 11 04:30:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2043124244' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 11 04:30:18 compute-0 nova_compute[259850]: 2025-10-11 04:30:18.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:30:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Oct 11 04:30:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/683161310' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 11 04:30:18 compute-0 ceph-mon[74273]: from='client.19393 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:18 compute-0 ceph-mon[74273]: from='client.19395 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:18 compute-0 ceph-mon[74273]: pgmap v2057: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:30:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2043124244' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 11 04:30:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/683161310' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 11 04:30:18 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19401 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]: {
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:     "0": [
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:         {
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "devices": [
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "/dev/loop3"
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             ],
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "lv_name": "ceph_lv0",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "lv_size": "21470642176",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd7ac921-1218-45c1-b1c6-7c594dbceccb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "lv_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "name": "ceph_lv0",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "tags": {
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.block_uuid": "1XVTvq-Um5W-oCLh-MZzq-EKrd-vgGj-JdO4S3",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.cluster_name": "ceph",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.crush_device_class": "",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.encrypted": "0",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.osd_fsid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.osd_id": "0",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.type": "block",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.vdo": "0"
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             },
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "type": "block",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "vg_name": "ceph_vg0"
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:         }
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:     ],
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:     "1": [
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:         {
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "devices": [
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "/dev/loop4"
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             ],
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "lv_name": "ceph_lv1",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "lv_size": "21470642176",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38da774d-7ecf-442f-9a7a-97978287cff8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "lv_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "name": "ceph_lv1",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "tags": {
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.block_uuid": "SYHCVo-UVf0-Iwc3-4dzz-QuCG-19LJ-9rRA5x",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.cluster_name": "ceph",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.crush_device_class": "",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.encrypted": "0",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.osd_fsid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.osd_id": "1",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.type": "block",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.vdo": "0"
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             },
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "type": "block",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "vg_name": "ceph_vg1"
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:         }
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:     ],
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:     "2": [
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:         {
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "devices": [
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "/dev/loop5"
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             ],
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "lv_name": "ceph_lv2",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "lv_size": "21470642176",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=23b68101-59a9-532f-ab6b-9acf78fb2162,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "lv_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "name": "ceph_lv2",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "tags": {
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.block_uuid": "RoMfIg-8Myq-NBZ3-1HM3-6QfC-sjk8-dlChvt",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.cluster_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.cluster_name": "ceph",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.crush_device_class": "",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.encrypted": "0",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.osd_fsid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.osd_id": "2",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.type": "block",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:                 "ceph.vdo": "0"
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             },
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "type": "block",
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:             "vg_name": "ceph_vg2"
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:         }
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]:     ]
Oct 11 04:30:18 compute-0 blissful_mcnulty[316468]: }
Oct 11 04:30:19 compute-0 systemd[1]: libpod-4d0d486d41f4fa335bfa140d549a488a9ac9b2da67f1c43155e11b71b1f81c04.scope: Deactivated successfully.
Oct 11 04:30:19 compute-0 podman[316422]: 2025-10-11 04:30:19.00335158 +0000 UTC m=+1.073845389 container died 4d0d486d41f4fa335bfa140d549a488a9ac9b2da67f1c43155e11b71b1f81c04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 04:30:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff4a99c88f38ef402d7ef272cf4cdc96acf20f7c643d104ebcfe11a05ddca4d8-merged.mount: Deactivated successfully.
Oct 11 04:30:19 compute-0 podman[316422]: 2025-10-11 04:30:19.074163805 +0000 UTC m=+1.144657614 container remove 4d0d486d41f4fa335bfa140d549a488a9ac9b2da67f1c43155e11b71b1f81c04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:30:19 compute-0 systemd[1]: libpod-conmon-4d0d486d41f4fa335bfa140d549a488a9ac9b2da67f1c43155e11b71b1f81c04.scope: Deactivated successfully.
Oct 11 04:30:19 compute-0 sudo[316202]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:19 compute-0 sudo[316725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:30:19 compute-0 sudo[316725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:19 compute-0 sudo[316725]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:19 compute-0 sudo[316771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 04:30:19 compute-0 sudo[316771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:19 compute-0 sudo[316771]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:19 compute-0 sudo[316817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 04:30:19 compute-0 sudo[316817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:19 compute-0 sudo[316817]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:19 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19403 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:19 compute-0 sudo[316852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/23b68101-59a9-532f-ab6b-9acf78fb2162/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 23b68101-59a9-532f-ab6b-9acf78fb2162 -- raw list --format json
Oct 11 04:30:19 compute-0 sudo[316852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 04:30:19 compute-0 nova_compute[259850]: 2025-10-11 04:30:19.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:30:19 compute-0 podman[317007]: 2025-10-11 04:30:19.715163081 +0000 UTC m=+0.034696239 container create 5a244447cfb11480342cd24f3bc090d403c0d56c99ced0cc8171e08725feece4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamarr, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 04:30:19 compute-0 ceph-mon[74273]: from='client.19401 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:19 compute-0 ceph-mon[74273]: from='client.19403 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:30:19 compute-0 systemd[1]: Started libpod-conmon-5a244447cfb11480342cd24f3bc090d403c0d56c99ced0cc8171e08725feece4.scope.
Oct 11 04:30:19 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:30:19 compute-0 podman[317007]: 2025-10-11 04:30:19.773871004 +0000 UTC m=+0.093404182 container init 5a244447cfb11480342cd24f3bc090d403c0d56c99ced0cc8171e08725feece4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamarr, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:30:19 compute-0 podman[317007]: 2025-10-11 04:30:19.780900032 +0000 UTC m=+0.100433190 container start 5a244447cfb11480342cd24f3bc090d403c0d56c99ced0cc8171e08725feece4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamarr, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct 11 04:30:19 compute-0 systemd[1]: libpod-5a244447cfb11480342cd24f3bc090d403c0d56c99ced0cc8171e08725feece4.scope: Deactivated successfully.
Oct 11 04:30:19 compute-0 podman[317007]: 2025-10-11 04:30:19.783993369 +0000 UTC m=+0.103526547 container attach 5a244447cfb11480342cd24f3bc090d403c0d56c99ced0cc8171e08725feece4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:30:19 compute-0 practical_lamarr[317034]: 167 167
Oct 11 04:30:19 compute-0 conmon[317034]: conmon 5a244447cfb11480342c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5a244447cfb11480342cd24f3bc090d403c0d56c99ced0cc8171e08725feece4.scope/container/memory.events
Oct 11 04:30:19 compute-0 podman[317007]: 2025-10-11 04:30:19.790218775 +0000 UTC m=+0.109751943 container died 5a244447cfb11480342cd24f3bc090d403c0d56c99ced0cc8171e08725feece4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 04:30:19 compute-0 podman[317007]: 2025-10-11 04:30:19.700044815 +0000 UTC m=+0.019577993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:30:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 11 04:30:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2277912398' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 11 04:30:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a19f76ec36431627144194a175403152142ba72b8ef8e14925ba2cfe049d909-merged.mount: Deactivated successfully.
Oct 11 04:30:19 compute-0 podman[317007]: 2025-10-11 04:30:19.824951723 +0000 UTC m=+0.144484881 container remove 5a244447cfb11480342cd24f3bc090d403c0d56c99ced0cc8171e08725feece4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 04:30:19 compute-0 systemd[1]: libpod-conmon-5a244447cfb11480342cd24f3bc090d403c0d56c99ced0cc8171e08725feece4.scope: Deactivated successfully.
Oct 11 04:30:19 compute-0 podman[317030]: 2025-10-11 04:30:19.867836691 +0000 UTC m=+0.117713807 container health_status 648a70a95c858d67aef01a55c153ff0b5b9bef4b589fa10c62a24185180db61f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 04:30:19 compute-0 podman[317105]: 2025-10-11 04:30:19.985059443 +0000 UTC m=+0.045338628 container create f22fcf0717861ef4e12786e3eb6fcddf61740247acfcf085e1e91324f4d7ad39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:30:20 compute-0 systemd[1]: Started libpod-conmon-f22fcf0717861ef4e12786e3eb6fcddf61740247acfcf085e1e91324f4d7ad39.scope.
Oct 11 04:30:20 compute-0 podman[317105]: 2025-10-11 04:30:19.963817765 +0000 UTC m=+0.024096960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:30:20 compute-0 systemd[1]: Started libcrun container.
Oct 11 04:30:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43c6e44316a6279aaebebdacbf3f616f49b13c4c00879343ce4da7b19d0c7f5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43c6e44316a6279aaebebdacbf3f616f49b13c4c00879343ce4da7b19d0c7f5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43c6e44316a6279aaebebdacbf3f616f49b13c4c00879343ce4da7b19d0c7f5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43c6e44316a6279aaebebdacbf3f616f49b13c4c00879343ce4da7b19d0c7f5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:20 compute-0 podman[317105]: 2025-10-11 04:30:20.083958939 +0000 UTC m=+0.144238154 container init f22fcf0717861ef4e12786e3eb6fcddf61740247acfcf085e1e91324f4d7ad39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:30:20 compute-0 podman[317105]: 2025-10-11 04:30:20.094956108 +0000 UTC m=+0.155235323 container start f22fcf0717861ef4e12786e3eb6fcddf61740247acfcf085e1e91324f4d7ad39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sutherland, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 04:30:20 compute-0 podman[317105]: 2025-10-11 04:30:20.102630885 +0000 UTC m=+0.162910070 container attach f22fcf0717861ef4e12786e3eb6fcddf61740247acfcf085e1e91324f4d7ad39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 04:30:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:30:20 compute-0 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:30:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Oct 11 04:30:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1404353658' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 11 04:30:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Oct 11 04:30:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2018533853' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 11 04:30:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2277912398' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 11 04:30:20 compute-0 ceph-mon[74273]: pgmap v2058: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:30:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1404353658' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 11 04:30:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2018533853' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 11 04:30:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:30:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:30:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:30:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:30:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:30:20 compute-0 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:30:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2025-10-11_04:30:20
Oct 11 04:30:20 compute-0 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:30:20 compute-0 ceph-mgr[74563]: [balancer INFO root] do_upmap
Oct 11 04:30:20 compute-0 ceph-mgr[74563]: [balancer INFO root] pools ['default.rgw.log', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'images', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'backups']
Oct 11 04:30:20 compute-0 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:30:21 compute-0 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.19411 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 04:30:21 compute-0 wonderful_sutherland[317160]: {
Oct 11 04:30:21 compute-0 wonderful_sutherland[317160]:     "38da774d-7ecf-442f-9a7a-97978287cff8": {
Oct 11 04:30:21 compute-0 wonderful_sutherland[317160]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:30:21 compute-0 wonderful_sutherland[317160]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:30:21 compute-0 wonderful_sutherland[317160]:         "osd_id": 1,
Oct 11 04:30:21 compute-0 wonderful_sutherland[317160]:         "osd_uuid": "38da774d-7ecf-442f-9a7a-97978287cff8",
Oct 11 04:30:21 compute-0 wonderful_sutherland[317160]:         "type": "bluestore"
Oct 11 04:30:21 compute-0 wonderful_sutherland[317160]:     },
Oct 11 04:30:21 compute-0 wonderful_sutherland[317160]:     "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85": {
Oct 11 04:30:21 compute-0 wonderful_sutherland[317160]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:30:21 compute-0 wonderful_sutherland[317160]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:30:21 compute-0 wonderful_sutherland[317160]:         "osd_id": 2,
Oct 11 04:30:21 compute-0 wonderful_sutherland[317160]:         "osd_uuid": "b0a32a6b-06c7-4cda-9235-0c5ffbd1bd85",
Oct 11 04:30:21 compute-0 wonderful_sutherland[317160]:         "type": "bluestore"
Oct 11 04:30:21 compute-0 wonderful_sutherland[317160]:     },
Oct 11 04:30:21 compute-0 wonderful_sutherland[317160]:     "bd7ac921-1218-45c1-b1c6-7c594dbceccb": {
Oct 11 04:30:21 compute-0 wonderful_sutherland[317160]:         "ceph_fsid": "23b68101-59a9-532f-ab6b-9acf78fb2162",
Oct 11 04:30:21 compute-0 wonderful_sutherland[317160]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:30:21 compute-0 wonderful_sutherland[317160]:         "osd_id": 0,
Oct 11 04:30:21 compute-0 wonderful_sutherland[317160]:         "osd_uuid": "bd7ac921-1218-45c1-b1c6-7c594dbceccb",
Oct 11 04:30:21 compute-0 wonderful_sutherland[317160]:         "type": "bluestore"
Oct 11 04:30:21 compute-0 wonderful_sutherland[317160]:     }
Oct 11 04:30:21 compute-0 wonderful_sutherland[317160]: }
Oct 11 04:30:21 compute-0 systemd[1]: libpod-f22fcf0717861ef4e12786e3eb6fcddf61740247acfcf085e1e91324f4d7ad39.scope: Deactivated successfully.
Oct 11 04:30:21 compute-0 podman[317105]: 2025-10-11 04:30:21.078983856 +0000 UTC m=+1.139263021 container died f22fcf0717861ef4e12786e3eb6fcddf61740247acfcf085e1e91324f4d7ad39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sutherland, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:30:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-43c6e44316a6279aaebebdacbf3f616f49b13c4c00879343ce4da7b19d0c7f5b-merged.mount: Deactivated successfully.
Oct 11 04:30:21 compute-0 podman[317105]: 2025-10-11 04:30:21.148888335 +0000 UTC m=+1.209167510 container remove f22fcf0717861ef4e12786e3eb6fcddf61740247acfcf085e1e91324f4d7ad39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sutherland, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 04:30:21 compute-0 systemd[1]: libpod-conmon-f22fcf0717861ef4e12786e3eb6fcddf61740247acfcf085e1e91324f4d7ad39.scope: Deactivated successfully.
Oct 11 04:30:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:30:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:30:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:30:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:30:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:30:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:30:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:30:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:30:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:30:21 compute-0 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:30:21 compute-0 sudo[316852]: pam_unix(sudo:session): session closed for user root
Oct 11 04:30:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:30:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:30:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:30:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3396295551' entity='mgr.compute-0.jhqlii' 
Oct 11 04:30:21 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev 5aa5a379-9ee9-49bf-aee3-c6cebcc2ea07 does not exist
Oct 11 04:30:21 compute-0 ceph-mgr[74563]: [progress WARNING root] complete: ev bf2d28ee-a612-4904-b118-964e25a2acfa does not exist
Oct 11 04:30:21 compute-0 ovs-appctl[317639]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
